
Chinese AI startup MiniMax, based in Shanghai, has introduced a new M2.5 language model in two variants, positioning it as near top-tier in capability while dramatically undercutting current frontier AI pricing.
The company is pitching M2.5 as a way to make powerful AI so inexpensive that usage costs become almost an afterthought, particularly for enterprises that want AI systems to handle long, complex tasks rather than quick question-and-answer chats.
MiniMax claims M2.5 can deliver performance that rivals leading models from Google and Anthropic, especially in “agentic” contexts where AI tools are asked to perform multi-step work on behalf of a user. According to the company, this comes with a cost reduction of up to 95% compared with typical frontier systems.
That pricing shift is central to MiniMax’s pitch. Over the past few years, accessing the highest-performing models has often felt like hiring a top-tier consultant: the results are impressive, but usage and token counts demand constant attention. MiniMax argues that M2.5 changes that equation by pushing the marginal cost of heavy AI use low enough that developers and businesses can stop budgeting every token.
Both variants, which include an M2.5 “Lightning” option, are being offered through MiniMax’s own API and partner platforms. The company presents this as a way for customers to tap into near-frontier intelligence at a fraction of what they might expect to pay elsewhere.
MiniMax frames M2.5 as part of a broader transition from AI as a conversational interface to AI as a true digital worker. Rather than simply answering questions or drafting short snippets of content, the model is designed to support longer-running, tool-using agents that can execute concrete tasks.
To make the model more suitable for professional workloads, MiniMax says it collaborated with senior practitioners across finance, law and social sciences. The goal, according to the company, was to ensure M2.5 could meet the expectations and standards of people doing real-world knowledge work in those fields.
This orientation toward practical, task-focused deployment ties back to MiniMax’s pricing strategy. If AI usage becomes cheap enough, the company argues, developers will be more inclined to design agents that can spend hours coding, researching or organising complex projects autonomously, without cost overruns becoming a primary concern.
MiniMax has also described M2.5 as “open source” in its public messaging. As of now, however, the model weights and implementation code have not been released, and no license or terms have been disclosed. For the moment, the emphasis is on access through MiniMax’s own interfaces and partners rather than a full public release.
What M2.5 clearly signals is a competitive push around the economics of high-end AI: if near-frontier models can be delivered at a small fraction of incumbent prices, the balance between experimentation, deployment and cost management for AI-driven applications could start to look very different.