MiniMaxAI/MiniMax-M2.5
MiniMax-M2.5 is the February 2026 successor to M2.1: a 229B-parameter mixture-of-experts (MoE) reasoning model with 10B active parameters per token. It ships with thinking mode enabled by default and targets high-accuracy coding, office automation, and summarisation across a 196K-token context window.
api_example.sh
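A minimal request sketch for the example tab above. The endpoint URL, the `QUBRID_API_KEY` variable, and the message content are illustrative assumptions, not values confirmed on this page; substitute your platform's actual base URL and credentials.

```shell
#!/usr/bin/env sh
# Hedged sketch of a chat completions call using the documented default
# parameters. The endpoint URL and QUBRID_API_KEY are assumptions.
curl -s https://api.qubrid.com/v1/chat/completions \
  -H "Authorization: Bearer $QUBRID_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "MiniMaxAI/MiniMax-M2.5",
    "stream": true,
    "temperature": 1,
    "max_tokens": 8192,
    "top_p": 0.95,
    "messages": [
      {"role": "user", "content": "Summarise the key risks in this contract clause."}
    ]
  }'
```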
Technical Specifications
Model Architecture & Performance
Pricing
Pay-per-use, no commitments
API Reference
Complete parameter documentation
| Parameter | Type | Default | Description |
|---|---|---|---|
| stream | boolean | true | Stream tokens as they are generated instead of returning one final response. |
| temperature | number | 1 | Sampling temperature; lower values make output more deterministic, higher values more varied. |
| max_tokens | number | 8192 | Maximum number of tokens to generate in the response. |
| top_p | number | 0.95 | Nucleus sampling: restricts sampling to the smallest token set whose cumulative probability exceeds this value. |
Explore the full request and response schema in our external API documentation.
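With `stream` enabled, responses arrive as server-sent events. A small sketch of recovering the plain-text completion from the stream, assuming an OpenAI-style chunk format (the `choices[0].delta.content` field path is an assumption) and a local `jq` install:

```shell
# Hedged helper: reads SSE lines on stdin, strips the "data: " prefix,
# drops the terminal [DONE] marker, and prints the concatenated token
# deltas. Field names assume an OpenAI-compatible chunk format.
parse_stream() {
  sed -n 's/^data: //p' \
    | grep -v '^\[DONE\]$' \
    | jq -rj '.choices[0].delta.content // empty'
}
```

Pipe the streaming `curl` output through `parse_stream` to print the completion text as it arrives.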
Performance
Strengths & considerations
| Strengths | Considerations |
|---|---|
| Thinking-first alignment provides dependable multi-step reasoning without extra prompting | Thinking traces increase latency versus instant-response chat models |
| 229B MoE with only 10B active parameters keeps costs competitive while matching frontier accuracy | High memory requirements (~457 GB in bf16) make self-hosting challenging |
| 196,608-token context supports multi-document orchestration and long-running tool sessions | Modified MIT license imposes attribution and policy-compliance obligations |
| Reinforcement learning across hundreds of thousands of agentic scenarios strengthens planning | |
Use cases
Recommended applications for this model
Enterprise
Platform Integration
Docker Support
Official Docker images for containerized deployments
Kubernetes Ready
Production-grade Kubernetes manifests and Helm charts
SDK Libraries
Official SDKs for Python, JavaScript, Go, and Java
Don't let your AI control you. Control your AI the Qubrid way!
Have questions? Want to partner with us? Looking for larger deployments or custom fine-tuning? Let's collaborate on the right setup for your workloads.
"Qubrid enabled us to deploy production AI agents with reliable tool-calling and step tracing. We now ship agents faster with full visibility into every decision and API call."
AI Agents Team
Agent Systems & Orchestration
