
MiniMaxAI/MiniMax-M2.5

MiniMax-M2.5 is the February 2026 successor to M2.1: a 230B-parameter MoE (10B active) reasoning model that ships in thinking mode by default, delivering high-accuracy coding, office automation, and summarisation across a 196K-token context window.

Free trial credit: $1.00 on your first top-up of $5 or more.

api_example.sh

# Requires QUBRID_API_KEY to be set in your environment.
curl -X POST "https://platform.qubrid.com/v1/chat/completions" \
  -H "Authorization: Bearer $QUBRID_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "MiniMaxAI/MiniMax-M2.5",
    "messages": [
      {
        "role": "user",
        "content": "Explain quantum computing in simple terms"
      }
    ],
    "temperature": 1,
    "max_tokens": 8192,
    "stream": true,
    "top_p": 0.95
  }'
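Because the example above sets "stream": true, the response arrives as server-sent events rather than one JSON body. A minimal way to watch the incremental text, assuming the endpoint emits OpenAI-style data: lines (the page does not spell out the stream format) and that GNU sed/grep and jq are available:

# Print the streamed answer as it arrives (assumes OpenAI-style SSE chunks).
curl -sN -X POST "https://platform.qubrid.com/v1/chat/completions" \
  -H "Authorization: Bearer $QUBRID_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "MiniMaxAI/MiniMax-M2.5",
    "messages": [{"role": "user", "content": "Explain quantum computing in simple terms"}],
    "stream": true
  }' \
  | sed -un 's/^data: //p' \
  | grep -v --line-buffered '^\[DONE\]' \
  | jq --unbuffered -rj '.choices[0].delta.content // empty'
echo    # final newline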

Technical Specifications

Model Architecture & Performance

Variant: Thinking (standard)
Model Size: 230B params (10B active)
Context Length: 196K (196,608) tokens
Quantization: bf16 / GPTQ
Tokens/sec: 50
Architecture: Sparse MoE Transformer with 23:1 sparsity, Muon optimizer, and upgraded long-context attention
Precision: bf16 (INT4/GPTQ deployment options)
License: Modified MIT License
Release Date: February 12, 2026
Developers: MiniMax
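As a back-of-the-envelope sizing check for the precision options above (a rough sketch, not vendor guidance): at bf16 each parameter occupies 2 bytes, so the 230B-parameter weights alone need roughly 460 GB, while INT4 quantization cuts that to about a quarter, before counting KV cache and activations:

# Weight-only memory estimate; excludes KV cache, activations, and runtime overhead.
params_b=230    # total parameters, in billions
awk -v p="$params_b" 'BEGIN {
  printf "bf16 weights : ~%d GB\n", p * 2      # 2 bytes per parameter
  printf "INT4 weights : ~%d GB\n", p * 0.5    # 0.5 bytes per parameter
}'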

Pricing

Pay-per-use, no commitments

Input Tokens: $0.30 / 1M tokens
Output Tokens: $1.20 / 1M tokens
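As a worked example of the rates above, a long-context request that sends 150,000 input tokens and returns 4,000 output tokens costs 0.15 x $0.30 + 0.004 x $1.20, about $0.05 (the token counts are illustrative):

# Estimate per-request cost from token counts and the published per-1M rates.
in_tokens=150000
out_tokens=4000
awk -v i="$in_tokens" -v o="$out_tokens" 'BEGIN {
  printf "estimated cost: $%.4f\n", i/1e6*0.30 + o/1e6*1.20   # -> $0.0498
}'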

API Reference

Complete parameter documentation

stream (boolean, default: true): Enable streaming responses for real-time output.
temperature (number, default: 1): Controls randomness; lower values give more deterministic output.
max_tokens (number, default: 8192): Maximum number of tokens to generate.
top_p (number, default: 0.95): Nucleus sampling; the model samples only from the smallest token set whose cumulative probability exceeds this value.

Explore the full request and response schema in our external API documentation.
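For example, the defaults can be overridden per request; the sketch below sends a low-temperature, non-streaming call and extracts the reply with jq, assuming the response follows the OpenAI chat-completions shape (suggested by the /v1/chat/completions path but not confirmed on this page):

# Override defaults: lower temperature, no streaming; parse the single JSON response.
curl -s -X POST "https://platform.qubrid.com/v1/chat/completions" \
  -H "Authorization: Bearer $QUBRID_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "MiniMaxAI/MiniMax-M2.5",
    "messages": [{"role": "user", "content": "Summarise nucleus sampling in two sentences"}],
    "temperature": 0.2,
    "top_p": 0.9,
    "max_tokens": 512,
    "stream": false
  }' \
  | jq -r '.choices[0].message.content'   # assumes an OpenAI-style response body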

Performance

Strengths & considerations

Strengths

Thinking-first alignment provides dependable multi-step reasoning without extra prompting
230B MoE with only 10B active parameters keeps costs competitive while matching frontier accuracy
196,608-token context supports multi-document orchestration and long-running tool sessions
Reinforcement learning across hundreds of thousands of agentic scenarios strengthens planning

Considerations

Thinking traces increase latency versus instant-response chat models
High memory requirements (~457GB bf16) make self-hosting challenging
Modified MIT license imposes attribution and policy compliance obligations

Use cases

Recommended applications for this model

Enterprise coding copilots that refactor large monorepos with persistent chain-of-thought traces
Financial and legal summarisation workloads that need dependable reasoning over 100K+ token dossiers (see the payload sketch after this list)
Agentic office automation flows coordinating email, spreadsheet, and document operations
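For the long-dossier case, the practical hurdle is getting a large document into the JSON payload without manual escaping. One way, assuming jq 1.6+ and a local report.txt standing in for the dossier (both illustrative, not part of the platform docs):

# Build the payload with jq so the document is JSON-escaped correctly (jq 1.6+ for --rawfile).
jq -n --rawfile doc report.txt '{
  model: "MiniMaxAI/MiniMax-M2.5",
  messages: [
    {role: "system", content: "Summarise the key obligations and risks in this document."},
    {role: "user", content: $doc}
  ],
  max_tokens: 8192,
  stream: false
}' > payload.json

curl -s -X POST "https://platform.qubrid.com/v1/chat/completions" \
  -H "Authorization: Bearer $QUBRID_API_KEY" \
  -H "Content-Type: application/json" \
  -d @payload.json \
  | jq -r '.choices[0].message.content'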

Enterprise
Platform Integration


Docker Support

Official Docker images for containerized deployments
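For self-hosted experiments, one sketch (not an official Qubrid image): vLLM's generic OpenAI-compatible server is a common way to serve MoE checkpoints, though vLLM support for M2.5 is an assumption here, and the ~457GB bf16 footprint noted above applies:

# Hypothetical self-hosting with vLLM's OpenAI-compatible server (M2.5 support assumed).
docker run --gpus all --shm-size 16g -p 8000:8000 \
  -v "$HOME/.cache/huggingface:/root/.cache/huggingface" \
  vllm/vllm-openai:latest \
  --model MiniMaxAI/MiniMax-M2.5 \
  --tensor-parallel-size 8 \
  --max-model-len 196608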


Kubernetes Ready

Production-grade Kubernetes manifests and Helm charts
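The page names no Helm repo or chart, so the repo URL, chart name, and values below are placeholders; only the command shapes are standard Helm/kubectl. A typical install would look like:

# Placeholder repo URL, chart name, and values; not documented names.
helm repo add qubrid https://charts.qubrid.example      # placeholder URL
helm install minimax-m25 qubrid/model-serving \
  --set model.id=MiniMaxAI/MiniMax-M2.5 \
  --set gpu.count=8
kubectl get pods -l app.kubernetes.io/instance=minimax-m25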


SDK Libraries

Official SDKs for Python, JavaScript, Go, and Java
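Since the endpoint mirrors the OpenAI chat-completions path, an OpenAI-compatible SDK can likely be pointed at it by overriding the base URL (compatibility is an assumption here, and the official Qubrid SDK package names are not given on this page). The official openai SDKs read these environment variables:

# Route any OpenAI-compatible SDK or tool to the Qubrid endpoint (compatibility assumed).
export OPENAI_API_KEY="$QUBRID_API_KEY"
export OPENAI_BASE_URL="https://platform.qubrid.com/v1"
# Tools honouring these variables will now send chat-completion calls to Qubrid.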

Don't let your AI control you. Control your AI the Qubrid way!

Have questions? Want to partner with us? Looking for larger deployments or custom fine-tuning? Let's collaborate on the right setup for your workloads.

"Qubrid enabled us to deploy production AI agents with reliable tool-calling and step tracing. We now ship agents faster with full visibility into every decision and API call."

AI Agents Team

Agent Systems & Orchestration