MiniMaxAI/MiniMax-M2.1

MiniMax-M2.1 is a state-of-the-art open-source coding and agentic model with 230B total parameters and only 10B active per token (a 23:1 sparsity ratio). Released in December 2025, it scores 74% on SWE-bench Verified, competitive with Claude Sonnet 4.5, at a fraction of the cost. It excels at multilingual code development (Python, Java, Go, Rust, C++, TypeScript, Kotlin), long-horizon agentic workflows, and office automation.


api_example.sh

curl -X POST "https://platform.qubrid.com/v1/chat/completions" \
  -H "Authorization: Bearer $QUBRID_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
  "model": "MiniMaxAI/MiniMax-M2.1",
  "messages": [
    {
      "role": "user",
      "content": "Explain quantum computing in simple terms"
    }
  ],
  "temperature": 1,
  "max_tokens": 8192,
  "stream": true,
  "top_p": 0.95
}'
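The same request can be made from Python using only the standard library. This is a minimal sketch assuming the endpoint is OpenAI-compatible, as the curl call above suggests; the helper names (`build_payload`, `chat`) are illustrative, not part of any official SDK.

```python
import json
import os
import urllib.request

API_URL = "https://platform.qubrid.com/v1/chat/completions"

def build_payload(prompt: str, stream: bool = True) -> dict:
    """Assemble the same chat-completion request body as the curl example."""
    return {
        "model": "MiniMaxAI/MiniMax-M2.1",
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 1,     # recommended default for this model
        "max_tokens": 8192,
        "stream": stream,
        "top_p": 0.95,
    }

def chat(prompt: str) -> None:
    """Send a non-streaming request and print the reply.

    Requires a QUBRID_API_KEY environment variable. Not invoked at module
    level, so this file imports cleanly without network access.
    """
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt, stream=False)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ['QUBRID_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.loads(resp.read().decode("utf-8"))
        print(body["choices"][0]["message"]["content"])
```

For production use, prefer the official Python SDK mentioned below over hand-rolled HTTP calls.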

Technical Specifications

Model Architecture & Performance

Variant Instruct
Model Size 230B params (10B active)
Context Length 200K Tokens
Quantization FP8
Tokens/Second 120
Architecture Sparse Mixture-of-Experts (MoE) Transformer with 230B total and 10B active parameters
Precision FP8
License Modified MIT License
Release Date December 2025
Developers MiniMax

Pricing

Pay-per-use, no commitments

Input Tokens $0.0003/1K Tokens
Output Tokens $0.0012/1K Tokens
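At these rates, per-request cost is easy to estimate. A quick sketch (the function name is illustrative):

```python
# Rates from the pricing table above, in dollars per 1K tokens.
INPUT_PRICE_PER_1K = 0.0003
OUTPUT_PRICE_PER_1K = 0.0012

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the dollar cost of a request at pay-per-use rates."""
    return (input_tokens / 1000) * INPUT_PRICE_PER_1K \
         + (output_tokens / 1000) * OUTPUT_PRICE_PER_1K

# Example: 1M input tokens + 500K output tokens costs $0.30 + $0.60 = $0.90.
```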

API Reference

Complete parameter documentation

Parameter Type Default Description
stream boolean true Enable streaming responses for real-time output.
temperature number 1 Sampling temperature; 1.0 is recommended for best performance.
max_tokens number 8192 Maximum number of tokens the model can generate.
top_p number 0.95 Nucleus sampling threshold; tokens are drawn from the smallest set whose cumulative probability exceeds top_p.
top_k number 40 Limits sampling to the top-k most likely tokens.

Explore the full request and response schema in our external API documentation.
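Because `stream` defaults to true, responses arrive incrementally. The sketch below assumes OpenAI-style server-sent events, where each line is `data: {json}` and the stream ends with `data: [DONE]`; verify the exact chunk shape against the API documentation.

```python
import json

def extract_text(sse_lines) -> str:
    """Collect assistant text from OpenAI-style streaming chunks.

    Assumes each event looks like 'data: {"choices":[{"delta":{...}}]}'
    and the stream terminates with 'data: [DONE]'.
    """
    parts = []
    for line in sse_lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip keep-alives and blank lines
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0].get("delta", {})
        if "content" in delta:
            parts.append(delta["content"])
    return "".join(parts)
```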

Performance

Strengths & considerations

Strengths:
- 74% on SWE-bench Verified (competitive with Claude Sonnet 4.5)
- 230B-parameter MoE with only 10B active, for extreme efficiency
- 200K-token context window
- Best-in-class polyglot coding (Java, Go, Rust, etc.)
- Native FP8 quantization
- Open weights for local deployment

Considerations:
- Less reliable than frontier closed models for deep debugging
- Sparse activation may miss niche language idioms
- Very large total size requires a multi-GPU setup

Use cases

Recommended applications for this model

Multilingual software development
Long-horizon agentic coding
Code review & optimization
Full-stack app generation
Office automation workflows
Complex multi-step tool use

Enterprise
Platform Integration

Docker

Docker Support

Official Docker images for containerized deployments

Kubernetes

Kubernetes Ready

Production-grade Kubernetes manifests and Helm charts

SDK

SDK Libraries

Official SDKs for Python, JavaScript, Go, and Java


Have questions? Want to partner with us? Looking for larger deployments or custom fine-tuning? Let's collaborate on the right setup for your workloads.
