moonshotai/Kimi-K2.5

Kimi K2.5 is Moonshot AI's most powerful open-source model to date: a native multimodal agentic model built by continual pretraining on 15 trillion mixed visual and text tokens atop Kimi-K2-Base. With 1T total parameters (32B active), it integrates vision, language, and advanced agentic capabilities, including an Agent Swarm paradigm that coordinates up to 100 parallel sub-agents and reduces execution time by up to 4.5x on parallelizable tasks.

Moonshot AI · Vision · 256K Tokens
Free trial credit: $1.00 (no credit card required)

api_example.sh

curl -X POST "https://platform.qubrid.com/v1/chat/completions" \
  -H "Authorization: Bearer $QUBRID_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
  "model": "moonshotai/Kimi-K2.5",
  "messages": [
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "What is in this image? Describe the main elements."
        },
        {
          "type": "image_url",
          "image_url": {
            "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
          }
        }
      ]
    }
  ],
  "max_tokens": 16384,
  "temperature": 1,
  "stream": true,
  "top_p": 0.95
}'
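With "stream": true, the endpoint presumably returns OpenAI-style server-sent events: lines of the form data: {json chunk} ending with data: [DONE]. A minimal Python sketch for collecting the text deltas from such a stream, assuming that chunk schema (not confirmed against Qubrid's exact response format):

```python
import json

def extract_deltas(sse_lines):
    """Collect text deltas from OpenAI-style SSE chat-completion chunks.

    Assumes each event line looks like 'data: {json}' and the stream
    ends with 'data: [DONE]' (an assumption about Qubrid's format).
    """
    text = []
    for line in sse_lines:
        if not line.startswith("data: "):
            continue  # skip blank lines and keep-alive comments
        payload = line[len("data: "):]
        if payload.strip() == "[DONE]":
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0].get("delta", {})
        if "content" in delta:
            text.append(delta["content"])
    return "".join(text)
```

In practice you would feed this the decoded lines of the HTTP response body as they arrive, printing each delta for real-time output.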

Technical Specifications

Model Architecture & Performance

Variant: Multimodal Agentic
Model Size: 1T params (32B active)
Context Length: 256K tokens
Quantization: INT4 (QAT)
Throughput: 50 tokens/second
Architecture: Sparse MoE Transformer with 1T total / 32B active parameters, 61 layers, 384 experts (8 selected per token), MLA attention, SwiGLU activations; native vision encoder with spatial-temporal pooling
Precision: INT4 (QAT)
License: Modified MIT License
Release Date: January 2026
Developer: Moonshot AI
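The routing figures above (384 experts per MoE layer, 8 active per token) correspond to standard top-k expert gating. A toy sketch of that selection step, using made-up router logits and making no claim about Kimi K2.5's actual router implementation:

```python
import math

NUM_EXPERTS = 384  # experts per MoE layer (from the spec above)
TOP_K = 8          # experts selected per token (from the spec above)

def top_k_gate(logits, k=TOP_K):
    """Pick the k highest-scoring experts and softmax-normalize their weights."""
    ranked = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)
    chosen = ranked[:k]
    # Softmax over only the selected logits (shifted by the max for stability).
    m = max(logits[i] for i in chosen)
    exps = [math.exp(logits[i] - m) for i in chosen]
    total = sum(exps)
    return chosen, [e / total for e in exps]
```

Only the 8 selected experts run for a given token, which is why 1T total parameters translate to roughly 32B active parameters per forward pass.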

Pricing

Pay-per-use, no commitments

Input Tokens $0.0006/1K Tokens
Output Tokens $0.003/1K Tokens
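At these rates, per-request cost is a linear function of the token counts. A small illustrative helper, with the rates copied from the table above:

```python
INPUT_RATE = 0.0006   # USD per 1K input tokens (from the pricing table)
OUTPUT_RATE = 0.003   # USD per 1K output tokens (from the pricing table)

def estimate_cost(input_tokens, output_tokens):
    """Estimate the cost of one request in USD from its token counts."""
    return input_tokens / 1000 * INPUT_RATE + output_tokens / 1000 * OUTPUT_RATE
```

For example, a 100K-token prompt with a full 16,384-token completion costs about $0.11.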

API Reference

Complete parameter documentation

stream (boolean, default: true): Enable streaming responses for real-time output.
temperature (number, default: 1): Sampling temperature; 1.0 recommended for Thinking mode, 0.6 for Instant mode.
max_tokens (number, default: 16384): Maximum number of tokens to generate.
top_p (number, default: 0.95): Nucleus sampling threshold.
thinking_mode (select, default: thinking): Thinking mode enables deep reasoning traces; Instant mode provides fast direct responses.
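The temperature guidance above (1.0 for Thinking mode, 0.6 for Instant mode) can be encoded in a small request builder. This is a sketch: passing thinking_mode as a top-level field is an assumption about where the API expects it.

```python
# Recommended sampling temperature per mode, per the parameter table above.
MODE_TEMPERATURE = {"thinking": 1.0, "instant": 0.6}

def build_request(prompt, mode="thinking", max_tokens=16384, top_p=0.95, stream=True):
    """Build a chat-completion payload using the documented defaults."""
    if mode not in MODE_TEMPERATURE:
        raise ValueError(f"unknown mode: {mode!r}")
    return {
        "model": "moonshotai/Kimi-K2.5",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": MODE_TEMPERATURE[mode],
        "top_p": top_p,
        "stream": stream,
        "thinking_mode": mode,  # assumed field name and placement
    }
```

Sending the resulting dict as the JSON body reproduces the curl example's defaults while keeping temperature consistent with the chosen mode.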

Explore the full request and response schema in our external API documentation

Performance

Strengths & considerations

Strengths:
Native multimodal: trained jointly on 15T vision+text tokens
Agent Swarm coordinates up to 100 parallel sub-agents
4.5x execution-time reduction on parallelizable tasks
1T-parameter MoE with 32B active per token
76.8% on SWE-bench Verified
50.2% on HLE (Humanity's Last Exam) at 76% lower cost than Claude Opus 4.5
256K context window
Supports Thinking and Instant modes

Considerations:
1T-parameter model requires significant infrastructure
Video input is experimental
630GB full model size
Agent Swarm requires the official API for full functionality

Use cases

Recommended applications for this model

Native multimodal agent workflows
Visual code generation (UI/video to code)
Agent Swarm for complex parallel tasks
Advanced web development with vision
Multimodal research and analysis
Image/video-to-code translation

Enterprise
Platform Integration

Docker

Docker Support

Official Docker images for containerized deployments

Kubernetes

Kubernetes Ready

Production-grade Kubernetes manifests and Helm charts

SDK

SDK Libraries

Official SDKs for Python, JavaScript, Go, and Java


Have questions? Want to Partner with us? Looking for larger deployments or custom fine-tuning? Let's collaborate on the right setup for your workloads.
