
moonshotai/Kimi-K2-Instruct

Kimi K2-Instruct is Moonshot AI's reflex-grade 1T-parameter MoE model (32B active) tuned for low-latency instruction following, tool use, and multilingual chat across a 128K context window.

Moonshot AI Chat 128K Tokens
Free Trial Credit: $1.00 on first top-up of minimum $5

api_example.sh

# Set QUBRID_API_KEY in your environment before running this example.
curl -X POST "https://platform.qubrid.com/v1/chat/completions" \
  -H "Authorization: Bearer $QUBRID_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
  "model": "moonshotai/Kimi-K2-Instruct",
  "messages": [
    {
      "role": "user",
      "content": "Explain quantum computing in simple terms"
    }
  ],
  "temperature": 0.7,
  "max_tokens": 8192,
  "stream": true,
  "top_p": 1
}'
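
For SDK-based access, the sketch below shows the same request from Python, assuming the endpoint at https://platform.qubrid.com/v1 is OpenAI-compatible (as the curl example above suggests) and that QUBRID_API_KEY is set in your environment. Treat it as a minimal illustration rather than official Qubrid SDK code.

streaming_example.py

import os
from openai import OpenAI

# Minimal sketch: streaming chat completion against the OpenAI-compatible endpoint.
client = OpenAI(
    base_url="https://platform.qubrid.com/v1",
    api_key=os.environ["QUBRID_API_KEY"],  # set this environment variable first
)

stream = client.chat.completions.create(
    model="moonshotai/Kimi-K2-Instruct",
    messages=[
        {"role": "user", "content": "Explain quantum computing in simple terms"}
    ],
    temperature=0.7,
    max_tokens=8192,
    top_p=1,
    stream=True,
)

# Print tokens as they arrive.
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)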

Technical Specifications

Model Architecture & Performance

Variant Instruct (Reflex)
Model Size 1T params (32B active)
Context Length 128K Tokens
Quantization INT4 / FP8
Tokens/sec 45
Architecture Sparse MoE Transformer with 384 experts (top-8 routing; see the routing sketch after this table), Muon optimizer fine-tuning, and MLA-style attention
Precision INT4 (deployment) / FP8 (training)
License Modified MIT License
Release Date July 2025
Developers Moonshot AI
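
The top-8 routing noted in the architecture row is what keeps only about 32B of the 1T parameters active per token: a router scores all 384 experts and forwards each token to the 8 highest-scoring ones. The toy sketch below illustrates that selection step only; it is not Moonshot AI's implementation.

routing_sketch.py

import numpy as np

NUM_EXPERTS = 384  # expert count from the architecture row above
TOP_K = 8          # experts activated per token

def route(router_logits: np.ndarray, k: int = TOP_K):
    """Pick the top-k experts for one token and softmax-normalise their weights."""
    top_idx = np.argpartition(router_logits, -k)[-k:]   # indices of the k largest logits
    weights = np.exp(router_logits[top_idx] - router_logits[top_idx].max())
    weights /= weights.sum()
    return top_idx, weights

logits = np.random.randn(NUM_EXPERTS)   # stand-in for a single token's router output
experts, weights = route(logits)
print(experts, weights.round(3))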

Pricing

Pay-per-use, no commitments

Input Tokens $0.50/1M Tokens
Output Tokens $2.40/1M Tokens
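
As a worked example, a request with 2,000 input tokens and 500 output tokens costs roughly 2,000 × $0.50/1M + 500 × $2.40/1M = $0.0022. The helper below hard-codes the rates listed above purely for illustration; check current pricing before relying on it.

cost_estimate.py

INPUT_RATE_PER_M = 0.50    # USD per 1M input tokens (rate listed above)
OUTPUT_RATE_PER_M = 2.40   # USD per 1M output tokens (rate listed above)

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of a single request at the listed rates."""
    return (input_tokens * INPUT_RATE_PER_M + output_tokens * OUTPUT_RATE_PER_M) / 1_000_000

print(f"${estimate_cost(2_000, 500):.4f}")  # -> $0.0022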

API Reference

Complete parameter documentation

Parameter Type Default Description
stream boolean true Enable streaming responses for real-time output.
temperature number 0.7 Controls randomness.
max_tokens number 8192 Maximum number of tokens to generate.
top_p number 1 Controls nucleus sampling.
enable_thinking boolean false Toggle chain-of-thought reasoning mode. Set temperature=1.0 when enabled.

Explore the full request and response schema in our external API documentation
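
For example, the enable_thinking flag from the table above is not part of the standard Chat Completions schema, so the sketch below passes it as an extra body field and raises temperature to 1.0 per the table's note. This assumes the endpoint forwards the extra field with the request; consult the API documentation for the authoritative schema.

thinking_example.py

import os
from openai import OpenAI

client = OpenAI(
    base_url="https://platform.qubrid.com/v1",
    api_key=os.environ["QUBRID_API_KEY"],
)

response = client.chat.completions.create(
    model="moonshotai/Kimi-K2-Instruct",
    messages=[{"role": "user", "content": "Plan a three-step debugging strategy."}],
    temperature=1.0,                        # recommended when enable_thinking is on
    max_tokens=8192,
    stream=False,
    extra_body={"enable_thinking": True},   # non-standard field, sent in the request body
)

print(response.choices[0].message.content)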

Performance

Strengths & considerations

Strengths
1T MoE with 32B active parameters provides frontier quality while maintaining fast reflex responses
Trained on 15.5T multilingual and tool-use tokens for strong cross-lingual coverage
128K context window supports long documents and code bases
Optimised for low-latency deployments with INT4/FP8 serving and Muon optimizer fine-tuning

Considerations
Does not expose explicit thinking traces; deep reasoning workflows require K2.5 or Thinking variants
Full checkpoint remains very large, demanding multi-GPU infrastructure for self-hosting
Limited native multimodal support compared with K2.5 Vision/Agent models

Use cases

Recommended applications for this model

General-purpose chat and support agents that need fast, fluent responses without exposing chain-of-thought
Coding copilots that draft and review code with tight latency budgets
Multilingual content generation and summarisation for global product teams

Enterprise
Platform Integration

Docker

Docker Support

Official Docker images for containerized deployments

Kubernetes

Kubernetes Ready

Production-grade Kubernetes manifests and Helm charts

SDK

SDK Libraries

Official SDKs for Python, JavaScript, Go, and Java

Don't let your AI control you. Control your AI the Qubrid way!

Have questions? Want to partner with us? Looking for larger deployments or custom fine-tuning? Let's collaborate on the right setup for your workloads.

"Qubrid enabled us to deploy production AI agents with reliable tool-calling and step tracing. We now ship agents faster with full visibility into every decision and API call."

AI Agents Team

Agent Systems & Orchestration