moonshotai/Kimi-K2-Instruct
Kimi K2-Instruct is Moonshot AI's reflex-grade 1T-parameter MoE model (32B active) tuned for low-latency instruction following, tool use, and multilingual chat across a 128K context window.
api_example.sh
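A minimal request sketch follows. The base URL and key are placeholders, not this platform's actual endpoint; the body uses the parameter defaults documented in the table below.

```shell
#!/usr/bin/env sh
# Hypothetical endpoint and placeholder key -- substitute your deployment's
# actual base URL and credentials before running.
API_URL="https://api.example.com/v1/chat/completions"
API_KEY="${API_KEY:-YOUR_API_KEY}"

# Request body using the documented defaults (stream, temperature, max_tokens).
PAYLOAD='{
  "model": "moonshotai/Kimi-K2-Instruct",
  "messages": [
    {"role": "user", "content": "Summarize the benefits of MoE models in two sentences."}
  ],
  "stream": true,
  "temperature": 0.7,
  "max_tokens": 8192
}'

# Only call the API when a real key has been supplied.
if [ "$API_KEY" != "YOUR_API_KEY" ]; then
  curl -sS "$API_URL" \
    -H "Authorization: Bearer $API_KEY" \
    -H "Content-Type: application/json" \
    -d "$PAYLOAD"
else
  echo "Set API_KEY to call the endpoint; printing payload instead:"
  echo "$PAYLOAD"
fi
```

With `stream` set to `true`, the response arrives as server-sent chunks rather than a single JSON object, so pipe the output accordingly.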
Technical Specifications
Model Architecture & Performance
Pricing
Pay-per-use, no commitments
API Reference
Complete parameter documentation
| Parameter | Type | Default | Description |
|---|---|---|---|
| stream | boolean | true | Enable streaming responses for real-time output. |
| temperature | number | 0.7 | Controls randomness; lower values give more deterministic output, higher values more varied output. |
| max_tokens | number | 8192 | Maximum number of tokens to generate. |
| top_p | number | 1 | Nucleus sampling: sample only from the smallest set of tokens whose cumulative probability reaches top_p. |
| enable_thinking | boolean | false | Toggle chain-of-thought reasoning mode. Set temperature=1.0 when enabled. |
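The `enable_thinking` row couples two parameters: chain-of-thought mode expects `temperature=1.0`. A small sketch of a payload builder that enforces that pairing (the helper name is illustrative, not part of any SDK):

```shell
# When enable_thinking is true, the table above says to set temperature to 1.0.
# This helper builds a payload fragment and guards against the mismatch.
build_thinking_payload() {
  enable="$1"   # "true" or "false"
  if [ "$enable" = "true" ]; then
    temp="1.0"  # required when chain-of-thought mode is on
  else
    temp="0.7"  # documented default otherwise
  fi
  printf '{"model": "moonshotai/Kimi-K2-Instruct", "enable_thinking": %s, "temperature": %s}' \
    "$enable" "$temp"
}

build_thinking_payload true
```

Centralizing the rule in one function keeps callers from silently sending an invalid combination.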
Explore the full request and response schema in our external API documentation
Performance
Strengths & considerations
| Strengths | Considerations |
|---|---|
| • 1T MoE with 32B active parameters provides frontier quality while maintaining fast reflex responses<br>• Trained on 15.5T multilingual and tool-use tokens for strong cross-lingual coverage<br>• 128K context window supports long documents and code bases<br>• Optimised for low-latency deployments with INT4/FP8 serving and Muon optimizer fine-tuning | • Does not expose explicit thinking traces; deep reasoning workflows require K2.5 or Thinking variants<br>• Full checkpoint remains very large, demanding multi-GPU infrastructure for self-hosting<br>• Limited native multimodal support compared with K2.5 Vision/Agent models |
Use cases
Recommended applications for this model
Enterprise
Platform Integration
Docker Support
Official Docker images for containerized deployments
Kubernetes Ready
Production-grade Kubernetes manifests and Helm charts
SDK Libraries
Official SDKs for Python, JavaScript, Go, and Java
Don't let your AI control you. Control your AI the Qubrid way!
Have questions? Want to partner with us? Looking for larger deployments or custom fine-tuning? Let's collaborate on the right setup for your workloads.
"Qubrid enabled us to deploy production AI agents with reliable tool-calling and step tracing. We now ship agents faster with full visibility into every decision and API call."
AI Agents Team
Agent Systems & Orchestration
