Qwen/Qwen3.6-35B-A3B
Qwen3.6-35B-A3B is an efficient MoE variant in the Qwen 3.6 family aimed at strong multimodal reasoning and cost-effective deployment.
api_example.sh
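The page labels a sample script, `api_example.sh`. A minimal sketch of such a request is shown below, using the documented defaults (`stream=true`, `temperature=0.6` for non-thinking mode). The endpoint URL, header names, and `API_KEY` variable are placeholders, not documented values — substitute the real schema from the API reference.

```shell
#!/bin/sh
# Minimal sketch of a chat-completion request for Qwen3.6-35B-A3B.
# NOTE: the endpoint URL and header names are placeholders; take the
# real values from the API documentation.

# Build the JSON payload with the documented defaults
# (stream=true, temperature=0.6 for non-thinking mode).
PAYLOAD=$(cat <<'EOF'
{
  "model": "Qwen/Qwen3.6-35B-A3B",
  "stream": true,
  "temperature": 0.6,
  "max_tokens": 8192,
  "top_p": 0.95,
  "top_k": 20,
  "messages": [
    {"role": "user", "content": "Summarize the benefits of MoE architectures."}
  ]
}
EOF
)
echo "$PAYLOAD"

# Hypothetical call -- uncomment and fill in the real endpoint and key:
# curl -N -H "Authorization: Bearer $API_KEY" \
#      -H "Content-Type: application/json" \
#      -d "$PAYLOAD" https://api.example.com/v1/chat/completions
```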
Pricing
Pay-per-use, no commitments
Technical Specifications
Model Architecture & Performance
API Reference
Complete parameter documentation
| Parameter | Type | Default | Description |
|---|---|---|---|
| stream | boolean | true | Enable streaming responses for real-time output. |
| temperature | number | 0.6 | Use 0.6 for non-thinking mode, 1.0 for thinking/reasoning mode. |
| max_tokens | number | 8192 | Maximum number of tokens to generate. |
| top_p | number | 0.95 | Nucleus sampling parameter. |
| top_k | number | 20 | Limits token sampling to top-k candidates. |
| enable_thinking | boolean | false | Toggle chain-of-thought reasoning. Set temperature=1.0 when enabled. |
Explore the full request and response schema in our external API documentation
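The table above pairs `enable_thinking` with a temperature switch (0.6 for non-thinking mode, 1.0 when thinking is enabled). A small illustrative helper — the function name is ours, not part of the API — makes that pairing explicit:

```shell
#!/bin/sh
# Illustrative helper: return the recommended temperature for the
# requested mode, per the parameter table above
# (0.6 for non-thinking mode, 1.0 when enable_thinking is true).
temperature_for_mode() {
  if [ "$1" = "true" ]; then
    echo "1.0"
  else
    echo "0.6"
  fi
}

# Example: building the matching request fields for thinking mode.
ENABLE_THINKING=true
TEMP=$(temperature_for_mode "$ENABLE_THINKING")
printf '{"enable_thinking": %s, "temperature": %s}\n' "$ENABLE_THINKING" "$TEMP"
```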
Performance
Strengths & considerations
| Strengths | Considerations |
|---|---|
| MoE efficiency profile with strong capability-per-cost | Thinking mode can increase response latency and verbosity |
| Supports multimodal inputs and reasoning-heavy workloads | MoE routing may add overhead in some scenarios |
| Thinking mode available for deeper analysis | Peak quality depends on prompt and parameter tuning |
| Long context support for enterprise use cases | Very large contexts may increase inference cost |
| Open-source model family ecosystem | |
| Good performance/latency balance | |
Use cases
Recommended applications for this model
Enterprise
Platform Integration
Docker Support
Official Docker images for containerized deployments
Kubernetes Ready
Production-grade Kubernetes manifests and Helm charts
SDK Libraries
Official SDKs for Python, JavaScript, Go, and Java
Don't let your AI control you. Control your AI the Qubrid way!
Have questions? Want to partner with us? Looking for larger deployments or custom fine-tuning? Let's collaborate on the right setup for your workloads.
"Qubrid enabled us to deploy production AI agents with reliable tool-calling and step tracing. We now ship agents faster with full visibility into every decision and API call."
AI Agents Team
Agent Systems & Orchestration
