Qwen/Qwen3.5-35B-A3B
Qwen3.5-35B-A3B is the breakout model of the Qwen3.5 Medium Series and arguably the biggest efficiency breakthrough in recent open-source AI. Despite having only 3B active parameters per token (8.6% of total), it outperforms the previous generation's 235B model on most benchmarks, as well as GPT-5 mini and Claude Sonnet 4.5 on knowledge (MMMLU) and visual reasoning (MMMU-Pro). It runs on an 8GB GPU and supports 256K context natively.
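A minimal sketch of an API call, assuming an OpenAI-compatible chat-completions endpoint; the URL below is a placeholder, not a documented Qubrid address. The sampling values come from the parameter table on this page.

```python
import json

# Placeholder endpoint: substitute the real URL from the API documentation.
API_URL = "https://api.example.com/v1/chat/completions"

# Request body using the documented defaults for Qwen3.5-35B-A3B.
payload = {
    "model": "Qwen/Qwen3.5-35B-A3B",
    "messages": [
        {"role": "user", "content": "Summarize MoE routing in two sentences."}
    ],
    "stream": True,
    "temperature": 0.6,       # non-thinking mode default
    "max_tokens": 8192,
    "top_p": 0.95,
    "top_k": 20,
    "enable_thinking": False,
}

body = json.dumps(payload)
print(body)
```

Send `body` as the POST payload with your API key in the `Authorization` header.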
Pricing
Pay-per-use, no commitments
Technical Specifications
Model Architecture & Performance
API Reference
Complete parameter documentation
| Parameter | Type | Default | Description |
|---|---|---|---|
| stream | boolean | true | Enable streaming responses for real-time output. |
| temperature | number | 0.6 | Use 0.6 for non-thinking mode, 1.0 for thinking/reasoning mode. |
| max_tokens | number | 8192 | Maximum number of tokens to generate. |
| top_p | number | 0.95 | Nucleus sampling parameter. |
| top_k | number | 20 | Limits token sampling to top-k candidates. |
| enable_thinking | boolean | false | Toggle chain-of-thought reasoning. Set temperature=1.0 when enabled. |
Explore the full request and response schema in our external API documentation
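The table couples two parameters: enabling thinking mode changes the recommended temperature. A small helper (illustrative only, not an official SDK function) keeps the two consistent:

```python
def sampling_params(enable_thinking: bool) -> dict:
    """Return sampling settings matching the parameter table:
    temperature 0.6 for non-thinking mode, 1.0 when thinking is enabled."""
    return {
        "enable_thinking": enable_thinking,
        "temperature": 1.0 if enable_thinking else 0.6,
        "top_p": 0.95,
        "top_k": 20,
    }

print(sampling_params(True))   # thinking mode -> temperature 1.0
print(sampling_params(False))  # non-thinking mode -> temperature 0.6
```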
Performance
Strengths & considerations
| Strengths | Considerations |
|---|---|
| Beats Qwen3-235B-A22B with only 3B active params: historic efficiency | MoE routing overhead vs dense models on short contexts |
| Outperforms GPT-5 mini and Claude Sonnet 4.5 on MMMLU and MMMU-Pro | 4-bit quantization needed for edge/consumer deployment |
| 35B total / 3B active: 256 experts, 8 routed + 1 shared per token | Thinking mode generates verbose traces that increase latency |
| Runs on 8GB GPU (4-bit quantization) or 22GB Mac M-series | Requires framework support for hybrid DeltaNet attention |
| 256K context natively, extensible to 1M tokens | |
| Near-lossless 4-bit quantization | |
| Apache 2.0 license: fully open source | |
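The headline efficiency figures above reduce to simple arithmetic; a quick sanity check using only the numbers stated on this page:

```python
TOTAL_PARAMS_B = 35.0   # total parameters, billions
ACTIVE_PARAMS_B = 3.0   # active parameters per token, billions

# Fraction of weights active per token: 3 / 35 ~ 8.6%, as claimed.
active_pct = round(ACTIVE_PARAMS_B / TOTAL_PARAMS_B * 100, 1)
print(f"active fraction: {active_pct}%")  # 8.6%

# Experts touched per token: 8 routed + 1 shared, out of 256 total.
ROUTED, SHARED, TOTAL_EXPERTS = 8, 1, 256
experts_per_token = ROUTED + SHARED
print(f"{experts_per_token} of {TOTAL_EXPERTS} experts per token")
```

Only the ~8.6% of weights selected by the router participate in each forward step, which is why per-token compute tracks a 3B dense model rather than a 35B one.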
Use cases
Recommended applications for this model
Build with Qwen/Qwen3.5-35B-A3B faster
Get deployment recipes, benchmark alerts, and GPU pricing updates for Qwen/Qwen3.5-35B-A3B and other vision models straight from the Qubrid team.
Enterprise
Platform Integration
Docker Support
Official Docker images for containerized deployments
Kubernetes Ready
Production-grade Kubernetes manifests and Helm charts
SDK Libraries
Official SDKs for Python, JavaScript, Go, and Java
Don't let your AI control you. Control your AI the Qubrid way!
Have questions? Want to Partner with us? Looking for larger deployments or custom fine-tuning? Let's collaborate on the right setup for your workloads.
"Qubrid's medical OCR and research parsing cut our document extraction time in half. We now have traceable pipelines and reproducible outputs that meet our compliance requirements."
Clinical AI Team
Research & Clinical Intelligence
