Qwen/Qwen3.5-122B-A10B

Qwen3.5-122B-A10B is the most powerful open-source model in the Qwen3.5 Medium series. With 122B total parameters and 10B active per token across a 48-layer hybrid architecture, it delivers the strongest knowledge, vision, and function-calling performance in the medium class, scoring 86.6% on GPQA Diamond (beating GPT-5 mini's 82.8%), 72.2% on BFCL-V4 tool calling (vs GPT-5 mini's 55.5%), 92.1% on OCRBench, and 83.9% on MMMU. It supports text, image, and video input natively via early fusion.

Alibaba Cloud (Qwen) · Vision · 256K tokens (up to 1M)
Free trial credit: $1.00 (no credit card required)

api_example.sh

curl -X POST "https://platform.qubrid.com/v1/chat/completions" \
  -H "Authorization: Bearer $QUBRID_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
  "model": "Qwen/Qwen3.5-122B-A10B",
  "messages": [
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "What is in this image? Describe the main elements."
        },
        {
          "type": "image_url",
          "image_url": {
            "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
          }
        }
      ]
    }
  ],
  "max_tokens": 16384,
  "temperature": 1,
  "stream": true,
  "top_p": 0.95,
  "presence_penalty": 1.5
}'
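The same request can be composed in Python. The sketch below mirrors the curl body above and only builds the payload; sending it (with `requests`, `httpx`, or an OpenAI-compatible client pointed at the endpoint) is left to whichever HTTP library you prefer. The helper name `build_chat_request` is illustrative, not part of the API.

```python
import json

def build_chat_request(text: str, image_url: str) -> dict:
    """Build an OpenAI-style chat payload mirroring the curl example above."""
    return {
        "model": "Qwen/Qwen3.5-122B-A10B",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": text},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
        # Defaults recommended for this model (see API Reference below)
        "max_tokens": 16384,
        "temperature": 1,
        "stream": True,
        "top_p": 0.95,
        "presence_penalty": 1.5,
    }

payload = build_chat_request(
    "What is in this image? Describe the main elements.",
    "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg",
)
print(json.dumps(payload, indent=2))
```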

Technical Specifications

Model Architecture & Performance

Variant: Instruct
Model Size: 122B params (10B active)
Context Length: 256K tokens (up to 1M)
Quantization: bf16 / FP8
Tokens/Second: 159
Architecture: Hybrid Gated DeltaNet + Sparse MoE Transformer: 48 layers, 16 DeltaNet-attention cycles (3:1 ratio), 256 experts (10B active per token), early-fusion multimodal vision encoder, MTP speculative decoding
Precision: bf16 (FP8 quantized variants available)
License: Apache 2.0
Release Date: February 24, 2026
Developers: Alibaba Cloud (QwenLM)

Pricing

Pay-per-use, no commitments

Input Tokens: $0.0004 / 1K tokens
Output Tokens: $0.0032 / 1K tokens
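At these rates, per-request cost is simple arithmetic. A minimal sketch, with the rates copied from the table above (the function and its example token counts are illustrative):

```python
INPUT_RATE = 0.0004 / 1000   # USD per input token ($0.0004 per 1K)
OUTPUT_RATE = 0.0032 / 1000  # USD per output token ($0.0032 per 1K)

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the pay-per-use cost in USD for one request."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. a 200K-token input with a full 16,384-token completion:
cost = estimate_cost(200_000, 16_384)
print(f"${cost:.4f}")  # → $0.1324
```

Note that output tokens cost 8x more than input tokens, so verbose thinking-mode completions dominate the bill for short prompts.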

API Reference

Complete parameter documentation

stream (boolean, default true): Enable streaming responses for real-time output.
temperature (number, default 1): Recommended 1.0 for thinking mode; use 0.6–0.7 for non-thinking tasks.
max_tokens (number, default 16384): Maximum tokens to generate. Thinking mode may require higher values.
top_p (number, default 0.95): Nucleus sampling parameter.
top_k (number, default 20): Limits token sampling to the top-k candidates.
presence_penalty (number, default 1.5): Reduces repetition in longer outputs. 1.5 is recommended for this model.
enable_thinking (boolean, default true): Toggles chain-of-thought reasoning. Enables deep problem solving at the cost of higher latency.

Explore the full request and response schema in our external API documentation

Performance

Strengths & considerations

Strengths:
- 86.6% GPQA Diamond, beating GPT-5 mini (82.8%) by nearly 4 points
- 72.2% BFCL-V4 function calling, roughly 30% ahead of GPT-5 mini (55.5%)
- 92.1% OCRBench and 89.8% OmniDocBench: best open-weight document model
- 70.4% ScreenSpot Pro, nearly 2x Claude Sonnet 4.5 (36.2%) on GUI automation
- 48-layer hybrid DeltaNet architecture for deep reasoning
- 122B total / 10B active parameters: excellent efficiency for this capability tier
- Native multimodality: text + image + video via early fusion
- Apache 2.0 license

Considerations:
- Requires 244GB of VRAM at bf16 (3–4x A100 80GB); 60–70GB at 4-bit
- Thinking mode is verbose, generating 91M+ tokens across benchmark runs
- Higher cost per token than its 35B-A3B sibling
- Longer TTFT (~1.03s) than smaller models
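The bf16 VRAM figure above is just parameter count times bytes per parameter. A quick sanity check, covering weights only (KV cache, activations, and framework overhead come on top, which is why the 4-bit estimate above runs slightly higher than the raw number):

```python
PARAMS = 122e9  # total parameters (122B)

def weight_vram_gb(bytes_per_param: float) -> float:
    """Weights-only VRAM in GB for a given numeric precision."""
    return PARAMS * bytes_per_param / 1e9

print(weight_vram_gb(2))    # bf16: 2 bytes/param → 244.0 GB
print(weight_vram_gb(0.5))  # 4-bit: 0.5 bytes/param → 61.0 GB
```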

Use cases

Recommended applications for this model

Advanced multimodal reasoning (text + image + video)
Enterprise-grade document understanding and OCR
Complex agentic workflows with function calling
Long-horizon planning and analysis (256K context)
GUI automation (ScreenSpot Pro: 70.4 vs Claude Sonnet 4.5: 36.2)
Scientific and research-grade problem solving
RAG over massive document repositories

Enterprise
Platform Integration

Docker

Docker Support

Official Docker images for containerized deployments

Kubernetes

Kubernetes Ready

Production-grade K8s manifests and Helm charts

SDK

SDK Libraries

Official SDKs for Python, JavaScript, Go, and Java

Don't let your AI control you. Control your AI the Qubrid way!

Have questions? Want to partner with us? Looking for larger deployments or custom fine-tuning? Let's collaborate on the right setup for your workloads.

"Qubrid helped us turn a collection of AI scripts into structured production workflows. We now have better reliability, visibility, and control over every run."

AI Infrastructure Team

Automation & Orchestration