Qwen/Qwen3.5-Flash

Qwen3.5-Flash is the production-hosted, closed-source API version of the Qwen3.5-35B-A3B model, served via Alibaba Cloud Model Studio. It delivers frontier-adjacent intelligence at roughly 1/13th the cost of Claude Sonnet 4.6, with a 1M token context window, native tool calling, built-in web search, and code interpreter support. Responses are 6x faster than Claude Sonnet 4.6 with competitive quality on agentic benchmarks.

Provider: Alibaba Cloud (Qwen) · Modality: Vision · Context: 1M tokens (API) / 256K tokens (self-hosted base)
Free trial credit: $1.00 (no credit card required)

api_example.sh

curl -X POST "https://platform.qubrid.com/v1/chat/completions" \
  -H "Authorization: Bearer $QUBRID_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
  "model": "Qwen/Qwen3.5-Flash",
  "messages": [
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "What is in this image? Describe the main elements."
        },
        {
          "type": "image_url",
          "image_url": {
            "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
          }
        }
      ]
    }
  ],
  "max_tokens": 8192,
  "temperature": 0.6,
  "stream": true,
  "top_p": 0.95
}'
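The same request can be issued from Python. The sketch below builds the payload with the standard library only and posts it to the OpenAI-compatible endpoint from the curl example above; the `build_payload`/`send` helper names are illustrative, and streaming is disabled here for simplicity.

```python
import json
import os
import urllib.request

API_URL = "https://platform.qubrid.com/v1/chat/completions"

def build_payload(prompt: str, image_url: str) -> dict:
    """Assemble a vision chat request matching the curl example above."""
    return {
        "model": "Qwen/Qwen3.5-Flash",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
        "max_tokens": 8192,
        "temperature": 0.6,
        "top_p": 0.95,
        "stream": False,  # set True for incremental (server-sent event) output
    }

def send(payload: dict) -> dict:
    """POST the payload; expects QUBRID_API_KEY in the environment."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ['QUBRID_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_payload(
    "What is in this image? Describe the main elements.",
    "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg",
)
```

Because the endpoint follows the OpenAI chat-completions schema, any OpenAI-compatible client library pointed at this base URL should also work.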

Technical Specifications

Model Architecture & Performance

Variant: Flash (Hosted API)
Model Size: 35B params (3B active), hosted
Context Length: 1M tokens (API) / 256K tokens (self-hosted base)
Quantization: Proprietary
Tokens/Second: 300
Architecture: Hybrid Gated DeltaNet + Sparse MoE (hosted, proprietary serving infrastructure)
Precision: Proprietary (served via Alibaba Cloud inference infrastructure)
License: Proprietary, Alibaba Cloud Model Studio API only
Release Date: February 2026
Developers: Alibaba Cloud (QwenLM)

Pricing

Pay-per-use, no commitments

Input Tokens: $0.0001/1K tokens ($0.10/M)
Output Tokens: $0.0004/1K tokens ($0.40/M)
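At these rates a request's cost is simple arithmetic. The helper below (hypothetical, not part of any Qubrid SDK) estimates the charge for one request from its token counts, using the per-1K prices listed above.

```python
INPUT_PRICE_PER_1K = 0.0001   # USD per 1K input tokens
OUTPUT_PRICE_PER_1K = 0.0004  # USD per 1K output tokens

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed pay-per-use rates."""
    return (input_tokens / 1000) * INPUT_PRICE_PER_1K \
         + (output_tokens / 1000) * OUTPUT_PRICE_PER_1K

# Example: a 100K-token prompt with a 2K-token reply costs
# 100 * $0.0001 + 2 * $0.0004 = $0.0108
```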

API Reference

Complete parameter documentation

| Parameter | Type | Default | Description |
|---|---|---|---|
| stream | boolean | true | Enable streaming responses for real-time output. |
| temperature | number | 0.6 | Controls randomness. Use 0.6 for non-thinking tasks, 1.0 for thinking/reasoning tasks. |
| max_tokens | number | 8192 | Maximum number of tokens the model can generate. |
| top_p | number | 0.95 | Nucleus sampling threshold for more predictable output. |
| top_k | number | 20 | Limits sampling to the top-k candidate tokens. |
| enable_thinking | boolean | false | Toggles chain-of-thought reasoning mode. Use temperature=1.0 when thinking is enabled. |
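As the parameter documentation above notes, `enable_thinking` changes the recommended sampling temperature. A minimal sketch of choosing request parameters accordingly (the `sampling_params` helper name is illustrative, not an SDK function):

```python
def sampling_params(enable_thinking: bool) -> dict:
    """Pick defaults per the parameter table: temperature 1.0 when
    chain-of-thought reasoning is enabled, 0.6 otherwise."""
    return {
        "enable_thinking": enable_thinking,
        "temperature": 1.0 if enable_thinking else 0.6,
        "top_p": 0.95,
        "top_k": 20,
        "max_tokens": 8192,
    }
```

Merge the returned dict into the request body alongside `model` and `messages`.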

Explore the full request and response schema in our external API documentation

Performance

Strengths & considerations

Strengths:
- 1M token context window (no RAG chunking needed)
- 1/13th the cost of Claude Sonnet 4.6 ($0.10/M input)
- 6x faster responses than Claude Sonnet 4.6
- Built-in official tools: web search, code interpreter
- Supports thinking and non-thinking modes
- Native function calling and structured output

Considerations:
- Closed-source: no self-hosting or weight access
- Maps to the 35B-A3B capability tier, not the full 397B
- Requires Alibaba Cloud API access
- Production features such as the 1M context window are only available on the hosted API
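The native function calling mentioned above uses the OpenAI-style `tools` schema exposed by the chat-completions endpoint. A hedged sketch of a tool definition and request body; the `get_weather` tool is invented purely for illustration:

```python
# Hypothetical tool definition in the OpenAI-style "tools" schema.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City name"},
                },
                "required": ["city"],
            },
        },
    }
]

request_body = {
    "model": "Qwen/Qwen3.5-Flash",
    "messages": [
        {"role": "user", "content": "What's the weather in Hangzhou?"}
    ],
    "tools": tools,
    "tool_choice": "auto",  # let the model decide whether to call the tool
}
```

When the model elects to call the tool, the response carries `tool_calls` entries whose arguments your code executes before returning the result in a `tool` role message.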

Use cases

Recommended applications for this model

High-volume agentic workflows
Cost-efficient RAG pipelines
Real-time chatbots and assistants
Code generation and review
Tool-calling automation
Large-document analysis

Enterprise
Platform Integration

Docker

Docker Support

Official Docker images for containerized deployments

Kubernetes

Kubernetes Ready

Production-grade Kubernetes manifests and Helm charts

SDK

SDK Libraries

Official SDKs for Python, JavaScript, Go, and Java


Have questions? Want to Partner with us? Looking for larger deployments or custom fine-tuning? Let's collaborate on the right setup for your workloads.
