Qwen/Qwen3-VL-235B-A22B-Thinking

Qwen3-VL-235B-A22B-Thinking is the most powerful vision-language model in the Qwen series. With 235B total parameters (22B active per token) and deep reasoning capabilities in thinking mode, it excels in multimodal STEM/math reasoning, visual agent tasks, GUI automation, spatial perception, long video comprehension, and multilingual OCR across 32 languages.

Alibaba Cloud · Vision · 256K Tokens (up to 1M)

api_example.sh

curl -X POST "https://platform.qubrid.com/v1/chat/completions" \
  -H "Authorization: Bearer QUBRID_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
  "model": "Qwen/Qwen3-VL-235B-A22B-Thinking",
  "messages": [
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "What is in this image? Describe the main elements."
        },
        {
          "type": "image_url",
          "image_url": {
            "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
          }
        }
      ]
    }
  ],
  "max_tokens": 4096,
  "temperature": 0.7,
  "stream": true,
  "top_p": 0.9
}'
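The same request can be made from Python. This is a minimal sketch using only the standard library, assuming the endpoint is OpenAI-compatible as the curl example suggests; `QUBRID_API_KEY` is read from the environment, and the response shape (`choices[0].message.content`) is the usual OpenAI-style layout, not something confirmed here.

```python
import json
import os
import urllib.request

API_URL = "https://platform.qubrid.com/v1/chat/completions"
MODEL = "Qwen/Qwen3-VL-235B-A22B-Thinking"

def build_payload(prompt: str, image_url: str, stream: bool = False) -> dict:
    """Assemble an OpenAI-style multimodal chat payload (text + image URL)."""
    return {
        "model": MODEL,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
        "max_tokens": 4096,
        "temperature": 0.7,
        "top_p": 0.9,
        "stream": stream,
    }

def ask(prompt: str, image_url: str) -> str:
    """Send a non-streaming request and return the model's reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(prompt, image_url)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {os.environ['QUBRID_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]
```

Set `stream=True` in `build_payload` only if your client parses streamed chunks; the non-streaming form above returns one complete JSON response.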

Pricing

Pay-per-use, no commitments

Input Tokens $0.40/1M Tokens
Output Tokens $4.00/1M Tokens
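At these rates, per-request cost is straightforward to estimate. A small illustrative helper, with the rates copied from the table above:

```python
INPUT_RATE = 0.40 / 1_000_000   # USD per input token
OUTPUT_RATE = 4.00 / 1_000_000  # USD per output token

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the USD cost of one request at the listed pay-per-use rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# e.g. a 2,000-token prompt (image + text) with a 1,000-token reply
# costs roughly $0.0048.
```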

Technical Specifications

Model Architecture & Performance

Variant Thinking
Model Size 235B params (22B active)
Context Length 256K Tokens (up to 1M)
Quantization bf16 / FP8
Tokens/sec 60
Architecture MoE Transformer with ViT visual encoder, Interleaved-MRoPE, DeepStack feature fusion
Precision bf16 / FP8
License Apache 2.0
Release Date September 2025
Developers Alibaba Cloud (QwenLM)

API Reference

Complete parameter documentation

Parameter Type Default Description
stream boolean true Enable streaming responses for real-time output.
temperature number 0.7 Controls randomness in output.
max_tokens number 4096 Maximum tokens to generate.
top_p number 0.9 Controls nucleus sampling.
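With `stream` enabled, responses arrive incrementally rather than as one JSON body. Assuming the stream follows the common OpenAI-style server-sent-events convention (`data: {...}` lines with per-chunk `delta` objects, terminated by `data: [DONE]`), which is not confirmed by this page, a parser sketch looks like:

```python
import json
from typing import Iterable, Iterator

def iter_stream_text(lines: Iterable[str]) -> Iterator[str]:
    """Yield text deltas from OpenAI-style SSE lines ("data: {...}")."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip keep-alives and blank separators
        data = line[len("data:"):].strip()
        if data == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(data)
        delta = chunk["choices"][0]["delta"].get("content")
        if delta:
            yield delta
```

Feed it the decoded lines of the HTTP response body and join the yielded pieces to reconstruct the full reply.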

Explore the full request and response schema in our external API documentation

Performance

Strengths & considerations

Strengths
Thinking mode for deep reasoning
Native 256K context (expandable to 1M)
DeepStack multi-level ViT feature fusion
Interleaved-MRoPE for video temporal reasoning
Rivals Gemini 2.5 Pro on perception benchmarks

Considerations
Very high GPU memory requirements
Slower due to thinking mode overhead
Not suitable for real-time low-latency tasks

Use cases

Recommended applications for this model

Visual STEM/Math reasoning
GUI automation & visual agents
Multimodal coding from images/video
Long video understanding
Multilingual OCR (32 languages)
3D grounding & spatial reasoning

Build with Qwen/Qwen3-VL-235B-A22B-Thinking faster

Get deployment recipes, benchmark alerts, and GPU pricing updates for Qwen/Qwen3-VL-235B-A22B-Thinking and other vision models straight from the Qubrid team.

Enterprise
Platform Integration

Docker

Docker Support

Official Docker images for containerized deployments

Kubernetes

Kubernetes Ready

Production-grade Kubernetes manifests and Helm charts

SDK

SDK Libraries

Official SDKs for Python, JavaScript, Go, and Java

Don't let your AI control you. Control your AI the Qubrid way!

Have questions? Want to Partner with us? Looking for larger deployments or custom fine-tuning? Let's collaborate on the right setup for your workloads.

"Qubrid's medical OCR and research parsing cut our document extraction time in half. We now have traceable pipelines and reproducible outputs that meet our compliance requirements."

Clinical AI Team

Research & Clinical Intelligence