
Qwen/Qwen3.5-Plus

Qwen3.5-Plus is the production-hosted API version of Qwen3.5-397B-A17B — Alibaba's flagship multimodal MoE model with 397B total and 17B active parameters. Served via Alibaba Cloud Model Studio, it delivers frontier-class reasoning, coding, and vision at a fraction of the cost of GPT-5 and Claude Opus. It features a 1M-token context window, native tool calling, a built-in web search tool and code interpreter, and supports both Thinking and non-Thinking modes.

Alibaba Cloud | Vision | 1M Tokens (API) / 262K Tokens (self-hosted base)
Free trial credit: $1.00 on first top-up of at least $5

api_example.sh

curl -X POST "https://platform.qubrid.com/v1/chat/completions" \
  -H "Authorization: Bearer QUBRID_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
  "model": "Qwen/Qwen3.5-397B-A17B",
  "messages": [
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "What is in this image? Describe the main elements."
        },
        {
          "type": "image_url",
          "image_url": {
            "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
          }
        }
      ]
    }
  ],
  "max_tokens": 8192,
  "temperature": 0.6,
  "stream": true,
  "top_p": 0.95
}'
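With `stream: true`, the response arrives as a stream of chunks. Assuming the endpoint follows the common OpenAI-style `data: {...}` server-sent-event format (an assumption, not confirmed on this page), the incremental text can be pulled out with a small filter. The sketch below runs on two captured sample chunks rather than a live call:

```shell
# Extract incremental text from OpenAI-style SSE chunks.
# Assumption: each chunk is a "data: {json}" line whose delta text sits in a
# "content" field, and "[DONE]" marks the end of the stream.
extract_deltas() {
  sed -n 's/^data: //p' \
    | grep -v '^\[DONE\]$' \
    | sed -n 's/.*"content":"\([^"]*\)".*/\1/p'
}

# Demonstrate on captured chunks instead of a live request:
printf '%s\n' \
  'data: {"choices":[{"delta":{"content":"Hello"}}]}' \
  'data: {"choices":[{"delta":{"content":" world"}}]}' \
  'data: [DONE]' | extract_deltas
# → prints "Hello" and " world" on separate lines
```

In a live pipeline you would feed the curl output above straight into `extract_deltas`; a JSON-aware tool such as jq is more robust than sed if the content can contain escaped quotes.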

Technical Specifications

Model Architecture & Performance

Variant Plus (Hosted API)
Model Size 397B params (17B active) — hosted
Context Length 1M Tokens (API) / 262K Tokens (self-hosted base)
Quantization Proprietary
Tokens/sec 80
Architecture Hybrid Gated DeltaNet + Sparse MoE (397B total / 17B active, hosted via Alibaba Cloud inference infrastructure)
Precision Proprietary (served via Alibaba Cloud inference infrastructure)
License Proprietary — Alibaba Cloud Model Studio API only
Release Date February 2026
Developers Alibaba Cloud (QwenLM)

Pricing

Pay-per-use, no commitments

Input Tokens $0.40/1M Tokens
Output Tokens $2.40/1M Tokens
Cached Input Tokens $0.50/1M Tokens
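At the listed rates, a back-of-envelope cost check is straightforward. The helper below is an illustrative sketch (the function name and token counts are made up; the rates are the uncached input/output prices from the table above):

```shell
# Estimate one request's cost from the listed rates:
# $0.40 per 1M input tokens, $2.40 per 1M output tokens (cache ignored).
estimate_cost() {  # usage: estimate_cost <input_tokens> <output_tokens>
  awk -v in_t="$1" -v out_t="$2" \
    'BEGIN { printf "%.4f\n", in_t / 1e6 * 0.40 + out_t / 1e6 * 2.40 }'
}

estimate_cost 500000 8192   # 500K-token prompt, 8K completion → 0.2197
```

Even a half-million-token prompt with a full 8K completion lands around two tenths of a dollar at these rates.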

API Reference

Complete parameter documentation

Parameter Type Default Description
stream boolean true Enable streaming responses for real-time output.
temperature number 0.6 Controls randomness. Use 0.6 for non-thinking tasks, 1.0 for thinking/reasoning tasks.
max_tokens number 8192 Maximum number of tokens the model can generate. Hard cap is 65,536 tokens.
top_p number 0.95 Controls nucleus sampling for more predictable output.
top_k number 20 Limits token sampling to top-k candidates.
enable_thinking boolean false Toggle chain-of-thought reasoning mode. Use temperature=1.0 when thinking is enabled.
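Per the table above, `enable_thinking` pairs with `temperature=1.0`. A minimal sketch of such a request, reusing the model ID and endpoint from the earlier example (the live curl call is left commented out so the snippet only builds and prints the payload):

```shell
# Build a thinking-mode payload: enable_thinking on, temperature raised to
# 1.0 as recommended in the parameter table above.
payload='{
  "model": "Qwen/Qwen3.5-397B-A17B",
  "messages": [{"role": "user", "content": "Prove that sqrt(2) is irrational."}],
  "enable_thinking": true,
  "temperature": 1.0,
  "max_tokens": 8192
}'
echo "$payload"

# To send it (requires a valid key in QUBRID_API_KEY):
# curl -X POST "https://platform.qubrid.com/v1/chat/completions" \
#   -H "Authorization: Bearer $QUBRID_API_KEY" \
#   -H "Content-Type: application/json" \
#   -d "$payload"
```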

Explore the full request and response schema in our external API documentation

Performance

Strengths & considerations

Strengths

1M token context window out of the box
397B total params (17B active) — flagship intelligence tier
Thinking + non-Thinking modes
Built-in official tools: web search, code interpreter
Native function calling and structured output
Competitive with GPT-5 and Claude Opus class on reasoning and coding benchmarks
Early-fusion vision-language backbone — unified text + image understanding

Considerations

Closed-source — no self-hosting or weight access via this endpoint
Max output capped at 65,536 tokens
1M context and built-in tools only available on hosted API
Requires Alibaba Cloud API access

Use cases

Recommended applications for this model

Frontier-quality agentic workflows
Complex multi-step reasoning and planning
Long-document analysis (up to 1M tokens)
Advanced code generation and debugging
Multimodal understanding (images, PDFs, docs)
Tool-calling and structured output pipelines
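For the tool-calling pipelines listed above, the request carries a `tools` array. Assuming the endpoint accepts the widely used OpenAI-style function schema (an assumption; check the linked API documentation), a minimal sketch looks like this. The `get_weather` tool is a made-up example, and the snippet only prints the payload rather than calling the API:

```shell
# Sketch of a function-calling request. get_weather is hypothetical; the
# schema shape assumes OpenAI-compatible tool definitions.
payload='{
  "model": "Qwen/Qwen3.5-397B-A17B",
  "messages": [{"role": "user", "content": "What is the weather in Hangzhou?"}],
  "tools": [{
    "type": "function",
    "function": {
      "name": "get_weather",
      "description": "Look up current weather for a city",
      "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string"}},
        "required": ["city"]
      }
    }
  }]
}'
echo "$payload"
```

When the model decides to call the tool, the response carries the function name and JSON arguments instead of plain text; your code executes the function and sends the result back in a follow-up message.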

Enterprise
Platform Integration

Docker

Docker Support

Official Docker images for containerized deployments

Kubernetes

Kubernetes Ready

Production-grade Kubernetes manifests and Helm charts

SDK

SDK Libraries

Official SDKs for Python, JavaScript, Go, and Java

Don't let your AI control you. Control your AI the Qubrid way!

Have questions? Want to Partner with us? Looking for larger deployments or custom fine-tuning? Let's collaborate on the right setup for your workloads.

"Qubrid enabled us to deploy production AI agents with reliable tool-calling and step tracing. We now ship agents faster with full visibility into every decision and API call."

AI Agents Team

Agent Systems & Orchestration