
Qwen/Qwen3.6-27B

Qwen3.6-27B is a medium-tier Qwen 3.6 vision-language model for multimodal chat and reasoning workloads across text, image, and video inputs.

Alibaba Cloud · Vision · 256K token context (up to 1M)

api_example.sh

curl -X POST "https://platform.qubrid.com/v1/chat/completions" \
  -H "Authorization: Bearer $QUBRID_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
  "model": "Qwen/Qwen3.6-27B",
  "messages": [
    {
      "role": "user",
      "content": [
        {
          "type": "text",
          "text": "What is in this image? Describe the main elements."
        },
        {
          "type": "image_url",
          "image_url": {
            "url": "https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg"
          }
        }
      ]
    }
  ],
  "max_tokens": 8192,
  "temperature": 0.6,
  "stream": true,
  "top_p": 0.95
}'
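Because the request above sets "stream": true, the response arrives as a stream of server-sent events rather than a single JSON body. A minimal sketch of consuming that stream, assuming the endpoint follows the common OpenAI-compatible SSE format ("data: {...}" lines terminated by "data: [DONE]"); the function name and sample payloads are illustrative, not part of any official SDK:

```python
import json

def parse_sse_chunks(lines):
    """Yield the content deltas from raw SSE lines of a streamed completion."""
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip keep-alives and blank lines
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"].get("content", "")
        if delta:
            yield delta

# Illustrative sample of two streamed chunks:
sample = [
    'data: {"choices":[{"delta":{"content":"The image shows "}}]}',
    'data: {"choices":[{"delta":{"content":"the Statue of Liberty."}}]}',
    'data: [DONE]',
]
print("".join(parse_sse_chunks(sample)))
# → The image shows the Statue of Liberty.
```

Concatenating the deltas in order reconstructs the full assistant message; in a real client you would print each delta as it arrives for real-time output.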

Pricing

Pay-per-use, no commitments

Input Tokens $0.60/1M Tokens
Output Tokens $3.60/1M Tokens
Cached Input Tokens $0.00/1M Tokens
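At these rates, the cost of a single request is easy to estimate from its token counts. A minimal sketch (rates copied from the table above; estimate_cost is an illustrative helper, not part of any SDK):

```python
def estimate_cost(input_tokens, output_tokens, cached_input_tokens=0):
    """Estimate request cost in USD using the per-million-token rates above."""
    INPUT_RATE = 0.60 / 1_000_000    # $0.60 per 1M input tokens
    OUTPUT_RATE = 3.60 / 1_000_000   # $3.60 per 1M output tokens
    CACHED_RATE = 0.00 / 1_000_000   # cached input tokens are free
    return (input_tokens * INPUT_RATE
            + output_tokens * OUTPUT_RATE
            + cached_input_tokens * CACHED_RATE)

# A request with 10K input tokens and 2K output tokens:
print(f"${estimate_cost(10_000, 2_000):.4f}")  # → $0.0132
```

Note that output tokens dominate cost at these rates, so max_tokens is the main lever for bounding per-request spend.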

Technical Specifications

Model Architecture & Performance

Variant Instruct
Model Size 27B params
Context Length 256K Tokens (up to 1M)
Quantization bf16
Tokens/sec 100
Architecture Qwen 3.6 transformer architecture for multimodal reasoning and instruction following
Precision bf16
License Apache 2.0
Release Date 2026
Developers Alibaba Cloud (QwenLM)

API Reference

Complete parameter documentation

Parameter Type Default Description
stream boolean true Enable streaming responses for real-time output.
temperature number 0.6 Use 0.6 for non-thinking tasks, 1.0 for thinking/reasoning tasks.
max_tokens number 8192 Maximum number of tokens to generate.
top_p number 0.95 Nucleus sampling parameter.
top_k number 20 Limits token sampling to top-k candidates.
enable_thinking boolean false Toggle chain-of-thought reasoning mode. Set temperature=1.0 when enabled.

Explore the full request and response schema in our external API documentation
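The coupling between enable_thinking and temperature noted in the table can be captured in a small request builder. A sketch (build_request is a hypothetical helper following the parameter table above, not part of an official SDK; defaults are taken from the table):

```python
def build_request(prompt, thinking=False):
    """Build a chat-completions request body using the documented defaults."""
    return {
        "model": "Qwen/Qwen3.6-27B",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 8192,
        "top_p": 0.95,
        "top_k": 20,
        "stream": True,
        "enable_thinking": thinking,
        # Per the table: 0.6 for non-thinking tasks, 1.0 when thinking is enabled.
        "temperature": 1.0 if thinking else 0.6,
    }

body = build_request("Prove that sqrt(2) is irrational.", thinking=True)
print(body["temperature"])  # → 1.0
```

The builder keeps the two settings in sync so a caller cannot enable thinking mode while leaving the lower non-thinking temperature in place.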

Performance

Strengths & considerations

Strengths

Strong multimodal capability for text, image, and video inputs
Balanced latency and quality for production chat workloads
Thinking mode support for deeper reasoning
Long-context support for complex tasks
Open-source model family compatibility
Broad multilingual support

Considerations

Thinking mode increases latency and output verbosity
Large multimodal inputs can increase runtime cost
May trail larger models on peak benchmark tasks
Throughput depends on deployment and hardware profile

Use cases

Recommended applications for this model

Agentic coding and software development
Multimodal chat (text, images, video)
Complex reasoning and analysis
Long-context document processing
Enterprise assistants and workflows
Fine-tuning for specialized domains

Enterprise
Platform Integration

Docker

Docker Support

Official Docker images for containerized deployments

Kubernetes

Kubernetes Ready

Production-grade Kubernetes manifests and Helm charts

SDK

SDK Libraries

Official SDKs for Python, JavaScript, Go, and Java

Don't let your AI control you. Control your AI the Qubrid way!

Have questions? Want to partner with us? Looking for larger deployments or custom fine-tuning? Let's collaborate on the right setup for your workloads.

"Qubrid AI reduced our document processing time by over 60% and significantly improved retrieval accuracy across our RAG workflows."

Enterprise AI Team

Document Intelligence Platform