
zai-org/GLM-5

GLM-5 is Zhipu AI's February 2026 flagship: a 744B-parameter sparse MoE model (40B active) with interleaved "deep thinking" that combines DeepSeek Sparse Attention and Multi-Token Prediction for frontier reasoning over a 200K-token context window.


api_example.sh

curl -X POST "https://platform.qubrid.com/v1/chat/completions" \
  -H "Authorization: Bearer $QUBRID_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
  "model": "zai-org/GLM-5",
  "messages": [
    {
      "role": "user",
      "content": "Explain quantum computing in simple terms"
    }
  ],
  "temperature": 0.7,
  "max_tokens": 4096,
  "stream": true,
  "top_p": 1
}'
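
The request above sets `stream: true`, so the response arrives as a server-sent event stream rather than a single JSON body. A minimal sketch of consuming such a stream, assuming the OpenAI-compatible `data: {...}` chunk format; the sample payload below is illustrative, not real API output:

```python
import json

def parse_sse_chunks(raw_stream: str):
    """Yield the text deltas from an OpenAI-style SSE chat stream."""
    for line in raw_stream.splitlines():
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip comments and blank keep-alive lines
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":  # sentinel that ends the stream
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"].get("content", "")
        if delta:
            yield delta

# Illustrative stream body (not captured from the live API):
sample = "\n".join([
    'data: {"choices":[{"delta":{"content":"Quantum "}}]}',
    'data: {"choices":[{"delta":{"content":"computing"}}]}',
    "data: [DONE]",
])
print("".join(parse_sse_chunks(sample)))  # -> Quantum computing
```

In a real client you would iterate over the HTTP response line by line instead of a pre-built string; the parsing logic stays the same.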

Technical Specifications

Model Architecture & Performance

| Specification | Value |
|---|---|
| Variant | Interleaved Thinking |
| Model Size | 744B params (40B active) |
| Context Length | 200K tokens |
| Quantization | FP8 / BF16 |
| Throughput | 55 tokens/sec |
| Architecture | Sparse MoE Transformer with DeepSeek Sparse Attention, 256 experts (top-8 routing), and Multi-Token Prediction heads |
| Precision | FP8 with BF16 KV cache |
| License | MIT License |
| Release Date | February 2026 |
| Developers | Z.ai (Zhipu AI) |
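
The architecture row lists 256 experts with top-8 routing. As a toy illustration of how sparse MoE gating activates only a small subset of experts per token (the gating logits and renormalization scheme here are generic, not GLM-5's actual router):

```python
import math
import random

NUM_EXPERTS = 256   # from the spec table
TOP_K = 8           # top-8 routing

def route(logits, k=TOP_K):
    """Return (expert_index, weight) pairs for the k highest-scoring experts,
    with softmax weights renormalized over the selected subset."""
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    peak = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(logits[i] - peak) for i in top]
    total = sum(exps)
    return [(i, e / total) for i, e in zip(top, exps)]

random.seed(0)
token_logits = [random.gauss(0, 1) for _ in range(NUM_EXPERTS)]
assignment = route(token_logits)
assert len(assignment) == TOP_K                       # 8 of 256 experts fire
assert abs(sum(w for _, w in assignment) - 1.0) < 1e-9  # weights sum to 1
```

Only the selected experts' FFN weights are exercised for that token, which is how a 744B-parameter model runs with roughly 40B active parameters per forward pass.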

Pricing

Pay-per-use, no commitments

| Direction | Price |
|---|---|
| Input tokens | $0.80 / 1M tokens |
| Output tokens | $3.13 / 1M tokens |
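
At these rates, per-request cost is a straightforward calculation. A small helper, with an example request size chosen purely for illustration:

```python
INPUT_PRICE = 0.80   # USD per 1M input tokens (from the pricing table)
OUTPUT_PRICE = 3.13  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request at the listed pay-per-use rates."""
    return (input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE) / 1_000_000

# e.g. a 150K-token context with a 4K-token completion:
cost = request_cost(150_000, 4_000)
print(f"${cost:.4f}")  # -> $0.1325
```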

API Reference

Complete parameter documentation

| Parameter | Type | Default | Description |
|---|---|---|---|
| `stream` | boolean | `true` | Enable streaming responses for real-time output. |
| `temperature` | number | `0.7` | Controls sampling randomness. |
| `max_tokens` | number | `4096` | Maximum number of tokens to generate. |
| `top_p` | number | `1` | Controls nucleus sampling. |
| `enable_thinking` | boolean | `false` | Toggle chain-of-thought reasoning mode; set `temperature=1.0` when enabled. |

Explore the full request and response schema in our external API documentation.
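
One interaction worth noting: the table says to set `temperature=1.0` whenever `enable_thinking` is on. A sketch of a payload builder that encodes that rule (the helper name is ours, not part of any SDK):

```python
import json

def build_request(prompt: str, thinking: bool = False) -> dict:
    """Assemble a chat-completions payload; per the parameter table,
    temperature is forced to 1.0 when enable_thinking is on."""
    return {
        "model": "zai-org/GLM-5",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 4096,
        "stream": True,
        "enable_thinking": thinking,
        "temperature": 1.0 if thinking else 0.7,  # table defaults otherwise
    }

payload = build_request("Prove that sqrt(2) is irrational.", thinking=True)
print(json.dumps(payload, indent=2))
```

The resulting dict can be passed directly as the JSON body of the `curl`-style request shown earlier.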

Performance

Strengths & considerations

Strengths:

- 744B MoE with 40B active parameters delivers frontier-level quality with efficient routing
- Interleaved/deep thinking keeps intermediate reasoning while exposing a toggle to control verbosity
- 200K-token window supports persistent context across large codebases or knowledge stores
- Trained on 28.5T tokens with upgraded tool streaming and multi-agent orchestration support

Considerations:

- Full checkpoint requires ~1.65TB of GPU memory at 200K context, limiting on-prem deployments
- Interleaved thinking increases latency and token usage when enabled
- Higher power and networking demands than slimmer GLM-4.x releases
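
The ~1.65TB figure implies multi-accelerator serving. A rough back-of-envelope, where the 80GB GPU size and the 1-byte-per-parameter FP8 weight estimate are our assumptions for illustration, not deployment guidance:

```python
FULL_FOOTPRINT_GB = 1650  # ~1.65TB at 200K context, per the considerations above
GPU_MEMORY_GB = 80        # assumed 80GB-class accelerator
PARAMS_B = 744            # total parameters, in billions

# FP8 stores ~1 byte per parameter, so weights alone are ~744GB;
# the remainder of the footprint is KV cache and activations.
fp8_weights_gb = PARAMS_B * 1
gpus_needed = -(-FULL_FOOTPRINT_GB // GPU_MEMORY_GB)  # ceiling division
print(fp8_weights_gb, gpus_needed)  # -> 744 21
```

Even before KV cache, the weights alone exceed any single accelerator, so tensor- or expert-parallel sharding across a node (or several) is unavoidable.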

Use cases

Recommended applications for this model

- Long-horizon software engineering agents coordinating multi-stage tool calls and preserved thinking traces
- Enterprise copilots drafting technical designs or policy documents that exceed 100K tokens
- Multilingual research assistants orchestrating retrieval, planning, and execution across agent workers
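
The agentic use cases above all reduce to the same loop: the model proposes tool calls, a runtime executes them, and results are fed back. A minimal sketch of that dispatch loop; the tool name and turn schema are illustrative, not the platform's actual tool-calling format:

```python
def run_agent(turns, tools):
    """Drive a scripted sequence of model 'turns' against a tool registry,
    executing tool calls and appending results to the transcript."""
    transcript = []
    for turn in turns:
        if turn["type"] == "tool_call":
            result = tools[turn["name"]](**turn["args"])
            transcript.append({"role": "tool", "name": turn["name"], "content": result})
        else:
            transcript.append({"role": "assistant", "content": turn["content"]})
    return transcript

# Hypothetical single-tool registry and a two-turn scripted exchange:
tools = {"search_code": lambda query: f"3 matches for '{query}'"}
turns = [
    {"type": "tool_call", "name": "search_code", "args": {"query": "auth middleware"}},
    {"type": "message", "content": "Found the middleware; patch drafted."},
]
log = run_agent(turns, tools)
```

In production the `turns` would come from the model's streamed output rather than a script, with the transcript sent back on each round trip.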

Enterprise
Platform Integration

Docker

Docker Support

Official Docker images for containerized deployments

Kubernetes

Kubernetes Ready

Production-grade Kubernetes manifests and Helm charts

SDK

SDK Libraries

Official SDKs for Python, JavaScript, Go, and Java

Don't let your AI control you. Control your AI the Qubrid way!

Have questions? Want to partner with us? Looking for larger deployments or custom fine-tuning? Let's collaborate on the right setup for your workloads.

"Qubrid AI reduced our document processing time by over 60% and significantly improved retrieval accuracy across our RAG workflows."

Enterprise AI Team

Document Intelligence Platform