GLM 5

GLM 5 API

Released February 2026 · 200K-token context · 744B parameters (40B active)

Documentation

GLM-5 is Zhipu AI's February 2026 flagship: a 744B-parameter sparse MoE (40B active) with interleaved (deep) thinking that fuses DeepSeek Sparse Attention and Multi-Token Prediction for frontier reasoning over a 200K-token window. The GLM 5 API targets long-horizon software-engineering agents that coordinate multi-stage tool calls with preserved thinking traces, enterprise copilots drafting technical designs or policy documents that exceed 100K tokens, and multilingual research assistants orchestrating retrieval, planning, and execution across agent workers. Standout strengths: the 744B MoE with 40B active parameters delivers frontier-level quality with efficient routing, and interleaved/deep thinking keeps intermediate reasoning available while exposing a toggle to control verbosity. The model is optimized for production agent and assistant workloads where response quality, latency, and predictable operating cost all matter.

```python
from openai import OpenAI

# Initialize the OpenAI client with the Qubrid base URL
client = OpenAI(
    base_url="https://platform.qubrid.com/v1",
    api_key="QUBRID_API_KEY",
)

stream = client.chat.completions.create(
    model="zai-org/GLM-5",
    messages=[
        {"role": "user", "content": "Explain quantum computing in simple terms"}
    ],
    max_tokens=4096,
    temperature=0.7,
    top_p=1,
    stream=True,
)

for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print("\n")
```
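The thinking-verbosity toggle mentioned above is not part of the standard OpenAI request schema. A minimal sketch of how a request might carry it, assuming the Qubrid endpoint forwards a Zhipu-style `thinking` field via the SDK's `extra_body` escape hatch (the field name and its values are assumptions, not confirmed by this page):

```python
# Sketch: building request kwargs that toggle GLM-5's interleaved thinking.
# The "thinking" field is an ASSUMED Zhipu-style extension; it would be
# forwarded unchanged by the OpenAI SDK's extra_body parameter.

def build_request(prompt: str, thinking_enabled: bool) -> dict:
    """Build kwargs for client.chat.completions.create(**kwargs)."""
    return {
        "model": "zai-org/GLM-5",
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 4096,
        # Hypothetical toggle: keep full reasoning traces, or suppress them
        # for lower latency and output cost.
        "extra_body": {
            "thinking": {"type": "enabled" if thinking_enabled else "disabled"}
        },
    }

kwargs = build_request("Plan a multi-step refactor", thinking_enabled=False)
print(kwargs["extra_body"])
```

For agent loops that reuse earlier reasoning, the trace would stay enabled; for short user-facing answers, disabling it trades transparency for speed and cost.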

Serverless

API access

INPUT: $0.80 / 1M tokens
OUTPUT: $3.13 / 1M tokens
Deploy using API
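At the posted serverless rates, per-request cost is simple arithmetic. A minimal sketch (rates copied from this page; token counts are illustrative):

```python
def estimate_cost_usd(input_tokens: int, output_tokens: int,
                      input_rate: float = 0.80,
                      output_rate: float = 3.13) -> float:
    """Estimate one request's cost at the posted $/1M-token serverless rates."""
    return (input_tokens * input_rate + output_tokens * output_rate) / 1_000_000

# A long-context prompt (100K tokens in) with a 4,096-token reply:
print(f"${estimate_cost_usd(100_000, 4_096):.4f}")  # → $0.0928
```

Because output tokens cost roughly 4x input tokens here, capping `max_tokens` (and trimming thinking verbosity) is the main lever on per-request spend.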

Dedicated

Cloud GPU VM

Price starts at $1.25 / GPU / hr
Deploy with GPU VM

Interactive

Playground

INPUT: $0.80 / 1M tokens
OUTPUT: $3.13 / 1M tokens
Chat in Playground

Enterprise
Platform Integration

Docker

Docker Support

Official Docker images for containerized deployments

Kubernetes

Kubernetes Ready

Production-grade Kubernetes manifests and Helm charts

SDK

SDK Libraries

Official SDKs for Python, JavaScript, Go, and Java

Don't let your AI control you. Control your AI the Qubrid way!

Have questions? Want to partner with us? Looking for larger deployments or custom fine-tuning? Let's collaborate on the right setup for your workloads.

"Qubrid helped us turn a collection of AI scripts into structured production workflows. We now have better reliability, visibility, and control over every run."

AI Infrastructure Team

Automation & Orchestration