zai-org/GLM-4.7-FP8
GLM-4.7 is Z.ai's (formerly Zhipu AI) new-generation flagship model with 355B total parameters and 32B activated per forward pass. It introduces Interleaved Thinking, Preserved Thinking, and Turn-level Thinking, enabling the model to reason before taking actions and to maintain coherent state across long coding sessions. It achieves 95.7% on AIME 2025, 73.8% on SWE-bench, and 87.4% on τ²-Bench.
Technical Specifications
Model Architecture & Performance
Pricing
Pay-per-use, no commitments
API Reference
Complete parameter documentation
| Parameter | Type | Default | Description |
|---|---|---|---|
| stream | boolean | true | Enable streaming responses for real-time output. |
| temperature | number | 0.6 | Controls randomness. Lower values are recommended for reasoning and coding tasks. |
| max_tokens | number | 4096 | Maximum number of tokens to generate. |
| top_p | number | 1 | Controls nucleus sampling. |
| enable_thinking | boolean | true | Enable Interleaved Thinking mode. The model thinks before every response and tool call for improved accuracy. |
Explore the full request and response schema in our external API documentation
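The parameters above can be combined into a single chat-completions request. The sketch below builds a request body using the documented defaults; the endpoint URL, auth header, and environment variable name are illustrative assumptions, so check the API documentation for the real values before sending.

```shell
#!/usr/bin/env sh
# Hypothetical request to GLM-4.7-FP8 using the documented parameters.
# Endpoint URL and QUBRID_API_KEY are assumptions, not confirmed values.

API_KEY="${QUBRID_API_KEY:-sk-placeholder}"

# Request body: model name plus the defaults from the parameter table.
BODY=$(cat <<'JSON'
{
  "model": "zai-org/GLM-4.7-FP8",
  "stream": true,
  "temperature": 0.6,
  "max_tokens": 4096,
  "top_p": 1,
  "enable_thinking": true,
  "messages": [
    {"role": "user", "content": "Refactor this function to be tail-recursive."}
  ]
}
JSON
)

# Inspect the payload before sending.
echo "$BODY"

# To send it (the URL below is a placeholder for the real endpoint):
# curl -s https://api.example.com/v1/chat/completions \
#   -H "Authorization: Bearer $API_KEY" \
#   -H "Content-Type: application/json" \
#   -d "$BODY"
```

Setting `enable_thinking` to `false` on a given request is how Turn-level Thinking control would be exercised, trading some reasoning accuracy for lower latency on that turn.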
Performance
Strengths & considerations
| Strengths | Considerations |
|---|---|
| Interleaved Thinking for reasoning before every action<br>Preserved Thinking retains reasoning across coding sessions<br>Turn-level control over thinking per request<br>355B-parameter MoE with only 32B active, for frontier reasoning at low cost<br>State-of-the-art mathematical performance (95.7% on AIME 2025)<br>Open-source with commercial use permitted | Very large model requiring significant infrastructure<br>FP8 inference requires hardware with native FP8 support<br>Thinking mode increases latency |
Use cases
Recommended applications for this model
Enterprise
Platform Integration
Docker Support
Official Docker images for containerized deployments
Kubernetes Ready
Production-grade Kubernetes manifests and Helm charts
SDK Libraries
Official SDKs for Python, JavaScript, Go, and Java
Don't let your AI control you. Control your AI the Qubrid way!
Have questions? Want to Partner with us? Looking for larger deployments or custom fine-tuning? Let's collaborate on the right setup for your workloads.
"Qubrid enabled us to deploy production AI agents with reliable tool-calling and step tracing. We now ship agents faster with full visibility into every decision and API call."
AI Agents Team
Agent Systems & Orchestration
