google/gemini-2.5-flash
Gemini 2.5 Flash is a cost-efficient multimodal model designed for high-volume, low-latency tasks, with strong long-context support.
Technical Specifications
Model Architecture & Performance
Pricing
Pay-per-use, no commitments
API Reference
Complete parameter documentation
| Parameter | Type | Default | Description |
|---|---|---|---|
| stream | boolean | true | Enable streaming responses for real-time output. |
| temperature | number | 0.2 | Controls randomness. Higher values can increase creativity. |
| max_tokens | number | 8192 | Maximum number of tokens to generate in the response. |
| top_p | number | 1 | Nucleus sampling: considers tokens with top_p probability mass. |
| reasoning_effort | select | medium | Adjusts the depth of reasoning and problem-solving effort (quality vs latency). |
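The parameters in the table above can be combined into a single request body. The sketch below is a minimal illustration, assuming an OpenAI-compatible chat-completions endpoint; the URL is a placeholder, so consult the API reference for the real base URL and authentication scheme.

```python
import json

# Placeholder endpoint: this assumes an OpenAI-compatible chat-completions
# API. Replace with the real base URL from the API reference.
API_URL = "https://api.example.com/v1/chat/completions"

payload = {
    "model": "google/gemini-2.5-flash",
    "messages": [
        {"role": "user", "content": "Summarize this contract in three bullet points."}
    ],
    "stream": True,                # default per the parameter table
    "temperature": 0.2,            # low randomness for factual tasks
    "max_tokens": 8192,            # cap on generated tokens
    "top_p": 1,                    # nucleus sampling mass
    "reasoning_effort": "medium",  # quality vs. latency trade-off
}

# Serialize the request body as it would be sent over the wire.
body = json.dumps(payload)
print(body)
```

Raising `temperature` (and lowering `reasoning_effort`) trades determinism for speed and variety; the defaults shown favor consistent, low-latency output.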
Explore the full request and response schema in our external API documentation
Performance
Strengths & considerations
| Strengths | Considerations |
|---|---|
| Large context window (up to 1M input tokens) | May underperform Pro variants on very complex reasoning |
| Flash-tier pricing and efficient inference | |
| Supports function calling and structured outputs | |
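Function calling works by attaching tool definitions to the request so the model can return a structured call instead of free text. The sketch below assumes the OpenAI-compatible `tools` schema; the `get_weather` function is purely illustrative, and the exact schema accepted by the endpoint may differ.

```python
import json

# Hypothetical tool definition in the OpenAI-compatible "tools" format.
# The function name and parameters are illustrative only.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

payload = {
    "model": "google/gemini-2.5-flash",
    "messages": [{"role": "user", "content": "What's the weather in Oslo?"}],
    "tools": [weather_tool],
    "tool_choice": "auto",  # let the model decide whether to call the tool
}

print(json.dumps(payload, indent=2))
```

When the model elects to call the tool, the response carries the function name and JSON arguments, which your application executes before returning the result in a follow-up message.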
Use cases
Recommended applications for this model
Enterprise
Platform Integration
Docker Support
Official Docker images for containerized deployments
Kubernetes Ready
Production-grade Kubernetes manifests and Helm charts
SDK Libraries
Official SDKs for Python, JavaScript, Go, and Java
Don't let your AI control you. Control your AI the Qubrid way!
Have questions? Want to partner with us? Looking for larger deployments or custom fine-tuning? Let's collaborate on the right setup for your workloads.
"Qubrid AI reduced our document processing time by over 60% and significantly improved retrieval accuracy across our RAG workflows."
Enterprise AI Team
Document Intelligence Platform
