
nvidia/Nemotron-3-Nano-Omni

Nemotron-3-Nano-Omni is NVIDIA's base omni model, served here for efficient text reasoning and agentic workflows, with built-in support for tool calling and chain-of-thought reasoning.


api_example.sh

curl -X POST "https://platform.qubrid.com/v1/chat/completions" \
  -H "Authorization: Bearer $QUBRID_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
  "model": "nvidia/Nemotron-3-Nano-Omni",
  "messages": [
    {
      "role": "user",
      "content": "Explain quantum computing in simple terms"
    }
  ],
  "temperature": 0.3,
  "max_tokens": 8192,
  "stream": true,
  "top_p": 1
}'
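With "stream": true, the endpoint returns incremental chunks rather than a single JSON body. A minimal sketch of assembling the streamed text, assuming the OpenAI-compatible SSE framing (`data:` lines carrying `choices[0].delta` fragments, terminated by `data: [DONE]`) — the sample lines below are illustrative, not captured output:

```python
import json

def collect_stream_text(sse_lines):
    """Assemble assistant text from OpenAI-style SSE 'data:' lines."""
    parts = []
    for line in sse_lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(payload)
        delta = chunk["choices"][0].get("delta", {})
        parts.append(delta.get("content", "") or "")
    return "".join(parts)

sample = [
    'data: {"choices":[{"delta":{"content":"Hello"}}]}',
    'data: {"choices":[{"delta":{"content":" world"}}]}',
    'data: [DONE]',
]
print(collect_stream_text(sample))  # Hello world
```

In a real client you would iterate over the HTTP response body line by line instead of a list.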

Pricing

Pay-per-use, no commitments

Input Tokens $0.06/1M Tokens
Output Tokens $0.24/1M Tokens
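A quick way to budget a workload at these per-million-token rates (the rates are from the table above; the traffic figures in the example are hypothetical):

```python
def cost_usd(input_tokens, output_tokens,
             in_rate=0.06, out_rate=0.24):
    """Estimate cost at the listed $/1M-token rates."""
    return (input_tokens / 1_000_000) * in_rate \
         + (output_tokens / 1_000_000) * out_rate

# e.g. 2M input tokens + 500K output tokens
print(f"${cost_usd(2_000_000, 500_000):.2f}")  # $0.24
```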

Technical Specifications

Model Architecture & Performance

Variant Base
Model Size Undisclosed
Context Length 300K Tokens
Quantization FP8
Tokens/sec 90
Architecture NVIDIA Nemotron-3 Nano Omni architecture
Precision FP8
License nvidia-open-model-license
Release Date 2026
Developers NVIDIA

API Reference

Complete parameter documentation

Parameter Type Default Description
stream boolean true Enable streaming responses for real-time output.
temperature number 0.3 Controls randomness. Higher values mean more creative but less predictable output.
max_tokens number 8192 Maximum number of tokens to generate in the response.
top_p number 1 Nucleus sampling: considers tokens with top_p probability mass.
enable_thinking boolean true Enable chain-of-thought reasoning traces before final response.

Explore the full request and response schema in our external API documentation
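The defaults in the table above can be collected into a small payload builder, so individual requests only override what they need. A sketch assuming the chat-completions request shape shown in the curl example; `build_request` is a helper name of our own, not part of any SDK:

```python
def build_request(messages, **overrides):
    """Compose a chat-completions payload using the documented defaults."""
    payload = {
        "model": "nvidia/Nemotron-3-Nano-Omni",
        "stream": True,             # real-time streamed output
        "temperature": 0.3,
        "max_tokens": 8192,
        "top_p": 1,
        "enable_thinking": True,    # chain-of-thought traces on
        "messages": messages,
    }
    payload.update(overrides)       # per-call overrides win
    return payload

req = build_request([{"role": "user", "content": "Hi"}], temperature=0.7)
print(req["temperature"])  # 0.7
```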

Performance

Strengths & considerations

Strengths
Low-cost token pricing for high-volume workloads
Reasoning and tool calling available
90 tokens/sec throughput profile
FP8 quantization for efficient inference
Responses API compatibility

Considerations
Configured as text-to-text in this endpoint
Region-specific availability (eu-north1 in source panel)

Use cases

Recommended applications for this model

Agentic assistants and workflow automation
Reasoning-heavy chat applications
Tool-calling and function-driven flows
Long-context text understanding up to 300K tokens
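For the tool-calling use case, a request typically attaches a `tools` array alongside `messages`. A sketch assuming the OpenAI-compatible function-tool schema; the tool name (`get_weather`) and its parameters are purely illustrative:

```python
# Hypothetical function tool in the OpenAI-compatible "tools" schema.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"location": {"type": "string"}},
            "required": ["location"],
        },
    },
}]

request = {
    "model": "nvidia/Nemotron-3-Nano-Omni",
    "messages": [{"role": "user", "content": "Weather in Oslo?"}],
    "tools": tools,
}
```

When the model decides to call a tool, the response carries the chosen function name and JSON arguments for your code to execute and feed back as a `tool` role message.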

Enterprise
Platform Integration

Docker

Docker Support

Official Docker images for containerized deployments

Kubernetes

Kubernetes Ready

Production-grade Kubernetes manifests and Helm charts

SDK

SDK Libraries

Official SDKs for Python, JavaScript, Go, and Java

Don't let your AI control you. Control your AI the Qubrid way!

Have questions? Want to partner with us? Looking for larger deployments or custom fine-tuning? Let's collaborate on the right setup for your workloads.

"Qubrid's medical OCR and research parsing cut our document extraction time in half. We now have traceable pipelines and reproducible outputs that meet our compliance requirements."

Clinical AI Team

Research & Clinical Intelligence