ComfyUI v0.3.50
ComfyUI is a node-based interface and inference engine for generative AI. Users can combine various AI models and operations through nodes to achieve highly customizable and controllable content generation.
Launch pre-configured AI environments in minutes. Pick a template, deploy, and start building.

DeepSeek R1 671B is one of the largest-scale open LLMs, optimized for reasoning-heavy workloads and inference research. This package includes the Open WebUI interface, enabling direct chat and interaction from your browser.
Gemma 3 is a family of large language models from Google designed for efficiency and accuracy. This image provides the 27B parameter variant. It comes with the Open WebUI interface, so you can run and interact with the model directly from your browser.
GPT OSS is an open-source variant for experimenting with GPT-style models. It is optimized to run with Ollama integration and can be extended to support custom LLMs. This package comes with the Open WebUI interface, giving you a ready-to-use browser-based environment to interact with the model.
Langflow is a powerful and intuitive platform designed for building, iterating, and deploying AI applications. Leveraging a visual interface, users can effortlessly create flows by dragging and connecting components, making AI app development accessible and efficient.
Llama 3.1 is Meta’s advanced open large language model series. This variant serves the 70B parameter model and comes packaged with Open WebUI, letting you interact with the model from an elegant browser-based chat interface.
n8n is a workflow automation platform that gives technical teams the flexibility of code with the speed of no-code. With 400+ integrations, native AI capabilities, and a fair-code license, n8n lets you build powerful automations while maintaining full control over your data and deployments.
PyTorch is a GPU-accelerated tensor computation framework. Functionality can be extended with common Python libraries such as NumPy and SciPy. Automatic differentiation is done with a tape-based system at both the functional and neural-network layer levels.
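The "tape-based" automatic differentiation mentioned above can be illustrated with a minimal sketch in plain Python (this is a toy model of the idea, not PyTorch's actual implementation): each operation records a backward closure on a shared tape, and the backward pass replays the tape in reverse to accumulate gradients.

```python
# Toy tape-based reverse-mode autodiff -- an illustrative sketch,
# not PyTorch's real autograd engine.

class Var:
    def __init__(self, value, tape=None):
        self.value = value
        self.grad = 0.0
        self.tape = tape if tape is not None else []

    def __add__(self, other):
        out = Var(self.value + other.value, self.tape)
        def backward():
            # d(a+b)/da = d(a+b)/db = 1
            self.grad += out.grad
            other.grad += out.grad
        out.tape.append(backward)
        return out

    def __mul__(self, other):
        out = Var(self.value * other.value, self.tape)
        def backward():
            # d(a*b)/da = b, d(a*b)/db = a
            self.grad += other.value * out.grad
            other.grad += self.value * out.grad
        out.tape.append(backward)
        return out

    def backward(self):
        # Seed d(out)/d(out) = 1, then replay the tape in reverse.
        self.grad = 1.0
        for fn in reversed(self.tape):
            fn()

# y = x*x + x  ->  dy/dx = 2x + 1 = 7 at x = 3
x = Var(3.0)
y = x * x + x
y.backward()
print(x.grad)  # 7.0
```

Real frameworks add broadcasting, GPU kernels, and graph pruning on top of this basic record-and-replay pattern, but the gradient flow is the same.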
Open WebUI is an elegant and extensible browser-based interface for managing and interacting with large language models via Ollama. It simplifies the deployment and usage of powerful models like Qwen, all from your browser.
TensorFlow is an open source platform for machine learning. It provides comprehensive tools and libraries in a flexible architecture allowing easy deployment across a variety of platforms and devices.
Ubuntu is a Debian-based Linux operating system that runs from the desktop to the cloud to all your internet-connected things. It is the world's most popular open-source OS.
Visual Studio Code is a lightweight yet powerful source-code editor from Microsoft, with built-in support for debugging, version control, and a rich extension ecosystem. This template provides a VS Code environment so you can write and run code directly on your instance.
Have questions? Want to Partner with us? Looking for larger deployments or custom fine-tuning? Let's collaborate on the right setup for your workloads.
"Qubrid AI reduced our document processing time by over 60% and significantly improved retrieval accuracy across our RAG workflows."
Enterprise AI Team
Document Intelligence Platform