How to Install and Use ComfyUI for AI Image Generation

By Admin · Mar 2, 2026 · Updated Apr 23, 2026 · 30 views · 3 min read

ComfyUI is a powerful, node-based interface for Stable Diffusion that gives you complete control over the image generation pipeline. Unlike simpler UIs, ComfyUI lets you visually build complex workflows with branching, ControlNet, upscaling, and more — all running on your own Breeze instance.

Prerequisites

  • A Breeze instance with a GPU (8+ GB VRAM recommended) or a high-RAM CPU instance
  • Ubuntu 22.04 or later
  • Python 3.10 or later
  • At least 20 GB of free disk space
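You can verify all of these from a shell before installing. nvidia-smi is only present on instances with the NVIDIA driver, so the check falls back to a message on CPU-only boxes:

```shell
# Verify GPU visibility, Python version, and free disk space.
command -v nvidia-smi >/dev/null 2>&1 \
    && nvidia-smi --query-gpu=name,memory.total --format=csv,noheader \
    || echo "No NVIDIA GPU detected (CPU mode)"
python3 --version
df -h ~ | awk 'NR==2 {print "Free space on home volume:", $4}'
```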

Installing ComfyUI

Clone the repository and set up the environment:

sudo apt update && sudo apt install -y git python3 python3-pip python3-venv
cd ~
git clone https://github.com/comfyanonymous/ComfyUI.git
cd ComfyUI
python3 -m venv venv
source venv/bin/activate

Install PyTorch and ComfyUI requirements:

# For GPU (NVIDIA CUDA)
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121
# For CPU only
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu

pip install -r requirements.txt
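A quick sanity check, run inside the activated venv, confirms the install and reports whether PyTorch can see the GPU. It prints a note instead of crashing if torch isn't installed yet:

```shell
python3 - <<'EOF'
# Report the installed PyTorch version and CUDA visibility.
# On CPU-only instances, "cuda available: False" is expected.
try:
    import torch
    print("torch", torch.__version__, "| cuda available:", torch.cuda.is_available())
except ImportError:
    print("torch not installed -- activate the venv and rerun pip install")
EOF
```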

Adding Model Checkpoints

Download Stable Diffusion checkpoints into the models directory:

cd ~/ComfyUI/models/checkpoints
wget -O sd_xl_base_1.0.safetensors "https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0/resolve/main/sd_xl_base_1.0.safetensors"

You can also add VAE models, LoRA adapters, and ControlNet models to their respective directories under models/.
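As a sketch of the layout ComfyUI scans at startup (directory names follow the ComfyUI repository):

```shell
# Model locations ComfyUI scans on startup (relative to ~/ComfyUI):
#   models/checkpoints/  -> full Stable Diffusion checkpoints
#   models/vae/          -> standalone VAE files
#   models/loras/        -> LoRA adapters
#   models/controlnet/   -> ControlNet models
mkdir -p ~/ComfyUI/models/{checkpoints,vae,loras,controlnet}
ls ~/ComfyUI/models
```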

Launching ComfyUI

cd ~/ComfyUI
source venv/bin/activate
python main.py --listen 0.0.0.0 --port 8188

On CPU-only instances, add the --cpu flag. Then open the interface at http://your-breeze-ip:8188.
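ComfyUI ships with no authentication, so if your instance has a public IP, consider reaching the UI over an SSH tunnel rather than exposing port 8188 to the world. The user and host below are placeholders:

```shell
# Run this on your LOCAL machine, then browse to http://localhost:8188
# (replace user@your-breeze-ip with your own credentials):
#   ssh -N -L 8188:localhost:8188 user@your-breeze-ip
# On the instance itself, confirm the server answers (200 once it is up):
curl -s -o /dev/null -w "%{http_code}\n" --max-time 2 http://localhost:8188/ || true
```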

Understanding the Node-Based Workflow

ComfyUI represents the image generation pipeline as a graph of connected nodes:

  • Load Checkpoint — loads your model file
  • CLIP Text Encode — converts your text prompt into embeddings
  • KSampler — the core diffusion sampling node with steps, CFG, and sampler settings
  • VAE Decode — converts the latent output into a visible image
  • Save Image — writes the result to disk

Connect these nodes by dragging wires between their inputs and outputs. The default workflow provides a working starting point that you can customize.
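The same graph can also be queued programmatically: ComfyUI exposes an HTTP API that accepts a node graph as JSON via POST /prompt. A minimal sketch mirroring the node list above — the node IDs, prompt text, and sampler settings here are illustrative, and the checkpoint name assumes the SDXL download from the earlier step:

```shell
# Build a minimal node graph and queue it via ComfyUI's /prompt endpoint.
# Each entry is "node_id": {class_type, inputs}; wires are ["source_id", output_index].
cat > /tmp/comfyui_workflow.json <<'EOF'
{"prompt": {
  "4": {"class_type": "CheckpointLoaderSimple",
        "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"}},
  "5": {"class_type": "EmptyLatentImage",
        "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
  "6": {"class_type": "CLIPTextEncode",
        "inputs": {"text": "a photo of a cat", "clip": ["4", 1]}},
  "7": {"class_type": "CLIPTextEncode",
        "inputs": {"text": "", "clip": ["4", 1]}},
  "3": {"class_type": "KSampler",
        "inputs": {"seed": 0, "steps": 20, "cfg": 7.0,
                   "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0,
                   "model": ["4", 0], "positive": ["6", 0],
                   "negative": ["7", 0], "latent_image": ["5", 0]}},
  "8": {"class_type": "VAEDecode",
        "inputs": {"samples": ["3", 0], "vae": ["4", 2]}},
  "9": {"class_type": "SaveImage",
        "inputs": {"images": ["8", 0], "filename_prefix": "ComfyUI"}}
}}
EOF
curl -s -X POST -H "Content-Type: application/json" \
     -d @/tmp/comfyui_workflow.json http://localhost:8188/prompt || true
```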

Installing Custom Nodes

ComfyUI has a rich ecosystem of community custom nodes. Install the ComfyUI Manager for easy management:

cd ~/ComfyUI/custom_nodes
git clone https://github.com/ltdrdata/ComfyUI-Manager.git

After restarting ComfyUI, the Manager menu appears in the interface, letting you browse and install additional nodes for upscaling, face restoration, animation, and more.
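Custom node packs often bring their own Python dependencies. If the pack ships a requirements.txt (ComfyUI-Manager does), install it into the same virtualenv — the paths below follow the install steps earlier in this article:

```shell
# Install a custom node pack's Python dependencies into ComfyUI's venv.
REQ=~/ComfyUI/custom_nodes/ComfyUI-Manager/requirements.txt
if [ -f "$REQ" ]; then
    ~/ComfyUI/venv/bin/pip install -r "$REQ"
else
    echo "requirements.txt not found -- clone ComfyUI-Manager first"
fi
```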

Running as a Background Service

To keep ComfyUI running after you log out, create a systemd unit at /etc/systemd/system/comfyui.service (adjust the user and paths to match your setup):

[Unit]
Description=ComfyUI Image Generation
After=network.target

[Service]
User=deploy
WorkingDirectory=/home/deploy/ComfyUI
ExecStart=/home/deploy/ComfyUI/venv/bin/python main.py --listen 0.0.0.0 --port 8188
Restart=always
Environment=PYTHONUNBUFFERED=1

[Install]
WantedBy=multi-user.target

Run sudo systemctl daemon-reload, then enable the service with sudo systemctl enable --now comfyui so ComfyUI starts automatically when your Breeze instance boots. You can follow its logs at any time with journalctl -u comfyui -f.
