Ollama

AI & ML

Run large language models locally on your server

Ollama makes it easy to run open-source LLMs like Llama 3, Mistral, Gemma, and DeepSeek on your own server. Full privacy, no API costs, no rate limits.

Includes a REST API compatible with the OpenAI format. Pull and run models with a single command. Pairs perfectly with Open WebUI for a ChatGPT-like interface.
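For example, after deployment a typical first session might look like this (model names come from the Ollama model library; `11434` is Ollama's default port):

```shell
# Download a model to the server
ollama pull llama3

# Chat with it interactively in the terminal
ollama run llama3

# Or call the OpenAI-compatible REST API
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "llama3",
    "messages": [{"role": "user", "content": "Hello"}]
  }'
```

Because the API follows the OpenAI format, most OpenAI client libraries work by pointing their base URL at your server instead of api.openai.com.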

What's Included

  • Pre-configured Ollama installation
  • Docker containerized for reliability
  • Caddy reverse proxy with automatic HTTPS support
  • UFW firewall pre-configured
  • Full root SSH access
  • Ubuntu 24.04 LTS base
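Once you SSH in as root, a quick sketch of how to verify each included component (container and service names are assumptions and may differ on your deployment):

```shell
# Confirm the Ollama container is up
docker ps

# Review the pre-configured firewall rules
ufw status verbose

# Check the Caddy reverse proxy
systemctl status caddy
```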

Minimum Requirements

Plan: Kaze 4
CPU: 4 vCPU
RAM: 8 GB
Disk: 120 GB NVMe
From: $12/mo

Ollama comes pre-installed on a Kaze 4 Breeze plan or higher

Deploy Now
  • App included free with Breeze
  • 1-click deployment
  • Ready in ~5 minutes
  • Cancel anytime