
Jan.ai Local AI Chat Application

By Admin · Mar 15, 2026 · Updated Apr 24, 2026

What is Jan.ai?

Jan is an open-source desktop application for running AI models locally. It provides a ChatGPT-like interface that runs entirely on your hardware, supports GGUF models from Hugging Face, integrates with Ollama, and exposes an OpenAI-compatible API.

Installation on Server

# Jan can also run as a headless API server.
# Install the Node.js runtime (only needed when running Jan from source;
# the Docker route below does not require it):
curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
sudo apt install -y nodejs

# Install Jan as an AppImage (desktop) or via Docker (server).
# For headless server mode, run the API server in Docker:
docker run -d --name jan \
    -p 1337:1337 \
    -v /opt/jan/models:/app/models \
    --gpus all \
    --restart unless-stopped \
    janhq/jan:latest
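If you prefer to manage the server declaratively, the docker run command above can be expressed as a Docker Compose file. This is a sketch assuming the same image, port, and model path as the command above; the GPU reservation uses Compose's standard NVIDIA device syntax.

```yaml
# docker-compose.yml -- equivalent of the docker run command above
services:
  jan:
    image: janhq/jan:latest
    ports:
      - "1337:1337"
    volumes:
      - /opt/jan/models:/app/models
    restart: unless-stopped
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```

Start it with docker compose up -d; the model volume keeps downloaded models across container upgrades.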

Model Management

# Download models directly in Jan
# Popular models:
# - Llama 3 8B (general purpose)
# - Mistral 7B (fast, efficient)
# - Phi-3 (small but capable)
# - CodeLlama (code generation)

# Or use Ollama as backend:
# Jan Settings > Models > Ollama Integration
# Jan will use models from your Ollama installation

OpenAI-Compatible API

# Jan exposes an OpenAI-compatible API.
# The model id must match a model you have downloaded in Jan:
curl http://localhost:1337/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "llama3-8b",
        "messages": [{"role": "user", "content": "Hello!"}],
        "max_tokens": 500
    }'
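The response follows the standard OpenAI chat completion shape, so the assistant's reply sits at choices[0].message.content. A minimal sketch of extracting it in the shell, using python3 for JSON parsing (the hardcoded JSON here stands in for a live response from the server):

```shell
# Sample response in the OpenAI chat completion format (illustrative)
response='{"choices":[{"message":{"role":"assistant","content":"Hello! How can I help?"}}]}'

# Pull out the assistant's reply text
reply=$(printf '%s' "$response" | python3 -c \
    'import json,sys; print(json.load(sys.stdin)["choices"][0]["message"]["content"])')
echo "$reply"
# → Hello! How can I help?
```

In a live pipeline, replace the sample variable with the output of the curl command above. jq works equally well if it is installed.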

Features

  • Run models 100% locally with GPU acceleration
  • GGUF model support for efficient inference
  • OpenAI-compatible API for integration
  • Conversation history and management
  • Custom system prompts and presets
  • No data sent to external servers
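Because the API is OpenAI-compatible, most OpenAI client libraries (including the official Python SDK) can be pointed at Jan through environment variables, with no code changes. A sketch, assuming the default port from the installation above; the API key is a placeholder, since a local server typically does not validate it:

```shell
# Point OpenAI-compatible clients at the local Jan server
export OPENAI_BASE_URL="http://localhost:1337/v1"
export OPENAI_API_KEY="not-needed-locally"   # placeholder; local server ignores it
```

Any script that previously called the hosted OpenAI API will now talk to your local models instead.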
