How to Set Up a Load Balancer with Nginx

By Admin · Mar 2, 2026 · Updated Apr 23, 2026

Nginx is more than a web server: it is also a capable Layer 7 (HTTP) load balancer. Running Nginx as a load balancer on a dedicated Breeze instance distributes incoming traffic across multiple backend servers, improving reliability, scalability, and response times.

Installing Nginx

sudo apt update
sudo apt install -y nginx
sudo systemctl enable --now nginx

Basic HTTP Load Balancing

Configure Nginx as a reverse proxy load balancer by editing /etc/nginx/conf.d/loadbalancer.conf:

upstream backend_pool {
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
    server 10.0.0.13:8080;
}

server {
    listen 80;
    server_name app.example.com;

    location / {
        proxy_pass http://backend_pool;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_connect_timeout 5s;
        proxy_read_timeout 30s;
    }
}
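The proxy_set_header lines are what let backends see the real client rather than the load balancer. In particular, $proxy_add_x_forwarded_for appends the connecting client's address to any X-Forwarded-For header already present. A minimal Python sketch of that behavior (illustrative only; nginx does this internally):

```python
def proxy_add_x_forwarded_for(incoming_xff, remote_addr):
    """Mimic nginx's $proxy_add_x_forwarded_for: append the connecting
    client's address to an existing X-Forwarded-For chain, if any."""
    if incoming_xff:
        return f"{incoming_xff}, {remote_addr}"
    return remote_addr

# Direct client: the header starts with just the client IP.
print(proxy_add_x_forwarded_for(None, "203.0.113.7"))
# Client behind another proxy: the chain grows left to right.
print(proxy_add_x_forwarded_for("198.51.100.1", "203.0.113.7"))
```

Backends should trust this header only when the request arrives from the load balancer itself, since clients can forge it.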

Load Balancing Methods

Nginx supports several load balancing algorithms:

# Round Robin (default) - distributes requests sequentially
upstream backend_pool {
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
}

# Least Connections - sends to the server with fewest active connections
upstream backend_pool {
    least_conn;
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
}

# IP Hash - consistent hashing based on client IP (session persistence)
upstream backend_pool {
    ip_hash;
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
}

# Weighted - assigns more traffic to higher-capacity servers
upstream backend_pool {
    server 10.0.0.11:8080 weight=3;
    server 10.0.0.12:8080 weight=1;
}
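The four methods above can be sketched as a toy Python model of how each one picks a backend. Nginx's real implementations differ in detail (ip_hash hashes the first three octets of an IPv4 address, and weighted round robin is "smooth" rather than a simple repeated cycle), but the selection behavior is the same in spirit:

```python
import itertools
import hashlib

servers = ["10.0.0.11:8080", "10.0.0.12:8080"]

# Round robin: cycle through the pool in order.
rr = itertools.cycle(servers)
round_robin = [next(rr) for _ in range(4)]

# Least connections: pick the server with the fewest active connections.
active = {"10.0.0.11:8080": 5, "10.0.0.12:8080": 2}
least_conn = min(active, key=active.get)

# IP hash: the same client IP always maps to the same server.
def ip_hash(client_ip):
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]

# Weighted (weight=3 vs weight=1): the heavier server is picked
# three times as often over a full cycle.
weighted = [s for s in ["10.0.0.11:8080"] * 3 + ["10.0.0.12:8080"]]

print(round_robin)
print(least_conn)
print(ip_hash("203.0.113.7") == ip_hash("203.0.113.7"))
```

The last line illustrates why ip_hash gives session persistence: a given client hashes to the same backend on every request.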

Health Checks

Configure passive health checks to automatically remove failed backends:

upstream backend_pool {
    server 10.0.0.11:8080 max_fails=3 fail_timeout=30s;
    server 10.0.0.12:8080 max_fails=3 fail_timeout=30s;
    server 10.0.0.13:8080 backup;
}

The max_fails parameter sets how many failed attempts within one fail_timeout window mark a server as unavailable; fail_timeout also defines how long the server then stays marked as down. The backup parameter designates a standby server that receives traffic only when all primary servers are unavailable.
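The bookkeeping behind max_fails and fail_timeout can be sketched as a small state machine (a simplified model for intuition, not nginx source):

```python
import time

class BackendState:
    """Simplified model of nginx passive health checking:
    max_fails failures within a fail_timeout window mark a server
    down, and it then stays down for fail_timeout seconds."""

    def __init__(self, max_fails=3, fail_timeout=30.0):
        self.max_fails = max_fails
        self.fail_timeout = fail_timeout
        self.fails = 0
        self.window_start = 0.0
        self.down_until = 0.0

    def record_failure(self, now=None):
        now = time.monotonic() if now is None else now
        if now - self.window_start > self.fail_timeout:
            self.fails = 0            # start a new counting window
            self.window_start = now
        self.fails += 1
        if self.fails >= self.max_fails:
            self.down_until = now + self.fail_timeout

    def is_available(self, now=None):
        now = time.monotonic() if now is None else now
        return now >= self.down_until

# Three quick failures take the backend out of rotation for 30 s.
b = BackendState()
for t in (0.0, 1.0, 2.0):
    b.record_failure(now=t)
print(b.is_available(now=5.0))   # still inside the down period
print(b.is_available(now=40.0))  # fail_timeout has elapsed
```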

SSL Termination

Terminate SSL at the load balancer to offload encryption from backend servers:

server {
    listen 443 ssl http2;
    server_name app.example.com;

    ssl_certificate /etc/ssl/certs/app.pem;
    ssl_certificate_key /etc/ssl/private/app.key;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_pass http://backend_pool;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

server {
    listen 80;
    server_name app.example.com;
    return 301 https://$host$request_uri;
}
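To exercise the HTTPS server block before a real certificate is issued, a self-signed certificate works (the output paths here are illustrative; point ssl_certificate and ssl_certificate_key at wherever you write the files):

```shell
# Generate a throwaway self-signed certificate for app.example.com.
# Browsers will warn about it; use it only for testing.
openssl req -x509 -newkey rsa:2048 -nodes -days 30 \
  -subj "/CN=app.example.com" \
  -keyout app.key -out app.pem

# Confirm the certificate parses and carries the expected subject.
openssl x509 -in app.pem -noout -subject
```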

WebSocket Support

Enable WebSocket proxying for real-time applications:

location /ws/ {
    proxy_pass http://backend_pool;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_read_timeout 86400s;
}
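Hardcoding Connection "upgrade" forces every request through this location into upgrade mode, even plain HTTP requests. If the location serves both, the map pattern recommended in the nginx WebSocket proxying docs sets the header conditionally (the map block belongs in the http context, outside any server block):

```nginx
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

location /ws/ {
    proxy_pass http://backend_pool;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
    proxy_read_timeout 86400s;
}
```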

Monitoring and Testing

Verify your configuration and reload:

sudo nginx -t
sudo systemctl reload nginx

Enable the stub_status module for basic connection metrics by adding a location block to a server block:

location /nginx_status {
    stub_status;
    allow 127.0.0.1;
    deny all;
}

Test load distribution by making repeated requests and observing which backend answers each one; a custom response header or the access log on each backend Breeze instance makes this easy to verify.