
How to Set Up Loki for Log Aggregation

By Admin · Mar 2, 2026 · Updated Apr 25, 2026


Loki is a horizontally scalable, highly available log aggregation system from Grafana Labs. Unlike traditional logging solutions, Loki indexes only metadata (labels) rather than the full log content, which keeps it lightweight and cost-effective for your Breeze server.

Architecture Overview

  • Loki — the log storage and query engine
  • Promtail — the agent that ships logs to Loki
  • Grafana — the visualization layer for exploring logs

Deploying the Loki Stack with Docker Compose

services:
  loki:
    image: grafana/loki:latest
    ports:
      - "3100:3100"
    volumes:
      - ./loki-config.yaml:/etc/loki/local-config.yaml
      - loki_data:/loki
    command: -config.file=/etc/loki/local-config.yaml
    restart: unless-stopped

  promtail:
    image: grafana/promtail:latest
    volumes:
      - ./promtail-config.yaml:/etc/promtail/config.yaml
      - /var/log:/var/log:ro
      - /var/lib/docker/containers:/var/lib/docker/containers:ro
    command: -config.file=/etc/promtail/config.yaml
    restart: unless-stopped

  grafana:
    image: grafana/grafana:latest
    ports:
      - "3000:3000"
    environment:
      GF_SECURITY_ADMIN_PASSWORD: SecureGrafanaPass
    volumes:
      - grafana_data:/var/lib/grafana
    restart: unless-stopped

volumes:
  loki_data:
  grafana_data:
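After bringing the stack up with `docker compose up -d`, a quick sanity check confirms each component is responding. This sketch assumes the default ports published in the compose file above; Loki may report "not ready" for its first few seconds while the ring initializes.

```shell
# Health endpoints for the stack (ports from the compose file above)
LOKI_READY="http://localhost:3100/ready"
GRAFANA_HEALTH="http://localhost:3000/api/health"

# -f makes curl fail on HTTP errors; the fallback message keeps the
# script usable while the containers are still starting
curl -fsS "$LOKI_READY"     || echo "Loki not ready yet"
curl -fsS "$GRAFANA_HEALTH" || echo "Grafana not ready yet"
```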

Loki Configuration

Create loki-config.yaml with storage and retention settings:

auth_enabled: false

server:
  http_listen_port: 3100

common:
  ring:
    instance_addr: 127.0.0.1
    kvstore:
      store: inmemory
  replication_factor: 1
  path_prefix: /loki

schema_config:
  configs:
    - from: 2024-01-01
      store: tsdb
      object_store: filesystem
      schema: v13
      index:
        prefix: index_
        period: 24h

storage_config:
  filesystem:
    directory: /loki/chunks

limits_config:
  retention_period: 30d
  max_query_length: 721h

compactor:
  working_directory: /loki/compactor
  compaction_interval: 10m
  retention_enabled: true
  delete_request_store: filesystem
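Loki serves its effective runtime configuration over HTTP, which is a convenient way to confirm the retention settings above actually took effect after a restart. A minimal check, assuming Loki is reachable on localhost:3100:

```shell
# Loki's /config endpoint returns the full effective configuration as YAML;
# grep for the retention setting defined in loki-config.yaml
LOKI_CONFIG_URL="http://localhost:3100/config"

curl -fsS "$LOKI_CONFIG_URL" | grep -m1 "retention_period" \
  || echo "could not read Loki config (is the container up?)"
```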

Promtail Configuration

Create promtail-config.yaml to collect system and Docker logs:

server:
  http_listen_port: 9080

positions:
  filename: /tmp/positions.yaml

clients:
  - url: http://loki:3100/loki/api/v1/push

scrape_configs:
  - job_name: system
    static_configs:
      - targets: [localhost]
        labels:
          job: syslog
          host: breeze-server
          __path__: /var/log/syslog

  - job_name: nginx
    static_configs:
      - targets: [localhost]
        labels:
          job: nginx
          host: breeze-server
          __path__: /var/log/nginx/*.log

  - job_name: docker
    static_configs:
      - targets: [localhost]
        labels:
          job: docker
          __path__: /var/lib/docker/containers/*/*-json.log
    pipeline_stages:
      # The docker stage parses the JSON log wrapper ({log, stream, time})
      # written by Docker's json-file driver and extracts the inner line,
      # so no separate json stage is needed here
      - docker: {}
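Once Promtail is running, you can verify it is healthy and that Loki has received labelled streams. This sketch assumes the ports from the compose file; the label values it lists come from the scrape_configs above.

```shell
# Promtail's readiness endpoint and Loki's label-values API
PROMTAIL_READY="http://localhost:9080/ready"
JOB_VALUES="http://localhost:3100/loki/api/v1/label/job/values"

curl -fsS "$PROMTAIL_READY" || echo "Promtail not ready"

# Should return a JSON list such as ["syslog","nginx","docker"]
# once logs have been pushed
curl -fsS "$JOB_VALUES" \
  || echo "no labels yet (Loki unreachable or nothing pushed)"
```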

Connecting Grafana to Loki

  1. Open Grafana at http://your-breeze-ip:3000
  2. Go to Connections → Data Sources → Add data source
  3. Select Loki and set the URL to http://loki:3100
  4. Click Save & test
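Instead of clicking through the UI, the data source can also be declared as a provisioning file that Grafana loads at startup. A sketch, assuming the file is mounted into the container at Grafana's default provisioning path (/etc/grafana/provisioning/datasources/):

```yaml
# loki-datasource.yaml — Grafana data source provisioning
apiVersion: 1
datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://loki:3100
    isDefault: true
```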

Querying Logs with LogQL

# All logs from nginx
{job="nginx"}

# Filter for error lines
{job="syslog"} |= "error"

# Regex pattern matching
{job="nginx"} |~ "status=(4|5)\\d{2}"

# Per-second rate of lines containing "500", averaged over a 1-minute window
rate({job="nginx"} |= "500" [1m])
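The same queries can be run against Loki's HTTP API directly, which is useful for scripting and alert debugging. A sketch assuming Loki on localhost:3100; --data-urlencode handles the braces and quotes in the LogQL expression:

```shell
# Query the last matching log lines via Loki's query_range API
LOKI_URL="http://localhost:3100"
QUERY='{job="nginx"} |= "error"'

curl -G -fsS "${LOKI_URL}/loki/api/v1/query_range" \
  --data-urlencode "query=${QUERY}" \
  --data-urlencode "limit=10" \
  || echo "query failed (is Loki running?)"
```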

Best Practices

  • Use meaningful labels but keep cardinality low for efficient indexing
  • Set retention periods to manage disk usage on your Breeze server
  • Use LogQL pipeline stages to parse and structure log lines
  • Create Grafana dashboards combining Loki logs with Prometheus metrics
  • Monitor Loki's own resource usage as log volume grows
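As an illustration of the pipeline advice above, labels extracted at query time with LogQL parsers keep ingest-time cardinality low while still allowing structured filtering. A sketch, assuming the nginx logs are written in logfmt (key=value) form:

```
# Parse logfmt fields at query time, then filter on the extracted values
{job="nginx"} | logfmt | status >= 500
```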
