
Graylog for Enterprise Log Management

By Admin · Mar 15, 2026 · Updated Apr 23, 2026 · 3 min read

Graylog is an open-source log management platform that collects, indexes, and analyzes log data from across your infrastructure. It provides powerful search, dashboards, alerting, and pipeline processing for enterprise-scale log management. This guide covers deploying Graylog and configuring it for VPS and application log aggregation.

Installation with Docker

# docker-compose.yml
services:
  mongodb:
    image: mongo:7.0
    volumes:
      - mongo_data:/data/db

  opensearch:
    image: opensearchproject/opensearch:2.12.0
    environment:
      - discovery.type=single-node
      - plugins.security.disabled=true
      - OPENSEARCH_JAVA_OPTS=-Xms1g -Xmx1g
    volumes:
      - os_data:/usr/share/opensearch/data

  graylog:
    image: graylog/graylog:6.0
    depends_on:
      - mongodb
      - opensearch
    environment:
      - GRAYLOG_PASSWORD_SECRET=your-password-secret-min-16-chars
      - GRAYLOG_ROOT_PASSWORD_SHA2=sha256-hash-of-admin-password
      - GRAYLOG_HTTP_EXTERNAL_URI=http://graylog.example.com:9000/
      - GRAYLOG_MONGODB_URI=mongodb://mongodb:27017/graylog
      - GRAYLOG_ELASTICSEARCH_HOSTS=http://opensearch:9200
    ports:
      - "9000:9000"       # Web UI
      - "1514:1514"       # Syslog TCP
      - "1514:1514/udp"   # Syslog UDP
      - "12201:12201"     # GELF TCP
      - "12201:12201/udp" # GELF UDP
    volumes:
      - graylog_data:/usr/share/graylog/data

volumes:
  mongo_data:
  os_data:
  graylog_data:

# Generate the SHA-256 hash for GRAYLOG_ROOT_PASSWORD_SHA2
echo -n "YourAdminPassword" | sha256sum | awk '{print $1}'

# Generate GRAYLOG_PASSWORD_SECRET (must be at least 16 characters)
openssl rand -hex 32
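The same two values can be produced from Python's standard library if you prefer scripting the setup; a minimal sketch (function names are illustrative):

```python
import hashlib
import secrets

def root_password_sha2(password: str) -> str:
    """SHA-256 hex digest for GRAYLOG_ROOT_PASSWORD_SHA2."""
    return hashlib.sha256(password.encode("utf-8")).hexdigest()

def password_secret(n_bytes: int = 32) -> str:
    """Random hex string for GRAYLOG_PASSWORD_SECRET (well over the 16-char minimum)."""
    return secrets.token_hex(n_bytes)

print(root_password_sha2("admin"))  # hash of the password "admin"
print(password_secret())
```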

Configuring Inputs

In Graylog web UI (System → Inputs), create inputs to receive logs:

  • Syslog UDP/TCP (port 1514) — receive standard syslog from servers and network devices
  • GELF UDP/TCP (port 12201) — Graylog Extended Log Format for structured logging
  • Beats (port 5044) — receive from Filebeat/Winlogbeat
  • Raw/Plaintext (custom port) — for applications with custom log formats
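After creating an input, it can help to confirm the port is actually accepting connections before pointing log shippers at it. A small reachability check, purely illustrative (hostnames and ports are examples; this only works for TCP inputs, since UDP has no handshake to test):

```python
import socket

def input_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to the given Graylog input succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# e.g. input_reachable("graylog.example.com", 1514)
```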

Sending Logs from Servers

# rsyslog (send all logs to Graylog over UDP; use @@ for TCP)
# /etc/rsyslog.d/graylog.conf
*.* @graylog-server:1514;RSYSLOG_SyslogProtocol23Format

# Docker GELF driver
docker run -d --log-driver=gelf --log-opt gelf-address=udp://graylog:12201 --log-opt tag="myapp" nginx

# Filebeat
# filebeat.yml (the "log" input type is deprecated; use filestream)
filebeat.inputs:
  - type: filestream
    id: nginx-access
    paths:
      - /var/log/nginx/access.log
    fields:
      source: nginx

# Graylog's Beats input speaks the Logstash protocol
output.logstash:
  hosts: ["graylog:5044"]
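
GELF messages are plain JSON sent over UDP or TCP, so applications can emit them without a logging library. A minimal Python sketch of a sender (function names are illustrative; payloads larger than a UDP datagram would need GELF chunking, which this omits):

```python
import json
import socket

def gelf_message(host: str, short_message: str, level: int = 6, **extra) -> bytes:
    """Build a GELF 1.1 payload; custom fields must be prefixed with '_'."""
    msg = {"version": "1.1", "host": host, "short_message": short_message, "level": level}
    msg.update({f"_{k}": v for k, v in extra.items()})
    return json.dumps(msg).encode("utf-8")

def send_gelf_udp(payload: bytes, server: str, port: int = 12201) -> None:
    """Fire-and-forget UDP send to a Graylog GELF UDP input."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.sendto(payload, (server, port))

# send_gelf_udp(gelf_message("web-01", "deploy finished", app="myapp"), "graylog.example.com")
```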

Search Queries

# Full-text search
error

# Field-specific search
source:nginx AND http_status:500

# Range queries
response_time:>1000

# Wildcards (leading wildcards are disabled by default; see allow_leading_wildcard_searches)
message:*timeout*

# Combine with boolean operators
source:api AND (level:ERROR OR level:FATAL) AND NOT message:healthcheck

# Time-based (timestamp values must be quoted)
timestamp:["2025-01-15 00:00:00.000" TO "2025-01-16 00:00:00.000"]

# Regex
message:/Connection refused .*/
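
When building queries programmatically (for example, against the Graylog search REST API), field values should be escaped for Lucene syntax. A minimal helper, purely illustrative (function names are assumptions):

```python
def quote_term(value: str) -> str:
    """Quote a value for a Graylog (Lucene) query, escaping backslashes and quotes."""
    escaped = value.replace("\\", "\\\\").replace('"', '\\"')
    return f'"{escaped}"'

def field_query(field: str, value: str) -> str:
    """Build a single field:value clause."""
    return f"{field}:{quote_term(value)}"

# field_query("source", "nginx") -> source:"nginx"
```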

Processing Pipelines

// Pipeline rule: Extract fields from nginx access logs
// (grok returns a map; access entries with bracket syntax, not dot notation)
rule "parse nginx access log"
when
    has_field("source") AND to_string($message.source) == "nginx"
then
    let parsed = grok("%{COMBINEDAPACHELOG}", to_string($message.message));
    set_field("client_ip", to_string(parsed["clientip"]));
    set_field("http_method", to_string(parsed["verb"]));
    set_field("request_path", to_string(parsed["request"]));
    set_field("http_status", to_long(parsed["response"]));
    set_field("response_bytes", to_long(parsed["bytes"]));
end

// Pipeline rule: Enrich with GeoIP (requires a lookup table named "geoip", e.g. backed by a MaxMind database)
rule "geoip lookup"
when
    has_field("client_ip")
then
    let geo = lookup("geoip", to_string($message.client_ip));
    set_field("client_country", geo["country_name"]);
    set_field("client_city", geo["city_name"]);
end
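
To see which fields the first rule would extract, the grok pattern can be previewed locally. This hypothetical Python regex only approximates `%{COMBINEDAPACHELOG}`, but it shows the named captures the rule relies on:

```python
import re

# Simplified approximation of grok's %{COMBINEDAPACHELOG} pattern
COMBINED = re.compile(
    r'(?P<clientip>\S+) \S+ \S+ \[(?P<timestamp>[^\]]+)\] '
    r'"(?P<verb>\S+) (?P<request>\S+)[^"]*" (?P<response>\d{3}) (?P<bytes>\d+|-)'
)

def parse_access_line(line: str) -> dict:
    """Return the named captures as a dict, or {} if the line does not match."""
    m = COMBINED.match(line)
    return m.groupdict() if m else {}

line = '203.0.113.7 - - [15/Jan/2025:10:00:00 +0000] "GET /index.html HTTP/1.1" 200 1234'
fields = parse_access_line(line)
```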

Alerting

# In Graylog: Alerts → Event Definitions → Create

# Example alert: More than 50 errors in 5 minutes
# Condition: count() > 50
# Search: level:ERROR
# Time range: 5 minutes
# Notifications: Email, Slack, HTTP webhook
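
The `count() > 50` condition amounts to a trailing-window threshold. A hedged Python sketch of the same logic (not Graylog code, purely illustrative of what the event definition evaluates):

```python
from datetime import datetime, timedelta

def exceeds_threshold(timestamps: list, now: datetime,
                      window: timedelta = timedelta(minutes=5),
                      threshold: int = 50) -> bool:
    """True when more than `threshold` events fall inside the trailing window."""
    recent = [t for t in timestamps if now - window <= t <= now]
    return len(recent) > threshold
```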

Best Practices

  • Use GELF format for application logs — it supports structured fields natively
  • Create processing pipelines to parse and enrich logs at ingest time
  • Set index retention policies based on storage capacity and compliance requirements
  • Use streams to separate log sources for different retention and access policies
  • Size the OpenSearch JVM heap appropriately: roughly 50% of available RAM, capped below ~32 GB, leaving the rest for the filesystem cache
  • Create dashboards for each team (operations, development, security)
