Grafana Alloy (formerly Grafana Agent) is an OpenTelemetry-compatible observability agent that collects metrics, logs, traces, and profiles from your infrastructure and sends them to Grafana Cloud, Prometheus, Loki, and Tempo. It replaces the need for multiple collection agents with a single, configurable pipeline. This guide covers deployment and configuration for VPS monitoring.
Installation
# Ubuntu/Debian
sudo mkdir -p /etc/apt/keyrings
wget -q -O - https://apt.grafana.com/gpg.key | sudo gpg --dearmor -o /etc/apt/keyrings/grafana.gpg
echo "deb [signed-by=/etc/apt/keyrings/grafana.gpg] https://apt.grafana.com stable main" | sudo tee /etc/apt/sources.list.d/grafana.list
sudo apt update && sudo apt install alloy
# Start
sudo systemctl enable --now alloy
Configuration
// /etc/alloy/config.alloy
// Scrape node metrics
prometheus.exporter.unix "default" {
  include_exporter_metrics = true
}

prometheus.scrape "node" {
  targets         = prometheus.exporter.unix.default.targets
  forward_to      = [prometheus.remote_write.default.receiver]
  scrape_interval = "15s"
}
// Send metrics to Prometheus/Grafana Cloud
prometheus.remote_write "default" {
  endpoint {
    url = "https://prometheus-prod.grafana.net/api/prom/push"

    basic_auth {
      username = "123456"
      password = "your-api-key"
    }
  }
}
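Hard-coding the API key in the config file is risky, since /etc/alloy/config.alloy is often world-readable or checked into version control. Alloy's standard library can read credentials from the environment instead — a sketch of the same remote_write block, assuming a variable named GRAFANA_CLOUD_API_KEY is exported in the Alloy service's environment (for example via a systemd drop-in):

```alloy
// Assumes GRAFANA_CLOUD_API_KEY is set in Alloy's environment,
// e.g. a systemd override with Environment=GRAFANA_CLOUD_API_KEY=...
prometheus.remote_write "default" {
  endpoint {
    url = "https://prometheus-prod.grafana.net/api/prom/push"

    basic_auth {
      username = "123456"
      password = sys.env("GRAFANA_CLOUD_API_KEY")
    }
  }
}
```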
// Collect logs
loki.source.journal "systemd" {
  forward_to = [loki.write.default.receiver]
  labels     = {
    job      = "systemd-journal",
    instance = constants.hostname,
  }
}

loki.write "default" {
  endpoint {
    url = "https://logs-prod.grafana.net/loki/api/v1/push"

    basic_auth {
      username = "123456"
      password = "your-api-key"
    }
  }
}
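Not everything logs to the journal. Applications that write plain log files can be tailed with local.file_match and loki.source.file, forwarding to the same loki.write component above. A sketch, where the /var/log/myapp path and the job label are illustrative assumptions:

```alloy
// Discover log files matching a glob (hypothetical app path)
local.file_match "app_logs" {
  path_targets = [{
    "__path__" = "/var/log/myapp/*.log",
    "job"      = "myapp",
  }]
}

// Tail the discovered files and ship them to Loki
loki.source.file "app_logs" {
  targets    = local.file_match.app_logs.targets
  forward_to = [loki.write.default.receiver]
}
```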
Scraping Application Metrics
// Scrape Prometheus endpoints from your applications
prometheus.scrape "applications" {
  targets = [
    {"__address__" = "localhost:8080", "job" = "myapp"},
    {"__address__" = "localhost:9090", "job" = "prometheus"},
  ]
  forward_to = [prometheus.remote_write.default.receiver]
}
Docker Container Metrics
// Discover and scrape Docker containers
discovery.docker "containers" {
  host = "unix:///var/run/docker.sock"
}

prometheus.scrape "docker" {
  targets    = discovery.docker.containers.targets
  forward_to = [prometheus.remote_write.default.receiver]
}
Best Practices
- Use Alloy instead of running separate node_exporter, promtail, and OTEL collector
- Set appropriate scrape intervals — 15-30 seconds for most metrics
- Use relabeling to add environment and service labels for filtering
- Monitor Alloy itself — it exposes metrics on port 12345 by default
- Use the built-in UI at http://localhost:12345 to debug pipeline configuration
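The relabeling advice above can be sketched as a prometheus.relabel stage sitting between the scrapes and remote_write; the "production" environment value is an assumption for illustration:

```alloy
// Attach a static environment label to every metric passing through
prometheus.relabel "add_env" {
  forward_to = [prometheus.remote_write.default.receiver]

  rule {
    target_label = "environment"
    replacement  = "production"
  }
}

// Scrape components then forward here instead of directly to remote_write:
//   forward_to = [prometheus.relabel.add_env.receiver]
```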