
Monitoring MySQL with Grafana and Prometheus

By Admin · Mar 15, 2026 · Updated Apr 25, 2026 · 338 views · 3 min read

Monitoring MySQL with Prometheus and Grafana provides deep visibility into database performance, query execution, replication status, and resource utilization. This guide covers setting up the MySQL exporter, essential metrics to monitor, and building effective Grafana dashboards for MySQL.

Install MySQL Exporter

# Create monitoring user in MySQL
CREATE USER 'exporter'@'localhost' IDENTIFIED BY 'ExporterPass123!' WITH MAX_USER_CONNECTIONS 3;
GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO 'exporter'@'localhost';
FLUSH PRIVILEGES;

# Install mysqld_exporter
wget https://github.com/prometheus/mysqld_exporter/releases/download/v0.15.1/mysqld_exporter-0.15.1.linux-amd64.tar.gz
tar xzf mysqld_exporter-0.15.1.linux-amd64.tar.gz
sudo cp mysqld_exporter-0.15.1.linux-amd64/mysqld_exporter /usr/local/bin/
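On the Prometheus side, the exporter needs a scrape job. A minimal fragment — the job name is an assumption (pick your own), and 9104 is mysqld_exporter's default listen port:

```yaml
# prometheus.yml (fragment)
scrape_configs:
  - job_name: mysql          # assumed name, not mandated by the exporter
    static_configs:
      - targets: ["localhost:9104"]   # mysqld_exporter default port
```

After reloading Prometheus, the target should appear on the Targets page and `mysql_up` should report 1.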

# Create config (keep it readable only by the exporter's user)
cat > /etc/.mysqld_exporter.cnf <<EOF
[client]
user=exporter
password=ExporterPass123!
EOF
chmod 600 /etc/.mysqld_exporter.cnf

# Run the exporter (listens on :9104 by default)
mysqld_exporter --config.my-cnf=/etc/.mysqld_exporter.cnf

Key Metrics

# Connection usage (flag when above 80% of max_connections)
mysql_global_status_threads_connected / mysql_global_variables_max_connections > 0.8

# InnoDB buffer pool hit ratio
(mysql_global_status_innodb_buffer_pool_read_requests - mysql_global_status_innodb_buffer_pool_reads) / mysql_global_status_innodb_buffer_pool_read_requests * 100
# Should be > 99%

# InnoDB row operations
rate(mysql_global_status_innodb_row_ops_total[5m])

# Table locks
rate(mysql_global_status_table_locks_waited[5m])

# Replication lag (for replicas)
mysql_slave_status_seconds_behind_master

# Binlog cache spills to disk (binlog_cache_size may be too small)
mysql_global_status_binlog_cache_disk_use

# Temporary tables on disk
rate(mysql_global_status_created_tmp_disk_tables[5m]) / rate(mysql_global_status_created_tmp_tables[5m]) * 100
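The two ratio expressions above are plain arithmetic over counters. As a sanity check, here is a small Python sketch of the same math (function names and sample values are illustrative, not part of any exporter API):

```python
def hit_ratio(read_requests: float, disk_reads: float) -> float:
    """InnoDB buffer pool hit ratio (%): logical reads served from memory."""
    if read_requests == 0:
        return 100.0  # no reads yet; treat as fully cached
    return (read_requests - disk_reads) / read_requests * 100


def tmp_disk_ratio(tmp_disk_tables: float, tmp_tables: float) -> float:
    """Share (%) of temporary tables that spilled to disk."""
    if tmp_tables == 0:
        return 0.0
    return tmp_disk_tables / tmp_tables * 100


# Hypothetical counter deltas over a window:
print(round(hit_ratio(1_000_000, 5_000), 2))   # 99.5 -> healthy (> 99%)
print(round(tmp_disk_ratio(120, 4_000), 2))    # 3.0  -> 3% of tmp tables hit disk
```

The same guard against division by zero matters in PromQL too: on an idle server the denominators can be zero, which yields NaN rather than an error.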

Grafana Dashboard

# Import the official MySQL dashboard
# Grafana → Dashboards → Import → ID: 14057 (MySQL Overview)
# Or ID: 7362 (MySQL - Percona style)

# Custom panels:

# QPS by type
rate(mysql_global_status_commands_total{command=~"select|insert|update|delete"}[5m])

# Connection utilization gauge
mysql_global_status_threads_connected / mysql_global_variables_max_connections * 100

# InnoDB buffer pool usage
mysql_global_status_innodb_buffer_pool_bytes_data / (mysql_global_status_innodb_buffer_pool_bytes_data + mysql_global_status_innodb_buffer_pool_bytes_free) * 100

Alert Rules

# prometheus/alerts/mysql.yml
groups:
  - name: mysql
    rules:
      - alert: MySQLDown
        expr: mysql_up == 0
        for: 1m
        labels:
          severity: critical

      - alert: MySQLHighConnections
        expr: mysql_global_status_threads_connected / mysql_global_variables_max_connections > 0.8
        for: 5m
        labels:
          severity: warning

      - alert: MySQLSlowQueries
        expr: rate(mysql_global_status_slow_queries[5m]) > 0.1
        for: 10m
        labels:
          severity: warning

      - alert: MySQLReplicationLag
        expr: mysql_slave_status_seconds_behind_master > 30
        for: 5m
        labels:
          severity: critical

      - alert: MySQLInnoDBBufferPoolLow
        expr: (mysql_global_status_innodb_buffer_pool_read_requests - mysql_global_status_innodb_buffer_pool_reads) / mysql_global_status_innodb_buffer_pool_read_requests * 100 < 95
        for: 15m
        labels:
          severity: warning
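Alerts are easier to act on when rules also carry annotations, which Alertmanager renders into the notification. A sketch for the replication-lag rule above — the wording is illustrative, but `{{ $value }}` and `{{ $labels.instance }}` are standard Prometheus template variables:

```yaml
      - alert: MySQLReplicationLag
        expr: mysql_slave_status_seconds_behind_master > 30
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Replica {{ $labels.instance }} is lagging"
          description: "Replication lag is {{ $value }}s (threshold: 30s)."
```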

Best Practices

  • Monitor InnoDB buffer pool hit ratio — below 99% indicates insufficient buffer pool size
  • Track connection usage percentage, not just count — alert before reaching max_connections
  • Monitor replication lag on all replicas — even brief lag can indicate problems
  • Use the --collect.info_schema.processlist flag to see active queries
  • Set up slow query monitoring with appropriate thresholds for your workload
  • Create separate dashboards for primary and replica servers
