Read replicas are copies of your primary database that serve read-only queries, distributing the read workload and improving application performance. This guide covers setting up read replicas for both MySQL and PostgreSQL, along with practical patterns for routing read traffic in your application.
Why Read Replicas?
- Scale reads horizontally — distribute SELECT queries across multiple servers
- Reduce primary load — offload reporting, analytics, and search queries
- Geographic performance — place replicas closer to users in different regions
- High availability — promote a replica if the primary fails
MySQL Replication Setup
Configure the Primary
# /etc/mysql/mysql.conf.d/mysqld.cnf on PRIMARY
[mysqld]
server-id = 1
log-bin = mysql-bin
binlog-format = ROW
gtid-mode = ON
enforce-gtid-consistency = ON
binlog-expire-logs-seconds = 604800 # 7 days
sync-binlog = 1
-- Create replication user
CREATE USER 'replicator'@'%' IDENTIFIED BY 'ReplicaPass123!';
GRANT REPLICATION SLAVE ON *.* TO 'replicator'@'%';
FLUSH PRIVILEGES;
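After restarting MySQL with these settings, a quick sanity check confirms binary logging and GTIDs are active (expected values shown as comments):

```sql
-- Run on the primary after the restart
SELECT @@server_id, @@log_bin, @@gtid_mode, @@enforce_gtid_consistency;
-- Expect: 1, 1, ON, ON
SHOW MASTER STATUS;  -- renamed SHOW BINARY LOG STATUS in MySQL 8.4+
```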
Configure the Replica
# /etc/mysql/mysql.conf.d/mysqld.cnf on REPLICA
[mysqld]
server-id = 2
relay-log = relay-bin
read-only = ON
super-read-only = ON
gtid-mode = ON
enforce-gtid-consistency = ON
replica-parallel-workers = 4
replica-preserve-commit-order = ON
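One step worth noting before starting replication: if the primary already holds data, seed the replica with a consistent snapshot first, since GTID auto-positioning can only fill the gap if the primary still retains every intervening binlog. A sketch using mysqldump; the host names are placeholders:

```shell
# Take a consistent snapshot of the primary without blocking writes.
# With gtid-mode=ON, mysqldump records the GTID set in the dump
# (--set-gtid-purged defaults to AUTO), so SOURCE_AUTO_POSITION works.
mysqldump -h primary-ip -u root -p --all-databases \
  --single-transaction --triggers --routines --events > snapshot.sql

# Load it on the replica, then run CHANGE REPLICATION SOURCE / START REPLICA
mysql -h replica-ip -u root -p < snapshot.sql
```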
-- On the replica, configure and start replication
CHANGE REPLICATION SOURCE TO
SOURCE_HOST = 'primary-ip',
SOURCE_USER = 'replicator',
SOURCE_PASSWORD = 'ReplicaPass123!',
SOURCE_AUTO_POSITION = 1;
START REPLICA;
SHOW REPLICA STATUS\G
PostgreSQL Streaming Replication
Configure the Primary
# postgresql.conf on PRIMARY
wal_level = replica
max_wal_senders = 10
wal_keep_size = 1GB
hot_standby = on
# pg_hba.conf
host replication replicator replica-ip/32 scram-sha-256
-- Create replication user
CREATE USER replicator WITH REPLICATION PASSWORD 'ReplicaPass123!';
Set Up the Replica
# Stop PostgreSQL on replica
sudo systemctl stop postgresql
# Clear data directory and clone from primary
sudo rm -rf /var/lib/postgresql/16/main/*
sudo -u postgres pg_basebackup -h primary-ip -U replicator -D /var/lib/postgresql/16/main -Fp -Xs -P -R
# The -R flag creates standby.signal and sets primary_conninfo
# Start the replica
sudo systemctl start postgresql
# Verify replication
sudo -u postgres psql -c "SELECT * FROM pg_stat_replication;" # On primary
sudo -u postgres psql -c "SELECT pg_is_in_recovery();" # On replica (returns true)
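Note that wal_keep_size only bounds how much WAL the primary retains, so a replica that falls further behind than that breaks permanently. A physical replication slot makes the primary retain WAL until the replica has consumed it, at the cost of unbounded disk growth if the replica stays offline. A sketch (the slot name is arbitrary):

```sql
-- On the primary: create a physical replication slot
SELECT pg_create_physical_replication_slot('replica1_slot');

-- On the replica, reference the slot in postgresql.auto.conf
-- (pg_basebackup can also set this via --slot=replica1_slot):
--   primary_slot_name = 'replica1_slot'

-- Monitor retained WAL so an offline replica can't fill the disk:
SELECT slot_name, active,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
FROM pg_replication_slots;
```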
Application-Level Read Routing
PHP Implementation
class DatabaseManager {
    private PDO $primary;
    private array $replicas = [];
    private int $replicaIndex = 0;

    public function __construct(array $config) {
        $this->primary = new PDO(
            "mysql:host={$config['primary']['host']};dbname={$config['database']}",
            $config['primary']['user'],
            $config['primary']['password'],
            [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]
        );
        foreach ($config['replicas'] as $replica) {
            $this->replicas[] = new PDO(
                "mysql:host={$replica['host']};dbname={$config['database']}",
                $replica['user'],
                $replica['password'],
                [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]
            );
        }
    }

    public function write(): PDO {
        return $this->primary;
    }

    public function read(): PDO {
        if (empty($this->replicas)) {
            return $this->primary;
        }
        // Round-robin selection
        $pdo = $this->replicas[$this->replicaIndex % count($this->replicas)];
        $this->replicaIndex++;
        return $pdo;
    }
}
// Usage
$db = new DatabaseManager($config);
// Writes go to primary
$db->write()->prepare("INSERT INTO orders ...")->execute([...]);
// Reads go to replica
$results = $db->read()->query("SELECT * FROM orders WHERE ...")->fetchAll();
Node.js Implementation
const mysql = require('mysql2/promise');

const primary = mysql.createPool({
  host: 'primary-ip',
  user: 'app',
  password: 'pass',
  database: 'myapp',
  connectionLimit: 20
});

const replicas = [
  mysql.createPool({ host: 'replica1-ip', user: 'app', password: 'pass', database: 'myapp', connectionLimit: 20 }),
  mysql.createPool({ host: 'replica2-ip', user: 'app', password: 'pass', database: 'myapp', connectionLimit: 20 }),
];

let replicaIdx = 0;
function getReadPool() {
  // Round-robin across replica pools
  const pool = replicas[replicaIdx % replicas.length];
  replicaIdx++;
  return pool;
}

// Write to primary
await primary.execute('INSERT INTO orders (user_id, total) VALUES (?, ?)', [userId, total]);

// Read from replica
const [rows] = await getReadPool().execute('SELECT * FROM orders WHERE user_id = ?', [userId]);
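The fixed round-robin above keeps sending queries to a replica even after it goes down. A minimal health-aware variant (a sketch; it works with any pool-like objects, such as the mysql2 pools above) sidelines a replica for a cooldown period after a failed query:

```javascript
// Round-robin over replicas, skipping any that recently failed.
class ReplicaSelector {
  constructor(replicas, retryAfterMs = 30000) {
    this.replicas = replicas.map((pool) => ({ pool, failedAt: 0 }));
    this.retryAfterMs = retryAfterMs; // how long to sideline a failed replica
    this.idx = 0;
  }

  // Mark a replica as failed so it is skipped until retryAfterMs elapses.
  markFailed(pool, now = Date.now()) {
    const entry = this.replicas.find((r) => r.pool === pool);
    if (entry) entry.failedAt = now;
  }

  // Return the next healthy replica, or null if none are available.
  next(now = Date.now()) {
    for (let i = 0; i < this.replicas.length; i++) {
      const entry = this.replicas[this.idx % this.replicas.length];
      this.idx++;
      if (now - entry.failedAt >= this.retryAfterMs) return entry.pool;
    }
    return null; // caller should fall back to the primary
  }
}

module.exports = { ReplicaSelector };
```

When next() returns null, fall back to the primary rather than failing the read.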
Monitoring Replication Lag
-- MySQL: Check replica lag
SHOW REPLICA STATUS\G
-- Look for: Seconds_Behind_Source
-- PostgreSQL: Check replication lag on primary
SELECT client_addr, state, sent_lsn, write_lsn, flush_lsn, replay_lsn,
pg_size_pretty(pg_wal_lsn_diff(sent_lsn, replay_lsn)) AS lag
FROM pg_stat_replication;
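To act on those numbers automatically, a health checker can compare each replica's reported lag against a threshold and drop laggards from the read pool. A sketch; the status objects are hypothetical normalizations of SHOW REPLICA STATUS (MySQL) or pg_stat_replication (PostgreSQL) output, and the 5-second threshold is illustrative:

```javascript
// Decide which replicas are safe to read from, given reported lag in seconds.
function filterByLag(replicaStatuses, maxLagSeconds = 5) {
  return replicaStatuses
    .filter((s) => s.ioRunning && s.sqlRunning)     // replication threads alive
    .filter((s) => s.secondsBehindSource !== null)  // null = unknown, treat as unhealthy
    .filter((s) => s.secondsBehindSource <= maxLagSeconds)
    .map((s) => s.host);
}

module.exports = { filterByLag };
```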
Handling Replication Lag in Applications
// After a write, read from primary for consistency
async function createOrder(data) {
  const [result] = await primary.execute('INSERT INTO orders ...', [data]);
  // Read from primary immediately after the write (a replica may not have it yet)
  const [rows] = await primary.execute('SELECT * FROM orders WHERE id = ?', [result.insertId]);
  return rows[0];
}

// For non-critical reads, use replicas
async function listOrders(userId) {
  const [rows] = await getReadPool().execute('SELECT * FROM orders WHERE user_id = ?', [userId]);
  return rows;
}
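A common generalization of this pattern is to pin each user's reads to the primary for a short window after any write they make, so they always see their own changes. A sketch with an in-memory map (hypothetical; in production this state usually lives in the session or a cookie, and the window should exceed your typical replication lag):

```javascript
// Pin reads to the primary for windowMs after each user's last write
// (read-your-writes stickiness).
class StickyRouter {
  constructor(primary, getReadPool, windowMs = 2000) {
    this.primary = primary;
    this.getReadPool = getReadPool; // e.g. the round-robin getReadPool() above
    this.windowMs = windowMs;
    this.lastWrite = new Map();     // userId -> timestamp of last write
  }

  recordWrite(userId, now = Date.now()) {
    this.lastWrite.set(userId, now);
  }

  poolFor(userId, now = Date.now()) {
    const t = this.lastWrite.get(userId);
    if (t !== undefined && now - t < this.windowMs) return this.primary;
    return this.getReadPool();
  }
}

module.exports = { StickyRouter };
```

Call recordWrite() after every successful write, and poolFor() before every read.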
Best Practices
- Route writes to primary and reads to replicas — never write to a replica
- After a write, read from primary for immediate consistency (read-your-writes pattern)
- Monitor replication lag continuously and alert if it exceeds your tolerance
- Use health checks to remove lagging or offline replicas from the read pool
- For MySQL, enable super-read-only to prevent accidental writes to replicas
- Consider a routing proxy such as ProxySQL (MySQL) or Pgpool-II (PostgreSQL) for read/write splitting instead of application-level routing
- Start with one replica and add more as your read workload grows
- Place replicas in different availability zones for geographic distribution