A MongoDB replica set provides automatic failover and data redundancy by maintaining multiple copies of your data across different servers. This guide walks through configuring a production-ready three-member replica set with authentication, monitoring, and maintenance best practices.
Replica Set Architecture
A MongoDB replica set consists of:
- Primary — accepts all write operations
- Secondaries — replicate data from the primary and can serve read operations
- Arbiter (optional) — participates in elections but holds no data
The minimum recommended configuration is three data-bearing members. If the primary becomes unavailable, the remaining members automatically elect a new primary, typically within 10-12 seconds.
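The election arithmetic behind the three-member recommendation can be sketched in a few lines: a candidate must win votes from a strict majority of voting members, so a fourth member raises the required majority without improving fault tolerance. A small illustrative script (the helper name is hypothetical):

```javascript
// A primary must be elected by a strict majority of voting members,
// so the set tolerates (members - majority) simultaneous failures.
function faultTolerance(votingMembers) {
  const majority = Math.floor(votingMembers / 2) + 1;
  return { majority, tolerated: votingMembers - majority };
}

console.log(faultTolerance(3)); // { majority: 2, tolerated: 1 }
console.log(faultTolerance(5)); // { majority: 3, tolerated: 2 }
console.log(faultTolerance(4)); // { majority: 3, tolerated: 1 } -- no better than 3
```

This is why even member counts are discouraged: the fourth node adds cost and replication traffic but no extra resilience.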
Prerequisites
- Three VPS instances with at least 4GB RAM each
- MongoDB 7.0+ installed on all nodes
- Private network connectivity between nodes
- NTP synchronized clocks across all nodes
- Ports 27017 open between replica set members
Step 1: Install MongoDB on All Nodes
# Ubuntu 22.04 (jammy); for other releases, substitute the codename MongoDB publishes 7.0 packages for
curl -fsSL https://www.mongodb.org/static/pgp/server-7.0.asc | sudo gpg --dearmor -o /usr/share/keyrings/mongodb-server-7.0.gpg
echo "deb [signed-by=/usr/share/keyrings/mongodb-server-7.0.gpg] https://repo.mongodb.org/apt/ubuntu jammy/mongodb-org/7.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-7.0.list
sudo apt update && sudo apt install -y mongodb-org
# Rocky Linux 9
sudo tee /etc/yum.repos.d/mongodb-org-7.0.repo > /dev/null <<EOF
[mongodb-org-7.0]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/9/mongodb-org/7.0/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-7.0.asc
EOF
sudo dnf install -y mongodb-org
Step 2: Generate Keyfile for Authentication
Replica set members authenticate using a shared keyfile. Generate it on one node and copy to all others:
# Generate keyfile (writing to /etc requires root)
openssl rand -base64 756 | sudo tee /etc/mongod-keyfile > /dev/null
sudo chmod 400 /etc/mongod-keyfile
sudo chown mongod:mongod /etc/mongod-keyfile   # the service user is "mongodb" on Debian/Ubuntu
# Copy to the other nodes, then repeat the chmod/chown there
sudo scp /etc/mongod-keyfile node2:/etc/mongod-keyfile
sudo scp /etc/mongod-keyfile node3:/etc/mongod-keyfile
Step 3: Configure mongod on Each Node
# /etc/mongod.conf (same on all nodes, except bindIp)
# Note: the old storage.journal option was removed in MongoDB 6.1; journaling is always on
storage:
  dbPath: /var/lib/mongo   # Ubuntu packages default to /var/lib/mongodb
  wiredTiger:
    engineConfig:
      cacheSizeGB: 2   # WiredTiger defaults to ~50% of (RAM - 1 GB); leave headroom for the OS
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log
net:
  port: 27017
  bindIp: 0.0.0.0   # Or restrict to localhost plus the private IP
replication:
  replSetName: "rs0"
  oplogSizeMB: 2048   # Adjust based on write volume
security:
  authorization: enabled
  keyFile: /etc/mongod-keyfile
Start MongoDB on all nodes:
sudo systemctl start mongod
sudo systemctl enable mongod
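The oplogSizeMB value in the config above determines how far back a secondary can fall and still catch up from the oplog alone. As a rough sketch (actual oplog entries are not the same size as your raw write volume, so treat this as an estimate only):

```javascript
// Rough replication window: hours of operations the oplog can hold
// at a given sustained write rate. Numbers below are illustrative.
function oplogWindowHours(oplogSizeMB, writeMBPerHour) {
  return oplogSizeMB / writeMBPerHour;
}

console.log(oplogWindowHours(2048, 40)); // 51.2
```

At roughly 40 MB of oplog per hour, a 2 GB oplog covers about two days, comfortably inside the 24-72 hour guideline given later in this guide.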
Step 4: Initialize the Replica Set
Connect with a local mongosh session on one node (the localhost exception lets you run setup commands before any users exist) and initiate the replica set:
mongosh   # run directly on node1
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "node1:27017", priority: 10 },
    { _id: 1, host: "node2:27017", priority: 5 },
    { _id: 2, host: "node3:27017", priority: 5 }
  ]
})
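Once initiated, rs.status() reports each member's state, and one member should reach PRIMARY within a few seconds. The helper below is hypothetical, but the members array mirrors the shape of rs.status() output from a healthy three-node set:

```javascript
// Given a document shaped like rs.status() output, find the current primary.
// Returns the member's host string, or null while an election is in progress.
function currentPrimary(status) {
  const primary = (status.members || []).find(m => m.stateStr === "PRIMARY");
  return primary ? primary.name : null;
}

// Sample shaped like the set initiated above:
const status = {
  set: "rs0",
  members: [
    { _id: 0, name: "node1:27017", stateStr: "PRIMARY" },
    { _id: 1, name: "node2:27017", stateStr: "SECONDARY" },
    { _id: 2, name: "node3:27017", stateStr: "SECONDARY" }
  ]
};
console.log(currentPrimary(status)); // node1:27017
```

If currentPrimary-style inspection shows no PRIMARY for more than a minute, check network connectivity and that the keyfile is identical on all nodes.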
Step 5: Create Admin User
// On the primary, in the same local mongosh session (localhost exception)
use admin
db.createUser({
  user: "adminUser",
  pwd: "StrongAdminPassword!",   // replace with a strong password of your own
  roles: [
    { role: "root", db: "admin" }
  ]
})
// The localhost exception ends once the first user exists; authenticate before continuing
db.auth("adminUser", "StrongAdminPassword!")
// Create application user
use appdb
db.createUser({
  user: "appUser",
  pwd: "AppPassword123!",
  roles: [
    { role: "readWrite", db: "appdb" }
  ]
})
Step 6: Verify Replication
// Check replica set status
rs.status()
// Verify replication is working
use appdb
db.test.insertOne({ message: "replication test", ts: new Date() })
// Route reads to a secondary and confirm the document replicated (the session must be authenticated)
db.getMongo().setReadPref("secondary")
db.test.findOne()
Connection String for Applications
# Standard connection string with all members
mongodb://appUser:AppPassword123!@node1:27017,node2:27017,node3:27017/appdb?replicaSet=rs0&authSource=appdb
# With read preference for read scaling
mongodb://appUser:password@node1,node2,node3/appdb?replicaSet=rs0&readPreference=secondaryPreferred
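One easy-to-miss detail: usernames and passwords embedded in a connection string must be percent-encoded, or characters like @, /, : and % will break URI parsing. A sketch of safe construction (buildUri is a hypothetical helper, not a driver API):

```javascript
// Build a MongoDB URI with percent-encoded credentials.
function buildUri(user, pass, hosts, db, opts) {
  const auth = `${encodeURIComponent(user)}:${encodeURIComponent(pass)}`;
  const query = new URLSearchParams(opts).toString();
  return `mongodb://${auth}@${hosts.join(",")}/${db}?${query}`;
}

const uri = buildUri(
  "appUser", "p@ss/w:rd",                        // hypothetical credentials
  ["node1:27017", "node2:27017", "node3:27017"],
  "appdb",
  { replicaSet: "rs0", authSource: "appdb" }
);
console.log(uri);
// mongodb://appUser:p%40ss%2Fw%3Ard@node1:27017,node2:27017,node3:27017/appdb?replicaSet=rs0&authSource=appdb
```

Most drivers reject or misparse unencoded special characters, so encoding at construction time avoids a class of hard-to-diagnose authentication failures.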
Read Preferences
MongoDB supports several read preference modes:
- primary — All reads go to primary (default, strongest consistency)
- primaryPreferred — Read from primary, fall back to secondary
- secondary — All reads go to secondaries
- secondaryPreferred — Read from secondary, fall back to primary
- nearest — Read from the member with lowest network latency
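As a sketch of how nearest behaves: the driver measures round-trip time to each eligible member and routes reads toward the closest one (real drivers also randomize among members within a latency window, localThresholdMS, 15 ms by default). The helper below is a simplified illustration, not driver code:

```javascript
// Simplified "nearest" selection: pick the member with the lowest
// measured round-trip time. Ping values below are illustrative.
function nearestMember(members) {
  return members.reduce((best, m) => (m.pingMs < best.pingMs ? m : best));
}

const candidates = [
  { name: "node1:27017", pingMs: 12 },
  { name: "node2:27017", pingMs: 3 },
  { name: "node3:27017", pingMs: 8 }
];
console.log(nearestMember(candidates).name); // node2:27017
```

Note that nearest may serve stale reads, since the closest member can be a lagging secondary.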
Monitoring the Replica Set
// Check replication lag
rs.printSecondaryReplicationInfo()
// Monitor oplog size and usage
db.getReplicationInfo()
// Check member health
rs.status().members.forEach(m => {
  print(m.name + ": " + m.stateStr + " (lag: " + (m.optimeDate ? (new Date() - m.optimeDate)/1000 + "s" : "N/A") + ")")
})
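The health check above can be turned into an alerting rule. A sketch of lag evaluation over an rs.status()-shaped members array (laggingMembers is a hypothetical helper; the timestamps are illustrative):

```javascript
// Flag secondaries whose replication lag exceeds a threshold.
// `optimeDate` is each member's last applied operation time, as in rs.status().
function laggingMembers(members, primaryOptime, maxLagSeconds) {
  return members
    .filter(m => m.stateStr === "SECONDARY")
    .filter(m => (primaryOptime - m.optimeDate) / 1000 > maxLagSeconds)
    .map(m => m.name);
}

const primaryOptime = new Date("2024-01-01T00:10:00Z");
const members = [
  { name: "node1:27017", stateStr: "PRIMARY",   optimeDate: primaryOptime },
  { name: "node2:27017", stateStr: "SECONDARY", optimeDate: new Date("2024-01-01T00:09:58Z") }, // 2s behind
  { name: "node3:27017", stateStr: "SECONDARY", optimeDate: new Date("2024-01-01T00:08:00Z") }  // 120s behind
];
console.log(laggingMembers(members, primaryOptime, 30)); // [ 'node3:27017' ]
```

Feeding logic like this from a scheduled rs.status() call gives a simple lag alert keyed to your RPO threshold.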
Maintenance Operations
Adding a New Member
rs.add("node4:27017")
Removing a Member
rs.remove("node4:27017")
Forcing a Primary Election
// Step down current primary (for maintenance)
rs.stepDown(300) // Step down for 300 seconds
Production Best Practices
- Always use an odd number of voting members (3, 5, or 7) for clean elections
- Place members in different data centers or availability zones for disaster recovery
- Monitor replication lag and alert if it exceeds your RPO threshold
- Use write concern majority for critical writes to ensure durability
- Size the oplog to cover at least 24-72 hours of operations for maintenance flexibility
- Rely on journaling for crash recovery (always enabled since MongoDB 6.1; it can no longer be turned off)
- Set appropriate priority values to control which node becomes primary
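For example, a majority write concern on a critical insert might look like this in mongosh (collection and document are hypothetical):

```javascript
// Acknowledged only after a majority of data-bearing members have
// persisted the write, so it survives a failover of the primary.
db.orders.insertOne(
  { sku: "A-100", qty: 2 },
  { writeConcern: { w: "majority", wtimeout: 5000 } }
)
```

The wtimeout value bounds how long the client waits for majority acknowledgment before receiving a write concern error (the write itself may still complete).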