MySQL InnoDB Cluster provides a complete high-availability solution for MySQL databases, combining MySQL Group Replication, MySQL Router, and MySQL Shell into an integrated, easy-to-manage package. This guide walks you through setting up a production-ready InnoDB Cluster on your VPS infrastructure.
Prerequisites
Before setting up InnoDB Cluster, ensure you have:
- Three or more VPS instances running Ubuntu 22.04+ or Rocky Linux 9+
- MySQL 8.0.27 or later installed on each node
- Private network connectivity between all nodes
- At least 4GB RAM per node (8GB recommended for production)
- MySQL Shell 8.0.27+ installed on your administration machine
Understanding InnoDB Cluster Architecture
InnoDB Cluster uses a three-layer architecture. Group Replication handles data synchronization between MySQL instances using a Paxos-based consensus protocol. MySQL Router provides transparent connection routing and automatic failover. MySQL Shell serves as the administration interface for cluster setup and management.
In a typical three-node cluster, one node acts as the primary (read-write) and the others serve as secondaries (read-only). If the primary fails, the remaining nodes automatically elect a new primary, typically within seconds.
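Once the cluster is up and running (Step 4 onward), each member's role is visible from any node. A quick check, assuming the example hostnames and the `icadmin` account this guide creates in Step 2:

```shell
# Roles and states are visible from any member once the group is running
mysql -h node1 -u icadmin -p -e "
  SELECT MEMBER_HOST, MEMBER_PORT, MEMBER_STATE, MEMBER_ROLE
  FROM performance_schema.replication_group_members;"
```

A healthy three-node cluster shows one row with MEMBER_ROLE = PRIMARY and two with SECONDARY, all in the ONLINE state.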
Step 1: Configure MySQL Instances
On each MySQL node, edit the configuration file to enable Group Replication prerequisites:
# /etc/mysql/mysql.conf.d/mysqld.cnf (Ubuntu)
# /etc/my.cnf.d/mysql-server.cnf (Rocky Linux)
[mysqld]
server-id = 1 # Unique per node: 1, 2, 3
bind-address = 0.0.0.0
report-host = node1.example.com
# Binary logging (required)
log-bin = mysql-bin
binlog-format = ROW
gtid-mode = ON
enforce-gtid-consistency = ON
# Group Replication settings
plugin-load-add = group_replication.so
group_replication_group_name = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee" # Must be a valid UUID (e.g. from uuidgen), identical on all nodes
group_replication_start_on_boot = OFF
group_replication_local_address = "node1:33061" # This node's own address: unique per node
group_replication_group_seeds = "node1:33061,node2:33061,node3:33061"
# Performance tuning for replication
replica_parallel_workers = 4
replica_preserve_commit_order = ON
binlog_transaction_dependency_tracking = WRITESET
# InnoDB settings
innodb_buffer_pool_size = 2G
innodb_flush_log_at_trx_commit = 1
innodb_redo_log_capacity = 1G
Restart MySQL on each node after applying the configuration changes:
sudo systemctl restart mysql # Ubuntu
sudo systemctl restart mysqld # Rocky Linux
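After the restart, it is worth confirming that the key settings actually took effect before moving on. A quick spot-check (the expected values are the ones configured above):

```shell
# Verify GTID and binlog settings on each node; server_id must differ per node
mysql -u root -p -e "
  SELECT @@server_id, @@gtid_mode, @@enforce_gtid_consistency, @@binlog_format;"
```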
Step 2: Create the Cluster Admin Account
On each node, create an administrative user for cluster operations. Disable binary logging for the session while doing so, so that each node's local user creation does not generate divergent GTID sets before the cluster is formed:
mysql -u root -p
SET sql_log_bin = 0;
CREATE USER 'icadmin'@'%' IDENTIFIED BY 'YourStrongPassword123!';
GRANT ALL PRIVILEGES ON *.* TO 'icadmin'@'%' WITH GRANT OPTION;
SET sql_log_bin = 1;
Step 3: Configure Instances with MySQL Shell
Use MySQL Shell to verify and configure each instance for InnoDB Cluster:
mysqlsh
// Connect and configure each instance
// (the icadmin account was already created in Step 2, so no clusterAdmin option is needed)
dba.configureInstance('icadmin@node1:3306');
dba.configureInstance('icadmin@node2:3306');
dba.configureInstance('icadmin@node3:3306');
MySQL Shell will check each instance and apply any necessary configuration changes. You may need to restart MySQL after this step if configuration changes were required.
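If you want to see what would change before letting MySQL Shell modify anything, AdminAPI also offers a read-only check. For example, run from the administration machine:

```shell
# Read-only validation: reports settings that configureInstance would change,
# without modifying the instance
mysqlsh icadmin@node1:3306 --js -e "print(dba.checkInstanceConfiguration())"
```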
Step 4: Create the Cluster
Connect to the first node and create the cluster:
mysqlsh icadmin@node1:3306
// Create the cluster
var cluster = dba.createCluster('ProductionCluster', {
  memberWeight: 90,
  autoRejoinTries: 3,
  expelTimeout: 5
});
// Add secondary nodes
cluster.addInstance('icadmin@node2:3306', {recoveryMethod: 'clone'});
cluster.addInstance('icadmin@node3:3306', {recoveryMethod: 'clone'});
// Check cluster status
cluster.status();
Step 5: Deploy MySQL Router
MySQL Router provides transparent failover for your applications. Install and bootstrap it on your application servers:
# Install MySQL Router
sudo apt install mysql-router # Ubuntu
sudo dnf install mysql-router # Rocky Linux
# Bootstrap against the cluster (writes the generated configuration to
# /etc/mysqlrouter/mysqlrouter.conf, which the packaged systemd service uses)
sudo mysqlrouter --bootstrap icadmin@node1:3306 --user=mysqlrouter
# Start MySQL Router
sudo systemctl start mysqlrouter
sudo systemctl enable mysqlrouter
After bootstrapping, MySQL Router listens on ports 6446 (read-write) and 6447 (read-only). Point your applications to these ports instead of connecting directly to MySQL instances.
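You can confirm the read-write/read-only split by connecting through each port and checking which backend answers and whether it is writable (127.0.0.1 here assumes Router runs on the application server itself):

```shell
# Through the read-write port you should reach the primary (super_read_only = 0)
mysql -h 127.0.0.1 -P 6446 -u icadmin -p -e "SELECT @@hostname, @@super_read_only;"
# Through the read-only port you should reach a secondary (super_read_only = 1)
mysql -h 127.0.0.1 -P 6447 -u icadmin -p -e "SELECT @@hostname, @@super_read_only;"
```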
Step 6: Verify and Monitor
Check cluster health regularly using MySQL Shell:
mysqlsh icadmin@node1:3306
var cluster = dba.getCluster();
cluster.status({extended: 1});
// Check a specific instance's GTID state relative to the cluster
// (checkInstanceState() is deprecated in recent MySQL Shell releases;
// prefer the per-instance details in cluster.status())
cluster.checkInstanceState('icadmin@node2:3306');
// View router connections
cluster.listRouters();
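Beyond status(), Group Replication exposes per-member statistics in performance_schema; the applier queue length is a practical proxy for replication lag. For example:

```shell
# A persistently growing COUNT_TRANSACTIONS_IN_QUEUE on a member
# indicates it is falling behind the group
mysql -h node1 -u icadmin -p -e "
  SELECT MEMBER_ID, COUNT_TRANSACTIONS_IN_QUEUE, COUNT_TRANSACTIONS_CHECKED,
         COUNT_CONFLICTS_DETECTED
  FROM performance_schema.replication_group_member_stats;"
```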
Handling Failover Scenarios
InnoDB Cluster handles most failover scenarios automatically. If the primary node goes down, the remaining nodes elect a new primary within 5-10 seconds. MySQL Router detects the topology change and redirects connections accordingly.
To manually switch the primary node (for planned maintenance):
var cluster = dba.getCluster();
cluster.setPrimaryInstance('icadmin@node2:3306');
To rejoin a node after maintenance:
cluster.rejoinInstance('icadmin@node1:3306');
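A simple failover drill ties these pieces together. A sketch, assuming node1 is currently the primary and the commands run from an administration machine with SSH access:

```shell
# 1. Stop MySQL on the primary and let the group elect a new one
ssh node1 'sudo systemctl stop mysql'   # service name is mysqld on Rocky Linux
# 2. Watch the topology change from a surviving member
mysqlsh icadmin@node2:3306 --js -e "print(dba.getCluster().status())"
# 3. Bring the old primary back; it should rejoin automatically as a secondary
ssh node1 'sudo systemctl start mysql'
```

Applications connected through MySQL Router should continue working across steps 1-3, which is the behavior worth verifying during the drill.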
Production Best Practices
- Use an odd number of nodes (3 or 5) to ensure proper quorum during network partitions
- Place nodes in different availability zones when possible for geographic redundancy
- Monitor replication lag using performance_schema.replication_group_member_stats
- Set appropriate timeouts: group_replication_member_expel_timeout should balance fast failover against false positives
- Test failover regularly by stopping MySQL on the primary and verifying automatic election
- Use MySQL Router's REST API (port 8443) for health checks in load balancer configurations
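The quorum rule behind the odd-node recommendation is simple arithmetic: a group of N members needs a majority of floor(N/2)+1 to stay writable, so it tolerates floor((N-1)/2) simultaneous failures. A fourth node therefore adds cost without adding failure tolerance:

```shell
# Majority quorum: note that 4 nodes tolerate no more failures than 3
for n in 3 4 5; do
  echo "$n nodes: quorum = $(( n / 2 + 1 )), tolerates $(( (n - 1) / 2 )) failure(s)"
done
# 3 nodes: quorum = 2, tolerates 1 failure(s)
# 4 nodes: quorum = 3, tolerates 1 failure(s)
# 5 nodes: quorum = 3, tolerates 2 failure(s)
```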