1. Introduction
  2. What is MySQL Replication?
    1. Replication Scheme
    2. Global Transaction Identifiers (GTID)
    4. Multi-threaded Slave
    4. Crash-safe Slave
    5. Group Commit
  3. Topology for MySQL Replication
    1. Master with Slaves (Single Replication)
    2. Master with Relay Slave (Chain Replication)
    3. Master with Active Master (Circular Replication)
    4. Master with Backup Master (Multiple Replication)
    5. Multiple Masters to Single Slave (Multi-Source Replication)
    6. Galera with Replication Slave (Hybrid Replication)
  4. Deploying a MySQL Replication Setup
  5. Connecting Application to the Replication Setup
    1. Application Connector
    2. Fabric-aware Connector
    3. Reverse proxy/Load Balancer
  6. Failover with ClusterControl
    1. Failure of Master
    2. Failure of a Slave
  7. Operations - managing your MySQL Replication setup
    1. Show Replication Status
    2. Start/Stop Replication
    3. Promote Slave
    4. Rebuild Replication Slave
    5. Backup
    6. Restore
    7. Upgrade
    8. Configuration Changes
    9. Schema Changes
    10. Topology Changes
  8. Issues and Troubleshooting
    1. Replication Status
    2. Slave Lag
    3. Data Drifting
    4. Errant Transaction
    5. Corrupted Slave
    6. Recommendations

1. Introduction

MySQL Replication is probably the most popular high availability solution for MySQL, and widely used by top web properties like Twitter and Facebook. Although easy to set up, ongoing maintenance tasks like software upgrades, schema changes, topology changes, failover and recovery have always been tricky. At least, that was the case until MySQL 5.6.

Fortunately, MySQL 5.6 brought a number of significant enhancements to Replication, including Global Transaction IDs, event checksums, multi-threaded slaves and crash-safe slaves/masters. Replication got even better with MySQL 5.7.

This tutorial covers basic information about MySQL Replication, with information about the latest features introduced in 5.6 and 5.7. At the end, you should be able to answer questions like:

  • How do I use GTID with replication?
  • How do I recover my setup if my master fails?
  • How do I upgrade the master and slave servers without downtime?
  • How do I change my database schema across all servers?
  • How do I handle slave lag?
  • etc.

There is also a more hands-on, practical section on how to quickly deploy and manage a replication setup using ClusterControl. You would need 4 hosts/VMs if you plan on doing this.

2. What is MySQL Replication?

Replication enables data from one MySQL server (the master) to be replicated to one or more MySQL servers (the slaves). MySQL Replication is very easy to set up, and is used to scale out read workloads, provide high availability and geographic redundancy, and offload backups and analytic jobs.

2.1. Replication Scheme

There are currently two replication schemes supported by MySQL Replication:

  • Asynchronous replication
  • Semi-synchronous replication

There is no restriction on mixing replication schemes in the same topology. Both have their pros and cons. At the time of writing, there is no fully-synchronous solution for MySQL replication.

2.1.1. Asynchronous replication

MySQL Replication by default is asynchronous. This is the oldest, most popular and widely deployed replication scheme. With asynchronous replication, the master writes events to its binary log and slaves request them when they are ready. There is no guarantee that any event will ever reach any slave. It’s a loosely coupled master-slave relationship, where:

  • Master does not wait for Slave
  • Slave determines how much to read and from which point in the binary log
  • Slave can be arbitrarily behind master in reading and applying changes

If the master crashes, transactions that it has committed might not have been transmitted to any slave. Consequently, failover from master to slave in this case may result in failover to a server that is missing transactions relative to the master.

Asynchronous replication provides lower write latency, since a write is acknowledged locally by a master before being written to slaves. It is great for read scaling as adding more replicas does not impact replication latency. Good use cases for asynchronous replication include deployment of read replicas for read scaling, live backup copy for disaster recovery and analytics/reporting.

2.1.2. Semi-synchronous Replication

MySQL also supports semi-synchronous replication, where the master does not confirm transactions to the client until at least one slave has copied the change to its relay log, and flushed it to disk. To enable semi-synchronous replication, extra steps are required: a plugin must be installed and enabled on the designated MySQL master and slave, as sketched below.
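
A minimal sketch of those steps on a Linux installation of MySQL 5.6/5.7, assuming the standard semisync plugins shipped with the server (timeouts and other options are left at their defaults):

On the master:

mysql> INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
mysql> SET GLOBAL rpl_semi_sync_master_enabled = 1;

On each semi-synchronous slave (the IO thread must be restarted for the setting to take effect):

mysql> INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
mysql> SET GLOBAL rpl_semi_sync_slave_enabled = 1;
mysql> STOP SLAVE IO_THREAD;
mysql> START SLAVE IO_THREAD;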

Semi-synchronous replication is a good and practical solution for many cases where high availability and avoiding data loss are important. However, you should consider that it has a performance impact due to the additional round trip, and it does not provide strong guarantees against data loss. When a commit returns successfully, it is known that the data exists in at least two places (on the master and on at least one slave). If the master commits but a crash occurs while the master is waiting for acknowledgment from a slave, it is possible that the transaction has not reached any slave.

A good use case for semi-synchronous replication is a backup master to reduce the impact of a master failure by minimizing the risk of data loss. We’ll explain this in detail under ‘Chapter 3 - Topology for MySQL Replication’.

2.2. Global Transaction Identifier (GTID)

Global Transaction Identifiers (GTIDs) were introduced in MySQL 5.6. A GTID is a unique identifier created and associated with each transaction committed on the server of origin (the master). This identifier is unique not only to the server on which it originated, but across all servers in a given replication setup. There is a one-to-one mapping between all transactions and all GTIDs. Note that MySQL and MariaDB have different GTID implementations, as we’ll explain further down.

2.2.1. Replication in MySQL 5.5 and earlier

In MySQL 5.5, resuming a broken replication setup required you to determine the last binary log file and position, which differ from node to node if binary logging is enabled. If the MySQL master fails, replication breaks and the slave will need to switch to another master. You will need to promote the most up-to-date slave node to be a master, and manually determine the new binary log file and position of the last transaction executed by the slave. Another option is to dump the data from the new master node, restore it on the slave and start replication with the new master node. These options are of course doable, but not very practical in production.

2.2.2. How GTID Solves the Problem

GTID (Global Transaction Identifier) provides a better mapping of transactions across nodes. In MySQL 5.5 or earlier, replication works in such a way that each node generates its own binlog files. The binlog events are the same and in the same order, but the binlog file names and offsets may vary. With GTID, slaves can see a unique transaction coming in from several masters, and this can easily be mapped into the slave execution list if it needs to restart or resume replication.

Every transaction has a unique identifier which identifies it in the same way on every server. It’s not important anymore in which binary log position a transaction was recorded; all you need to know is the GTID: ‘966073f3-b6a4-11e4-af2c-080027880ca6:4’. A GTID is built from two parts - the unique identifier of the server where the transaction was first executed, and a sequence number. In the above example, we can see that the transaction was executed by the server with server_uuid ‘966073f3-b6a4-11e4-af2c-080027880ca6’, and it was the 4th transaction executed there. This information is enough to perform complex topology changes - MySQL knows which transactions have been executed and therefore it knows which transactions need to be executed next. Forget about binary logs, it’s now all in the GTID.

All necessary information for synchronizing with the master can be obtained directly from the replication stream. When you are using GTIDs for replication, you do not need to include MASTER_LOG_FILE or MASTER_LOG_POS options in the CHANGE MASTER TO statement; instead, it is necessary only to enable the MASTER_AUTO_POSITION option.
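
For example, pointing a slave to the master used later in this tutorial (192.168.55.111) with GTID auto-positioning could look like the following sketch; the replication user ‘slave’ matches the deployment section, while the password is a placeholder for your own credentials:

mysql> STOP SLAVE;
mysql> CHANGE MASTER TO MASTER_HOST='192.168.55.111', MASTER_USER='slave', MASTER_PASSWORD='slavepassword', MASTER_AUTO_POSITION=1;
mysql> START SLAVE;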

2.2.3. MariaDB GTID vs MySQL GTID

MariaDB has a different implementation of Global Transaction ID (GTID), and is enabled by default starting from MariaDB 10.0.2. A MariaDB GTID consists of three separated values:

  • Domain ID - Replication domain. A replication domain is a server or group of servers that generate a single, strictly ordered replication stream.
  • Server ID - Server identifier number to enable master and slave servers to identify themselves uniquely.
  • Event Group ID - A sequence number for a collection of events that are always applied as a unit. Every binlog event group (e.g. transaction, DDL, non-transactional statement) is annotated with its GTID.

The figure below illustrates the differences between the two GTIDs:

In MariaDB, there is no special configuration needed on the server to start using GTID. Some of MariaDB GTID advantages:

  • It is easy to identify which server or domain the event group is originating from
  • You do not necessarily need to turn on binary logging on slaves
  • It allows multi-source replication with distinct domain ID
  • Enabling GTID features is dynamic, you don’t have to restart the MariaDB server
  • The state of the slave is recorded in a crash-safe way

Despite the differences between these two, it is still possible to replicate from MySQL 5.6 to MariaDB 10.0 or vice versa. However, you will not be able to use the GTID features to automatically pick the correct binlog position when switching to a new master. Old-style MySQL replication will work.

Further reading can be found on the MariaDB GTID and MySQL GTID documentation pages.

2.3. Multi-threaded Slave

MySQL 5.6 allows you to execute replicated events in parallel as long as the data is split across several databases. This feature is named “Multi-Threaded Slave” (MTS), and it is easy to enable by setting slave_parallel_workers to a value greater than 1. In MySQL 5.7, it can be used for any workload, including intra-schema, unlike 5.6 where it could only be applied with one thread per schema.
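
A minimal my.cnf sketch for the slave; the worker count of 4 is just an example value, and slave_parallel_type applies to MySQL 5.7 only:

[mysqld]
# apply replicated events with 4 parallel worker threads (MySQL 5.6 and later)
slave_parallel_workers = 4
# MySQL 5.7 only: parallelize within a single schema as well
slave_parallel_type = LOGICAL_CLOCK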

2.4. Crash-safe Slave

Crash-safe means that even if the slave’s mysqld process or the OS crashes, you can recover the slave and continue replication without restoring the MySQL databases onto the slave. To make a crash-safe slave work, you have to use the InnoDB storage engine only, and in 5.6 you need to set relay_log_info_repository=TABLE and relay_log_recovery=1.

Durability (sync_binlog = 1 and innodb_flush_log_at_trx_commit = 1) is NOT required.
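
A minimal my.cnf sketch for a crash-safe slave on MySQL 5.6/5.7; master_info_repository is not strictly required for relay log recovery but is commonly set alongside it:

[mysqld]
# store the applied replication position in an InnoDB table instead of a file
relay_log_info_repository = TABLE
# discard unprocessed relay logs after a crash and re-fetch them from the master
relay_log_recovery = 1
# optional: also keep the master connection metadata in a table
master_info_repository = TABLE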

2.5. Group Commit

InnoDB, like any other ACID-compliant database engine, flushes the redo log of a transaction before it is committed. InnoDB uses group commit functionality to group multiple such flush requests together to avoid one flush for each commit. With group commit, InnoDB issues a single write to the log file to perform the commit action for multiple user transactions that commit at about the same time, significantly improving throughput.

3. Topology for MySQL Replication

3.1. Master with Slaves (Single Replication)

This is the most straightforward MySQL replication topology. One master receives writes, and one or more slaves replicate from the same master via asynchronous or semi-synchronous replication. If the designated master goes down, the most up-to-date slave must be promoted to new master. The remaining slaves then resume replication from the new master.

3.2. Master with Relay Slaves (Chain Replication)

This setup uses an intermediate master to act as a relay to the other slaves in the replication chain. When there are many slaves connected to a master, the network interface of the master can get overloaded. This topology allows the read replicas to pull the replication stream from the relay server, offloading the master server. On the slave relay server, binary logging and log_slave_updates must be enabled, so that updates received by the slave server from the master server are logged to the slave's own binary log.

Using slave relay has its problems:

  • log_slave_updates has some performance penalty.
  • Replication lag on the slave relay server will generate delay on all of its slaves.
  • Rogue transactions on the slave relay server will propagate to all of its slaves.
  • If a slave relay server fails and you are not using GTID, all of its slaves stop replicating and they need to be reinitialized.

3.3. Master with Active Master (Circular Replication)

Also known as ring topology, this setup requires two or more MySQL servers which act as masters. All masters receive writes and generate binlogs, with a few caveats:

  • You need to set auto-increment offsets on each server to avoid primary key collisions (see the configuration sketch after this list).
  • There is no conflict resolution.
  • MySQL Replication currently does not support any locking protocol between master and slave to guarantee the atomicity of a distributed update across two different servers.
  • Common practice is to only write to one master and the other master acts as a hot-standby node. Still, if you have slaves below that tier, you have to switch to the new master manually if the designated master fails.
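
With two masters, for instance, the auto-increment settings could be configured as sketched below so that one server generates odd values and the other even values (the server names are illustrative):

# my.cnf on master1 - generates auto-increment values 1, 3, 5, ...
auto_increment_increment = 2
auto_increment_offset = 1

# my.cnf on master2 - generates auto-increment values 2, 4, 6, ...
auto_increment_increment = 2
auto_increment_offset = 2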

ClusterControl does not support this topology. ClusterControl will raise an alarm because two or more masters are currently running.

3.4. Master with Backup Master (Multiple Replication)

The master pushes changes to a backup master and to one or more slaves. Semi-synchronous replication is used between master and backup master. Master sends update to backup master and waits with transaction commit. Backup master gets update, writes to its relay log and flushes to disk. Backup master then acknowledges receipt of the transaction to the master, and proceeds with transaction commit. Semi-sync replication has a performance impact, but the risk for data loss is minimized.

This topology works well when performing master failover in case the master goes down. The backup master acts as a warm-standby server as it has the highest probability of having up-to-date data when compared to other slaves.

3.5. Multiple Masters to Single Slave (Multi-Source Replication)

Multi-Source Replication enables a replication slave to receive transactions from multiple sources simultaneously. Multi-source replication can be used to backup multiple servers to a single server, to merge table shards, and consolidate data from multiple servers to a single server.

MySQL and MariaDB have different implementations of multi-source replication, where MariaDB must have GTID with gtid-domain-id configured to distinguish the originating transactions while MySQL uses a separate replication channel for each master the slave replicates from. In MySQL, masters in a multi-source replication topology can be configured to use either global transaction identifier (GTID) based replication, or binary log position-based replication.
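
As an illustration of the MySQL 5.7 approach, a slave could replicate from two masters over named channels as sketched below; the hosts, credentials and channel names are placeholders, and multi-source replication also requires master_info_repository=TABLE and relay_log_info_repository=TABLE:

mysql> CHANGE MASTER TO MASTER_HOST='master1', MASTER_USER='slave', MASTER_PASSWORD='slavepassword', MASTER_AUTO_POSITION=1 FOR CHANNEL 'master1';
mysql> CHANGE MASTER TO MASTER_HOST='master2', MASTER_USER='slave', MASTER_PASSWORD='slavepassword', MASTER_AUTO_POSITION=1 FOR CHANNEL 'master2';
mysql> START SLAVE FOR CHANNEL 'master1';
mysql> START SLAVE FOR CHANNEL 'master2';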

More on MariaDB multi source replication can be found in this blog post. For MySQL, please refer to the MySQL documentation.

3.6. Galera with Replication Slave (Hybrid Replication)

Hybrid replication is a combination of MySQL asynchronous replication and virtually synchronous replication provided by Galera. The deployment is now simplified with the implementation of GTID in MySQL replication, where setting up and performing master failover has become a straightforward process on the slave side.

Galera cluster performance is as fast as the slowest node. Having an asynchronous replication slave can minimize the impact on the cluster if you send long-running reporting/OLAP type queries to the slave, or if you perform heavy jobs that require locks like mysqldump. The slave can also serve as a live backup for onsite and offsite disaster recovery.

Hybrid replication is supported by ClusterControl and you can deploy it directly from the ClusterControl UI. For more information on how to do this, please read the blog posts - Hybrid replication with MySQL 5.6 and Hybrid replication with MariaDB 10.x.

4. Deploying a MySQL Replication Setup

We’ll now deploy a MySQL replication topology consisting of one master and two slaves, using ClusterControl. Our architecture is illustrated below:

Install ClusterControl by following the instructions on the Getting Started page. Do not forget to set up passwordless SSH from ClusterControl to all nodes (including the ClusterControl node itself). We are going to use the root user for deployment. On the ClusterControl node, run:

$ ssh-keygen -t rsa
$ ssh-copy-id 192.168.55.110
$ ssh-copy-id 192.168.55.111
$ ssh-copy-id 192.168.55.112
$ ssh-copy-id 192.168.55.113

Open the ClusterControl UI, go to ‘Create Database Node’ and open the ‘MySQL Replication Master’ tab. In the dialog, specify the required information:

  • Vendor - Currently supported vendor is Percona Server
  • Version - MySQL major version. MySQL 5.6 is recommended.
  • Template - MySQL configuration template for the master. Leave it empty to use the default template located under /usr/share/cmon/templates. ClusterControl will use my.cnf.gtid_replication for MySQL 5.6 and my.cnf.replication for MySQL 5.5.
  • Hostname - IP address or hostname of the server. ClusterControl must be able to reach the specified server using passwordless SSH.
  • Password - MySQL root password. ClusterControl will set this up for you.
  • SSH User - Specify the SSH user that ClusterControl will use to connect to the target host.
  • SSH Key Path - Passwordless SSH requires an SSH key. Specify the physical path to the key file here.
  • Replication User - User that you are going to use for replication. ClusterControl will set this up for you.
  • Replication Password - Password for Replication User.
  • Use Internal Repositories - By default, the installation requires internet access to install the software. Using internal repositories requires that they are prepared beforehand; this option will point to the local repositories you wish to install the software from. ClusterControl will skip setting up the Percona repository on the target node.

After we have filled in the required information, click on ‘Deploy’ to start the deployment.

You can monitor the deployment progress from the ‘Installation Progress’ dialog when you click on the spinning arrow icon on the top menu:

ClusterControl performs the following tasks on master1:

  1. verifies SSH connectivity,
  2. installs the specified MySQL Server,
  3. creates datadir and installs system tables,
  4. creates/grants the mysql user for MySQL Server,
  5. grants CMON user from the ClusterControl server,
  6. configures replication role for MySQL master with GTID,
  7. verifies the deployment,
  8. registers the node with the ClusterControl server.

The Replication master node (master1) is now deployed. Next, we add both slave nodes. Go to ClusterControl > select the database cluster > Actions > Add Node > Create and add a new DB Node and enter the relevant information for slave1:

Click ‘Add Node’. Monitor the progress at ClusterControl > Logs > Jobs and you should see something like below:

ClusterControl performs the following tasks when creating a new slave:

  1. verifies SSH connectivity,
  2. installs the same MySQL Server as the master from repository,
  3. creates datadir and installs system tables,
  4. creates/grants the mysql user for MySQL Server,
  5. grants CMON user from the ClusterControl server,
  6. stages the data on slave from the available master,
  7. configures replication role for MySQL slave with GTID,
  8. starts the replication,
  9. verifies the deployment,
  10. registers the node under the corresponding “cluster ID” in the ClusterControl server.

We can see from the status that the replication between slave1 and master1 has been started and the deployment status returned OK. At this point, slave1 is replicating from master1.

Next, add the second slave, slave2, by repeating the same steps as for slave1. Go to ClusterControl > select the database cluster > Actions > Add Node > Create and add a new DB Node and fill in the relevant information for slave2:

Click ‘Add Node’. The deployment should begin and at the end of it you will see the summary of the replication from the Overview page.

Our MySQL Replication setup is now deployed. By default, ClusterControl automatically configures all slaves as read-only.

5. Connecting Application to the Replication Setup

By now, we should have a MySQL replication setup ready. The next thing is to import an existing database or to create a brand new database for a new application. When designing or deploying your application, just bear in mind that all write operations (statements/queries that change the state of a database) must be executed ONLY on the master server. Examples of write operations are statements that contain the following:

  • DDL - CREATE, ALTER, DROP, TRUNCATE, RENAME
  • DML - INSERT, DELETE, UPDATE, REPLACE
  • DCL - GRANT, REVOKE

Read operations can be executed on any of the servers in the replication setup. Slaves should therefore be started in read-only mode. Applications will not be able to modify data directly on the slaves, but the replication stream will still be able to update data on the read-only server.
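
Read-only mode can be set in my.cnf or at runtime, for example as follows (MySQL 5.7 additionally offers super_read_only to block writes even from users with the SUPER privilege):

# in my.cnf on each slave
[mysqld]
read_only = 1

# or at runtime on a running slave
mysql> SET GLOBAL read_only = 1;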

In simple words, your application must be able to send writes to the master server and reads to the slave servers. If your application is not able to do this, you can use other options like application connectors or load balancers that support query routing with read-write split to minimize the changes on the application side.

5.1. Application connector

If your application runs on PHP, you can use MySQL native driver (mysqlnd) to perform read/write splitting without big changes on the application side. Since the connector itself performs the routing, the extra network latency involved in proxy-based solutions is avoided.

One of the major drawbacks with the application connector is that you have to maintain it on each of the application servers. For example, if a slave has been promoted to a new master, the configuration must be updated on every application server. Having another tier that manages database availability is recommended. This is where a reverse proxy, aka load balancer, comes in handy.

5.2. Fabric-aware connector

Oracle released MySQL Fabric, an extensible framework for managing farms of MySQL Servers. At the time of writing, it supports two main categories of features - High Availability and scaling out using data sharding. For High Availability, MySQL Fabric manages the replication relationships, detects the failure of the master and automatically promotes one of the slaves to be the new master. As for sharding, the admin can define how data is partitioned between shards - e.g., which table columns are to be used as shard keys, and how to map the keys to the correct shard (HASH or RANGE). This is all transparent to the application.

MySQL Connectors are used by the application code to access the database(s), converting instructions from a specific programming language to the MySQL wire protocol, which is used to communicate with the MySQL Server processes. A 'Fabric-aware' connector stores a cache of the routing information that it has received from the mysqlfabric process and then uses that information to send transactions or queries to the correct MySQL Server. Currently the three supported Fabric-aware MySQL connectors are for PHP, Python and Java.

5.3. Reverse proxy/Load Balancer

At the time of writing, there are several reverse proxies that support read-write splitting, e.g. MaxScale, ProxySQL and MySQL Router (beta). ClusterControl v1.2.11 supports MaxScale deployment right from the UI. We have covered this in detail in this blog post. By having MaxScale as the middle man, the application side does not need to perform health checks for slave consistency, replication lag or master/slave availability, as these tasks are taken care of by MaxScale. Applications just have to send queries to the load balancer servers, and the queries are then re-routed to the correct backends.

It is possible to install the load balancer on the same host as the application server or database server, but this is not recommended. The best practice is to install the load balancer on a separate server, independent from the application and database tiers. This is to reduce management complexity with a more straightforward architecture, and to make it easier to expand in the future.

By adding a reverse-proxy, our architecture should look like this:

For example deployment and more details, please refer to our blog post, How to Deploy and Manage MaxScale using ClusterControl.

6. Failover with ClusterControl

In order to keep your replication setup stable and running, it is important for the system to be resilient to failures. Failures are caused by either software bugs, configuration problems or hardware issues, and can happen at any time. In case a server goes down, ClusterControl will raise an alarm about the degraded setup and you will be notified via email or pager. Failover (promotion of a slave to master) can be performed via ClusterControl, and the admin needs to decide which slave to promote. Usually it will be the slave that has the most up-to-date data.

How do you decide which slave is most up-to-date? The process is different depending on whether you are using GTID or not. GTID makes it easier, although you could run into issues like errant transactions. Failover of replication is covered in detail in “Become a MySQL DBA” - Common Operations - Replication Topology Changes.

6.1. Failure of master server

Writes are done on the master server only. If the master fails, replication will stop. Failover must be done by promoting the most up-to-date slave to master, so the replication setup can resume operation. Applications doing updates must then reconnect to the newly promoted master and continue to operate.

If the master (master1) is down, we need to promote one of the slaves (slave1) to become a master. To achieve this, go to ClusterControl > Nodes > choose slave1 > Promote Slave:

Once selected, click ‘Execute’. You will be prompted with the following:

Effectively, the selected slave has become the new master. The existing slaves will automatically failover to the new master and continue to get updates while the old master is down.

When the old master comes up again, it will synchronize with the new master (which is handling application updates). This is orchestrated by ClusterControl. The following screenshot shows the old master (192.168.55.111) has become a slave in the replication chain:

Once the old master is up-to-date with the new master, it will remain a slave.

6.2. Failure of a slave

If a slave fails, applications connected to the slave can connect to another slave and continue to operate. ClusterControl will show the current status of the failed slave in the Overview page:

When the slave comes up again, ClusterControl will automatically resume the replication with the master and make the slave available to the applications. Depending on how long the slave is lagging behind, or whether ClusterControl was unable to resume the replication due to a certain error, you can resynchronize the missing transactions manually. Or you can use the ‘Stage Replication Slave’ feature to rebuild the slave. In this case, ClusterControl will delete the old data on the slave, take a dump of the master data and provision the slave with it, before finally reconnecting it with the master.

At this point, the Replication cluster has been restored to its original topology.

7. Operations - managing your MySQL Replication setup

How you choose to deploy a replication setup also affects how you should manage it. In this section, we will assume a single master with multiple slaves, as deployed in section 4 of this tutorial. We’ll see how we can manage different operational tasks using ClusterControl.

7.1. Show Replication Status

You can find a summary of MySQL Replication status directly from the summary bar in the database cluster list. The Replication cluster status can be ACTIVE, FAILED or DEGRADED.

You can find further details on the master status, slave status and host statistics directly from the Cluster Overview page:

7.2. Start/Stop Replication

ClusterControl supports starting or stopping a slave from its UI. It’s similar to performing ‘STOP SLAVE’ and ‘START SLAVE’ via command line.

If either SQL or IO threads are stopped, ClusterControl will list out an extra option to start/stop the thread.

7.3. Promote Slave

Promoting a slave to master might be required if e.g. the master server goes down, or in case you would like to perform maintenance on the master host. Assuming you have GTID-based replication configured, you are able to promote a slave to master easily using ClusterControl. If the master is currently functioning correctly, then make sure you stop application queries before promoting another slave. This is to avoid data loss. Connections on the currently running master will be killed by ClusterControl after a 10 second grace period.

7.4. Rebuild Replication Slave

In case a slave gets corrupted, or it does not sync with the master for some reason, you might want to rebuild it. With ClusterControl, you would be able to rebuild a replication slave using the data from the master. It uses Percona Xtrabackup to stage the replication data on the slave. Note that this feature will wipe out the MySQL datadir of the slave.

Before proceeding with the rebuilding process from ClusterControl, you have to choose the available master. The slave process will be started automatically once the rebuilding completes.

7.5. Backup

We have blogged previously about backup strategies for MySQL. ClusterControl supports mysqldump and xtrabackup (full and incremental) to perform backups. Backups can be performed or scheduled on any database node (master or slaves) and stored locally or centrally on the ClusterControl node. When storing backups on the ClusterControl node, the backup is first created on the target database node and then streamed over using netcat to the controller node. You can also choose to back up individual databases or all databases. Backup progress is available under the Backup > Reports section and you will get a notification on the backup status each time it is created.

To create a backup, use Backup > Schedules > Start a Backup Immediately.

To schedule backups, scroll further down and configure the scheduling:

Backups created by ClusterControl can be restored on one of the database nodes.

7.6. Restore

ClusterControl has the ability to restore backups (mysqldump and xtrabackup) created by ClusterControl or externally via some other tool. For external backups, the backup files must exist on the ClusterControl node, and only xbstream, xbstream.gz and tar.gz extensions are supported.

The following steps will be performed:

  1. Stop all nodes in the replication setup.
  2. Copy the backup files to the selected server.
  3. Restore the backup.
  4. Once the restore job is completed, start the restored node under ClusterControl > Nodes > select the restored node > Start Node.
  5. Once started, promote the node as a new master (if it wasn’t a master) at ClusterControl > Nodes > select the restored node > Promote Slave.
  6. On each slave, rebuild the replication slave by going to ClusterControl > Nodes > slave node > Stage Replication Slave.

7.7 Software Upgrade

You can perform a database software upgrade via ClusterControl > Manage > Upgrades > Upgrade. Upgrades are online and are performed on one node at a time. One node will be stopped, then the software is updated through package manager and finally the node is started again. If a node fails to upgrade, the upgrade process is aborted. Upgrades should only be performed when there is as little traffic as possible on the database hosts.

You can monitor the MySQL upgrade progress from ClusterControl > Logs > Jobs, as shown in the following screenshot:

ClusterControl performs upgrade of MySQL Replication setup by upgrading all slaves, one at a time. Once all slaves have been upgraded, verify the new version is correct from the Cluster Overview page. Then, promote an upgraded slave (using ‘Promote Slave’) to become the new master. Finally, upgrade the old master by repeating the same upgrade step.

7.8 Configuration Changes

System variables are found in my.cnf. Some of the variables are dynamic and can be set at runtime, others are not. ClusterControl provides an interface to update MySQL configuration parameters on all DB instances at once. Select the DB instance(s), configuration group and parameter, and ClusterControl will perform the necessary changes via SET GLOBAL (if possible) and also make them persistent in my.cnf.

If restart is required, ClusterControl will acknowledge that in the 'Config Change Log' dialog:

More information in this blog post, Updating your MySQL Configuration.

7.9 Schema Changes

Traditionally, a schema change in MySQL was a blocking operation - a table had to be locked for the duration of the ALTER. In MySQL replication, some ALTERs may lock writes on the master and create replication lag. The reason is MySQL replication is single-threaded and if the SQL thread is executing an ALTER statement, it won’t execute anything else. It is also important to understand that the slave is able to start replicating the schema change only after it has completed on the master. This results in a significant amount of time needed to complete changes on the slave: time needed for a change on the master plus time needed for a change on the slave.

Luckily, there are ways to perform this operation online:

  • Rolling schema update - take one of the slaves out of rotation, execute ALTERs, bring it back, rinse and repeat until all slaves have been updated. Once that’s done, promote one of the slaves to master, run ALTER on the old master, bring it back as a slave.
  • Online schema changes - pt-online-schema-change by Percona or Online Schema Change by Facebook.

Each method has its own pros and cons. More details in this blog post, Become a MySQL DBA blog series - Common operations - Schema Changes.

7.10 Topology Changes

Replication topology changes and failover processes are common operations, albeit complex. Changes are usually needed to help scale out, to distribute your database across multiple regions or data centers, or to perform software/hardware maintenance operations. The initial setup of a replication topology is simple, but as soon as you start changing it, things can quickly get complex.

Depending on whether you are running GTID-based or standard binlog-position-based replication, the failover steps are different and require close attention. We have discussed this in detail in this webinar on Replication Topology Changes for MySQL and MariaDB, as well as in this blog post - DBA Operations - Replication Topology Changes.

8. Issues and Troubleshooting

Because it is simple to set up, MySQL Replication is probably the most widely used mechanism to provide high availability. Unfortunately, it is also somewhat fragile:

  • Failover is not automatic and has to be performed by somebody who is skilled.
  • Slaves can easily end up with different data to the master, due to hardware problems, software bugs or the use of non-deterministic functions. Diverging datasets on master and slave servers causes replication to stop.
  • A crashing master can cause corruption of the binary log. When it is restarted, the slave servers would not be able to continue from the last binary log position.
  • GTID-based failover is exposed to errant transactions. We describe this further down in this tutorial, as well as in this blog.
  • Slave lag can be a nightmare when your application reads out-of-date data from a slave.
  • It is possible to set up two-way replication between two MySQL servers. However, ring topologies are not recommended. MySQL Replication currently does not support any locking protocol between master and slave to guarantee the atomicity of a distributed update across two different servers.

8.1 Replication Status

The replication status can only be checked from a replicating slave by using the following statement:

mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 192.168.55.111
                  Master_User: slave
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: binlog.000005
          Read_Master_Log_Pos: 911532980
               Relay_Log_File: relay-bin.000004
                Relay_Log_Pos: 911533144
        Relay_Master_Log_File: binlog.000005
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
              Replicate_Do_DB:
          Replicate_Ignore_DB:
           Replicate_Do_Table:
       Replicate_Ignore_Table:
      Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
                   Last_Errno: 0
                   Last_Error:
                 Skip_Counter: 0
          Exec_Master_Log_Pos: 911532980
              Relay_Log_Space: 911533311
              Until_Condition: None
               Until_Log_File:
                Until_Log_Pos: 0
           Master_SSL_Allowed: No
           Master_SSL_CA_File:
           Master_SSL_CA_Path:
              Master_SSL_Cert:
            Master_SSL_Cipher:
               Master_SSL_Key:
        Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
                Last_IO_Errno: 0
                Last_IO_Error:
               Last_SQL_Errno: 0
               Last_SQL_Error:
  Replicate_Ignore_Server_Ids:
             Master_Server_Id: 1
                  Master_UUID: a2bac331-a899-11e5-98f0-000c29901dfb
             Master_Info_File: /var/lib/mysql/master.info
                    SQL_Delay: 0
          SQL_Remaining_Delay: NULL
      Slave_SQL_Running_State: Slave has read all relay log; waiting for the slave I/O thread to update it
           Master_Retry_Count: 86400
                  Master_Bind:
      Last_IO_Error_Timestamp:
     Last_SQL_Error_Timestamp:
               Master_SSL_Crl:
           Master_SSL_Crlpath:
           Retrieved_Gtid_Set: a2bac331-a899-11e5-98f0-000c29901dfb:10-1937
            Executed_Gtid_Set: a2bac331-a899-11e5-98f0-000c29901dfb:1-1937

The following status variables are the main indicator that replication works as expected:

            Slave_IO_Running: Yes
           Slave_SQL_Running: Yes
       Seconds_Behind_Master: 0
            Master_Server_Id: 1

The above indicates that the slave’s IO and SQL threads are running, replicating from the master server (server-id=1) with no replication lag (Seconds_Behind_Master is 0). Apart from the above slave status, you can also use the following statements:

  • SELECT @@global.gtid_executed - Shows applied transactions
  • SELECT @@global.gtid_purged - Shows transactions that have been applied but already purged from the binary logs

8.2 Replication Lag

Replication lag is the number of seconds that the slave is behind the master. If it happens, your application might read old data from the slave. The application then needs to work around this when retrieving data from a lagging slave. For example, you might configure the application to retrieve data from a slave only when Seconds_Behind_Master is equal to 0 on that slave; otherwise, the application falls back on the master to retrieve the data.
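
A minimal sketch of such a lag check against one of the slaves deployed earlier (192.168.55.112 is used here as an example slave address):

$ mysql -h 192.168.55.112 -u root -p -e "SHOW SLAVE STATUS\G" | grep Seconds_Behind_Master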

MySQL replication works with two threads, IO_THREAD & SQL_THREAD. For IO_THREAD, the slave:

  1. connects to a master,
  2. reads binary log events from the master as they come in,
  3. copies them over to a local log file called the relay log.

For the SQL_THREAD, the slave:

  1. reads events from a relay log, stored locally on the replication slave (the file that was written by IO thread)
  2. applies them as fast as possible.

Whenever replication lag happens, it’s important to determine whether the delay is on the slave IO_THREAD or the slave SQL_THREAD. Normally, the I/O thread would not cause a big replication delay, as it is just reading the binary logs from the master. However, it depends on the network connectivity and latency between the servers. The slave I/O thread could be slow because of high bandwidth usage. Usually, when the slave IO_THREAD is able to read binary logs quickly enough, it copies and piles up the relay logs on the slave - which is one indication that the slave IO_THREAD is not the culprit of slave lag.

When the slave SQL_THREAD is the source of replication delays, it is probably because the queries coming from the replication stream are taking too long to execute on the slave. This is sometimes due to different hardware between master and slave, different schema indexes, or workload. Moreover, the slave OLTP workload sometimes causes replication delays because of locking. Take note that replication is single-threaded prior to MySQL 5.6, which would be another reason for delays on the slave SQL_THREAD.

8.3 Data Drifting

Though the main purpose of replication is to have exact copies of data across the replication setup, data drifting can still happen between a MySQL master and its replicas. This can happen if there is transaction rollback on a non-transactional storage engine, a non-deterministic statement with statement-based replication, software bugs or human/application mistakes. It is also necessary to check slave consistency after a master failover event, as data drifting might happen after a new master is promoted.

You can use Percona Toolkit’s pt-table-checksum to perform an online replication consistency check by executing checksum queries on the master, which produces different results on replicas that are inconsistent with the master. Then, you can apply the missing transactions manually or use pt-table-sync to resynchronize the slave.
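
A sketch of how these tools are typically invoked against the master; the host and credentials are placeholders, and pt-table-sync is shown with --print so it only reports the statements it would execute:

$ pt-table-checksum --replicate=percona.checksums h=192.168.55.111,u=root,p=rootpassword
$ pt-table-sync --replicate=percona.checksums h=192.168.55.111,u=root,p=rootpassword --print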

Using row-based replication (by setting binlog_format=ROW) is also a safe bet to reduce the risk of data drifting. With row-based replication, the master writes events to the binary log that indicate how individual table rows are changed. Replication of the master to the slave works by copying the events representing the row changes to the slave.
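
Row-based replication is enabled with a single setting on the master, for example in my.cnf (it can also be changed at runtime with SET GLOBAL binlog_format='ROW', which affects only new sessions):

[mysqld]
binlog_format = ROW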

8.4 Errant Transaction

Errant transactions are transactions that are executed directly on a slave in GTID-based replication. Thus, they only exist on a specific slave. This could be the result of a mistake, e.g. the application wrote to a slave instead of writing to the master, or it could be by design, e.g. you need additional tables for reports. It can cause data corruption or replication errors if a slave with an errant transaction is promoted to the new master. The main issue with errant transactions is that, when failing over, the slave may execute transactions ‘coming from nowhere’ that can silently corrupt your data or break replication.

If you find an errant transaction on one server, there are two ways to overcome it:

  • Either commit an empty transaction with the GTID of the errant one on all other servers (see the sketch after this list);
  • Or, remove the corresponding GTID on the offending slave.
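
A minimal sketch of the first approach, reusing the example GTID shown earlier in this tutorial; it is run on every server that has not executed the errant transaction:

mysql> SET GTID_NEXT='966073f3-b6a4-11e4-af2c-080027880ca6:4';
mysql> BEGIN;
mysql> COMMIT;
mysql> SET GTID_NEXT='AUTOMATIC';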

The bottom line is, before a slave is promoted to be a master, it is necessary to check for errant transactions. We have covered this topic in detail in this blog post, MySQL Replication and GTID-based failover - A Deep Dive into Errant Transactions.

8.5 Corrupted Slave

A corrupted slave happens when the relay logs are corrupted. A relay log is a log file of the binary log events coming from the master via the replication IO thread. In case of corruption, replication will stop on the slave. There are multiple reasons that could lead to this problem: it could be the network (especially if replicating over unreliable long-distance networks), MySQL bugs on the master or slave, hardware problems and a few others.

Firstly, verify whether the corruption happened on the master or the slave. A good indicator is if the other slaves are replicating without error: in that case, it is most likely that only the relay log on that particular slave is corrupted. To fix it, simply re-point the replication on the slave to Relay_Master_Log_File:Exec_Master_Log_Pos, using the values from SHOW SLAVE STATUS:

(corrupted slave)> STOP SLAVE;
(corrupted slave)> CHANGE MASTER TO master_log_file='<Relay_Master_Log_File>', master_log_pos=<Exec_Master_Log_Pos>;
(corrupted slave)> START SLAVE;

8.6 Recommendations

  • Use Global Transaction Identifier (GTID) based replication for simpler deployment and failover.
  • Use InnoDB storage engine since it provides full transaction capability with ACID compliance and better crash recovery.
  • Replication only flows in one direction, applications only write on the master.
  • Deploy a backup master; the master pushes changes to the backup master and to one or more slaves:
    • Use semi-synchronous replication between master and backup master
    • Master sends update to backup master and waits with transaction commit
    • Backup master gets update, writes to its relay log and flushes to disk. Backup master then acknowledges receipt of the transaction to the master.
    • Master proceeds with transaction commit.
    • Semi-sync replication has a performance impact, but the risk for data loss is minimized
  • Allow only the replication process to make changes on the slaves, so as to minimize the risk of data conflicts on the slaves:
    • Slaves should therefore be started in read-only mode
    • Applications will not be able to modify data directly on the slaves, but the Replication process will still function on a read-only server
  • Replication may send large packets between the servers; set max_allowed_packet to a high value so as to avoid replication errors.
  • Binary logs need to be available to bring a new slave up to date
    • Provisioning of a new slave requires a combination of the last backup, and all transactions that happened after the backup
  • Replication connection parameters should not be placed in the my.cnf file. For instance, a slave may have its configuration file overwritten, and not know from what point to continue the replication.