How to use Cluster-to-Cluster Replication in a Galera Cluster

Sebastian Insausti

Previously, we announced a new ClusterControl 1.7.4 feature called Cluster-to-Cluster Replication. It automates the entire process of setting up a disaster recovery cluster off your primary cluster, with replication in between.

With this feature:

  • One node in each cluster will replicate from another node in the other cluster.

  • The replication will be bi-directional between the clusters.

  • All nodes in the replica cluster will be read-only by default (see the quick check after this list).

  • Active-Active clustering is recommended only if the applications touch disjoint data sets on each cluster, since the engine doesn’t offer any conflict detection or resolution.
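
To confirm that read-only default, you can query the flag directly on any node of the Replica Cluster. A minimal check, assuming you have MySQL access to the node:

$ mysql -p -e "SELECT @@global.read_only;"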

You can find more information about what Cluster-to-Cluster Replication is here and how to configure it here. But now, let’s demonstrate how to actually implement Cluster-to-Cluster Replication.

For this walkthrough, we will assume you have ClusterControl installed and Cluster-to-Cluster (C2C) Replication working. Otherwise, please refer to the previously mentioned blog posts.

Cluster-to-Cluster Requirements

First, let’s quickly review the Cluster-to-Cluster configuration to make sure you have the following in place:

  • Percona XtraDB Cluster version 5.6.x or later, or MariaDB Galera Cluster version 10.x or later.

  • GTID enabled.

  • Binary logging enabled on at least one database node in each cluster.

  • The backup credentials must be the same in both the Primary and Secondary (or Replica) clusters.

  • The server_id must be the same on all nodes within each cluster, but different between the Primary and the Secondary cluster.

One example of this configuration is:

# REPLICATION SPECIFIC
server_id                = 1000                           # same value on every node of this cluster
binlog_format            = ROW
log_bin                  = /var/lib/mysql-binlog/binlog
log_slave_updates        = ON                             # write replicated transactions to the binary log
gtid_mode                = ON
enforce_gtid_consistency = true
relay_log                = relay-log
expire_logs_days         = 7                              # purge binary logs after 7 days

And you should have this configuration on all the nodes, using a different server_id in each cluster.
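
Before moving on, it is worth double-checking these settings from the command line. A minimal check on one node, assuming you can connect as a privileged user (note that on MariaDB, GTID is configured through gtid_domain_id and related variables rather than gtid_mode):

$ mysql -p -e "SHOW VARIABLES WHERE Variable_name IN ('server_id', 'log_bin', 'binlog_format', 'gtid_mode', 'log_slave_updates');"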

How to Use It

At this point, you should have something like this:

So, what can you do with this setup?

Promote Replica Cluster

If you need to promote your Replica Cluster, first enable read-only in the Primary Cluster to avoid any issues. To do this, go to ClusterControl -> Cluster Actions -> Enable ReadOnly:

Then, disable read-only on each database node in the Replica Cluster. Go to ClusterControl -> Select Replica Cluster -> Nodes -> Node Actions -> Disable ReadOnly:

And that’s it. Now you can write to your Replica Cluster as if it were your Primary one. It looks easy, and actually, it is.
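
If you prefer to script the switchover instead of clicking through the UI, a sketch of the SQL-level equivalent looks like this, assuming direct MySQL access to every node and that the UI actions toggle the standard read_only flag:

# First, on every node of the Primary Cluster:
$ mysql -p -e "SET GLOBAL read_only=ON;"

# Then, on every node of the Replica Cluster:
$ mysql -p -e "SET GLOBAL read_only=OFF;"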

Rollback

If you need to roll back and write to your Primary Cluster again, you just need to enable ReadOnly on the Replica Cluster and disable it on the Primary one.
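
The manual equivalent is simply the reverse of the previous sketch:

# First, on every node of the Replica Cluster:
$ mysql -p -e "SET GLOBAL read_only=ON;"

# Then, on every node of the Primary Cluster:
$ mysql -p -e "SET GLOBAL read_only=OFF;"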

Bootstrap C2C Cluster

Let’s say you had a big issue in your Primary Cluster and you need to bootstrap it to recover. You can do that using ClusterControl. But then, what will happen to your C2C replication?

Actually, nothing. After bootstrapping your Primary Cluster, if everything is configured correctly, your C2C Replication will work as usual.
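
You can verify this from any node of the Primary Cluster: the Galera status variables should show a healthy, fully-sized cluster. A quick check, assuming MySQL access:

$ mysql -p -e "SHOW GLOBAL STATUS WHERE Variable_name IN ('wsrep_cluster_size', 'wsrep_cluster_status', 'wsrep_local_state_comment');"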

Rebuild Replication Slave

Ok, now, let’s say that for some reason your bi-directional replication is broken and data is flowing in only one direction:

This could be the result of a bad configuration or a change in the configuration of the replication slave node. You can rebuild your Replica Cluster like this:

  • Make sure Read-only is enabled on all the nodes of the Replica Cluster.

  • Run this on all the nodes of the Primary Cluster: $ mysql -p -e "SET wsrep_on=OFF; RESET MASTER; SET wsrep_on=ON;"

  • Go to ClusterControl -> Select Replica Cluster -> Nodes -> Select the Slave node -> Node Actions -> Rebuild Replication Slave.

These steps should fix the issue.
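
To confirm the link is healthy again, check the replication threads on the slave node of the Replica Cluster. A minimal check (on MySQL 8.0.22 and later, SHOW REPLICA STATUS is the non-deprecated equivalent):

$ mysql -p -e "SHOW SLAVE STATUS\G" | grep -E 'Slave_IO_Running|Slave_SQL_Running|Seconds_Behind_Master'
# Both threads should report Yes, and the lag should converge to 0.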

Re-create the Cluster-to-Cluster Replication

In case you run into this scenario where the C2C replication was disconnected:

This could be due to a configuration issue or even a “RESET SLAVE ALL” command, and you will need to re-create it. These are the steps:

  • Delete the Secondary cluster from ClusterControl (Cluster Actions -> Delete Cluster).

  • Run “RESET SLAVE ALL” (“RESET REPLICA ALL” on MySQL 8.0.22 and later) on the slave node in the Primary Cluster.

  • Run this on all the nodes of the Primary Cluster: $ mysql -p -e "SET wsrep_on=OFF; RESET MASTER; SET wsrep_on=ON;"

  • Stop the MySQL service in all the nodes in the Replica Cluster.

  • In the Primary Cluster, use the Create Replica Cluster feature with the “Install software” option disabled.
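
The manual part of those steps (everything except the ClusterControl actions) can be scripted roughly as follows. This is a sketch only, assuming shell access to the nodes and a systemd-managed MySQL service; adapt the service name to your distribution:

# On the node of the Primary Cluster that was acting as slave:
$ mysql -p -e "RESET SLAVE ALL;"    # RESET REPLICA ALL on MySQL 8.0.22+

# On every node of the Primary Cluster, clear the binary logs without
# replicating the RESET through Galera:
$ mysql -p -e "SET wsrep_on=OFF; RESET MASTER; SET wsrep_on=ON;"

# On every node of the Replica Cluster, stop the MySQL service:
$ sudo systemctl stop mysql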

Now, you have your C2C Replication back.

Monitoring your Cluster-to-Cluster Replication

There are two ways to monitor your C2C Replication: the ClusterControl UI and the ClusterControl CLI.

ClusterControl UI

In the ClusterControl UI, you will see the current status of your C2C Replication:

As you can see in the image, you can enable or disable the Auto Recovery feature, check the status of each database node, check the status of the bi-directional replication, and see whether the Replica Cluster is configured as ReadOnly. You can also see which node is replicating from which.

When you access your Cluster, you will have a full suite of features available to manage and monitor it.

ClusterControl CLI

The ClusterControl CLI, also known as s9s, is a command-line tool introduced in ClusterControl version 1.4.1 to interact with, control, and manage database clusters using the ClusterControl system. The command-line tool is invoked by executing a binary called s9s, which is added by default during the ClusterControl installation.

In the ClusterControl CLI, you have the common monitoring commands, like the cluster status:

$ s9s cluster --list --long
ID STATE   TYPE   OWNER                      GROUP  NAME        COMMENT
 1 STARTED galera [email protected] admins C2C-Primary All nodes are operational.
 3 STARTED galera [email protected] admins C2C-Replica All nodes are operational.
Total: 2

Or Node status:

$ s9s node --cluster-id=1 --list --long
STAT VERSION    CID CLUSTER     HOST         PORT COMMENT
coC- 1.9.0.4814   1 C2C-Primary 10.10.10.127 9500 Up and running.
goS- 8.0.23       1 C2C-Primary 10.10.10.128 3306 Up and running.
goM- 8.0.23       1 C2C-Primary 10.10.10.129 3306 Up and running.
goM- 8.0.23       1 C2C-Primary 10.10.10.130 3306 Up and running.
Total: 4

Apart from those, you can also check the replication list, where you will find the C2C Replication and any other replication that you may have configured in ClusterControl:

$ s9s replication --list --long
CID SLAVE             MASTER            STATUS MASTER_CLUSTER LAG
  1 10.10.10.128:3306 10.10.10.131:3306 Online 2 0
  2 10.10.10.131:3306 10.10.10.128:3306 Online 1 0
Total: 2 replication link(s)

If you want to know more about this powerful tool, you can refer to the Official Documentation.

Conclusion

A complex topology, like a High Availability environment, can require extra knowledge to operate. With this ClusterControl feature, Cluster-to-Cluster Replication (C2C), you can deploy, manage, and monitor Primary and Replica Clusters from the same system in an easy and user-friendly way.

With C2C Replication, you won’t have to set up the replication between clusters manually. Using C2C will save time and effort, and with just a few clicks, you will have a disaster recovery cluster up and running.
