How to Perform an Online Upgrade of MariaDB Galera Cluster 5.5 to MariaDB 10
The MariaDB team released a GA version of MariaDB Galera Cluster 10 in June 2014. MariaDB 10 is the equivalent of MySQL 5.6, and is therefore packed with lots of great features.
In this blog post, we’ll look into how to perform an online upgrade to MariaDB Galera Cluster 10. At the time of writing, MariaDB 10.1 was still in beta so the instructions in this blog are applicable to MariaDB 10.0. If you are running the Codership build of Galera (Galera Cluster for MySQL), you might be interested in the online upgrade to MySQL 5.6 instead.
Offline Upgrade
An offline upgrade requires downtime, but it is more straightforward. If you can afford a maintenance window, this is probably a safer way to reduce the risk of upgrade failures. The major steps consist of stopping the cluster, upgrading all nodes, then bootstrapping and starting the nodes. We covered the procedure in detail in this blog post.
Online Upgrade
An online upgrade has to be done in a rolling upgrade/restart fashion, i.e., upgrade one node at a time and then proceed to the next. During the upgrade, you will have a mix of MariaDB 5.5 and 10. This can cause problems if not handled with care.
Here is a list of things to check prior to the upgrade:
- Read and understand the changes that come with the new version
- Note the unsupported configuration options between the major versions
- Determine your cluster ID from the ClusterControl summary bar
- garbd nodes will also need to be upgraded
- All nodes must have an internet connection
- SST must be avoided during the upgrade, so ensure each node’s gcache is appropriately sized and loaded beforehand.
- Some nodes will be read-only during the upgrade, which means there will be some impact on the cluster’s write performance. Perform the upgrade during non-peak hours.
- The load balancer must be able to detect and exclude backend DB servers in read-only mode, since writes coming from MariaDB 10.0 are not compatible with 5.5 within the same cluster. Percona’s clustercheck and ClusterControl’s mysqlchk script should handle this by default (see the check commands right after this list).
- If you are running on ClusterControl, ensure the ClusterControl auto recovery feature is turned off to prevent ClusterControl from recovering a node while it is being upgraded.
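For reference, a health check script like mysqlchk typically combines two checks, the Galera sync state and the read_only flag, before reporting a node as usable. Here is a minimal sketch of what such a script verifies, run manually with the root client for illustration:
$ mysql -uroot -p -N -e "SHOW STATUS LIKE 'wsrep_local_state'"      # 4 means Synced
$ mysql -uroot -p -N -e "SHOW GLOBAL VARIABLES LIKE 'read_only'"    # OFF means the node may receive writes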
Here is what we’re going to do to perform the online upgrade:
- Set up MariaDB 10 repository on all DB nodes.
- Increase gcache size and perform rolling restart.
- Backup all databases.
- Turn off ClusterControl auto recovery.
- Start the maintenance window.
- Upgrade packages in Db1 to MariaDB 10.
- Add compatibility configuration options to my.cnf on Db1 (node will be read-only).
- Start Db1. At this point the cluster will consist of MariaDB 10 and 5.5 nodes. (Db1 is read-only, writes go to Db2 and Db3)
- Upgrade packages in Db2 to MariaDB 10.
- Add compatibility configuration options to my.cnf on Db2 (node will be read-only).
- Start Db2. (Db1 and Db2 are read-only, writes go to Db3)
- Bring down Db3 so that read-only can be turned off on the upgraded nodes (no more MariaDB 5.5 in the cluster at this point)
- Turn off read-only on Db1 and Db2. (Writes go to Db1 and Db2)
- Upgrade packages in Db3 to MariaDB 10.
- Start Db3. At this point the cluster will consist of MariaDB 10 nodes. (Writes go to Db1, Db2 and Db3)
- Clean up the compatibility options on Db1 and Db2.
- Verify nodes performance and availability.
- Turn on ClusterControl auto recovery.
- Close the maintenance window
Upgrade Steps
In the following example, we have a 3-node MariaDB Galera Cluster 5.5 that we installed via the Severalnines Configurator, running on CentOS 6 and Ubuntu 14.04. The steps performed here should work regardless of the cluster being deployed with or without ClusterControl. Omit sudo if you are running as root.
Preparation
1. MariaDB 10 packages are available in the MariaDB package repository. Replace version 5.5 with 10.0 in the existing MariaDB repository definition.
On CentOS 6:
$ sed -i 's/5.5/10.0/' /etc/yum.repos.d/MariaDB.repo
On Ubuntu 14.04:
$ sudo sed -i 's/5.5/10.0/' /etc/apt/sources.list.d/MariaDB.list
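You can quickly confirm that the repository definition now points to the 10.0 series:
$ grep 10.0 /etc/yum.repos.d/MariaDB.repo # CentOS
$ sudo grep 10.0 /etc/apt/sources.list.d/MariaDB.list # Ubuntu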
2. Increase the gcache size to a suitable amount. To be safe, increase it so it can hold 1 to 2 hours’ worth of writesets (enough to cover the downtime of a node), as explained under the ‘Determining good gcache size’ section in this blog post. In this example, we are going to increase the gcache size to 1GB. Open the MariaDB configuration file:
$ vim /etc/my.cnf # CentOS
$ sudo vim /etc/mysql/my.cnf # Ubuntu
Append or modify the wsrep_provider_options line so it includes the new gcache size:
wsrep_provider_options="gcache.size=1G"
Perform a rolling restart to apply the change. For ClusterControl users, you can use Manage > Upgrades > Rolling Restart.
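Once a node comes back after the restart, you can verify that the new gcache size is in effect by checking the provider options reported by the server:
$ mysql -uroot -p -e "SHOW GLOBAL VARIABLES LIKE 'wsrep_provider_options'\G" | tr ';' '\n' | grep gcache.size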
3. Back up all databases. This is critical before performing any upgrade so you have something to fall back on if the upgrade fails. For ClusterControl users, you can use Backup > Start a Backup Immediately.
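If you are not running ClusterControl, a plain logical backup taken from one of the nodes is enough as a safety net. mysqldump is shown here for simplicity (the target path is just an example); use whichever backup tool you prefer:
$ mysqldump -uroot -p --all-databases --single-transaction --triggers --routines --events > /root/backup_before_upgrade.sql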
4. Turn off ClusterControl auto recovery from the UI, similar to the screenshot below:
5. If you installed HAProxy through ClusterControl, previous versions shipped a health check script with a bug in detecting read-only nodes. Run the following command on all DB nodes to fix it (this is fixed in the latest version):
$ sudo sed -i 's/YES/ON/g' /usr/local/sbin/mysqlchk
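To confirm the health check now reports the node state correctly, you can query it directly. This assumes mysqlchk is exposed through xinetd on port 9200, which is the typical setup for a ClusterControl-deployed HAProxy:
$ curl -i http://192.168.55.139:9200/ # HTTP 200 = node usable by HAProxy, HTTP 503 = node excluded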
Upgrading Database Servers Db1 and Db2
1. Stop the MariaDB service and remove the MariaDB Galera server, client and galera packages.
CentOS:
$ service mysql stop
$ yum remove MariaDB*server MariaDB-client galera
Ubuntu:
$ sudo killall -15 mysqld mysqld_safe
$ sudo apt-get remove mariadb-galera* galera*
2. Modify the MariaDB configuration file to handle the option that is no longer supported in 10 by commenting out or removing the following line (if it exists):
#engine-condition-pushdown=1
Then, add or modify the following lines to enable the backward compatibility options:
[mysqld]
# Required for compatibility with galera-2. Ignore if you have galera-3 installed.
# Append socket.checksum=1 to wsrep_provider_options:
wsrep_provider_options="gcache.size=1G; socket.checksum=1"
# Required for replication compatibility.
# Add the following lines under the [mysqld] directive:
binlog_checksum=NONE
read_only=ON
3. Install the latest version via the package manager. For CentOS, also follow substeps (a) and (b):
CentOS:
$ yum clean metadata
$ yum install MariaDB-Galera-server MariaDB-client MariaDB-common MariaDB-compat galera
3a) Start MariaDB with skip grant and run the mysql_upgrade script to upgrade the system tables:
$ mysqld --skip-grant-tables --user=mysql --wsrep-provider='none' &
$ mysql_upgrade -uroot -p
Make sure the last line returns ‘OK’, indicating the mysql_upgrade succeeded.
3b) Gracefully kill the running mysqld process and start the server:
$ sudo killall -15 mysqld
$ sudo service mysql start
Ubuntu:
$ sudo apt-get update
$ sudo apt-get install mariadb-galera-server
**For Ubuntu and Debian packages, mysql_upgrade will be run automatically when they are installed.
4. Monitor the MariaDB error log and ensure the node joins through IST, similar to below:
150821 15:08:04 [Note] WSREP: Signalling provider to continue.
150821 15:08:04 [Note] WSREP: SST received: 2d14e556-473a-11e5-a56e-27822412a930:517325
150821 15:08:04 [Note] WSREP: Receiving IST: 1711 writesets, seqnos 517325-519036
150821 15:08:04 [Note] /usr/sbin/mysqld: ready for connections.
Version: '10.0.21-MariaDB-wsrep' socket: '/var/lib/mysql/mysql.sock' port: 3306 MariaDB Server, wsrep_25.10.r4144
150821 15:08:04 [Note] WSREP: IST received: 2d14e556-473a-11e5-a56e-27822412a930:519036
150821 15:08:04 [Note] WSREP: 0.0 (192.168.55.139): State transfer from 2.0 (192.168.55.141) complete.
150821 15:08:04 [Note] WSREP: Shifting JOINER -> JOINED (TO: 519255)
150821 15:08:04 [Note] WSREP: Member 0.0 (192.168.55.139) synced with group.
150821 15:08:04 [Note] WSREP: Shifting JOINED -> SYNCED (TO: 519260)
150821 15:08:04 [Note] WSREP: Synchronized with group, ready for connections
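Besides the error log, you can confirm the node has rejoined and synced by checking the wsrep status on the upgraded node:
$ mysql -uroot -p -e "SHOW STATUS LIKE 'wsrep_local_state_comment'" # should report 'Synced'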
Repeat the same steps for the second node (Db2). Once the upgrade is completed on Db1 and Db2, you should notice that writes are now redirected to Db3, which is still running on MariaDB 5.5:
This doesn’t mean that both upgraded MariaDB servers are down; they are just set to read-only to prevent writes coming from 10 from being replicated to 5.5 (which is not supported). We will switch the writes over just before we start the upgrade of the last node.
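For illustration, any write attempted through a regular application user on one of the read-only nodes would be rejected with an error along these lines (the user and table names here are just examples):
$ mysql -uapp -p -e "INSERT INTO mydb.mytable VALUES (1)"
ERROR 1290 (HY000): The MariaDB server is running with the --read-only option so it cannot execute this statement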
Upgrading the Last Database Server (Db3)
1. Stop MariaDB service on Db3:
$ sudo service mysql stop # CentOS
$ sudo killall -15 mysqld mysqld_safe # Ubuntu
2. Turn off read-only on Db1 and Db2 so they can start to receive writes. From this point, writes will go to MariaDB 10 only. On Db1 and Db2, run the following statement:
$ mysql -uroot -p -e 'SET GLOBAL read_only = OFF'
Verify that you see something like below on the HAProxy statistics page (ClusterControl > Nodes > select HAProxy node), indicating Db1 and Db2 are now up and active:
3. Modify the MariaDB configuration file to handle the option that is no longer supported in 10 by commenting out or removing the following line (if it exists):
#engine-condition-pushdown=1
** There is no need to enable the backward compatibility options anymore since the other nodes (Db1 and Db2) are already on 10.
4. Now we can proceed with the upgrade on Db3. Install the latest version via package manager. For CentOS, follow the substeps (a) and (b):
CentOS:
$ yum clean metadata
$ yum install MariaDB-Galera-server MariaDB-client MariaDB-common MariaDB-compat galera
4a) Start MariaDB with skip grant and run the mysql_upgrade script to upgrade the system tables:
$ mysqld --skip-grant-tables --user=mysql --wsrep-provider='none' &
$ mysql_upgrade -uroot -p
Make sure the last line returns ‘OK’, indicating the mysql_upgrade succeeded.
4b) Gracefully kill the running mysqld process and start the server:
$ sudo killall -15 mysqld
$ sudo service mysql start
Ubuntu:
$ sudo apt-get update
$ sudo apt-get install mariadb-galera-server
**For Ubuntu and Debian packages, mysql_upgrade will be run automatically when they are installed.
Wait until the last node joins the cluster and verify the status and version from the ClusterControl Overview tab:
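Alternatively, a quick check from any of the nodes confirms the cluster size and the new version:
$ mysql -uroot -p -e "SHOW STATUS LIKE 'wsrep_cluster_size'" # should report 3
$ mysql -uroot -p -e "SELECT VERSION()" # should report a 10.0.x MariaDB wsrep version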
That’s it. Your cluster is upgraded to MariaDB Galera 10. Next, we need to perform some cleanups.
Cleaning Up
1. Fetch the new configuration file contents into ClusterControl by going to ClusterControl > Manage > Configurations > Reimport Configuration.
2. Remove the backward compatibility options on Db1 and Db2 by modifying the following lines:
# Remove socket.checksum=1 from wsrep_provider_options:
wsrep_provider_options="gcache.size=1G"
# Remove or comment out the following lines:
#binlog_checksum=NONE
#read_only=ON
Restart Db1 and Db2 (one node at a time) to immediately apply the changes.
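After each restart, a quick check on Db1 and Db2 confirms the node came back writable:
$ mysql -uroot -p -e "SHOW GLOBAL VARIABLES LIKE 'read_only'" # should report OFF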
3. Enable ClusterControl automatic recovery:
4. On some occasions, you might see “undefined()” appearing in the Overview page. This is due to a bug in previous ClusterControl versions when detecting a new node. To reset the node status, run the following on the ClusterControl node:
$ sudo service cmon stop
$ mysql -uroot -p -e 'truncate cmon.server_node'
$ sudo service cmon start
Take note that the upgrade preparation part could take a long time if you have a huge dataset to back up. The upgrade process itself (excluding preparation) took us approximately 45 minutes to complete.
For MySQL Galera users, we have covered a similar online upgrade to MySQL 5.6 in this blog post.
References
- Percona XtraDB Cluster In-Place Upgrading Guide: From 5.5 to 5.6
- MariaDB Knowledge Base: Upgrading from MariaDB 5.5 to MariaDB 10.0
- Upgrading MariaDB Galera Cluster 5.5 to 10.0 (CentOS/RHEL)
- Howto: Online upgrade of Galera Cluster to MySQL 5.6