Database Cluster Management: Manual vs Automation
In a previous blog, we looked at the efficiency gains when automating the deployment of a Galera cluster for MySQL. In this post, we are going to dive into cluster management. Should we manage our cluster manually, or does it make sense to automate the procedures? What do we gain with automation?
Cluster management involves a number of tasks:
- Node management:
  - Restart service (per node or rolling restart)
  - Bootstrap cluster
  - Stop cluster
  - Reboot server
  - Rebuild node
  - Recover cluster/node
  - Find most advanced node (i.e. node with latest data)
- Configuration management:
  - Centrally manage configuration files
  - Change configurations
  - Rolling restarts
- Upgrades
  - Rolling upgrades
- Backup management:
  - Create backup (physical or logical, full or incremental)
  - Schedule backup
  - Restore backup
- Schema and user management:
  - Create schema
  - Manage user privileges
- Private key and certificate management:
  - Manage private keys and certificates
  - Generate new private keys and certificates
  - Manage CA bundles
  - Import existing private keys and certificates
Let’s take a look at the different steps involved in some of the above, and see if it would make sense to automate them.
Node Management
Restarting the MySQL service is simple in a standalone setup, but in a Galera cluster, performing a rolling restart can be tricky in some ways. You have to ensure the node reaches the Primary state before proceeding to the next node. On some occasions, a node restart can cause chaos, as described in detail in this blog post.
A rolling restart is a common task that needs to be done properly. In an environment without a cluster manager, you would:
- Login to a DB node
- Restart MySQL server
- Verify the node state using `SHOW STATUS LIKE 'wsrep_cluster_status'` and ensure it reports Primary
- Repeat step #1 to #3 for every node in the cluster
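The manual loop above can be sketched as a small script. The host names, SSH access and the systemctl service name are assumptions; adapt them to your environment before use:

```shell
# Minimal sketch of a manual rolling restart: restart MySQL on each node
# in turn, and wait until the node reports wsrep_cluster_status = Primary
# before moving on to the next one.
rolling_restart() {
  local node status
  for node in "$@"; do
    echo "Restarting MySQL on $node"
    ssh "$node" "systemctl restart mysql"
    # Poll until the node has rejoined the primary component
    while true; do
      status=$(ssh "$node" "mysql -N -e \"SHOW STATUS LIKE 'wsrep_cluster_status'\"" | awk '{print $2}')
      [ "$status" = "Primary" ] && break
      sleep 5
    done
    echo "$node is back in Primary state"
  done
}

# Usage (with real hostnames):
# rolling_restart db1 db2 db3
```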
In ClusterControl, you can achieve this in two ways: using the “Rolling Restart” function, or by restarting the nodes one at a time under the Nodes tab. To perform a rolling restart, go to ClusterControl -> Manage -> Upgrades -> Rolling Restart -> Proceed.
You can then monitor the restart progress under ClusterControl -> Logs -> Jobs, similar to the following screenshot:
Although the duration of the rolling restart is not much different, one would have to be on standby to supervise the whole procedure, versus triggering a job through ClusterControl and getting notified when the procedure has completed.
Configuration Management
Tuning the database configuration is a continuous process. There are plenty of configuration variables available, for instance in MySQL, and it is not trivial to remember which ones are dynamic variables (that you can change at runtime without a restart) and which are non-dynamic variables (where a MySQL restart is required). Knowing this in advance is important, so we are aware of the next step to take and our new configuration is loaded correctly.
Let’s say we want to adjust innodb_buffer_pool_size and change log_error (both non-dynamic variables; note that innodb_buffer_pool_size became dynamic as of MySQL 5.7) in a three-node Galera cluster:
- Login to first DB node via SSH
- Make configuration changes inside my.cnf
- Restart the node
- Verify the node state using `SHOW STATUS LIKE 'wsrep_cluster_status'` and ensure it reports Primary
- Repeat step #1 to #4 for each node in the cluster
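For reference, changing those two non-dynamic settings manually means editing my.cnf on every node before restarting it. A fragment might look like this (the values and paths are examples only):

```ini
[mysqld]
# Non-dynamic settings: edited in my.cnf, applied on restart
innodb_buffer_pool_size = 8G
log_error               = /var/log/mysql/mysqld.log
```

A dynamic variable such as max_connections could instead be changed at runtime with SET GLOBAL and then persisted in my.cnf without an immediate restart.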
ClusterControl provides a centralised interface from which to manage your configurations. It knows which variable changes require a restart and which do not. We covered this in detail in this blog post. Let’s say you want to change the same configuration variables as described above; here is what you would do via the ClusterControl UI:
- Go to ClusterControl -> Manage -> Configuration -> Change Parameters.
- Then specify the variable that we want to change:
- ClusterControl will then perform the configuration change and advise on the next step:
- As the log window above indicates, we should perform a rolling restart for the change to take effect. Perform the rolling restart as described in the first section, Node Management.
Backup Management
Backups are critical and essential when managing a database cluster. For MySQL, the two main methods are mysqldump and xtrabackup. There are a number of commands and options depending on the chosen method: whether you want to perform a local or remote backup, compressed or plain output, per database or all databases. Once you have figured out the ideal command, you can schedule it using cron. The main steps to schedule a backup are:
- Create the backup destination path.
- Experiment with backup commands (this is the most time consuming process).
- Create a cron job for backup.
- Run the housekeeping command to test that it works.
- Create a cron job for backup housekeeping.
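The cron-driven approach above can be sketched as a small wrapper script. The paths, retention window and mysqldump options here are illustrative assumptions; for a physical backup, xtrabackup would replace mysqldump:

```shell
# Sketch of a backup wrapper to be called from cron (all paths,
# credentials and the retention window are example values).
BACKUP_DIR=${BACKUP_DIR:-/backups}
RETENTION_DAYS=${RETENTION_DAYS:-7}

take_backup() {
  # Logical, compressed full backup of all databases
  local dest="$BACKUP_DIR/full-$(date +%F).sql.gz"
  mysqldump --all-databases --single-transaction --routines | gzip > "$dest"
  echo "$dest"
}

housekeep() {
  # Purge backups older than the retention window
  find "$BACKUP_DIR" -name 'full-*.sql.gz' -mtime +"$RETENTION_DAYS" -delete
}

# Example crontab entries (one for the backup, one for housekeeping):
# 0 1 * * * /usr/local/bin/take_backup.sh
# 0 6 * * * /usr/local/bin/housekeep.sh
```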
Alternatively, it is possible to schedule backups via ClusterControl. Backup files can be stored on the DB node where the backup is taken, or streamed to the ClusterControl node so they do not take extra space on the database server. Other interesting options are parallel copy threads, throttle rate, network streaming throttle rate, as well as desync of the target node while the backup is running. Older backups are also automatically purged based on a configurable backup rotation interval.
Creating a backup schedule via ClusterControl:
All created backups are available under ClusterControl -> Backups -> Reports:
Backup sets include a full backup and its related incremental backups; restoring a backup set can also be performed from ClusterControl.
Finally, if the Galera node scheduled for backup is down for maintenance or unavailable for some reason, it is possible to configure ClusterControl to schedule the backup on the next available node.
MySQL User Management
A common misconception when adding users to a Galera cluster is to create them using an “INSERT INTO mysql.user” statement, which is plain wrong. The suggested way is to use statements such as GRANT or CREATE USER, since the mysql.user table is a MyISAM table and direct writes to it are not replicated, whereas Galera replicates DDL statements regardless of the storage engine.
Creating a MySQL user can be a simple or a tricky task depending on your authentication, authorization and accounting policy. Modifying a user’s complex privileges usually requires a long GRANT statement, which can be error-prone. For example, one would do:
$ mysql -uroot -p
mysql> CREATE USER 'userx'@'%' IDENTIFIED BY 'mypassword';
mysql> GRANT SELECT,INSERT,UPDATE,DELETE,INDEX,CREATE TEMPORARY TABLES,SHOW VIEW,ALTER ROUTINE,CREATE VIEW,EVENT ON app1.* TO 'userx'@'%' WITH MAX_QUERIES_PER_HOUR 50 MAX_CONNECTIONS_PER_HOUR 5 MAX_USER_CONNECTIONS 5;
With ClusterControl, the above CREATE and GRANT statements can be done through a wizard:
ClusterControl also provides an overview of inactive users, where it detects accounts that have not been used since the last server restart. This is useful in cases where an admin has created unnecessary accounts, for instance to minimize client authentication problems.
BONUS: Private Keys and Certificates Management
In this last chapter, we are going to touch upon security and encryption for Galera Cluster. ClusterControl supports enabling SSL encryption (client-server encryption) as well as encryption of replication traffic between the Galera nodes. This is especially important when deploying on public clouds, or across different data centers.
Enabling full SSL encryption (client-server + Galera replication) requires many steps, as we’ve shown in this blog post. Starting from ClusterControl v1.3.x, you can perform these tasks in less than 5 clicks:
- Click on Enable SSL Encryption and choose “Create Certificate”:
- Then, enable the Galera replication encryption:
- Once enabled, go to the Settings -> Key Management tab, where you can manage the generated private keys and certificates, similar to the screenshot below:
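For comparison, producing the same kind of key material by hand would involve something like the following openssl commands. The file names and subject names are placeholder assumptions; ClusterControl generates and stores these for you:

```shell
# Create a self-signed CA, then a server key and a certificate signed by
# that CA. File names and CN values are examples only.
openssl req -new -x509 -nodes -days 3650 -newkey rsa:2048 \
  -keyout ca-key.pem -subj "/CN=galera-ca" -out ca-cert.pem

# Server key plus certificate signing request, then sign it with the CA
openssl req -newkey rsa:2048 -nodes -keyout server-key.pem \
  -subj "/CN=db1.example.com" -out server-req.pem
openssl x509 -req -in server-req.pem -days 3650 \
  -CA ca-cert.pem -CAkey ca-key.pem -set_serial 01 -out server-cert.pem
```

The resulting CA bundle and server key/certificate pair would then be referenced from my.cnf on each node, which is exactly the bookkeeping the Key Management tab takes over.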
That’s it. Setting up a cluster with full SSL encryption is no longer a difficult job with ClusterControl. The above is a subset of the management functionality that can be automated via ClusterControl. You are very welcome to give it a try and let us know what you think.