
How to Migrate WHMCS Database to MariaDB Galera Cluster

Ashraf Sharif

WHMCS is an all-in-one client management, billing and support solution for web hosting companies. It is one of the leading hosting automation platforms, commonly used alongside the hosting control panel itself. WHMCS runs on a LAMP stack, with MySQL/MariaDB as the database backend. Typically, WHMCS is installed as a standalone instance (application and database) by following the WHMCS installation guide, or through software installer tools like cPanel Site Software or Softaculous. The database can then be made highly available by migrating it to a three-node Galera Cluster.

In this blog post, we will show you how to migrate the WHMCS database from a standalone MySQL server (provided by the WHM/cPanel server itself) to an external three-node MariaDB Galera Cluster to improve the database availability. The WHMCS application itself will be kept running on the same cPanel server. We’ll also give you some tuning tips to optimize performance.

Deploying the Database Cluster

  1. Install ClusterControl:
    $ whoami
    root
    $ wget https://severalnines.com/downloads/cmon/install-cc
    $ chmod 755 install-cc
    $ ./install-cc

    Follow the instructions accordingly until the installation is completed. Then, open http://192.168.55.50/clustercontrol (192.168.55.50 being the IP address of the ClusterControl host) and register a super admin user with a password and the other required details.

  2. Setup passwordless SSH from ClusterControl to all database nodes:
    $ whoami
    root
    $ ssh-keygen -t rsa # Press enter on all prompts
    $ ssh-copy-id 192.168.55.51
    $ ssh-copy-id 192.168.55.52
    $ ssh-copy-id 192.168.55.53
  3. Configure the database deployment for our three-node MariaDB Galera Cluster. We are going to use the latest supported version, MariaDB 10.3:

    Make sure you get all green checks after pressing ‘Enter’ when adding the node details. Wait until the deployment job completes and you should see the database cluster is listed in ClusterControl.

  4. Deploy a ProxySQL node (we are going to co-locate it with the ClusterControl node) by going to Manage -> Load Balancer -> ProxySQL -> Deploy ProxySQL. Specify the following required details:

    Under “Add Database User”, you can ask ClusterControl to create a new ProxySQL and MySQL user as part of the setup, so we put the user as “portal_whmcs”, assigned ALL PRIVILEGES on database “portal_whmcs.*”. Then, check all the “Include” boxes and finally choose “false” for “Are you using implicit transactions?”.

Once the deployment finishes, you should see something like this under the Topology view:

Our database deployment is now complete. Keep in mind that we do not cover redundancy for the load balancer tier in this blog post. You can achieve that by adding a secondary load balancer and tying them together with Keepalived. To learn more about this, check out ProxySQL Tutorials under chapter “4.2. High availability for ProxySQL”.
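
As a quick illustration of that setup, a minimal keepalived.conf on the primary load balancer could look like the sketch below. The virtual IP 192.168.55.100 and the interface name eth0 are assumptions for this example; the secondary node would use state BACKUP with a lower priority:

vrrp_script chk_proxysql {
    script "killall -0 proxysql"    # succeeds as long as the proxysql process is alive
    interval 2
}

vrrp_instance VI_PROXYSQL {
    state MASTER
    interface eth0                  # adjust to your network interface
    virtual_router_id 51
    priority 101
    virtual_ipaddress {
        192.168.55.100              # floating IP that the application would connect to
    }
    track_script {
        chk_proxysql
    }
}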

WHMCS Installation

If you already have WHMCS installed and running, you may skip this step.

Take note that WHMCS requires a valid license which you have to purchase beforehand in order to use the software. They do not provide a free trial license, but they do offer a no-questions-asked 30-day money-back guarantee, which means you can always cancel the subscription within that period and get your money back.

To simplify the installation process, we are going to use cPanel Site Software (you may opt for a manual WHMCS installation instead) to install WHMCS on one of our subdomains, selfportal.mytest.io. After creating the account in WHM, go to cPanel > Software > Site Software > WHMCS and install the web application. Log in as the admin user and activate the license to start using the application.

At this point, our WHMCS instance is running as a standalone setup, connecting to the local MySQL server.


Migrating the WHMCS Database to MariaDB Galera Cluster

Running WHMCS on a standalone MySQL server makes the database a single point of failure (SPOF) for the application. MariaDB Galera Cluster provides redundancy at the data layer with built-in clustering features and support for a multi-master architecture. Combine this with a database load balancer, for example ProxySQL, and we can improve the WHMCS database availability with very minimal changes to the application itself.

However, there are a number of best practices that WHMCS (or any other application) has to follow in order to work efficiently on Galera Cluster, especially:

  • All tables must be running on InnoDB/XtraDB storage engine.
  • All tables should have a primary key defined (multi-column primary key is supported, unique key does not count).

In our test environment (cPanel/WHM 11.78.0.23, WHMCS 7.6.0 installed via Site Software), the installation did not meet either of the above requirements out of the box. The default cPanel/WHM MySQL configuration comes with the following line inside /etc/my.cnf:

default-storage-engine=MyISAM

The above causes additional tables managed by WHMCS Addon Modules to be created in the MyISAM storage engine if those modules are enabled. Here is the list of non-InnoDB tables after we enabled two modules (New TLDs and Staff Noticeboard):

MariaDB> SELECT tables.table_schema, tables.table_name, tables.engine FROM information_schema.tables WHERE tables.table_schema='whmcsdata_whmcs' and tables.engine <> 'InnoDB';
+-----------------+----------------------+--------+
| table_schema    | table_name           | engine |
+-----------------+----------------------+--------+
| whmcsdata_whmcs | mod_enomnewtlds      | MyISAM |
| whmcsdata_whmcs | mod_enomnewtlds_cron | MyISAM |
| whmcsdata_whmcs | mod_staffboard       | MyISAM |
+-----------------+----------------------+--------+

MyISAM support is experimental in Galera, which means you should not run it in production. In worse cases, it can compromise data consistency and cause writeset replication failures due to the engine's non-transactional nature.
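
To stop future Addon Modules from creating more MyISAM tables, you may also consider changing the default storage engine on the cPanel server, assuming no other local database relies on the MyISAM default:

# /etc/my.cnf on the cPanel/WHM server
default-storage-engine=InnoDB

Restart the MySQL service afterwards for the change to take effect. Note that this only affects tables created from this point on; the existing MyISAM tables still have to be converted, which we handle during the migration below.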

Another important point is that every table must have a primary key defined. Depending on the WHMCS installation procedure that you performed (we used cPanel Site Software to install WHMCS), some of the tables created by the installer do not come with a primary key defined, as shown in the following output:

MariaDB [information_schema]> SELECT TABLES.table_schema, TABLES.table_name FROM TABLES LEFT JOIN KEY_COLUMN_USAGE AS c ON (TABLES.TABLE_NAME = c.TABLE_NAME AND c.CONSTRAINT_SCHEMA = TABLES.TABLE_SCHEMA AND c.constraint_name = 'PRIMARY' ) WHERE TABLES.table_schema <> 'information_schema' AND TABLES.table_schema <> 'performance_schema' AND TABLES.table_schema <> 'mysql' and TABLES.table_schema <> 'sys' AND c.constraint_name IS NULL;
+-----------------+------------------------------------+
| table_schema    | table_name                         |
+-----------------+------------------------------------+
| whmcsdata_whmcs | mod_invoicedata                    |
| whmcsdata_whmcs | tbladminperms                      |
| whmcsdata_whmcs | tblaffiliates                      |
| whmcsdata_whmcs | tblconfiguration                   |
| whmcsdata_whmcs | tblknowledgebaselinks              |
| whmcsdata_whmcs | tbloauthserver_access_token_scopes |
| whmcsdata_whmcs | tbloauthserver_authcode_scopes     |
| whmcsdata_whmcs | tbloauthserver_client_scopes       |
| whmcsdata_whmcs | tbloauthserver_user_authz_scopes   |
| whmcsdata_whmcs | tblpaymentgateways                 |
| whmcsdata_whmcs | tblproductconfiglinks              |
| whmcsdata_whmcs | tblservergroupsrel                 |
+-----------------+------------------------------------+

As a side note, Galera still allows tables without a primary key to exist. However, DELETE operations are not supported on such tables, and they expose you to much bigger problems like node crashes, degraded writeset certification performance, or rows appearing in a different order on different nodes.

To overcome this, our migration plan must include an additional step to fix the storage engine and schema structure, as shown in the next section.

Migration Plan

Due to the restrictions explained in the previous section, our migration plan has to be something like this:

  1. Enable WHMCS maintenance mode
  2. Take a logical backup (mysqldump) of the WHMCS database
  3. Modify the dump files to meet the Galera requirements (convert the storage engine)
  4. Keep one of the Galera nodes running and shut down the remaining nodes
  5. Restore the dump files to the chosen Galera node
  6. Fix the schema structure to meet the Galera requirements (add the missing primary keys)
  7. Bootstrap the cluster from the chosen Galera node
  8. Start the second node and let it sync
  9. Start the third node and let it sync
  10. Point the WHMCS database configuration to the appropriate endpoint
  11. Disable WHMCS maintenance mode

The new architecture can be illustrated as below:

Our WHMCS database name on the cPanel server is “whmcsdata_whmcs”, and we are going to migrate this database to an external three-node MariaDB Galera Cluster deployed by ClusterControl. On top of the database servers, we have ProxySQL (co-located with ClusterControl) running as the MariaDB load balancer, providing a single endpoint for our WHMCS instance. The database name on the cluster will be changed to “portal_whmcs” instead, so we can easily distinguish it.

Firstly, enable the site-wide maintenance mode by going to WHMCS > Setup > General Settings > General > Maintenance Mode and ticking “Tick to enable – prevents client area access when enabled”. This ensures there will be no activity from end users during the database backup operation.

Since we have to make slight modifications to the schema structure to fit Galera, it’s a good idea to create two separate dump files: one with the schema only and another with the data only. On the WHM server, run the following commands as root:

$ mysqldump --no-data -uroot whmcsdata_whmcs > whmcsdata_whmcs_schema.sql
$ mysqldump --no-create-info -uroot whmcsdata_whmcs > whmcsdata_whmcs_data.sql

Then, we have to replace all MyISAM occurrences in the schema dump file with ‘InnoDB’:

$ sed -i 's/MyISAM/InnoDB/g' whmcsdata_whmcs_schema.sql

Verify that we don’t have MyISAM lines anymore in the dump file (it should return nothing):

$ grep -i 'myisam' whmcsdata_whmcs_schema.sql

Transfer the dump files from the WHM server to mariadb1 (192.168.55.51):

$ scp whmcsdata_whmcs_* 192.168.55.51:~

Create the MySQL database. From ClusterControl, go to Manage -> Schemas and Users -> Create Database and specify the database name. Here we use a different database name, “portal_whmcs”. Otherwise, you can manually create the database with the following commands:

$ mysql -uroot -p
MariaDB> CREATE DATABASE portal_whmcs;

Create a MySQL user for this database with its privileges. From ClusterControl, go to Manage -> Schemas and Users -> Users -> Create New User and specify the following:

In case you choose to create the MySQL user manually, run the following statements:

$ mysql -uroot -p 
MariaDB> CREATE USER 'portal_whmcs'@'%' IDENTIFIED BY 'ghU51CnPzI9z';
MariaDB> GRANT ALL PRIVILEGES ON portal_whmcs.* TO portal_whmcs@'%';

Take note that the created database user has to be imported into ProxySQL to allow the WHMCS application to authenticate through the load balancer. Go to Nodes -> pick the ProxySQL node -> Users -> Import Users and select “portal_whmcs”@”%”, as shown in the following screenshot:

In the next window (User Settings), specify Hostgroup 10 as the default hostgroup:
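
If you prefer the ProxySQL admin console over the ClusterControl UI for this step, a sketch of the equivalent statements would be the following (connect to the admin interface on port 6032 with the admin credentials you defined during the ProxySQL deployment):

$ mysql -uadmin -p -h127.0.0.1 -P6032
ProxySQL> INSERT INTO mysql_users (username, password, default_hostgroup, active) VALUES ('portal_whmcs', 'ghU51CnPzI9z', 10, 1);
ProxySQL> LOAD MYSQL USERS TO RUNTIME;
ProxySQL> SAVE MYSQL USERS TO DISK;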

Now the restoration preparation stage is complete.

In Galera, restoring a big database via mysqldump is more efficient on a single-node cluster, which improves the restoration time significantly. Otherwise, every node in the cluster would have to certify every statement from the mysqldump input, which would take longer to complete.

Since we already have a three-node MariaDB Galera Cluster running, let’s stop the MySQL service on mariadb2 and mariadb3, one node at a time, for a graceful scale-down. To shut down a database node, from ClusterControl simply go to Nodes -> Node Actions -> Stop Node -> Proceed. Here is what you should see in the ClusterControl dashboard, where the cluster size is 1 and the status of db1 is Synced and Primary:
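
You can also confirm the cluster size directly from the mysql client on mariadb1:

MariaDB> SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 1     |
+--------------------+-------+

The same check should report 3 again once all the nodes have rejoined at the end of the migration.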

Then, on mariadb1 (192.168.55.51), restore the schema and data accordingly:

$ mysql -uportal_whmcs -p portal_whmcs < whmcsdata_whmcs_schema.sql
$ mysql -uportal_whmcs -p portal_whmcs < whmcsdata_whmcs_data.sql

Once imported, we have to fix the table structure by adding the necessary "id" column (except for table "tblaffiliates", which already has one) and adding a primary key to every table that is missing one:

$ mysql -uportal_whmcs -p
MariaDB> USE portal_whmcs;
MariaDB [portal_whmcs]> ALTER TABLE `tblaffiliates` ADD PRIMARY KEY (id);
MariaDB [portal_whmcs]> ALTER TABLE `mod_invoicedata` ADD `id` INT NOT NULL AUTO_INCREMENT PRIMARY KEY FIRST;
MariaDB [portal_whmcs]> ALTER TABLE `tbladminperms` ADD `id` INT NOT NULL AUTO_INCREMENT PRIMARY KEY FIRST;
MariaDB [portal_whmcs]> ALTER TABLE `tblconfiguration` ADD `id` INT NOT NULL AUTO_INCREMENT PRIMARY KEY FIRST;
MariaDB [portal_whmcs]> ALTER TABLE `tblknowledgebaselinks` ADD `id` INT NOT NULL AUTO_INCREMENT PRIMARY KEY FIRST;
MariaDB [portal_whmcs]> ALTER TABLE `tbloauthserver_access_token_scopes` ADD `id` INT NOT NULL AUTO_INCREMENT PRIMARY KEY FIRST;
MariaDB [portal_whmcs]> ALTER TABLE `tbloauthserver_authcode_scopes` ADD `id` INT NOT NULL AUTO_INCREMENT PRIMARY KEY FIRST;
MariaDB [portal_whmcs]> ALTER TABLE `tbloauthserver_client_scopes` ADD `id` INT NOT NULL AUTO_INCREMENT PRIMARY KEY FIRST;
MariaDB [portal_whmcs]> ALTER TABLE `tbloauthserver_user_authz_scopes` ADD `id` INT NOT NULL AUTO_INCREMENT PRIMARY KEY FIRST;
MariaDB [portal_whmcs]> ALTER TABLE `tblpaymentgateways` ADD `id` INT NOT NULL AUTO_INCREMENT PRIMARY KEY FIRST;
MariaDB [portal_whmcs]> ALTER TABLE `tblproductconfiglinks` ADD `id` INT NOT NULL AUTO_INCREMENT PRIMARY KEY FIRST;
MariaDB [portal_whmcs]> ALTER TABLE `tblservergroupsrel` ADD `id` INT NOT NULL AUTO_INCREMENT PRIMARY KEY FIRST;

Or, we can replace the repeated statements above with a loop in a bash script:

#!/bin/bash
# Add a primary key to every WHMCS table that lacks one

db_user='portal_whmcs'
db_pass='ghU51CnPzI9z'
db_whmcs='portal_whmcs'

# List all tables (excluding the system schemas) without a PRIMARY KEY constraint
tables=$(mysql -u${db_user} "-p${db_pass}"  information_schema -A -Bse "SELECT TABLES.table_name FROM TABLES LEFT JOIN KEY_COLUMN_USAGE AS c ON (TABLES.TABLE_NAME = c.TABLE_NAME AND c.CONSTRAINT_SCHEMA = TABLES.TABLE_SCHEMA AND c.constraint_name = 'PRIMARY' ) WHERE TABLES.table_schema <> 'information_schema' AND TABLES.table_schema <> 'performance_schema' AND TABLES.table_schema <> 'mysql' and TABLES.table_schema <> 'sys' AND c.constraint_name IS NULL;")
mysql_exec="mysql -u${db_user} -p${db_pass} $db_whmcs -e"

for table in $tables
do
        if [ "${table}" = "tblaffiliates" ]
        then
                # tblaffiliates already has an id column; just promote it to primary key
                $mysql_exec "ALTER TABLE ${table} ADD PRIMARY KEY (id)";
        else
                # The remaining tables need a new auto-increment id column as primary key
                $mysql_exec "ALTER TABLE ${table} ADD id INT NOT NULL AUTO_INCREMENT PRIMARY KEY FIRST";
        fi
done
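
After the script completes, you can re-run the earlier information_schema query, scoped to the portal_whmcs schema this time, to confirm that no table is left without a primary key (it should return an empty set):

MariaDB> SELECT TABLES.table_name FROM information_schema.TABLES LEFT JOIN information_schema.KEY_COLUMN_USAGE AS c ON (TABLES.TABLE_NAME = c.TABLE_NAME AND c.CONSTRAINT_SCHEMA = TABLES.TABLE_SCHEMA AND c.constraint_name = 'PRIMARY') WHERE TABLES.table_schema = 'portal_whmcs' AND c.constraint_name IS NULL;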

At this point, it's safe to start the remaining nodes and let them sync up with mariadb1. Start with mariadb2 by going to Nodes -> pick db2 -> Node Actions -> Start Node. Monitor the job progress and make sure mariadb2 is in the Synced and Primary state (see the Overview page for details) before starting mariadb3.

Finally, point the database host to the ProxySQL host on port 6033 inside the WHMCS configuration file, which in our case is located at /home/whmcsdata/public_html/configuration.php:

$ vim configuration.php
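
The relevant database settings would look something like the sketch below (these are WHMCS's standard configuration variables; the values shown match the user and database we created earlier, with the host pointing at ProxySQL instead of localhost):

<?php
// ... other WHMCS settings remain unchanged ...
$db_host = '192.168.55.50:6033';   // ProxySQL host and port
$db_username = 'portal_whmcs';
$db_password = 'ghU51CnPzI9z';
$db_name = 'portal_whmcs';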

Don't forget to disable WHMCS maintenance mode by going to WHMCS > Setup > General Settings > General > Maintenance Mode and unchecking "Tick to enable - prevents client area access when enabled". Our database migration exercise is now complete.

Testing and Tuning

You can verify whether the application's queries are hitting the cluster by looking at ProxySQL's query entries under Nodes -> ProxySQL -> Top Queries:

For the most frequently repeated read-only queries (you can sort them by Count Star), you may cache them to improve the response time and reduce the number of hits to the backend servers. Simply hover over any query and click Cache Query, and the following pop-up will appear:

All you need to do is choose the destination hostgroup and click "Add Rule". You can then verify whether the cached query gets hits under the "Rules" tab:
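
Under the hood, the "Cache Query" action is equivalent to inserting a query rule with a cache_ttl into ProxySQL. A minimal sketch via the admin console would look like this (the digest value below is hypothetical; copy the real one from the Top Queries page, and note that cache_ttl is in milliseconds):

$ mysql -uadmin -p -h127.0.0.1 -P6032
ProxySQL> INSERT INTO mysql_query_rules (rule_id, active, digest, destination_hostgroup, cache_ttl, apply) VALUES (50, 1, '0x9CB182B0DB7B8C46', 20, 5000, 1);
ProxySQL> LOAD MYSQL QUERY RULES TO RUNTIME;
ProxySQL> SAVE MYSQL QUERY RULES TO DISK;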

From the query rules themselves, we can tell that reads (all SELECT statements except SELECT ... FOR UPDATE) are forwarded to hostgroup 20, where connections are distributed across all nodes, while writes (everything other than SELECT) are forwarded to hostgroup 10, where connections go to a single Galera node only. This configuration minimizes the risk of deadlocks that a multi-master setup may cause, which improves the replication performance as a whole.
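
If you want to confirm the routing behaviour, you can inspect these rules directly from the ProxySQL admin console (port 6032, as before):

ProxySQL> SELECT rule_id, match_digest, destination_hostgroup, apply FROM mysql_query_rules ORDER BY rule_id;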

That's it for now. Happy clustering!
