
How to Perform Online Schema Changes on MySQL Using gh-ost

Krzysztof Ksiazek

In the previous blog post, we discussed how gh-ost works internally and how it compares to Percona’s pt-online-schema-change. Today we’d like to focus on operations – how can we test a schema change with gh-ost to verify it can be executed without any issues? And how do we go ahead and perform the actual schema change?

Testing migration

Ensuring that a migration will go smoothly is one of the most important steps in the whole schema change process. If you value your data, then you definitely want to avoid any risk of data corruption or partial data transformation. Let’s see how gh-ost allows you to test your migration.

gh-ost gives you numerous ways to test a migration. First of all, you can execute a no-op migration simply by skipping the --execute flag. Let's look at an example: we want to add a column to a table.

root@ip-172-30-4-235:~# ./gh-ost --host=172.30.4.235 --user=sbtest --password=sbtest --database=sbtest1 --table=sbtest1 --alter="ADD COLUMN x INT NOT NULL DEFAULT '0'" --chunk-size=2000 --max-load=Threads_connected=20

Here we pass the access details (user, password), the database and the table to alter, and we define the change we want to make. We also set the chunk size for the background copy process and define what we consider maximum load. The --max-load flag accepts MySQL status counters (not all of them make sense in this context); we used Threads_connected, but we could use, for example, Threads_running. Once this threshold is crossed, gh-ost starts to throttle writes.
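For illustration, here is a variant of the same no-op command (thresholds picked arbitrarily) that throttles on Threads_running instead, and also defines a critical load at which gh-ost aborts the migration; the critical-load setting is echoed in the status output below:

./gh-ost --host=172.30.4.235 --user=sbtest --password=sbtest --database=sbtest1 --table=sbtest1 --alter="ADD COLUMN x INT NOT NULL DEFAULT '0'" --chunk-size=2000 --max-load=Threads_running=25 --critical-load=Threads_running=100

Back to our no-op run, gh-ost starts by printing a summary of the migration: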

# Migrating `sbtest1`.`sbtest1`; Ghost table is `sbtest1`.`_sbtest1_gho`
# Migrating ip-172-30-4-4:3306; inspecting ip-172-30-4-235:3306; executing on ip-172-30-4-235
# Migration started at Tue Dec 20 14:00:45 +0000 2016
# chunk-size: 2000; max-lag-millis: 1500ms; max-load: Threads_connected=20; critical-load: ; nice-ratio: 0.000000

Next, we see information about the migration: which table we alter and which table is used as the ghost (temporary) table. gh-ost creates two tables: the one with the _gho suffix is a temporary table with the new schema, and it is the target of the data copy process; the second one, with the _ghc suffix, stores migration logs and status. We can also see a couple of other defaults, such as the maximum acceptable replication lag of 1500 milliseconds (1.5 seconds). gh-ost can work with an external lag query to get sub-second granularity for lag control; if you don't set the --replication-lag-query flag, Seconds_Behind_Master from SHOW SLAVE STATUS will be used, which has a granularity of one second.
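If you want tighter lag-based throttling, you can, for example, lower the threshold and explicitly list the replicas gh-ost should watch. A sketch (the 1000 ms value and the replica address are just an illustration):

./gh-ost --host=172.30.4.235 --user=sbtest --password=sbtest --database=sbtest1 --table=sbtest1 --alter="ADD COLUMN x INT NOT NULL DEFAULT '0'" --chunk-size=2000 --max-load=Threads_connected=20 --max-lag-millis=1000 --throttle-control-replicas=172.30.4.235:3306

The summary printed by gh-ost continues: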

# throttle-additional-flag-file: /tmp/gh-ost.throttle
# Serving on unix socket: /tmp/gh-ost.sbtest1.sbtest1.sock

Here we have information about the throttle flag file: creating it will automatically trigger throttling in gh-ost. We also have a unix socket file, which can be used to control gh-ost's configuration at runtime.
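A quick sketch of how that looks in practice (socket path as printed above):

touch /tmp/gh-ost.throttle                             # throttling kicks in
rm /tmp/gh-ost.throttle                                # throttling stops
echo status | nc -U /tmp/gh-ost.sbtest1.sbtest1.sock   # query the running migration over the socket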

Copy: 0/0 100.0%; Applied: 0; Backlog: 0/100; Time: 1s(total), 0s(copy); streamer: binlog.000042:102283; State: migrating; ETA: due
CREATE TABLE `_sbtest1_gho` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `k` int(10) unsigned NOT NULL DEFAULT '0',
  `c` char(120) NOT NULL DEFAULT '',
  `pad` char(60) NOT NULL DEFAULT '',
  `x` int(11) NOT NULL DEFAULT '0',
  PRIMARY KEY (`id`),
  KEY `k_1` (`k`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 MAX_ROWS=1000000

Finally, we have information about the progress (nothing interesting here, as we ran a no-op change) and about the schema of the target table.

Now that we have tested the no-op change, it's time for a more real-life test. Again, gh-ost gives you an option to verify that everything goes as planned: we can run the change on one of our database replicas, using the --test-on-replica flag, and verify that it went fine. gh-ost stops replication for us as soon as the change completes, to ensure that we can compare the data in the old and the new table. Comparing tables with different schemas is not that easy, so we may want to start with a change which doesn't modify anything. For example:

ALTER TABLE … ENGINE=InnoDB;

Let’s run this migration to verify that gh-ost actually does its job correctly:

root@ip-172-30-4-235:~# ./gh-ost --host=172.30.4.235 --user=sbtest --password=sbtest --database=sbtest1 --table=sbtest1 --alter="ENGINE=InnoDB" --chunk-size=2000 --max-load=Threads_connected=20 --test-on-replica --execute

Once it’s done, you will see your slave in the following state.

mysql> \P grep Running
PAGER set to 'grep Running'
mysql> SHOW SLAVE STATUS\G
             Slave_IO_Running: No
            Slave_SQL_Running: No
      Slave_SQL_Running_State:
1 row in set (0.00 sec)

Replication has been stopped, so no new changes are being applied to the replica.

mysql> SHOW TABLES FROM sbtest1;
+-------------------+
| Tables_in_sbtest1 |
+-------------------+
| _sbtest1_gho      |
| sbtest1           |
+-------------------+
2 rows in set (0.00 sec)

The ghost table has been left in place for you to look into. Since we ran a no-op alter, we can compare both tables to verify that the whole process worked flawlessly. There are a couple of methods to do that. You can, for example, dump the contents of each table via SELECT … INTO OUTFILE and then compare the MD5 checksums of the dump files (a sketch of this approach follows the CHECKSUM output below). You can also use the CHECKSUM TABLE command in MySQL:

mysql> CHECKSUM TABLE sbtest1.sbtest1, sbtest1._sbtest1_gho EXTENDED;
+----------------------+-----------+
| Table                | Checksum  |
+----------------------+-----------+
| sbtest1.sbtest1      | 851491558 |
| sbtest1._sbtest1_gho | 851491558 |
+----------------------+-----------+
2 rows in set (9.27 sec)
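
The dump-and-hash approach mentioned above would look roughly like this, assuming the FILE privilege and a secure_file_priv setting that allows writing to /tmp; we order by the primary key so that the dump files are directly comparable:

mysql> SELECT * INTO OUTFILE '/tmp/sbtest1_old.txt' FROM sbtest1.sbtest1 ORDER BY id;
mysql> SELECT * INTO OUTFILE '/tmp/sbtest1_new.txt' FROM sbtest1._sbtest1_gho ORDER BY id;

root@ip-172-30-4-235:~# md5sum /tmp/sbtest1_old.txt /tmp/sbtest1_new.txt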

As long as checksums are identical (no matter how you calculated them), you should be safe to assume that both tables are identical and the migration process went fine.
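Once you are satisfied with the comparison, you would typically clean up the test artifacts on the replica and bring it back into replication, along the lines of:

mysql> DROP TABLE sbtest1._sbtest1_gho;
mysql> START SLAVE;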

Performing an actual migration

Once we have verified that gh-ost can execute our schema change correctly, it's time to actually run it. Keep in mind that you may need to manually drop the old tables that gh-ost created while you were testing the migration; you can also use the --initially-drop-ghost-table and --initially-drop-old-table flags to ask gh-ost to do it for you (a variant using those flags is shown after the command below). The final command is exactly the same as the one we used to test our change; we just added --execute to it.

./gh-ost --host=172.30.4.235 --user=sbtest --password=sbtest --database=sbtest1 --table=sbtest1 --alter="ADD COLUMN x INT NOT NULL DEFAULT '0'" --chunk-size=2000 --max-load=Threads_connected=20 --execute
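If tables from the earlier test runs are still around, the same command can be extended with the cleanup flags mentioned above, for example:

./gh-ost --host=172.30.4.235 --user=sbtest --password=sbtest --database=sbtest1 --table=sbtest1 --alter="ADD COLUMN x INT NOT NULL DEFAULT '0'" --chunk-size=2000 --max-load=Threads_connected=20 --initially-drop-ghost-table --initially-drop-old-table --execute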

Once started, we’ll see a summary of the job. The main change is that the “migrating” host now points to our master, 172.30.4.4, while one of the slaves, 172.30.4.235, is used to read the binary logs.

# Migrating `sbtest1`.`sbtest1`; Ghost table is `sbtest1`.`_sbtest1_gho`
# Migrating ip-172-30-4-4:3306; inspecting ip-172-30-4-235:3306; executing on ip-172-30-4-235
# Migration started at Fri Dec 23 19:18:00 +0000 2016
# chunk-size: 2000; max-lag-millis: 1500ms; max-load: Threads_connected=20; critical-load: ; nice-ratio: 0.000000
# throttle-additional-flag-file: /tmp/gh-ost.throttle
# Serving on unix socket: /tmp/gh-ost.sbtest1.sbtest1.sock

We can also see progress messages printed by gh-ost:

Copy: 0/9982267 0.0%; Applied: 0; Backlog: 7/100; Time: 4s(total), 0s(copy); streamer: binlog.000074:808522953; State: migrating; ETA: N/A
Copy: 0/9982267 0.0%; Applied: 538; Backlog: 100/100; Time: 5s(total), 1s(copy); streamer: binlog.000074:808789786; State: migrating; ETA: N/A
Copy: 0/9982267 0.0%; Applied: 1079; Backlog: 100/100; Time: 6s(total), 2s(copy); streamer: binlog.000074:809092031; State: migrating; ETA: N/A
Copy: 0/9982267 0.0%; Applied: 1580; Backlog: 100/100; Time: 7s(total), 3s(copy); streamer: binlog.000074:809382067; State: migrating; ETA: N/A
Copy: 0/9982267 0.0%; Applied: 2171; Backlog: 84/100; Time: 8s(total), 4s(copy); streamer: binlog.000074:809718243; State: migrating; ETA: N/A
Copy: 4000/9982267 0.0%; Applied: 2590; Backlog: 33/100; Time: 9s(total), 5s(copy); streamer: binlog.000074:810697550; State: migrating; ETA: N/A
Copy: 12000/9982267 0.1%; Applied: 3006; Backlog: 5/100; Time: 10s(total), 6s(copy); streamer: binlog.000074:812459945; State: migrating; ETA: N/A
Copy: 28000/9982267 0.3%; Applied: 3348; Backlog: 12/100; Time: 11s(total), 7s(copy); streamer: binlog.000074:815749963; State: migrating; ETA: N/A
Copy: 46000/9982267 0.5%; Applied: 3736; Backlog: 0/100; Time: 12s(total), 8s(copy); streamer: binlog.000074:819054426; State: migrating; ETA: N/A
Copy: 60000/9982267 0.6%; Applied: 4032; Backlog: 4/100; Time: 13s(total), 9s(copy); streamer: binlog.000074:822321562; State: migrating; ETA: N/A
Copy: 78000/9982267 0.8%; Applied: 4340; Backlog: 12/100; Time: 14s(total), 10s(copy); streamer: binlog.000074:825982397; State: migrating; ETA: N/A
Copy: 94000/9982267 0.9%; Applied: 4715; Backlog: 0/100; Time: 15s(total), 11s(copy); streamer: binlog.000074:829283130; State: migrating; ETA: N/A
Copy: 114000/9982267 1.1%; Applied: 5060; Backlog: 24/100; Time: 16s(total), 12s(copy); streamer: binlog.000074:833357982; State: migrating; ETA: 17m19s
Copy: 130000/9982267 1.3%; Applied: 5423; Backlog: 16/100; Time: 17s(total), 13s(copy); streamer: binlog.000074:836654200; State: migrating; ETA: 16m25s

From these lines we can see how many rows have been copied, how many events have been applied from the binary logs, whether there is a backlog of binlog events to apply, how long the whole process and the data copy have been running, the binlog coordinates at which gh-ost is currently reading events, the state of the job (migrating, throttled, etc.) and the estimated time to complete the process.

It is important to remember that the number of rows to copy is just an estimate, based on the EXPLAIN output for:

SELECT * FROM yourschema.yourtable;

You can see it below in the ‘rows’ column of the EXPLAIN output and in the gh-ost status output:

mysql> EXPLAIN SELECT * FROM sbtest1.sbtest1\G
*************************** 1. row ***************************
           id: 1
  select_type: SIMPLE
        table: sbtest1
   partitions: NULL
         type: ALL
possible_keys: NULL
          key: NULL
      key_len: NULL
          ref: NULL
         rows: 9182788
     filtered: 100.00
        Extra: NULL
1 row in set, 1 warning (0.00 sec)
Copy: 0/9182788 0.0%; Applied: 0; Backlog: 0/100; Time: 1m15s(total), 0s(copy); streamer: binlog.000111:374831609; State: migrating; ETA: N/A
Copy: 0/9182788 0.0%; Applied: 0; Backlog: 100/100; Time: 1m20s(total), 5s(copy); streamer: binlog.000111:374945268; State: throttled, lag=33.166494s; ETA: N/A
Copy: 0/9182788 0.0%; Applied: 0; Backlog: 100/100; Time: 1m25s(total), 10s(copy); streamer: binlog.000111:374945268; State: throttled, lag=2.766375s; ETA: N/A
Copy: 0/9182788 0.0%; Applied: 1907; Backlog: 100/100; Time: 1m30s(total), 15s(copy); streamer: binlog.000111:375777140; State: migrating; ETA: N/A
Copy: 0/9182788 0.0%; Applied: 4543; Backlog: 100/100; Time: 1m35s(total), 20s(copy); streamer: binlog.000111:376924495; State: migrating; ETA: N/A

If you are interested in precise numbers, you can use the --exact-rowcount flag. With it, gh-ost executes SELECT COUNT(*) FROM yourtable; up front, making sure that the number of rows to copy is calculated precisely.
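For example, the same migration with an exact row count would be started like this:

./gh-ost --host=172.30.4.235 --user=sbtest --password=sbtest --database=sbtest1 --table=sbtest1 --alter="ADD COLUMN x INT NOT NULL DEFAULT '0'" --chunk-size=2000 --max-load=Threads_connected=20 --exact-rowcount --execute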

After some time, gh-ost should complete the change, leaving the old table behind with a _del suffix (_yourtable_del). In case something went wrong, you can still recover the old data and then, using the binary logs, replay any events which are missing. Obviously, it’s not the cleanest or fastest way to recover, but it is possible, and we’d surely take it over data loss.
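As a rough sketch of that fallback (the broken-table name, binlog file and position below are placeholders), you could put the old data back in place and then replay the writes that happened after the cut-over:

mysql> RENAME TABLE sbtest1.sbtest1 TO sbtest1.sbtest1_broken, sbtest1._sbtest1_del TO sbtest1.sbtest1;

root@ip-172-30-4-235:~# mysqlbinlog --database=sbtest1 --start-position=4 binlog.000075 | mysql -u sbtest -psbtest sbtest1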

What we described above is the default way in which gh-ost performs a migration: it reads the binary logs from a slave, inspects the table on a slave, and executes the changes on the master. This way we minimize the extra load put on the master. If you’d like to execute everything on the master, that is possible too, as long as the master uses row-based replication (RBR).
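A quick way to confirm that before pointing gh-ost at the master:

mysql> SHOW GLOBAL VARIABLES LIKE 'binlog_format';

It should report ROW.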

To execute our change on the master, we need to run gh-ost as shown below. We pass the master’s IP in the --host flag and add the --allow-on-master flag to tell gh-ost that we are going to run the whole process on the master only.

./gh-ost --host=172.30.4.4 --user=sbtest --password=sbtest --database=sbtest1 --table=sbtest1 --alter="ADD COLUMN x INT NOT NULL DEFAULT '0'" --chunk-size=2000 --max-load=Threads_connected=20 --allow-on-master --execute

As you can see, gh-ost gives you numerous ways to ensure that a schema change is performed smoothly and safely. We cannot stress enough how important it is for a DBA to be able to test every operation. Flexibility is also very welcome: the default behavior of reducing load on the master makes perfect sense, but it is good that gh-ost still allows you to execute everything on the master only.

In the next blog post, we are going to discuss some safety measures that come with gh-ost. Namely, we will talk about its throttling mechanism and ways to perform runtime configuration changes.
