
Logical Database Backups Using MySQL Shell

Pon Suresh Pandian


Mysqldump is a popular logical backup tool for MySQL, originally written by Igor Romanenko.

Mysqldump performs logical backups (a set of SQL statements). By default, mysqldump does not dump the information_schema tables, and it never dumps performance_schema. The major drawback of mysqldump, however, is that it uses only a single thread for backup and restore, even if your server has 64 cores. To overcome this drawback, MySQL introduced new utilities in the Shell client. In this blog, I am going to explain these new backup utilities.

Overview of MySQL Shell 

MySQL Shell is a powerful and advanced client and code editor for MySQL servers. MySQL Shell 8.0.21 includes some exciting new utilities to create a logical dump and do a logical restore of an entire database instance, including users.

MySQL Shell 8.0.22 added the ability to dump and restore specific tables.
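All of the utilities below are available from MySQL Shell's JavaScript mode. As a quick sketch (the user, host, and port are placeholders), you can connect like this:

$ mysqlsh --js backupuser@localhost:3306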

Utilities 

  • util.dumpInstance() – Dump an entire database instance, including users.
  • util.dumpSchemas() – Dump a set of schemas.
  • util.loadDump() – Load a dump into a target database.
  • util.dumpTables() – Dump specific tables and views.

util.dumpInstance()

The dumpInstance() utility will dump all the databases that are present in the MySQL data directory. It excludes the information_schema, mysql, ndbinfo, performance_schema, and sys schemas while taking the dump.

Syntax 

util.dumpInstance(outputUrl[, options]) 

When dumping to the local filesystem, outputUrl is a string specifying the path to a local directory where the dump files are to be placed. You can specify an absolute path or a path relative to the current working directory.

This utility provides a dry run option to inspect the schemas and view the compatibility issues, so you can then run the dump with the appropriate compatibility options applied to remove those issues.
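For example, a dry run can be invoked like this before any real dump (a minimal sketch; the target path is a placeholder):

util.dumpInstance("/home/vagrant/dryrun_check", {dryRun: true, ocimds: true})

With dryRun: true, no files are written; the utility only reports what it would dump and any compatibility issues it finds.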

Options 

Let’s look at some important options for this dump utility.

ocimds: [true | false]

When this option is set to true, the DATA DIRECTORY, INDEX DIRECTORY, and ENCRYPTION options in CREATE TABLE statements are commented out in the DDL files, to ensure that all tables are located in the MySQL data directory and use the default schema encryption.

It also checks for any storage engines other than InnoDB in CREATE TABLE statements, for grants of unsuitable privileges to users or roles, and for other compatibility issues.

If any non-conforming SQL statement is found, an exception is raised and the dump is halted.

We suggest using the dryRun option to list all of the issues with the items in the dump before the dumping process is started, and the compatibility option to automatically fix the issues in the dump output.

Note: This option is supported only by the instance dump utility and the schema dump utility.

Example 1 

MySQL  localhost:3306 ssl  cart  JS > util.dumpInstance("/home/vagrant/production_backup", {ocimds: true, compatibility: ["strip_restricted_grants"]})
Acquiring global read lock
Global read lock acquired
All transactions have been started
Locking instance for backup
NOTE: Backup lock is not supported in MySQL 5.7 and DDL changes will not be blocked. The dump may fail with an error or not be completely consistent if schema changes are made while dumping.
Global read lock has been released
Checking for compatibility with MySQL Database Service 8.0.22
NOTE: MySQL Server 5.7 detected, please consider upgrading to 8.0 first. You can check for potential upgrade issues using util.checkForServerUpgrade().
NOTE: User 'backupuser'@'localhost' had restricted privileges (RELOAD, SUPER, CREATE TABLESPACE) removed
NOTE: User 'root'@'127.0.0.1' had restricted privileges (RELOAD, SHUTDOWN, FILE, SUPER, CREATE TABLESPACE) removed
NOTE: User 'root'@'::1' had restricted privileges (RELOAD, SHUTDOWN, FILE, SUPER, CREATE TABLESPACE) removed
NOTE: User 'root'@'localhost' had restricted privileges (RELOAD, SHUTDOWN, FILE, SUPER, CREATE TABLESPACE, PROXY) removed
ERROR: Table 'cart'.'sales' uses unsupported storage engine MyISAM (fix this with 'force_innodb' compatibility option)
Compatibility issues with MySQL Database Service 8.0.22 were found. Please use the 'compatibility' option to apply compatibility adaptations to the dumped DDL.

Util.dumpInstance: Compatibility issues were found (RuntimeError)

We have a MyISAM table in the cart database, so the compatibility check throws an error.

If you want to fix these errors automatically in your dump file, pass the compatibility option as an argument in your command.

Example 2 

MySQL  localhost:3306 ssl  cart  JS > util.dumpInstance("/home/vagrant/production_backup", {dryRun: true, ocimds: true, compatibility: ["strip_restricted_grants","force_innodb"]})
Acquiring global read lock
Global read lock acquired
All transactions have been started
Locking instance for backup
NOTE: Backup lock is not supported in MySQL 5.7 and DDL changes will not be blocked. The dump may fail with an error or not be completely consistent if schema changes are made while dumping.
Global read lock has been released
Checking for compatibility with MySQL Database Service 8.0.22
NOTE: MySQL Server 5.7 detected, please consider upgrading to 8.0 first. You can check for potential upgrade issues using util.checkForServerUpgrade().
NOTE: User 'backupuser'@'localhost' had restricted privileges (RELOAD, SUPER, CREATE TABLESPACE) removed
NOTE: User 'root'@'127.0.0.1' had restricted privileges (RELOAD, SHUTDOWN, FILE, SUPER, CREATE TABLESPACE) removed
NOTE: User 'root'@'::1' had restricted privileges (RELOAD, SHUTDOWN, FILE, SUPER, CREATE TABLESPACE) removed
NOTE: User 'root'@'localhost' had restricted privileges (RELOAD, SHUTDOWN, FILE, SUPER, CREATE TABLESPACE, PROXY) removed
NOTE: Table 'cart'.'sales' had unsupported engine MyISAM changed to InnoDB
Compatibility issues with MySQL Database Service 8.0.22 were found and repaired. Please review the changes made before loading them.
Writing global DDL files
Writing users DDL
Writing DDL for schema `cart`
Writing DDL for table `cart`.`salaries`
Writing DDL for table `cart`.`sales`
Writing DDL for table `cart`.`t1`
Preparing data dump for table `cart`.`salaries`
Data dump for table `cart`.`salaries` will be chunked using column `id`
Preparing data dump for table `cart`.`sales`
Data dump for table `cart`.`sales` will be chunked using column `id`
Preparing data dump for table `cart`.`t1`
NOTE: Could not select a column to be used as an index for table `cart`.`t1`. Chunking has been disabled for this table, data will be dumped to a single file.

Now the dry run is fine and there are no exceptions. Let’s run the dump instance command to take an instance backup.

The target directory must be empty before the export takes place. If the directory does not yet exist in its parent directory, the utility creates it.

Example 3 

MySQL  localhost:3306 ssl  cart  JS > util.dumpInstance("/home/vagrant/production_backup", {compatibility: ["strip_restricted_grants","force_innodb"], threads: 12})
Acquiring global read lock
Global read lock acquired
All transactions have been started
Locking instance for backup
Global read lock has been released
Writing global DDL files
Writing users DDL
Writing DDL for schema `cart`
Writing DDL for view `cart`.`price`
Writing DDL for table `cart`.`dummy`
Writing DDL for table `cart`.`salaries`
Writing DDL for schema `sbtest`
Writing DDL for table `sbtest`.`sbtest1`
Writing DDL for table `sbtest`.`sbtest10`
.
.
.
1 thds dumping - 99% (624.55K rows / ~625.40K rows), 896.15K rows/s, 10.13 MB/s uncompressed, 3.73 MB/s compressed 
Duration: 00:00:00s                                                                                               
Schemas dumped: 2                                                                                                 
Tables dumped: 18                                                                                                 
Uncompressed data size: 7.14 MB                                                                                   
Compressed data size: 2.79 MB                                                                                     
Compression ratio: 2.6                                                                                            
Rows written: 624550                                                                                              
Bytes written: 2.79 MB                                                                                            
Average uncompressed throughput: 7.14 MB/s                                                                        
Average compressed throughput: 2.79 MB/s

Above, we have used the compatibility option, so while taking the dump, the utility rewrites the MyISAM table's DDL to InnoDB in the dump files.

Logs 

[vagrant@centos14 production_backup]$ cat cart@sales.sql
-- MySQLShell dump 1.0.1  Distrib Ver 8.0.22 for Linux on x86_64 - for MySQL 8.0.22 (MySQL Community Server (GPL)), for Linux (x86_64)
--
-- Host: localhost    Database: cart    Table: sales
-- ------------------------------------------------------
-- Server version 5.7.32
--
-- Table structure for table `sales`
--
/*!40101 SET @saved_cs_client     = @@character_set_client */;
/*!50503 SET character_set_client = utf8mb4 */;
CREATE TABLE IF NOT EXISTS `sales` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `name` varchar(30) DEFAULT NULL,
  `address` varchar(30) DEFAULT NULL,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;
/*!40101 SET character_set_client = @saved_cs_client */;

If you are using mysqldump, it stores the output in a single file. MySQL Shell, however, generates multiple files, as explained below.

These are the available files in the backup directory.

[vagrant@centos13 production_backup]$ ls -lrth
total 52K
-rw-r-----. 1 vagrant vagrant  707 Nov  6 02:36 @.json
-rw-r-----. 1 vagrant vagrant  287 Nov  6 02:36 cart.json
-rw-r-----. 1 vagrant vagrant  240 Nov  6 02:36 @.sql
-rw-r-----. 1 vagrant vagrant  240 Nov  6 02:36 @.post.sql
-rw-r-----. 1 vagrant vagrant 2.6K Nov  6 02:36 @.users.sql
-rw-r-----. 1 vagrant vagrant  733 Nov  6 02:36 cart@salaries.sql
-rw-r-----. 1 vagrant vagrant  486 Nov  6 02:36 cart.sql
-rw-r-----. 1 vagrant vagrant  575 Nov  6 02:36 cart@salaries.json
-rw-r-----. 1 vagrant vagrant    8 Nov  6 02:36 cart@salaries@@0.tsv.zst.idx
-rw-r-----. 1 vagrant vagrant    8 Nov  6 02:36 cart@salaries@@1.tsv.zst.idx
-rw-r-----. 1 vagrant vagrant   47 Nov  6 02:36 cart@salaries@@0.tsv.zst
-rw-r-----. 1 vagrant vagrant   24 Nov  6 02:36 cart@salaries@@1.tsv.zst
-rw-r-----. 1 vagrant vagrant  252 Nov  6 02:36 @.done.json
  • This @.json file contains server details and a list of users, database names, and their character sets.
  • This cart.json file contains view, stored procedure, and function names, along with the list of tables.
  • These @.sql and @.post.sql files contain MySQL server version details.
  • This @.users.sql file contains a list of database users.
  • This cart@salaries.sql file contains the table structure.
  • This cart.sql file contains the database creation statement.
  • This cart@salaries.json file contains column names and character sets.
  • The cart@salaries@@0.tsv.zst.idx file is a binary file. It stores an index for the corresponding data file.
  • The cart@salaries@@0.tsv.zst file is a zstd-compressed TSV file that stores the table data (you can inspect it as shown below).
  • This @.done.json file contains backup end time and data file sizes in KB.
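Since the data chunks are zstd-compressed TSV files, you can peek at their contents from the operating system if the zstd tool is installed (a quick sketch):

[vagrant@centos13 production_backup]$ zstd -dc cart@salaries@@0.tsv.zst | head

This decompresses the chunk to stdout and shows the first few tab-separated rows.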

util.dumpSchemas()

This utility dumps the specific schemas that you mention in its arguments.

Syntax 

util.dumpSchemas(schemas, outputUrl[, options])

Example 

MySQL  localhost:3306 ssl  cart  JS > util.dumpSchemas(["cart"], "/home/vagrant/production_backup", {compatibility: ["strip_restricted_grants","force_innodb"], threads: 12})
Acquiring global read lock
Global read lock acquired
All transactions have been started
Locking instance for backup
NOTE: Backup lock is not supported in MySQL 5.7 and DDL changes will not be blocked. The dump may fail with an error or not be completely consistent if schema changes are made while dumping.
Global read lock has been released
Writing global DDL files
Writing DDL for table `cart`.`price_tag`
Writing DDL for schema `cart`
Writing DDL for table `cart`.`salaries`
Writing DDL for table `cart`.`sales`
NOTE: Table 'cart'.'sales' had unsupported engine MyISAM changed to InnoDB
Preparing data dump for table `cart`.`price_tag`
Data dump for table `cart`.`price_tag` will be chunked using column `id`
Data dump for table `cart`.`price_tag` will be written to 1 file
Preparing data dump for table `cart`.`salaries`
Data dump for table `cart`.`salaries` will be chunked using column `id`
Data dump for table `cart`.`salaries` will be written to 2 files
Preparing data dump for table `cart`.`sales`
Data dump for table `cart`.`sales` will be chunked using column `id`
Running data dump using 12 threads.
NOTE: Progress information uses estimated values and may not be accurate.
Data dump for table `cart`.`sales` will be written to 1 file                                               
1 thds dumping - 150% (3 rows / ~2 rows), 0.00 rows/s, 0.00 B/s uncompressed, 0.00 B/s compressed          
Duration: 00:00:00s                                                                              
Schemas dumped: 1                                                                                
Tables dumped: 3                                                                                 
Uncompressed data size: 53 bytes                                                                 
Compressed data size: 0 bytes                                                                    
Compression ratio: 53.0                                                                          
Rows written: 3                                                                                  
Bytes written: 0 bytes                                                                           
Average uncompressed throughput: 53.00 B/s                                                       
Average compressed throughput: 0.00 B/s                

util.dumpTables()

If you want to dump specific tables, you can use the dumpTables utility.

For larger tables, mysqldump will take more time; the dumpTables utility reduces this time by dumping with multiple threads.

Syntax 

util.dumpTables(schema, tables, outputUrl[, options])

Example 

MySQL  localhost:33060+ ssl  sbtest  JS > util.dumpTables("sbtest", [ "sbtest14", "sbtest16" ], "/home/vagrant/specific_table",{threads: 12})
Acquiring global read lock
Global read lock acquired
All transactions have been started
Locking instance for backup
Global read lock has been released
Writing global DDL files
Writing DDL for table `sbtest`.`sbtest16`
Writing DDL for table `sbtest`.`sbtest14`
Preparing data dump for table `sbtest`.`sbtest16`
Data dump for table `sbtest`.`sbtest16` will be chunked using column `id`
Preparing data dump for table `sbtest`.`sbtest14`
Data dump for table `sbtest`.`sbtest14` will be chunked using column `id`
Running data dump using 12 threads.
NOTE: Progress information uses estimated values and may not be accurate.
Data dump for table `sbtest`.`sbtest16` will be written to 1 file
Data dump for table `sbtest`.`sbtest14` will be written to 1 file
1 thds dumping - 99% (78.07K rows / ~78.08K rows), 0.00 rows/s, 0.00 B/s uncompressed, 0.00 B/s compressed
Duration: 00:00:00s                                                                                       
Schemas dumped: 1                                                                                         
Tables dumped: 2                                                                                          
Uncompressed data size: 892.39 KB                                                                         
Compressed data size: 348.91 KB                                                                           
Compression ratio: 2.6                                                                                    
Rows written: 78068                                                                                       
Bytes written: 348.91 KB                                                                                  
Average uncompressed throughput: 892.39 KB/s                                                              
Average compressed throughput: 348.91 KB/s 

Dump Loading Utility 

The dump loading utility provides data streaming to remote storage, parallel loading of tables or table chunks, progress state tracking, resume and reset capability, and the option of concurrent loading while the dump is still taking place.

Note: The dump loading utility uses the LOAD DATA LOCAL INFILE statement, so the local_infile system variable must be enabled globally on the target server while importing.

The dump loading utility checks whether the sql_require_primary_key system variable is set to ON, and if it is, returns an error if there is a table in the dump files with no primary key.
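Both prerequisites can be prepared from any SQL session on the target server before the load (a minimal sketch):

-- Required by util.loadDump(), which uses LOAD DATA LOCAL INFILE
SET GLOBAL local_infile = 1;

-- If this returns ON, every table in the dump must have a primary key
SHOW VARIABLES LIKE 'sql_require_primary_key';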

Syntax 

util.loadDump(url[, options])

Example 

MySQL  localhost:3306 ssl  sbtest  JS > util.loadDump("/home/vagrant/specific_table", {progressFile: "/home/vagrant/specific_table/log.json", threads: 12})
Loading DDL and Data from '/home/vagrant/specific_table' using 12 threads.
Opening dump...
Target is MySQL 8.0.22. Dump was produced from MySQL 8.0.22
Checking for pre-existing objects...
Executing common preamble SQL
[Worker006] Executing DDL script for `sbtest`.`sbtest1`
[Worker004] Executing DDL script for `sbtest`.`sbtest12`
2 thds loading  100% (892.39 KB / 892.39 KB), 0.00 B/s, 0 / 2 tables done[Worker001] sbtest@sbtest12@@0.tsv.zst: Records: 39034  Deleted: 0  Skipped: 0  Warnings: 0
[Worker005] sbtest@sbtest1@@0.tsv.zst: Records: 39034  Deleted: 0  Skipped: 0  Warnings: 0
Executing common postamble SQL                                                                                                   
2 chunks (78.07K rows, 892.39 KB) for 2 tables in 1 schemas were loaded in 1 sec (avg throughput 892.39 KB/s)
0 warnings were reported during the load.
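Because progress is tracked in the progressFile, rerunning the same command after an interruption resumes the load where it left off. To discard the recorded progress and start from scratch, the resetProgress option can be added (a sketch based on the example above):

MySQL  localhost:3306 ssl  sbtest  JS > util.loadDump("/home/vagrant/specific_table", {progressFile: "/home/vagrant/specific_table/log.json", resetProgress: true, threads: 12})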

By default, full-text indexes for a table are created only after the table is completely loaded, which speeds up the import.

You can also choose to disable index creation during the import and create the indexes afterward.

The dump loading utility uses multiple threads to maximize parallelism. If the dump files were compressed by MySQL Shell’s dump utilities, the dump loading utility handles the decompression.

You can select individual tables or schemas to import or exclude from the import.

You can choose to skip binary logging on the target MySQL instance during the import using a SET sql_log_bin=0 statement.
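These behaviors are all controlled through load options; the sketch below (the option values are illustrative) defers all index creation, loads only the cart schema, and skips binary logging:

MySQL  localhost:3306 ssl  JS > util.loadDump("/home/vagrant/production_backup", {deferTableIndexes: "all", includeSchemas: ["cart"], skipBinlog: true, threads: 12})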

Conclusion

These are some of the most powerful utilities in MySQL 8.0. It is now possible to dump from MySQL 5.6 and load these dumps into MySQL 5.7 or 8.0, though dumping user accounts is not supported when dumping from MySQL 5.6. In my next blog, we shall compare the backup/restore speed of mysqldump and the shell utilities.
