Tips for Storing Your Percona Backups in the Cloud

Paul Namuag


Percona has made many great contributions to the open source world. Among its most downloaded software are its backup tools for MySQL and MongoDB. These tools play a vital role for any organization that takes backups, especially as part of a disaster recovery plan.

When taking a Percona backup, one key consideration is where it will be stored. Storing backups isn't only about recovery: the data also has to be kept safe, so that no intruder can access it and only the people who manage the database and its recovery procedures can.

Organizations use Percona backup tools for their MySQL and MongoDB database deployments, storing those backups in multiple destinations and taking advantage of cloud storage. Storing backups in the cloud is attractive because it is cost efficient, secure, and provides highly available storage when needed.

Now, let's dive in with some tips on how you can store your Percona backups in the cloud.

Security is a Must

How much security you need depends on your setup and its requirements. Security for backups doesn't mean encryption alone is enough, or that encryption alone is a must. It can also be implemented through isolation and tight access controls on the database servers or the storage network where your backup files reside. Keep in mind that encrypting a file at rest and decrypting it take time, especially when the backup data is very large. The cost also depends on the encryption algorithm: the more advanced and complicated the encryption, the more time and resources encryption and decryption consume.

Data At Rest Encryption

When you take a backup and store the data locally, you have choices for keeping it secure and safe. Encrypting the backup is a primary choice. For example, take a logical backup using mysqldump as follows:

Create an Encryption Key File

The key file will be used as your passphrase to encrypt and decrypt backups created with Percona tools (Percona XtraBackup or Percona Backup for MongoDB).

$ openssl rand -base64 24 > /root/isolated_directory/keyfile.key

Now that you have generated the key file, store it safely. Restrict its permissions and, of course, keep it in an isolated, secure location.

$ chmod -R 400 /root/isolated_directory

Create Your Logical Backup

/usr/bin/mysqldump --defaults-file=/etc/my.cnf --flush-privileges --hex-blob --opt --master-data=2 --single-transaction --skip-lock-tables --triggers --routines --events --gtid --databases db1 db2 db3 db4 | gzip -6 -c | openssl enc -aes-256-cbc -pass file:/root/isolated_directory/keyfile.key > /root/mysqldump_2020-12-27_schemaanddata.sql.gz
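Before relying on the key file in production, it is worth verifying that the encrypt/decrypt pipeline round-trips correctly. Here is a minimal local sketch; the sample payload and temporary files are placeholders, not your real backup:

```shell
# Round-trip check: encrypt a sample payload exactly as the backup
# pipeline above does, then decrypt it and confirm the contents match.
KEYFILE=$(mktemp)
SAMPLE=$(mktemp)
openssl rand -base64 24 > "$KEYFILE"
echo "sample backup payload" > "$SAMPLE"

# Encrypt: same gzip level, cipher, and -pass file: as the mysqldump pipeline
gzip -6 -c "$SAMPLE" | openssl enc -aes-256-cbc \
  -pass file:"$KEYFILE" > "$SAMPLE.gz.enc"

# Decrypt and decompress: restoring works the same way, except you would
# pipe the output into mysql instead of a file
openssl enc -d -aes-256-cbc -pass file:"$KEYFILE" \
  -in "$SAMPLE.gz.enc" | gunzip > "$SAMPLE.out"

cmp -s "$SAMPLE" "$SAMPLE.out" && echo "round trip OK"
```

If the final line prints "round trip OK", the key file works and you can safely use the same pipeline for the real dump.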

If you want to explore how to best encrypt your database backups, check out our previous blog, Database Backup Encryption – Best Practices.

Storing The Encryption Key Via Vault

For a stronger level of security, you can store your encryption key or token in a vault, for example HashiCorp Vault. If you want to learn how to set up a vault and store your valuable keys or passwords in it, take time to read up on how to get started with HashiCorp Vault.

If you want to store it with a third-party provider, AWS Secrets Manager can be one of your choices. It is a fully managed service from Amazon that integrates with AWS KMS. You can store and retrieve your encryption keys, database passwords, and SSH credentials.
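As a rough sketch, storing and retrieving the key file with the Vault CLI could look like the following. This assumes a reachable, unsealed Vault server, an authenticated `vault` CLI, and a KV v2 secrets engine mounted at `secret/`; the secret path itself is made up for illustration:

```shell
# Store the encryption key file under a KV path (path is illustrative)
vault kv put secret/percona/backup-key \
  keyfile=@/root/isolated_directory/keyfile.key

# Later, fetch it back before decrypting a backup
vault kv get -field=keyfile secret/percona/backup-key \
  > /root/isolated_directory/keyfile.key
```

With the key held in Vault, the local copy can be deleted after each backup run and re-fetched only when needed for recovery.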

Data In Transit Encryption

Spreading backup copies across multiple destinations provides redundancy and strengthens your disaster recovery plan. One of the primary reasons for storing Percona backups in the cloud is to have a high level of backup redundancy when a failure or data corruption occurs. Cloud storage is known to be secure, fast, reliable, and cost efficient. Most cloud storage offerings are fully managed services, such as the big three providers: Amazon S3, Google Cloud Storage, and Azure Blob Storage. Competing services such as Exoscale and Backblaze also support an S3-compatible API. Whenever files are transferred, make sure they are sent over a TLS/SSL channel. For example, in AWS S3, you can configure the bucket to process requests only over TLS/SSL, like below:


  "Id": "ExamplePolicy",

  "Version": "2012-10-17",

  "Statement": [


      "Sid": "AllowSSLRequestsOnly",

      "Action": "s3:*",

      "Effect": "Deny",

      "Resource": [




      "Condition": {

        "Bool": {

          "aws:SecureTransport": "false"



      "Principal": "*"




For other cloud storage providers, for example Exoscale, your .s3cfg configuration file can look like this:


host_base = sos-{zone}

host_bucket = %(bucket)s.sos-{zone}

access_key = $EXO_SOS_KEY

secret_key = $EXO_SOS_SECRET

use_https = True

Take note that transferring data over TLS/SSL ensures your Percona backup data is encrypted in transit from on-prem to the cloud. TLS is the successor to SSL, so using TLS is recommended over the latter.
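With an HTTPS-enabled .s3cfg like the one above, uploading the encrypted dump is a single command. The bucket name below is made up for illustration:

```shell
# Upload the encrypted dump over HTTPS (use_https = True in .s3cfg).
# Replace the bucket name with your own.
s3cmd put /root/mysqldump_2020-12-27_schemaanddata.sql.gz \
  s3://my-percona-backups/mysqldump_2020-12-27_schemaanddata.sql.gz
```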

Database Storage Availability

Percona backup copies stored in the cloud must be available immediately when needed. If you are storing your Percona backups in the cloud using a lifecycle mechanism, ensure the retention period is long enough for the RPO (Recovery Point Objective) your organization has defined. You also have to think about the retention period and be wary of storage consumption. In that case, define your own lifecycle management policy, or use the lifecycle mechanism your cloud provider offers.
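For example, on S3 a lifecycle policy can transition older backups to a cheaper tier and expire them once the retention period passes. The prefix and day counts below are illustrative; set the expiration to match your organization's RPO:

```json
{
  "Rules": [
    {
      "ID": "percona-backup-retention",
      "Filter": { "Prefix": "backups/" },
      "Status": "Enabled",
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" }
      ],
      "Expiration": { "Days": 365 }
    }
  ]
}
```

A configuration like this can be applied with `aws s3api put-bucket-lifecycle-configuration --bucket your-backup-bucket --lifecycle-configuration file://lifecycle.json`.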

Storage availability also means the network speed and bandwidth on offer can deliver your Percona backup in time when you need it for data recovery. Take note that this has to be part of your disaster recovery plan, so make sure your RTO (Recovery Time Objective) has been tested and reliably meets the expected completion time. The duration of your data recovery determines how long the downtime lasts and when business continuity resumes; it depends on how quickly your database is fully functional, or at least online and able to serve the most important data. The business has to be up and running as quickly as possible.

Cost Efficiency and Service Flexibility

Backup redundancy and its lifecycle management impose costs as well. This might not be an issue for SMEs/SMBs, but for organizations and companies managing large volumes of data, it can be a vital aspect of storing Percona backups in the cloud. Every transfer and every byte stored incurs cost. You have to determine and choose the right storage plan and the right provider based on your particular requirements.

Some small-scale organizations store their database backups in file-based storage, for example Google Drive or Microsoft OneDrive. But database backups, such as those taken with Percona backup tools, require metadata and can be incremental, so it is far better to store them on an object-based storage service. Object storage also offers a lot of flexibility: ACLs, geographically aware storage, and pricing that flexes with the size and resources you choose. In essence, object-based storage services are very cost advantageous, and they let you control who can access files, lifecycle policies, regional/geographic placement, and the performance or bandwidth available when sending and/or receiving data.

Let's look at an example cost comparison.

First, we'll take a look at the Amazon S3 Standard tier as a sample perspective on cost. Take note that S3 Standard is a general purpose tier for any type of data, typically used for frequently accessed data. You can check out the other tiers on the AWS S3 pricing page.

For US East (N. Virginia):

First 50 TB/month: $0.023 per GB
Next 450 TB/month: $0.022 per GB
Over 500 TB/month: $0.021 per GB

The price differs for US West (Northern California):

First 50 TB/month: $0.026 per GB
Next 450 TB/month: $0.025 per GB
Over 500 TB/month: $0.024 per GB


Now you can see that the cost changes depending on the region where the bucket you access, the source of origin, is located.
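To make the numbers concrete, here is a back-of-the-envelope estimate for an illustrative workload: 2 TB (2,048 GB) of backups kept for one month in the us-east-1 Standard tier, which bills $0.023/GB in the first-50-TB bracket:

```shell
# Illustrative estimate: 2048 GB stored for one month at $0.023/GB
monthly_cost=$(awk 'BEGIN { printf "%.2f", 2048 * 0.023 }')
echo "Estimated monthly storage cost: \$${monthly_cost}"
# prints: Estimated monthly storage cost: $47.10
```

The same volume in Northern California ($0.026/GB) would run about $53.25/month, so the region you pick has a real, if modest, effect on the bill.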

Google Cloud Storage, on the other hand, prices Standard Storage in the South Carolina region (us-east1) starting at $0.020 per GB per month. And if data moves between different locations on the same continent, for example from US-EAST1 to NORTHAMERICA-NORTHEAST1, it is charged at $0.01/GB.

Whereas, if we look at Oracle Cloud storage pricing, object storage starts at $0.0255 and object storage requests start at $0.0034. On top of that, they also offer Block Volume Performance Units starting at $0.0017; you can selectively choose the type of VPUs you want to add, and the price increases accordingly.

To sum up, it's not about which provider is the better option or choice. It depends on the type of storage, the mechanisms offered, resources, availability, SLAs, and the other features that make a service reliable when you need your Percona backups in time and on the fly.

Using Automation Software

Whether you are automating backups for Percona Server for MySQL, MariaDB, or Percona Server for MongoDB, it's best to always use software you can rely on to create and kick off the backups.

There are open source scripts you can try or download, or you can create your own using Ansible, Chef, Puppet, or SaltStack. For enterprise software, you can use ClusterControl to manage everything you need. It supports taking backups of Percona Server for MySQL, Percona Server for MongoDB, MySQL, MariaDB, and MongoDB. These are the types of open source databases that work with Percona backup tools and that ClusterControl supports.

ClusterControl lets you create a backup policy and run it on the fly, or even schedule it. The great thing is that it offers backup redundancy: it can store your backup not only on-prem but also in the cloud. Currently, it supports the big three cloud providers, i.e. AWS, GCP, and Azure. You can take a look at our previous blog, where we guide you through creating backups and sending them to the cloud.

Alternatively, another cost-effective solution is Backup Ninja, a SaaS (Software-as-a-Service) platform that is a simple yet efficient solution for database and file backup management. Backup Ninja supports many cloud storage providers you might not have used before, which opens up opportunities to take advantage of their services, or to add automated backups to a cloud storage service you already have. Check out their list on the Cloud Storage Partners page. This SaaS is a well-devised solution for backup management that offers scheduling of database backups, restoring your database backups, backup redundancy, compression, security, and encryption. It offers logical and binary backups, either full or incremental, and supports file backups in addition to database backups. Pricing starts at $40 with unlimited options. A great bargain of Backup Ninja is its free tier, which allows one backup per agent per day and one backup restore per agent per day, but with local storage only.
