
Basic Considerations for Taking a MongoDB Backup

Onyancha Brian Henry

Database systems have a responsibility to store data and keep it consistently available whenever it is needed. Most companies fail to continue with business after a data loss caused by a database failure, a security breach, human error, or a catastrophe that completely destroys the operating nodes in production. Keeping databases in a single data center puts one at high risk of losing all the data in any of these outages.

Replication and backup are the most commonly used ways of ensuring high availability of data. The choice between the two depends on how frequently the data changes. Backup is preferred where data does not change frequently and there is no expectation of accumulating many backup files. Replication, on the other hand, is preferred for frequently changing data and brings other merits, such as serving data from a specific location to reduce request latency. However, both replication and backup can be used together for maximum data integrity and consistency during restoration after any failure.

Database backups offer more than a restoration point: they also provide a basis for creating new environments for development, open access, and staging without tampering with production. The development team can quickly and easily test newly integrated features and accelerate their delivery. Backups can also serve as a checkpoint for code errors whenever the resulting data is not consistent.

Considerations for Backing Up MongoDB

Backups are created at certain points in time and act as a snapshot of the data the database holds at that moment. If the database fails, we can use the last backup file to roll it back to a point before the failure. However, one needs to take some factors into consideration before doing a recovery, and they include:

  1. Recovery Point Objective
  2. Recovery Time Objective
  3. Database and Snapshot Isolation
  4. Complications with Sharding
  5. Restoration Process
  6. Performance Factors and Available Storage
  7. Flexibility
  8. Complexity of Deployment

Recovery Point Objective

This determines how much data you are prepared to lose during the backup and restoration process. For example, if we have user data and clickstream data, user data will be given priority over the clickstream analytics, since the latter can be regenerated by monitoring operations in your application after restoration. Continuous backup, carried out at close intervals, should be preferred for critical data such as banking information, manufacturing data, and communication systems information. If the data does not change frequently, losing more of it may be acceptable and less expensive, for example by restoring a snapshot taken 6 months or a year earlier.

Recovery Time Objective

This analyzes and determines how quickly the restoration operation can be done. During recovery, your applications will incur some downtime, which is directly proportional to the amount of data that needs to be recovered: restoring a large data set takes longer.

Database and Snapshot Isolation

Isolation is a measure of how close backup snapshots are to the primary database servers, both logically and physically. If they are too close, recovery time is reduced at the expense of an increased likelihood of the backups being destroyed at the same time as the database. It is not advisable to host backups and the production environment on the same system, so that a disruption on the production servers cannot propagate to the backups as well.

Complications with Sharding

A database distributed through sharding introduces additional backup complexity, and write activities may have to be paused across the whole system. Different shards will finish different types of backups at different times. Considering logical backups and snapshot backups:

Logical Backups

  • Shards are of different sizes and hence will finish at different times.
  • mongodump cannot use the --oplog option against a sharded cluster, so the per-shard dumps will not be point-in-time consistent with each other (see the sketch after this list).
  • The balancer could be off while it is supposed to be on, simply because some shards may not have finished the restoration process yet.
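
A minimal sketch of the difference, assuming hypothetical hostnames and replica set names: --oplog can give a point-in-time dump of a single replica set, but it cannot be used through mongos, so each shard's replica set has to be dumped on its own and the dumps will not line up to one consistent moment.

  # Point-in-time dump of a single replica set: --oplog captures oplog entries
  # written while the dump runs so they can be replayed on restore.
  mongodump --host rs0/replica-1.example.com:27017 --oplog --gzip --out /backups/rs0

  # In a sharded cluster the same option is not available through mongos, so each
  # shard's replica set is dumped separately and finishes at its own time.
  mongodump --host shard1rs/shard1-a.example.com:27017 --oplog --gzip --out /backups/shard1
  mongodump --host shard2rs/shard2-a.example.com:27017 --oplog --gzip --out /backups/shard2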

Snapshot Backups

  • Snapshot backups work well for a single replica set from version 3.2 onward. You should, therefore, consider updating your MongoDB version (see the sketch below).
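
A minimal sketch of a snapshot backup taken on a secondary, assuming an LVM volume group and hostnames that are purely illustrative. With WiredTiger and journaling on a single volume the fsyncLock step is not strictly required, but it is shown here as a conservative default.

  # Flush writes and block the secondary while the snapshot is created.
  mongo --host secondary-1.example.com --eval "db.fsyncLock()"
  # Take a volume-level snapshot of the data directory (hypothetical volume names).
  lvcreate --snapshot --size 10G --name mongo-backup-snap /dev/vg0/mongo-data
  # Unlock the node as soon as the snapshot exists; the snapshot can then be
  # mounted, archived, and copied off-site.
  mongo --host secondary-1.example.com --eval "db.fsyncUnlock()"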

Restoration Process

Some people carry out backups without testing whether they will actually work for a restoration. The whole point of a backup is to provide restoration capability; otherwise it is useless. You should always restore your backups on separate test servers to verify that they work.
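
A minimal sketch of such a test restore, assuming the dump from the earlier example and a hypothetical test host. mongorestore's --dryRun flag (available in recent versions) checks the restore without loading data, and --oplogReplay replays the oplog captured by --oplog to reach a consistent point in time.

  # Validate the dump against a throwaway test instance without writing data.
  mongorestore --host test-restore.example.com:27017 --gzip --dryRun /backups/rs0
  # Then perform the real test restore, replaying the captured oplog.
  mongorestore --host test-restore.example.com:27017 --gzip --oplogReplay --drop /backups/rs0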

Performance Factors and Available Storage

Backups also tend to take up as much space as the data in the database itself, and they need to be compressed so that they do not occupy unnecessary space and eat into the overall storage resources of the system. They can be archived into zip files, reducing their overall size. Besides, as mentioned before, one can keep the backups in a different data center from the database itself.
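
As a small illustration, assuming a hypothetical connection string and backup path, mongodump can compress the dump on the fly and write it as a single archive file, which is then easy to ship to a different data center or object store.

  # Write a single compressed archive instead of a directory of BSON files.
  mongodump --uri "mongodb://db-1.example.com:27017" --gzip --archive=/backups/appdb.archive.gz
  # Restore from the same archive later.
  mongorestore --uri "mongodb://test-restore.example.com:27017" --gzip --archive=/backups/appdb.archive.gz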

Backups can also affect the performance of the database, and some approaches will degrade it. In that case, continuous backups become a setback and should be converted to scheduled backups, for example run during maintenance windows. In most cases, secondary servers are deployed to support backups. In this case:

  • Single nodes cannot be consistently backed up, because mongodump uses read-uncommitted reads and, without an oplog to replay, the resulting backup is not a safe, point-in-time copy.
  • Use secondary nodes for backups (see the sketch after this list), since the process takes time proportional to the amount of data involved and the connected applications will incur some downtime. If you back up from the primary, which also has to keep the oplog up to date, you may lose some data during that downtime.
  • The restore process takes a lot of time, yet the storage resources assigned to it are often too small.
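
A minimal sketch of dumping from a secondary rather than the primary, assuming illustrative hostnames and a replica set named rs0: the --readPreference flag keeps the dump load off the primary so it can continue serving writes.

  # Read the dump from a secondary member so the primary keeps serving writes.
  mongodump --host rs0/db-1.example.com:27017,db-2.example.com:27017 \
    --readPreference=secondary --gzip --out /backups/rs0-secondary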

Flexibility

Often you may not want some of the data to be part of a backup. As in the Recovery Point Objective example, one may want the recovery done with the user-click data filtered out. To do so, you need a partial backup strategy that provides the flexibility to filter out the data you are not interested in, thus reducing the recovery duration and the resources that would otherwise be wasted. Incremental backups can also be useful, so that only the parts of the data that have changed since the last snapshot are backed up, rather than taking a full backup for every snapshot.
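
A minimal sketch of a partial backup along those lines, assuming hypothetical database and collection names (appdb, users, clicks): mongodump can exclude a regenerable collection, or dump only the collections worth restoring.

  # Dump everything in the database except the clickstream collection.
  mongodump --db appdb --excludeCollection=clicks --gzip --out /backups/appdb-partial
  # Or dump only the collection that matters most.
  mongodump --db appdb --collection users --gzip --out /backups/appdb-users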

Complexity of Deployment

Your backup strategy should be easy to set up and maintain over time. You can also schedule your backups so that you do not need to run them manually every time.
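
A minimal sketch of such scheduling, assuming a cron-capable host, a hypothetical connection string, and an off-site mount point (authentication is omitted for brevity): one crontab line is enough to produce a nightly compressed archive without manual intervention.

  # Crontab entry: nightly dump at 02:00, written to an off-site mounted volume.
  0 2 * * * mongodump --uri "mongodb://db-1.example.com:27017/appdb" --gzip --archive=/mnt/offsite-backups/appdb-$(date +\%F).archive.gz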

Conclusion

Database systems can guarantee "life after death" only if a well-established backup system is in place. The database could be destroyed by catastrophic events, human error, or security attacks that lead to loss or corruption of data. Before doing a backup, one has to consider the type of data in terms of size and importance. It is not advisable to keep your backups in the same data center as your database, so as to reduce the likelihood of the backups being destroyed at the same time. Backups may alter the performance of the database, so one should be careful about which strategy to use and when to carry out the backup. Do not carry out your backups on the primary node, since that may result in system downtime during the backup and, consequently, the loss of important data.
