Multi-Datacenter Setups With PostgreSQL
The main goals of a multi-datacenter (or multi-DC) setup — regardless of whether the database ecosystem is SQL (PostgreSQL, MySQL) or NoSQL (MongoDB, Cassandra), to name just a few — are low latency for end users, high availability, and disaster recovery. At the heart of such an environment lies the ability to replicate data in ways that ensure its durability (as a side note, Cassandra’s durability configuration parameters are similar to those used by PostgreSQL). The various replication requirements will be discussed below; the extreme cases, however, are left to the curious reader for further research.
Replication using asynchronous log shipping has been available in PostgreSQL for a long time, and synchronous replication introduced in version 9.1 opened a whole new set of options to developers of PostgreSQL management tools.
Things to Consider
One way to understand the complexity of a PostgreSQL multi-DC implementation is to learn from the solutions implemented for other database systems, while keeping in mind that PostgreSQL insists on being ACID compliant.
A multi-DC setup includes, in most cases, at least one datacenter in the cloud. While cloud providers take on the burden of managing database replication on behalf of their clients, they do not usually match the features available in specialized management tools. For example, with many enterprises embracing hybrid-cloud and/or multi-cloud solutions in addition to their existing on-premises infrastructure, a multi-DC tool should be able to handle such a mixed environment.
Further, in order to minimize downtime during a failover, the PostgreSQL management system should be able to request (via an API call) a DNS update, so that database requests are routed to the new master cluster.
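As a minimal sketch of such a DNS update, the helper below builds a Route 53 style `UPSERT` change batch that repoints a write endpoint at the newly promoted master. The record name, IP address, and hosted zone ID are hypothetical placeholders, and Route 53 is just one possible DNS provider.

```python
def build_failover_change(record_name: str, new_master_ip: str, ttl: int = 60) -> dict:
    """Build a Route 53 style UPSERT that repoints the write endpoint
    at the newly promoted master. A low TTL keeps the failover fast."""
    return {
        "Comment": "Repoint writes after PostgreSQL failover",
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": record_name,
                "Type": "A",
                "TTL": ttl,
                "ResourceRecords": [{"Value": new_master_ip}],
            },
        }],
    }

# The actual API call would look like this (requires boto3 and AWS credentials;
# the hosted zone ID is a placeholder):
#   boto3.client("route53").change_resource_record_sets(
#       HostedZoneId="ZEXAMPLE",
#       ChangeBatch=build_failover_change("db-writer.example.com", "10.0.1.5"))
```

Keeping the TTL low (here 60 seconds) bounds how long clients can keep resolving the old master after the switch.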
Networks spanning large geographical areas are high-latency connections, and all solutions must compromise: forget about synchronous replication, and use one primary with many read replicas. See the AWS MongoDB and Severalnines/Galera Cluster studies for an in-depth analysis of network effects on replication. On a related note, a nifty tool for testing the latency between locations is Wonder Network Ping Statistics.
While the high-latency nature of WAN links cannot be changed, the user experience can be dramatically improved by ensuring that reads are served from a read replica close to the user’s location, albeit with some caveats. By moving replicas away from the primary, writes are delayed, and thus we must do away with synchronous replication. The solution must also be able to work around other issues, such as read-after-write consistency and stale secondary reads due to connection loss.
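One common workaround for read-after-write consistency is "sticky" routing: for a short window after a session writes, its reads go to the primary so it always sees its own changes, while all other reads stay on the nearby replica. The sketch below illustrates the idea; the class name, window length, and session bookkeeping are assumptions, not part of any particular product.

```python
import time

class StickyReadRouter:
    """Route reads to a nearby replica, except for a short window after a
    session's own write, when reads go to the primary so the session can
    see its own changes (read-after-write consistency)."""

    def __init__(self, primary, replica, stickiness_s=5.0, clock=time.monotonic):
        self.primary = primary
        self.replica = replica
        self.stickiness_s = stickiness_s  # should exceed typical replication lag
        self.clock = clock                # injectable for testing
        self._last_write = {}             # session_id -> timestamp of last write

    def record_write(self, session_id):
        self._last_write[session_id] = self.clock()

    def route_read(self, session_id):
        last = self._last_write.get(session_id)
        if last is not None and self.clock() - last < self.stickiness_s:
            return self.primary   # recent write: read own data from the primary
        return self.replica       # otherwise serve from the local replica
```

The stickiness window is a crude proxy for replication lag; a more careful design would compare the replica's replayed LSN against the LSN of the session's last commit.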
In order to minimize the RTO, data needs to be replicated to durable storage that is also able to provide high read throughput; according to Citus Data, one option that meets those requirements is AWS S3.
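One common way to get WAL onto such durable storage is PostgreSQL’s continuous archiving. A minimal configuration sketch, assuming the AWS CLI is installed on the server and `my-wal-archive` is a placeholder bucket name (dedicated tools such as WAL-G or pgBackRest wrap this more robustly, with compression and retries):

```
# postgresql.conf — continuously archive completed WAL segments to S3
archive_mode = on
archive_command = 'aws s3 cp %p s3://my-wal-archive/%f'
```

`%p` expands to the path of the WAL segment and `%f` to its file name; PostgreSQL retries the command until it exits with status 0.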
The very notion of multiple data centers implies that the database management system must be able to present the DBA with a global view of all data centers and the various PostgreSQL clusters within them, manage multiple versions of PostgreSQL, and configure the replication between them.
When replicating writes to regional data centers, the propagation delay must be monitored. If the delay exceeds a threshold, an alarm should be triggered indicating that the replica contains stale data. The same principle applies to asynchronous multi-master replication.
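On the primary, the replay lag of each standby can be read from `pg_stat_replication` (PostgreSQL 10+). The sketch below pairs that query with a small alerting check; the threshold and replica names are examples, and fetching the rows from a live connection is left out.

```python
# Query to run on the primary; replay_lsn is how far the standby has applied WAL.
LAG_QUERY = """
SELECT application_name,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS lag_bytes
FROM pg_stat_replication;
"""

def stale_replicas(rows, threshold_bytes):
    """Given (name, lag_bytes) rows from LAG_QUERY, return the names of
    replicas whose replay lag exceeds the alarm threshold."""
    return [name for name, lag in rows if lag is not None and lag > threshold_bytes]
```

A monitoring loop would run `LAG_QUERY` periodically and raise an alarm for every name returned by `stale_replicas`, since reads served from those replicas may return stale data.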
In a synchronous setup, high latency or network disruptions may lead to delays in serving client requests while waiting for the commit to complete, while in asynchronous configurations there are risks of split-brain or degraded performance for an extended period of time. Split-brain and delays on synchronous commits are unavoidable even with well-established replication solutions, as explained in the article Geo-Distributed Database Clusters with Galera.
Another consideration is vendor support — as of this writing AWS does not support PostgreSQL cross-region replicas.
Intelligent management systems should monitor the network latency between data centers and recommend or apply changes accordingly; for example, synchronous replication is perfectly fine between AWS Availability Zones, where data centers are connected by fiber networks. That way a solution can achieve zero data loss, and it can also implement master-master replication along with load balancing. Note that AWS Aurora PostgreSQL does not currently provide a master-master replication option.
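Synchronous replication between same-region standbys is a built-in PostgreSQL feature. A minimal configuration sketch, assuming PostgreSQL 10+ for the `ANY` quorum syntax; the standby names are placeholders that must match each standby’s `application_name`:

```
# postgresql.conf on the primary — each commit waits until at least one
# of the two same-region standbys has confirmed it (zero data loss if
# the primary's AZ is lost)
synchronous_commit = on
synchronous_standby_names = 'ANY 1 (az_b_standby, az_c_standby)'
```

Cross-region standbys should be left out of this list: including a high-latency standby in the quorum would stall every commit.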
Decide on the level of replication: cluster, database, or table. The decision criteria should include bandwidth costs.
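Table-level replication can be expressed with PostgreSQL’s built-in logical replication (version 10+), which is one way to keep bandwidth costs down by shipping only the tables a remote datacenter actually needs. The object and connection names below are examples:

```sql
-- On the publisher: replicate only the selected tables
CREATE PUBLICATION orders_pub FOR TABLE orders, order_items;

-- On the subscriber in the remote datacenter
CREATE SUBSCRIPTION orders_sub
    CONNECTION 'host=dc1-primary dbname=shop user=repl'
    PUBLICATION orders_pub;
```

By contrast, streaming (physical) replication always replicates the whole cluster, so the choice of level largely dictates the choice of mechanism.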
Implement cascading replication in order to work around network disruptions that can prevent replicas from receiving updates from the master due to geographical distance.
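In PostgreSQL, cascading replication means a standby forwards WAL to its own downstream standbys, so only one copy of the stream crosses the WAN. A configuration sketch for a downstream replica, assuming PostgreSQL 12+ (where these settings live in postgresql.conf) and placeholder hostnames:

```
# postgresql.conf on a downstream replica in the remote datacenter —
# it pulls WAL from the nearby intermediate standby, not from the
# distant master
primary_conninfo = 'host=dc2-standby port=5432 user=repl'
hot_standby = on
```

If the WAN link to the master drops, only the intermediate standby falls behind; once connectivity returns, it catches up and feeds its local downstreams without each of them re-crossing the WAN.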
Taking all the requirements into consideration, identify the products that are best suited for the job. A note of caution, though: each solution comes with its own caveats that must be dealt with by following the recommendations in the product documentation. See, for example, the BDR monitoring requirement.
The PostgreSQL official documentation contains a list of non-commercial open source applications, and an expanded list including commercial closed source solutions can be found at the Replication, Clustering, and Connection Pooling wiki page. A few of those tools have been reviewed in more detail in the Top PG Clustering HA Solutions for PostgreSQL article.
There is no turnkey solution, but some products can provide most of the features, especially when working with the vendor.
Here’s a non-exhaustive list:
- Citus Data provide their own PostgreSQL build, enhanced with impressive enterprise features and deep integration with AWS.
- EnterpriseDB offer a large suite of services that can be combined to meet most of the requirements. Most information is at Product Documentation.
- Postgres-BDR is a powerful replication tool designed specifically for geographically distributed clusters, however it doesn’t integrate with any cloud provider.
- ClusterControl comes with an impressive feature set for managing PostgreSQL. It also has limited cloud integration.
- ElephantSQL works across many cloud providers. However, there is no option for an on-premise setup.
- Crunchy PostgreSQL for Kubernetes is a cloud agnostic product built on the upstream PostgreSQL.
As we’ve seen, when it comes to choosing a PostgreSQL multi-datacenter solution, there is no one-size-fits-all option. Often, compromising is a must. However, a good understanding of the requirements and implications can go a long way in making an informed decision.
Compared to static (read-only) data, a solution for databases needs to consider the replication of updates (writes). The literature describing both SQL and NoSQL replication solutions insists on using a single source of truth for writes, with many replicas, in order to avoid split-brain and to preserve read-after-write consistency.
Lastly, interoperability is a key requirement, considering that multi-DC setups may span data centers located on-premises and at various cloud providers.