Using PostgreSQL Logical Replication to Maintain an Always Up-to-Date Read/Write TEST Server

Achilleas Mantzios

In this blog entry we’ll talk about logical replication in PostgreSQL: its use cases, general information on the status of this technology, and one special use case in particular: how to set up a subscriber (replica) node of the primary server to function as the database server for the testing environment, and the challenges met along the way.

Intro

Logical replication, officially introduced in PostgreSQL 10, is the latest replication technology offered by the PostgreSQL community. It is a continuation of the legacy of physical replication, with which it shares a lot of ideas and code. Like physical replication it works via the WAL, but it records logical changes that are independent of the server version or specific architecture. The PostgreSQL community has gone a long way in order to bring logical replication into the core offering.

Types of replication and History of PostgreSQL replication

The types of replication in databases can be classified as follows:

  • Physical (AKA binary) replication
    • Operating system level (vSphere replication)
    • File system level (DRBD)
    • Database level (WAL based)
  • Logical replication (Database level)
    • Trigger based (DBMirror, Slony)
    • Middleware (pgpool)
    • WAL based (pglogical, Logical Replication)

The roadmap that led to today’s WAL-based logical replication was:

  • 2001: DBMirror (trigger based)
  • 2004: Slony1 (trigger based), pgpool (middleware)
  • 2005: PITR (WAL based) introduced in PostgreSQL 8.0
  • 2006: Warm standby in PostgreSQL 8.2
  • 2010: Physical streaming replication, hot standby in PostgreSQL 9.0
  • 2011: Synchronous streaming replication in PostgreSQL 9.1
  • 2012: Cascading streaming replication in PostgreSQL 9.2
  • 2013: Background workers in PostgreSQL 9.3
  • 2014: Logical decoding API, replication slots. (The foundations for Logical Replication) in PostgreSQL 9.4
  • 2015: 2ndQuadrant introduces pglogical, the ancestor of Logical Replication
  • 2017: Logical Replication in core PostgreSQL 10!

As we can see, lots of technologies had to come together in order to make Logical Replication a reality: WAL archiving, warm/hot standbys, physical WAL replication, background workers, logical decoding. Assuming the reader is familiar with most of the notions of physical replication, we’ll talk about the basic components of Logical Replication.

PostgreSQL Logical Replication basic concepts

Some terminology:

  • Publication: A set of changes from a set of tables defined in a specific database on a physical replication primary server. A Publication may handle all or some of: INSERT, DELETE, UPDATE, TRUNCATE.
  • Publisher node: The server where the publication resides.
  • Replica identity: A way to identify the row on the subscriber side for UPDATEs and DELETEs.
  • Subscription: A connection to a publisher node and one or more publications in it. A subscription uses a dedicated replication slot on the publisher for replication. Additional replication slots may be used for the initial synchronization step.
  • Subscriber node: The server where the subscription resides.

Logical replication follows a publish/subscribe model. One or more subscribers may subscribe to one or more publications on a publisher node. Subscribers may re-publish to allow for cascading replication. Logical replication of a table consists of two stages:

  • Taking a snapshot of the table on the publisher and copying it to the subscriber
  • Applying all changes (since the snapshot) in the same sequence

Logical replication is transactional and guarantees that the order of changes being applied to the subscriber remains the same as on the publisher. Logical replication gives much more freedom than physical (binary) replication and therefore can be used in more ways:

  • Single database or table specific replication (no need to replicate the whole cluster)
  • Setting triggers to the subscriber for a specific task (such as anonymizing, which is a pretty hot topic after GDPR came into force)
  • Having a subscriber node collect data from many publisher nodes, thus allowing for central analytical processing
  • Replication between different versions/architectures/platforms (zero downtime upgrades)
  • Using the subscriber node as the database server for a test/development environment. We want this because testing against real data is the most realistic kind of testing.
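
To make the above concrete, here is a minimal sketch of the publish/subscribe model; the table, publication, subscription and connection parameters are hypothetical:

-- on the publisher
CREATE TABLE accounts (id bigint PRIMARY KEY, owner text);
CREATE PUBLICATION accounts_pub FOR TABLE accounts;

-- on the subscriber (the schema must pre-exist, since DDL is not replicated)
CREATE TABLE accounts (id bigint PRIMARY KEY, owner text);
CREATE SUBSCRIPTION accounts_sub
    CONNECTION 'host=pubhost dbname=yourdb user=repuser'
    PUBLICATION accounts_pub;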

Caveats and Restrictions

There are certain things we must keep in mind when using logical replication; some of them might influence design decisions, while others may lead to critical incidents.

Restrictions

  • Only DML operations are supported. No DDL. The schema has to be defined beforehand
  • Sequences are not replicated
  • Large Objects are not replicated
  • Only plain base tables are supported (materialized views, partition root tables, foreign tables are not supported)

Caveats

The basic issue that sooner or later we’ll have to face when using Logical Replication is conflicts on the subscriber. The subscriber is a normal read/write server which can act as primary in a physical replication setup, or even as publisher in a cascading logical replication setup. As long as writes are performed on the subscribed tables there may be conflicts. A conflict arises when replicated data violates a constraint on the table it is applied to. Usually the operation that causes this is INSERT; DELETEs or UPDATEs which have no effect due to missing rows will not cause a conflict.

When a conflict arises the replication stops. The logical background worker will be restarted in the specified interval (wal_retrieve_retry_interval); however, the replication will fail again until the cause of the conflict is resolved. This is a critical condition that must be dealt with immediately. Failing to do so will leave the replication slot stuck at its current position, the publisher node will start accumulating WAL, and inevitably the publisher node will run out of disk space. A conflict is the most common reason why replication might stop, but any other erroneous condition has the same effect: e.g. we added a new NOT NULL column on a subscribed table but forgot to define a default value, or added a column on a published table but forgot to define it on the subscribed table, or made a mistake on its type so that the two types are not compatible. All such errors will stop the replication. There are two ways to resolve a conflict:

  1. Solve the actual problem
  2. Skip the failing transaction by calling pg_replication_origin_advance

Solution 2, as also shown here, can be dangerous and tricky, as it is basically a trial and error process; if one chooses the current LSN on the publisher, one might easily end up with a broken replication system, as there might be operations between the problematic LSN and the current LSN that we would want to keep. So the best way is to actually solve the problem on the subscriber side. E.g. if a UNIQUE KEY violation arises then we might update the data on the subscriber or just delete the row (a hypothetical sketch follows). In a production environment, all this must be automated or at least semi-automated.
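
To illustrate solution 1 with a hypothetical example: suppose the subscriber’s log reports a duplicate key violation on accounts_pkey for id = 42, because we had inserted that row locally during a test. Removing (or updating) the offending local row is enough; the logical worker retries on its own:

-- on the subscriber; table name and key value are hypothetical
DELETE FROM accounts WHERE id = 42;
-- the worker restarts after wal_retrieve_retry_interval and replication resumes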

Setting up the publisher and subscriber nodes

For a general overview of the logical replication in practice please read this blog.

The relevant parameters for logical replication are listed below (a sample configuration sketch follows the list):

  • Publisher side
    • wal_level >= “logical”
    • max_replication_slots >= #subscriptions + initial table synchronization
    • max_wal_senders >= max_replication_slots + other_physical_standbys
  • Subscriber side
    • max_replication_slots >= #subscriptions
    • max_logical_replication_workers >= #subscriptions + initial table synchronization
    • max_worker_processes >= max_logical_replication_workers + 1 + max_parallel_workers
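
As a concrete illustration, here is a minimal postgresql.conf sketch for both sides; the numbers are arbitrary placeholders and must be sized according to the inequalities above:

# publisher
wal_level = logical
max_replication_slots = 10            # subscriptions + initial table sync slots
max_wal_senders = 12                  # replication slots + physical standbys

# subscriber
max_replication_slots = 10
max_logical_replication_workers = 12  # subscriptions + initial table sync workers
max_worker_processes = 24             # logical workers + 1 + max_parallel_workers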

We will now focus on the special considerations that arise from our particular goal: creating a test database cluster for use by the testing department. The publication may be defined either for all tables or table by table. I suggest the table by table approach, as it gives us maximum flexibility. The general steps may be summarized as follows:

  • Perform a fresh initdb on the subscriber node
  • Dump the schema of the publisher cluster and copy to the subscriber node
  • Create the schema on the subscriber
  • Decide on which tables you need and which you don’t need.

Regarding the last bullet, there are two reasons why you may not need a table to be replicated or set up for replication:

  • It is a dummy table with no importance (and maybe you should drop it from production as well)
  • It is a table local to the production environment, which means that it makes perfect sense for the same table in the testing (subscriber) environment to have its own data

All tables taking part in logical replication must have a REPLICA IDENTITY. By default this is the PRIMARY KEY; if one is not available, a UNIQUE key can be defined. The next step is to find the status of the tables with regards to REPLICA IDENTITY.

  • Find the tables with no obvious candidate for REPLICA IDENTITY
    select table_schema||'.'||table_name
    from information_schema.tables
    where table_type = 'BASE TABLE'
      AND table_schema||'.'||table_name NOT IN
          (select table_schema||'.'||table_name
           from information_schema.table_constraints
           WHERE constraint_type in ('PRIMARY KEY','UNIQUE'))
      AND table_schema NOT IN ('information_schema','pg_catalog');
  • Find the tables with no PRIMARY KEY but with a UNIQUE INDEX
    select table_schema||'.'||table_name
    from information_schema.table_constraints
    WHERE constraint_type = 'UNIQUE'
    EXCEPT
    select table_schema||'.'||table_name
    from information_schema.table_constraints
    WHERE constraint_type = 'PRIMARY KEY';
  • Go through the above lists and decide what to do with each table
  • Create the publication with the tables for which there exists a PK
    select 'CREATE PUBLICATION data_for_testdb_pub FOR TABLE ONLY '
           || string_agg(qry.tblname, ', ONLY ')
    FROM (select table_schema||'.'||quote_ident(table_name) as tblname
          from information_schema.tables
          where table_type = 'BASE TABLE'
            AND table_schema||'.'||table_name IN
                (select table_schema||'.'||table_name
                 from information_schema.table_constraints
                 WHERE constraint_type in ('PRIMARY KEY'))
            AND table_schema NOT IN ('information_schema','pg_catalog')
          ORDER BY 1) as qry
    \gexec
  • Then create the subscription on the subscriber node
    create subscription data_for_testdb_pub CONNECTION 'dbname=yourdb host=yourdbhost user=repmgr' PUBLICATION data_for_testdb_pub ;

    The above will copy the data as well.

  • Add the tables that you want which have a UNIQUE index
    Run the following on both the publisher and subscriber nodes, e.g.:

    ALTER TABLE someschema.yourtable REPLICA IDENTITY USING INDEX yourindex_ukey;

    On the publisher:

    ALTER PUBLICATION data_for_testdb_pub ADD TABLE ONLY someschema.yourtable;

    On the subscriber:

    ALTER SUBSCRIPTION data_for_testdb_pub REFRESH PUBLICATION WITH ( copy_data = true );
  • At this point (synchronization) you should always keep an eye on the PostgreSQL log on the subscriber node. You don’t want any errors or timeouts that prohibit the continuation of logical replication. SOLVE ANY ERROR IMMEDIATELY, or the publisher will keep accumulating WAL files in pg_wal and eventually run out of space. A way to check the synchronization state is sketched after this list. So you have to deal with:
    • All ERRORs or any message regarding the logical worker which results in exiting
    • Also take care of
      • wal_receiver_timeout
      • wal_sender_timeout
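
As referenced above, one way to check the synchronization state of each table is to query pg_subscription_rel on the subscriber; anything not in state 'r' (ready) is still being synchronized, or is stuck:

select srrelid::regclass as tbl, srsubstate
from pg_subscription_rel
where srsubstate <> 'r';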

After solving all problems you should have your subscriber node running happily. So the next question is how to use this as a test database server. You will have to deal with those problems/issues:

  1. Anonymization
  2. Primary keys and Unique keys which are based on sequences violations
  3. A general set of good practices
  4. Monitoring

Anonymization

Regarding anonymization of personal data, which is enforced by GDPR in the EU, you should write some ALWAYS triggers that blank out all fields regarding addresses, bank accounts, marital status, phone numbers, emails, etc. You should consult your company’s security officer about what to keep and what to blank out. The triggers must be defined as ALWAYS, since the logical worker runs the statements as REPLICA (a minimal sketch follows).
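
A minimal sketch of such a trigger, assuming a hypothetical customers table with email and phone columns:

-- on the subscriber; table and column names are hypothetical
CREATE FUNCTION blank_out_pii() RETURNS trigger AS $$
BEGIN
    NEW.email := NULL;   -- blank out personal fields
    NEW.phone := NULL;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER anonymize_customers
    BEFORE INSERT OR UPDATE ON customers
    FOR EACH ROW EXECUTE PROCEDURE blank_out_pii();

-- ALWAYS, so it also fires when the logical worker applies changes
ALTER TABLE customers ENABLE ALWAYS TRIGGER anonymize_customers;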

Primary Keys with Sequences

Regarding sequences, there is clearly going to be a problem with those keys unless dealt with before any test is started. Consider this case:

  • On Friday afternoon you make some tests on the subscriber database inserting a new row to some table. This will have as an ID the next value generated by the sequence.
  • You go home for the weekend.
  • Some production user enters a row in the same table on the publisher database.
  • The row will be replicated (based on the REPLICA IDENTITY) to the subscriber node but will fail with a PK violation ERROR. The logical background worker will exit and retry, and will keep failing as long as the problem persists.
  • The replication will be stuck, and the replication slot will start accumulating WAL.
  • The publisher runs out of disk space.
  • On the weekend you get an email that your Primary node has PANIC’ed!

So, in order to solve the sequence problem you may take the following approach:

-- seqtypid = 20 selects the bigint sequences
select 'SELECT setval(''' || seqrelid::regclass || ''','
       || CASE WHEN seqincrement < 0 THEN -214748364 ELSE 214748364 END || ');'
from pg_sequence
where seqtypid = 20
\gexec

What the above does is set the subscriber’s sequences to a value large enough that test-generated IDs will not overlap with replicated production IDs for a fairly large window in the future, allowing you to have a problem-free test server.

A set of Good Practices

You should really tell your programmers to make their tests non persistent: any test, after it completes, should leave the database in the same state as it was prior to the test (one simple approach is sketched below). With sequence-based ID insertions this is not an issue, as we saw a solution earlier. But with non-sequence (e.g. compound) UNIQUE keys that might be a problem. So it’s best to delete the test data before a production row with the same value hits the subscribed table.
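
One simple way to keep a test non persistent, assuming it fits in a single transaction, is to wrap it and roll back at the end:

BEGIN;
-- run the test's INSERTs / UPDATEs / DELETEs here
ROLLBACK;  -- the subscriber is left exactly as before the test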

Here we should also mention dealing with schema changes: all schema changes must also be applied on the subscriber in order not to break replicated DML traffic; see the example below.
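
For example, with a hypothetical new column, apply the DDL on the subscriber first (or at the same time), so that incoming changes always find a matching schema:

-- run on the subscriber first, then on the publisher
ALTER TABLE someschema.yourtable ADD COLUMN note text;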

Monitoring

You really should invest in a good monitoring solution. You should monitor for the following:

At the subscriber:

  • ALL messages in the subscriber’s log that are relevant to logical worker exit. Installing a tool such as tail_n_mail can really help with this. A configuration known to work:
    INCLUDE: ERROR:  .*publisher.*
    INCLUDE: ERROR:  .*exited with exit.*
    INCLUDE: LOG:  .*exited with exit.*
    INCLUDE: FATAL:  
    INCLUDE: PANIC:

    Once we get an alert coming from tail_n_mail we should immediately solve the problem.

  • pg_stat_subscription. The pid should not be null, and the lag should be small (a quick check is sketched below).
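
A quick check on the subscriber might look like this (columns as in PostgreSQL 10):

select subname, pid, received_lsn, latest_end_lsn, latest_end_time
from pg_stat_subscription;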

At the publisher:

  • pg_stat_replication. This should have as many rows as expected: one for each connected streaming replication standby (subscriber nodes and other physical standbys included).
  • pg_replication_slots for the subscriber slot. This should be active; a sketch for checking it follows.
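
A sketch for spotting an inactive slot and the amount of WAL it retains on the publisher:

select slot_name, active,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) as retained_wal
from pg_replication_slots
where slot_type = 'logical';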

Generally, it takes some time till you have your ideal test database server running without problems, but once you have solved them all, your programmers will thank you for having it!
