
Configuring PostgreSQL for Business Continuity

Venkata Nagothi


Business Continuity for Databases

Business continuity for databases means that databases must remain operational even during a disaster. Production databases have to stay available to applications at all times; failing that, an outage can turn into a very expensive event. DBAs and architects need to ensure that database environments can sustain disasters and remain compliant with disaster recovery SLAs. To guarantee that disasters do not affect availability, databases must be configured for business continuity.

Configuring databases for business continuity involves a great deal of architecting, planning, designing and testing. Many factors, such as the data centers and their geographic territories, including the infrastructure design, come into consideration when designing and implementing an effective disaster recovery strategy for databases. In short, business continuity means avoiding outages during disasters.

To ensure production databases survive a disaster, a Disaster Recovery (DR) site must be configured. The production and DR sites must be located in two geographically distant data centers. This means a standby database must be configured at the DR site for every production database, so that data changes occurring on the production database are immediately synced across to the standby database via transaction logs. This can be achieved with PostgreSQL's "Streaming Replication" capability.

What Needs to Happen if Disaster Strikes Production (or Primary) Database?

When the production (primary) database crashes or becomes unresponsive, the standby database must be promoted to primary and the applications must be pointed to the newly promoted standby (new primary) database, and all of this must happen automatically within the designated outage SLA. This process is termed failover.

Configuring PostgreSQL for High Availability

As said above, to ensure that PostgreSQL is disaster recovery compliant, it must first be configured with streaming replication (master + standby database) and with automatic standby promotion/failover capabilities. Let us look at how to configure streaming replication first, followed by the "failover" process.

Configuring Standby Database (Streaming Replication)

Streaming replication is a built-in feature of PostgreSQL that replicates data from the primary to the standby database via WAL records, and it supports both asynchronous and synchronous replication. This way of replicating is quite reliable and suitable for environments demanding real-time, highly performant replication.

Configuring a streaming standby is pretty simple. The first step is to ensure that the primary database is configured as follows:

Primary Database Mandatory Configurations

Ensure the following parameters are configured in postgresql.conf on the primary database. Making these changes requires a database restart.

wal_level=logical

The wal_level parameter ensures that the information required for replication is written to the WAL files. For physical streaming replication, a setting of 'replica' is sufficient; 'logical' additionally allows logical decoding on top of that.

max_wal_senders=1 (or any number more than 0)

WAL records are shipped by the wal sender process from the primary database to the standby database. So, this parameter must be set to at least 1. A higher value is needed when multiple WAL senders are required, for example when running multiple standbys.

Enable WAL Archiving

Streaming replication has no hard dependency on WAL archiving. However, configuring WAL archiving is strongly recommended because, if the standby lags behind and the required WAL files are removed from the pg_xlog (or pg_wal) directory, the archived files will be needed to bring the standby back in sync with the primary; otherwise, the standby must be rebuilt from scratch.

archive_mode=on
archive_command=
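
As a hedged example, the archive_command below copies completed WAL segments to a local archive directory; the path /database/wal_archive is an assumption and should be replaced with your actual archive destination:

archive_mode = on
# assumed archive location -- adjust to your environment
archive_command = 'test ! -f /database/wal_archive/%f && cp %p /database/wal_archive/%f'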

The primary database must be configured to accept replication connections from the standby.

The entry below must be present in pg_hba.conf; replace <standby_ip> with the standby server's IP address:

host    replication     postgres        <standby_ip>/32            trust
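
After editing pg_hba.conf, reload the configuration so the new entry takes effect, for example (the data directory shown here matches the primary used later in this post):

pg_ctl -D /data/pgdata11 reload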

Now, take a backup of the primary database and restore it on the DR site (see the pg_basebackup sketch after the recovery.conf example below). Once the restoration is done, build the recovery.conf file in the data directory with the contents below; the placeholders in angle brackets must be replaced with the primary's connection details:

standby_mode='on'
primary_conninfo='host=<primary_host> port=<primary_port> user=<replication_user>'
restore_command='cp /database/wal_restore/%f %p'
trigger_file='/database/promote_trigfile'
recovery_target_timeline='latest'
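
For the backup-and-restore step mentioned above, pg_basebackup is one common way to seed the standby's data directory. A minimal sketch, where the primary host is a placeholder and the standby data directory matches the one used later in this post:

pg_basebackup -h <primary_host> -U postgres -D /data/pgdata-standby11 -X stream -P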

Now, start the standby database. Streaming replication should now be enabled. The message below in the standby's PostgreSQL log file confirms that streaming replication is working successfully:

2018-01-13 00:22:44 AEDT [4432]: [1] user=,db=,app=,client= LOG:  started streaming WAL from primary at 127/CD000000 on timeline 1
2018-01-13 00:22:44 AEDT [4268]: [5] user=,db=,app=,client= LOG:  redo starts at 127/CD380170
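
Replication can also be cross-checked from SQL: on the primary, pg_stat_replication should show one row per connected standby, and on the standby, pg_stat_wal_receiver should show the streaming status.

-- on the primary
SELECT client_addr, state, sent_lsn, replay_lsn FROM pg_stat_replication;
-- on the standby
SELECT status, received_lsn FROM pg_stat_wal_receiver;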

That confirms that fully functional streaming replication is in place. The next step is to install and configure repmgr. Before that, let us understand the failover process.

What is Failover?

Failover occurs when the primary database becomes completely unavailable due to a disaster. During the failover process, the standby database is promoted to become the new primary database so that applications can continue business operations.

Automatic Failover

The whole failover process has to happen automatically to ensure effective business continuity, and this can only be achieved with middleware tools. The whole idea is to avoid manual intervention by DBAs or developers.

One such tool which helps perform automatic failover is “repmgr”.

Let us take a look at repmgr and its capabilities.

Repmgr

Repmgr is an open-source tool developed by 2ndQuadrant. It helps perform various database administration activities such as building and monitoring PostgreSQL replication, performing automated failover in the event of a disaster, and performing switchover operations.

Repmgr is easy to install and its configuration is not complex either. Let us take a look at the installation first:

Installing repmgr

Download the tool from the repmgr website (https://repmgr.org/).

Untar the tarball and perform the installation as shown below:

I have installed repmgr-4.2.0 on a CentOS 7 host, building it against PostgreSQL 11.1. Before installation, ensure the PostgreSQL bin directory is part of $PATH and the PostgreSQL lib directory is part of $LD_LIBRARY_PATH. To show that repmgr is built against PostgreSQL 11.1, the "make install" output is displayed below:

[root@buildhost repmgr-4.2.0]# ./configure --prefix=/opt/repmgr-4.2
[root@buildhost repmgr-4.2.0]# make
[root@buildhost repmgr-4.2.0]# make install
Building against PostgreSQL 11
/bin/mkdir -p '/opt/pgsql-11.1/lib'
/bin/mkdir -p '/opt/pgsql-11.1/share/extension'
/bin/mkdir -p '/opt/pgsql-11.1/share/extension'
/bin/mkdir -p '/opt/pgsql-11.1/bin'
/bin/install -c -m 755  repmgr.so '/opt/pgsql-11.1/lib/repmgr.so'
/bin/install -c -m 644 .//repmgr.control '/opt/pgsql-11.1/share/extension/'
/bin/install -c -m 644 .//repmgr--unpackaged--4.0.sql .//repmgr--4.0.sql .//repmgr--4.0--4.1.sql .//repmgr--4.1.sql .//repmgr--4.1--4.2.sql .//repmgr--4.2.sql  '/opt/pgsql-11.1/share/extension/'
/bin/install -c -m 755 repmgr repmgrd '/opt/pgsql-11.1/bin/'

Configuring repmgr for Automatic Failover

Before configuring "repmgr", the databases must already be set up with streaming replication, as seen earlier. To start with, both databases (primary and standby) must be configured with the following parameter in postgresql.conf:

Primary

[postgres@buildhost log]$ grep "shared_preload" /data/pgdata11/postgresql.conf
shared_preload_libraries = 'repmgr'     # (change requires restart)

Standby

[postgres@buildhost log]$ grep "shared_preload" /data/pgdata-standby11/postgresql.conf
shared_preload_libraries = 'repmgr'     # (change requires restart)

The above parameter enables the "repmgrd" daemon, which runs continuously in the background and monitors the streaming replication. Without this parameter, automatic failover is not possible. Changing this parameter requires a database restart.
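
For example, with the data directories used in this post, the restart would look like this:

pg_ctl -D /data/pgdata11 restart
pg_ctl -D /data/pgdata-standby11 restart
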
Next, build the repmgr configuration file (say, with the name repmgr.conf) for both databases. The primary database must have a configuration file with the following contents:

node_id=1
node_name=node1
conninfo='host=xxx.xxx.xx.xx user=postgres dbname=postgres connect_timeout=2'
data_directory='/data/pgdata11'

Place the file in the data directory; in this case, that is "/data/pgdata11".

The standby database's configuration file must have the following contents:

node_id=2
node_name=node2
conninfo='host=xxx.xxx.xx.xx user=postgres dbname=postgres port=6432 connect_timeout=2'
data_directory='/data/pgdata-standby11'

failover=automatic
promote_command='repmgr standby promote -f /data/pgdata-standby11/repmgr.conf'
follow_command='repmgr standby follow -f /data/pgdata-standby11/repmgr.conf --upstream-node-id=%n'

monitoring_history=yes
monitor_interval_secs=5

log_file='/data/pgdata-standby11/repmgr_logs/repmgr.log'
log_status_interval=5
log_level=DEBUG

promote_check_timeout=5
promote_check_interval=1

master_response_timeout=5
reconnect_attempts=5
reconnect_interval=10
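
Note that the repmgr_logs directory referenced by log_file is assumed to exist already; create it before starting repmgrd:

mkdir -p /data/pgdata-standby11/repmgr_logs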

Both databases must be registered with repmgr.
Register Primary Database

[postgres@buildhost pgdata-standby11]$ repmgr -f /data/pgdata11/repmgr.conf primary register
INFO: connecting to primary database...
NOTICE: attempting to install extension "repmgr"
NOTICE: "repmgr" extension successfully installed
NOTICE: primary node record (id: 1) registered

Register Standby Database

[postgres@buildhost pgdata-standby11]$ repmgr -f /data/pgdata-standby11/repmgr.conf standby register --upstream-node-id=1
INFO: connecting to local node "node2" (ID: 2)
INFO: connecting to primary database
INFO: standby registration complete
NOTICE: standby node "node2" (id: 2) successfully registered
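
As a quick sanity check, the registered nodes can also be viewed directly in repmgr's metadata; the repmgr.nodes table queried below is the same one repmgrd itself uses (as seen in the failover logs later in this post):

SELECT node_id, node_name, type, active FROM repmgr.nodes;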

Now start the repmgrd daemon using the command below and verify that repmgr logging is working:

[postgres@buildhost ~]$ repmgrd -f /data/pgdata-standby11/repmgr.conf --verbose --monitoring-history
[2019-02-16 16:31:26] [NOTICE] using provided configuration file "/data/pgdata-standby11/repmgr.conf"
[2019-02-16 16:31:26] [WARNING] master_response_timeout/5: unknown name/value pair provided; ignoring
[2019-02-16 16:31:26] [NOTICE] redirecting logging output to "/data/pgdata-standby11/repmgr_logs/repmgr.log"

As you can see, I have configured log_level to DEBUG in the standby's repmgr.conf file to generate detailed logging. The warning about master_response_timeout simply indicates that this parameter is not recognized by this repmgr version and is ignored. Check the logs for the replication status.
Check whether replication is working as expected using repmgr:

[postgres@buildhost pgdata-standby11]$ repmgr -f /data/pgdata-standby11/repmgr.conf cluster show
 ID | Name  | Role    | Status    | Upstream | Location | Connection string
----+-------+---------+-----------+----------+----------+-------------------------------------------------------------------------------
 1  | node1 | primary | * running |          | default  | host=xxx.xxx.xx.xx user=postgres dbname=postgres connect_timeout=2
 2  | node2 | standby |   running | node1    | default  | host=xxx.xxx.xx.xx user=postgres dbname=postgres port=6432 connect_timeout=2

The above output confirms that replication is running fine.

Now, if I shut down the primary database, the repmgrd daemon should detect the failure of the primary and promote the standby database. Let us see if that happens. The primary database is stopped:

[postgres@buildhost ~]$ pg_ctl -D /data/pgdata11 stop
waiting for server to shut down.... done
server stopped

The standby database should be promoted automatically. The repmgr logs show the following:

[2019-02-14 17:09:23] [WARNING] unable to reconnect to node 1 after 5 attempts
[2019-02-14 17:09:23] [DEBUG] is_server_available(): ping status for host=xxx.xxx.xx.xx user=postgres dbname=postgres port=6432 connect_timeout=2 is 0
[2019-02-14 17:09:23] [DEBUG] do_election(): electoral term is 1
[2019-02-14 17:09:23] [DEBUG] get_active_sibling_node_records():
  SELECT n.node_id, n.type, n.upstream_node_id, n.node_name, n.conninfo, n.repluser, n.slot_name, n.location, n.priority, n.active, n.config_file, '' AS upstream_node_name     FROM repmgr.nodes n    WHERE n.upstream_node_id = 1      AND n.node_id != 2      AND n.active IS TRUE ORDER BY n.node_id
[2019-02-14 17:09:23] [DEBUG] clear_node_info_list() - closing open connections
[2019-02-14 17:09:23] [DEBUG] clear_node_info_list() - unlinking
[2019-02-14 17:09:23] [DEBUG] do_election(): primary location is "default", standby location is "default"
[2019-02-14 17:09:23] [DEBUG] no other nodes - we win by default
[2019-02-14 17:09:23] [DEBUG] election result: WON
[2019-02-14 17:09:23] [NOTICE] this node is the only available candidate and will now promote itself
[2019-02-14 17:09:23] [DEBUG] get_node_record():
  SELECT n.node_id, n.type, n.upstream_node_id, n.node_name, n.conninfo, n.repluser, n.slot_name, n.location, n.priority, n.active, n.config_file, '' AS upstream_node_name   FROM repmgr.nodes n  WHERE n.node_id = 1
[2019-02-14 17:09:23] [INFO] promote_command is:
  "repmgr standby promote -f /data/pgdata-standby11/repmgr.conf"
WARNING: master_response_timeout/5: unknown name/value pair provided; ignoring
DEBUG: connecting to: "user=postgres connect_timeout=2 dbname=postgres host=xxx.xxx.xx.xx port=6432 fallback_application_name=repmgr"
DEBUG: connecting to: "user=postgres connect_timeout=2 dbname=postgres host=xxx.xxx.xx.xx fallback_application_name=repmgr"
DEBUG: connecting to: "user=postgres connect_timeout=2 dbname=postgres host=xxx.xxx.xx.xx port=6432 fallback_application_name=repmgr"
NOTICE: promoting standby to primary
DETAIL: promoting server "node2" (ID: 2) using "pg_ctl  -w -D '/data/pgdata-standby11' promote"
DETAIL: waiting up to 5 seconds (parameter "promote_check_timeout") for promotion to complete
DEBUG: setting node 2 as primary and marking existing primary as failed
NOTICE: STANDBY PROMOTE successful
DETAIL: server "node2" (ID: 2) was successfully promoted to primary

The above shows that repmgr was unable to connect to the primary database and, after 5 unsuccessful attempts, the standby was promoted to be the new primary. Below is what shows up in the promoted standby's (new primary's) database logs:


2019-02-14 17:09:21 AEDT [20789]: [1] user=,db=,app=,client= FATAL:  could not connect to the primary server: could not connect to server: Connection refused
                Is the server running on host "xxx.xxx.xx.xx" and accepting
                TCP/IP connections on port 5432?
2019-02-14 17:09:23 AEDT [20506]: [7] user=,db=,app=,client= LOG:  received promote request
2019-02-14 17:09:23 AEDT [20506]: [8] user=,db=,app=,client= LOG:  redo done at 10F/5A335FF0
2019-02-14 17:09:23 AEDT [20506]: [9] user=,db=,app=,client= LOG:  last completed transaction was at log time 2019-02-14 17:08:38.350695+11
2019-02-14 17:09:23 AEDT [20506]: [10] user=,db=,app=,client= LOG:  selected new timeline ID: 2
2019-02-14 17:09:23 AEDT [20506]: [11] user=,db=,app=,client= LOG:  archive recovery complete
2019-02-14 17:09:24 AEDT [20507]: [1] user=,db=,app=,client= LOG:  checkpoint starting: force
2019-02-14 17:09:24 AEDT [20504]: [7] user=,db=,app=,client= LOG:  database system is ready to accept connections

I have only mentioned the important parameters in the repmgr configuration file. There are many other parameters which can be modified to meet various requirements. Other important parameters are the replication_lag_* parameters, shown below:

#replication_lag_warning=300            # repmgr node check --replication-lag
#replication_lag_critical=600           #

Repmgr checks the thresholds of the above parameters before promoting the standby. If the replication lag is critical, the promotion does not go ahead. This is really good, because if the standby were promoted while lagging, it would result in data loss.
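
The lag check referenced in the comment above can also be run manually at any time, for example:

repmgr -f /data/pgdata-standby11/repmgr.conf node check --replication-lag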

The applications must ensure they reconnect to the newly promoted standby successfully within the expected timeframe. Load balancers can divert the application connections when the primary database becomes unresponsive. Another alternative is to use middleware tools like Pgpool-II to ensure all connections are diverted successfully.
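
If the clients use libpq (PostgreSQL 10 or later), one simple application-side option is a multi-host connection string, so that connections are attempted against each listed host and only a writable node is accepted; the host names below are placeholders for the setup described in this post:

postgresql://<primary_host>:5432,<standby_host>:6432/postgres?user=postgres&target_session_attrs=read-write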

To ensure a successful high-availability architecture is deployed in production, thorough end-to-end testing of the complete process must be performed. In my experience, we used to call this exercise a DR DRILL. That is, every 6 months or so, a switchover operation is performed to ensure the standby is successfully promoted and application connections reconnect to it; the existing primary becomes the new standby. Once the switchover operation is successful, metrics are recorded to verify that SLAs are met.

What is Switchover?

As explained above, a switchover is a planned activity in which the roles of the primary and standby databases are swapped: the standby becomes the primary and the primary becomes the standby. This can be achieved with repmgr. Below is what repmgr does when the switchover command is issued:

$ repmgr -f /etc/repmgr.conf standby switchover
    NOTICE: executing switchover on node "node2" (ID: 2)
    INFO: searching for primary node
    INFO: checking if node 1 is primary
    INFO: current primary node is 1
    INFO: SSH connection to host "node1" succeeded
    INFO: archive mode is "off"
    INFO: replication lag on this standby is 0 seconds
    NOTICE: local node "node2" (ID: 2) will be promoted to primary; current primary "node1" (ID: 1) will be demoted to standby
    NOTICE: stopping current primary node "node1" (ID: 1)
    NOTICE: issuing CHECKPOINT
    DETAIL: executing server command "pg_ctl -D '/data/pgdata11' -m fast -W stop"
    INFO: checking primary status; 1 of 6 attempts
    NOTICE: current primary has been cleanly shut down at location 0/0501460
    NOTICE: promoting standby to primary
    DETAIL: promoting server "node2" (ID: 2) using "pg_ctl -D '/data/pgdata-standby11' promote"
    server promoting
    NOTICE: STANDBY PROMOTE successful
    DETAIL: server "node2" (ID: 2) was successfully promoted to primary
    INFO: setting node 1's primary to node 2
    NOTICE: starting server using  "pg_ctl -D '/data/pgdata11' restart"
    NOTICE: NODE REJOIN successful
    DETAIL: node 1 is now attached to node 2
    NOTICE: switchover was successful
    DETAIL: node "node2" is now primary
    NOTICE: STANDBY SWITCHOVER is complete

What Else repmgr Can Do?

  • Repmgr can help build standby databases from scratch (see the sketch after this list)
  • Multiple standby databases can be built with one master running
  • Cascading standbys can be built, which I feel is more beneficial than configuring multiple standbys against one master database
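
For the first point above, a standby can be cloned directly from the primary with the "standby clone" command. A minimal sketch, where the primary host is a placeholder and the user, database and configuration file follow the setup used in this post:

repmgr -h <primary_host> -U postgres -d postgres -f /data/pgdata-standby11/repmgr.conf standby clone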

What if Both Primary and Standby Are Gone?

Well, this is the situation in which businesses consider having multiple standby instances. If all of them are gone, then the only way out is to restore the database from backups. This is why a good backup strategy is imperative. Backups must be test-restored and validated on a regular basis to ensure they are reliable. The backup infrastructure must be designed so that restoration and recovery do not take long and can be completed within the designated SLA. SLAs for backups are defined in terms of RTO (Recovery Time Objective) and RPO (Recovery Point Objective). RTO is the time taken to restore and recover the backup, which must be within the SLA; RPO is the point in time up to which recovery is possible, in other words the data loss tolerance. For example, an RPO of 15 minutes means that at most the last 15 minutes of transactions may be lost; generally, businesses demand zero data loss.

Conclusion

  • Business continuity for PostgreSQL is an important requirement for mission-critical database environments. Achieving it involves a lot of planning and cost.
  • Resources and infrastructure must be used optimally to ensure an efficient disaster recovery strategy is in place.
  • There can be challenges from a cost perspective which need to be addressed.
  • If the budget permits, ensure there are multiple DR sites to fail over to.
  • In case backups need to be restored, ensure a good backup strategy is in place.
