
Running Multiple PostgreSQL Instances on a Single Host

Paul Namuag


We recently announced the release of ClusterControl 1.7.3, which includes a variety of improvements and newly added features. One of these new features is support for setting up and managing multiple PostgreSQL instances on the same host. In this blog we will discuss this new feature, including why this type of setup can help you save on resources, and provide step-by-step instructions for achieving this type of installation with ClusterControl.

Why Would You Need A Multiple-PostgreSQL Installation on a Single Host?

With today’s rapid development of technologies, from hardware to software, requirements have become more adaptable, flexible, and scalable. Some organizations even prefer to consolidate their technology stack, since scaling it that way is easier. You might also want to deploy a database server on high-end, powerful hardware with many CPU cores, lots of memory, and fast, non-volatile storage such as SSD/Fusion-io/NVMe. Dedicating such a machine to a single, lightweight role (such as a slave, a hot-backup machine, or a backup verification server) can, however, be a waste of resources. In certain setups, you might instead want to use the spare capacity of your powerful server as both your development and QA server to avoid unwanted hardware costs (instead of buying a dedicated machine or spawning a new compute instance in the cloud).

How To Setup A Multi-PostgreSQL Installation

For this example we’ll create a cluster with a multi-PostgreSQL installation, with multiple PostgreSQL instances running on a single host, using ClusterControl.

Note: as of the current version (i.e. ClusterControl 1.7.3), ClusterControl does not allow you to create or initialize a cluster if you specify master and slave information on a host that has multiple PostgreSQL versions installed, or multiple PostgreSQL instances running. However, you can import a node with multiple versions installed, or with multiple PostgreSQL instances running, on a single host.

Server Details and Information

Since we cannot currently initiate or create a cluster when there are multiple versions of PostgreSQL installed, we’ll import an existing, running instance of PostgreSQL instead. Below is the server information.

IP: 192.168.30.10
OS user: vagrant
OS type and version: Ubuntu 16.04.6 LTS (xenial)

and some information from my /etc/postgresql/9.6/multi_pg/postgresql.conf:

data_directory = '/data/pgsql/master/data'
hba_file = '/etc/postgresql/9.6/multi_pg/pg_hba.conf'   
ident_file = '/etc/postgresql/9.6/multi_pg/pg_ident.conf'
external_pid_file = '/var/run/postgresql/9.6-main.pid'  
listen_addresses = '*'  
port = 7654
max_connections = 100   
shared_buffers = 511995kB
work_mem = 10239kB
maintenance_work_mem = 127998kB 
dynamic_shared_memory_type = posix
wal_level = hot_standby 
full_page_writes = on   
wal_log_hints = on
checkpoint_completion_target = 0.9
max_wal_senders = 16
wal_keep_segments = 32  
hot_standby = on
effective_cache_size = 1535985kB
logging_collector = on  
log_timezone = 'Etc/UTC'
cluster_name = '9.6/multi_pg'   
stats_temp_directory = '/var/run/postgresql/9.6-main.pg_stat_tmp'
datestyle = 'iso, mdy'
timezone = 'Etc/UTC'
lc_messages = 'en_US.UTF-8'
lc_monetary = 'en_US.UTF-8'
lc_numeric = 'en_US.UTF-8'
lc_time = 'en_US.UTF-8' 
default_text_search_config = 'pg_catalog.english'
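When several instances share a host, each one needs its own values for at least the port, data directory, PID file, and cluster name. As an illustrative sketch (the port and data directory mirror the slave instance created later in this post; the PID file name is hypothetical), a second 9.6 instance’s postgresql.conf would differ along these lines:

```
# Settings that must be unique per instance on a shared host (illustrative values)
data_directory = '/data/pgsql/slave/data'                  # each instance owns its own data dir
port = 7653                                                # no two instances may share a port
external_pid_file = '/var/run/postgresql/9.6-pg_7653.pid'  # hypothetical per-instance PID file
cluster_name = '9.6/pg_7653'                               # distinguishes instances in ps output
```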

The following versions have already been installed:

root@debnode1:/home/vagrant# dpkg -l | grep 'object-relational'
ii  postgresql-11                     11.4-1.pgdg16.04+1                         amd64        object-relational SQL database, version 11 server
ii  postgresql-9.2                    9.2.24-1.pgdg16.04+1                       amd64        object-relational SQL database, version 9.2 server
ii  postgresql-9.6                    9.6.14-1.pgdg16.04+1                       amd64        object-relational SQL database, version 9.6 server

Additionally, for this setup, there are other instances already running:

root@debnode1:/data/pgsql/master# ps axufwww | grep 'postgre[s]'
postgres  1243  0.0  0.8 186064 17916 ?        S    15:59   0:00 /usr/lib/postgresql/9.2/bin/postgres -D /var/lib/postgresql/9.2/main -c config_file=/etc/postgresql/9.2/main/postgresql.conf
postgres  1285  0.0  0.1 186064  3860 ?        Ss   15:59   0:00  _ postgres: checkpointer process   
postgres  1286  0.0  0.2 186064  4620 ?        Ss   15:59   0:00  _ postgres: writer process   
postgres  1287  0.0  0.1 186064  3860 ?        Ss   15:59   0:00  _ postgres: wal writer process   
postgres  1288  0.0  0.2 186808  6008 ?        Ss   15:59   0:00  _ postgres: autovacuum launcher process   
postgres  1289  0.0  0.1 145808  3736 ?        Ss   15:59   0:00  _ postgres: stats collector process   
postgres  1246  0.0  1.2 309600 25884 ?        S    15:59   0:00 /usr/lib/postgresql/11/bin/postgres -D /var/lib/postgresql/11/main -c config_file=/etc/postgresql/11/main/postgresql.conf
postgres  1279  0.0  0.1 309600  4028 ?        Ss   15:59   0:00  _ postgres: 11/main: checkpointer   
postgres  1280  0.0  0.1 309600  4028 ?        Ss   15:59   0:00  _ postgres: 11/main: background writer   
postgres  1281  0.0  0.4 309600  9072 ?        Ss   15:59   0:00  _ postgres: 11/main: walwriter   
postgres  1282  0.0  0.3 310012  6496 ?        Ss   15:59   0:00  _ postgres: 11/main: autovacuum launcher   
postgres  1283  0.0  0.1 164516  3528 ?        Ss   15:59   0:00  _ postgres: 11/main: stats collector   
postgres  1284  0.0  0.3 309892  6596 ?        Ss   15:59   0:00  _ postgres: 11/main: logical replication launcher  

For this example, we will use PostgreSQL 9.6.

Building The Master-Slave PostgreSQL Cluster

In order to create a cluster, we need to set up the PostgreSQL instance manually and then import it into ClusterControl later. Alternatively, we could create a cluster with just one master node and let ClusterControl handle it, but to do this we would need to shut down all other running nodes. This would not be ideal if you are operating busy PostgreSQL database servers.

Now, let’s do the manual setup… 

root@debnode1:/etc/postgresql/9.6/multi_pg# sudo -iu postgres /usr/lib/postgresql/9.6/bin/pg_ctl -D /data/pgsql/master/data initdb
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.

The database cluster will be initialized with locale "en_US.UTF-8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".

Data page checksums are disabled.

creating directory /data/pgsql/master/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default timezone ... Etc/UTC
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok

WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.

Success. You can now start the database server using:

    /usr/lib/postgresql/9.6/bin/pg_ctl -D /data/pgsql/master/data -l logfile start

Then start the database by running the command below:

root@debnode1:/etc/postgresql/9.6/multi_pg# sudo -iu postgres /usr/lib/postgresql/9.6/bin/pg_ctl -D /data/pgsql/master/data  -o "-c config_file=/etc/postgresql/9.6/multi_pg/postgresql.conf" -l /var/log/postgresql/postgresql-9.6-master.log start  
server starting

Now, let’s verify that the instance runs and listens on the port we configured:

root@debnode1:/etc/postgresql/9.6/multi_pg# netstat -ntlvp46|grep postgres
tcp        0      0 127.0.0.1:5432          0.0.0.0:*               LISTEN      1246/postgres
tcp        0      0 127.0.0.1:5433          0.0.0.0:*               LISTEN      1243/postgres
tcp        0      0 0.0.0.0:7654            0.0.0.0:*               LISTEN      18403/postgres
tcp6       0      0 :::7654                 :::*           

Now, it looks correct. The process with PID 18403 is our newly started instance, listening on port 7654 on both IPv4 and IPv6.
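When juggling several instances, it can help to pull just the listening ports out of the netstat output. A small sketch using awk over a sample of the output above (the one-liner at the end shows how you might run it against a live host):

```shell
# Sample lines copied from the netstat output above
sample='tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN 1246/postgres
tcp 0 0 127.0.0.1:5433 0.0.0.0:* LISTEN 1243/postgres
tcp 0 0 0.0.0.0:7654 0.0.0.0:* LISTEN 18403/postgres'

# The 4th column is the local address; the port is the part after the last ':'
echo "$sample" | awk '{ n = split($4, a, ":"); print a[n] }'

# On a live host, the equivalent would be something like:
#   netstat -ntlp | grep 'postgre[s]' | awk '{ n = split($4, a, ":"); print a[n] }'
```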

Now, let’s import this into ClusterControl. Go to Deploy → Import Existing Server/Database to import the desired master node we just set up.

After you hit the Import button, you’ll have a cluster with one master node, as shown below:

Now, let’s create a slave within the same host (i.e. with IP 192.168.30.10).

And don’t worry, ClusterControl will handle it for you, as the sample job activity log below shows.

You can see that it was successfully set up and installed. Technically, ClusterControl will create a directory under /etc/postgresql//p on Debian/Ubuntu based systems and generate the required configuration files there, while for RHEL/CentOS/Fedora based systems it will generate them under the data_dir path.

Now let’s confirm with pg_lsclusters that the PostgreSQL instances are running in parallel on the host. See below:

root@debnode1:/var/log/postgresql# pg_lsclusters 
Ver Cluster  Port Status          Owner    Data directory               Log file
9.2 main     5433 online          postgres /var/lib/postgresql/9.2/main /var/log/postgresql/postgresql-9.2-main.log
9.6 multi_pg 7654 online          postgres /data/pgsql/master/data      /var/log/postgresql/postgresql-9.6-master.log
9.6 pg_7653  7653 online,recovery postgres /data/pgsql/slave/data       pg_log/postgresql-%Y-%m-%d_%H%M%S.log
11  main     5432 online          postgres /var/lib/postgresql/11/main  /var/log/postgresql/postgresql-11-main.log

In addition, the metrics for the Logical Replication clusters can be seen below:

Promoting the Slave With Multiple PostgreSQL Instances Running on a Single Host

Slave promotion is easy even when multiple PostgreSQL instances are running on a single host. As you can see below, this type of environment works flawlessly when it’s handled by ClusterControl.

Now, let’s see what happens in the background while ClusterControl promotes the slave. See the complete job spec and details below.

[09:01:02]:Successfully promoted a new master.
[09:01:02]:192.168.30.10:7653: promote finished (this is the new master).
[09:01:02]:Servers after promote:
192.168.30.10:7653:
• Role: master (slaves: 1)
• Status: CmonHostOnline (NODE_CONNECTED)
• Receive/replay: 0/30020C0; 0/30020C0

192.168.30.10:7654:
• Role: slave (slaves: 0)
• Status: CmonHostOnline (NODE_CONNECTED)
• Receive/replay: 0/30020C0; 0/30020C0
• Master: 192.168.30.10:7653


[09:01:02]:192.168.30.10:7654: Restarted with new master.
[09:01:02]:192.168.30.10:7654: Started PostgreSQL.
[09:00:53]:192.168.30.10: done
server started
[09:00:53]:192.168.30.10: waiting for server to start....
[09:00:52]:192.168.30.10:7654: Executing: su - postgres -c '/usr/lib/postgresql/9.6/bin/pg_ctl start -w -o "-p 7654" --pgdata=/etc/postgresql/9.6/multi_pg/ --log /var/log/postgresql/postgresql-11-main.log'
[09:00:51]:192.168.30.10:7654: Start postgreSQL node.
[09:00:51]:192.168.30.10:7654: Starting PostgreSQL.
[09:00:51]:192.168.30.10:7654: Successfully created '/data/pgsql/master/data/recovery.conf'.
[09:00:50]:192.168.30.10:7654: Creating '/data/pgsql/master/data/recovery.conf': Setting 192.168.30.10:7653 as master.
[09:00:50]:192.168.30.10: servers diverged at WAL position 0/3001890 on timeline 1
no rewind required
[09:00:49]:Running /usr/lib/postgresql/9.6/bin/pg_rewind --target-pgdata=/data/pgsql/master/data --source-server="host=192.168.30.10 port=7653 user=dbapgadmin password=***** dbname=postgres"
[09:00:47]:192.168.30.10:7653: Granting host (192.168.30.10:7654).
[09:00:45]:192.168.30.10:7654: Stopped PostgreSQL.
[09:00:38]:192.168.30.10:7654: Waiting to stop.
[09:00:38]:192.168.30.10:7654: node is already stopped. No need to stop it.
[09:00:38]:192.168.30.10:7654: Stop postgreSQL node.
[09:00:38]:192.168.30.10:7654: Stopping PostgreSQL.
[09:00:38]:Switching slaves to the new master.
[09:00:38]:192.168.30.10:7653: Became master, ok.
[09:00:37]:192.168.30.10:7653: Waiting to become a master.
[09:00:37]:192.168.30.10: server promoting
[09:00:36]:192.168.30.10:7653: Attempting to promote using pg_ctl.
[09:00:36]:192.168.30.10:7653: Promoting host.
[09:00:35]:192.168.30.10:7654: Stopped PostgreSQL.
[09:00:28]:192.168.30.10:7654: Waiting to stop.
[09:00:28]:192.168.30.10: done
server stopped
[09:00:28]:192.168.30.10: waiting for server to shut down....
[09:00:27]:192.168.30.10:7654: Executing: su - postgres -c '/usr/lib/postgresql/9.6/bin/pg_ctl stop --pgdata=/etc/postgresql/9.6/multi_pg/'
[09:00:26]:192.168.30.10:7654: Stop postgreSQL node.
[09:00:26]:192.168.30.10:7654: Stopping PostgreSQL.
[09:00:26]:192.168.30.10:7654: Stopping the current master.
[09:00:26]:Switching over to 192.168.30.10:7653 (previous master is 192.168.30.10:7654)
[09:00:26]:Servers:
192.168.30.10:7653:
• Role: slave (slaves: 0)
• Status: CmonHostOnline (NODE_CONNECTED)
• Receive/replay: 0/3001820; 0/3001820
• Master: 192.168.30.10:7654

192.168.30.10:7654:
• Role: master (slaves: 1)
• Status: CmonHostOnline (NODE_CONNECTED)
• Receive/replay: 0/3001820; 0/3001820


[09:00:26]:192.168.30.10:7653: Current master is 192.168.30.10:7654.
[09:00:26]:192.168.30.10:7653: Promoting server to master.
Job spec: {
    "command": "promote_replication_slave",
    "group_id": 1,
    "group_name": "admins",
    "job_data": 
    {
        "clusterId": "6",
        "slave_address": "192.168.30.10:7653"
    },
    "user_id": 1,
    "user_name": "[email protected]"
}
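For reference, the recovery.conf that ClusterControl creates for the demoted master (mentioned in the job log above) typically contains entries along these lines on PostgreSQL 9.6. The exact contents may differ; the host, port, and user here are taken from the pg_rewind line in the log, and the rest is an illustrative sketch:

```
# Illustrative recovery.conf for the demoted master (PostgreSQL 9.6)
standby_mode = 'on'                 # run as a standby and keep replaying WAL
primary_conninfo = 'host=192.168.30.10 port=7653 user=dbapgadmin password=*****'
recovery_target_timeline = 'latest' # follow the new master's timeline after promotion
```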

As you can see, the promotion was handled smoothly, even on the same host. The topology result shows that the slave has been successfully promoted.

Conclusion

We’re excited about the release of ClusterControl 1.7.3 and think it has a lot to offer. We also think this new feature for running multiple PostgreSQL instances on the same host is another great step in improving our overall support for PostgreSQL. Try it out and let us know your thoughts on this new feature in the comments below.
