Upgrading Your Database to PostgreSQL Version 10 – What You Should Know
As more and more posts about PostgreSQL 11 appear on the web, the more outdated you may feel when still running Postgres 9. Although PostgreSQL 10 was released only a few months ago, people are already talking about the next version. Things are moving, so you don’t want to be left behind. In this blog we will discuss what you need to know to upgrade to the latest version, Postgres 10.
Upgrade Options
The first thing you should be aware of before you start is that there are several ways of doing the upgrade:
- Traditional pg_dumpall(pg_dump) / pg_restore(psql)
- Traditional pg_upgrade
- Trigger based replication (Slony, self-written)
- Using pglogical replication
Why is there such a variety? Because each method has a different history, requires a different amount of effort to set up and offers different guarantees. Let’s look closer at each of them.
Traditional Dump/Restore
pg_dump t > /tmp/f          # logical dump of database "t" from the old cluster
psql -p 5433 -f /tmp/f      # restore it into the new cluster listening on port 5433
Traditional dump/restore takes the longest time to complete, and yet it is often a popular choice for those who can afford the downtime. First, it’s as easy as taking a logical backup and restoring it to a new, higher version of the database. You could say it’s not really an upgrade, as you “import” your data into a “new structure”. As a result you end up with two setups – the old (lower version) one and the newly upgraded one. If the restoration process finishes without error, you are pretty much there. If not, you have to modify the existing old cluster to eliminate any errors and start the process over again.
If you use psql for the import, you might also need to create some preload scripts yourself to execute on the new setup prior to migration. For example, you would run pg_dumpall -g to get the list of roles needed to prepare in the new setup, or, the opposite, run pg_dump -x to skip permissions from the old one. This process is pretty simple on small databases; the complexity grows with the size and complexity of your db structure and depends on what features you have set up. Basically, for this method to be successful, you keep trying and fixing until the upgrade succeeds.
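For instance, a minimal sketch of that preparation (ports and file names here are purely illustrative) could look like this:
pg_dumpall -p 5432 -g > /tmp/globals.sql    # dump global objects (roles, tablespaces) from the old cluster
psql -p 5433 -f /tmp/globals.sql postgres   # load them into the new cluster before the data restore
pg_dump -x t > /tmp/f                       # or, the opposite, dump database "t" without privileges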
The advantages to using this method include…
- While you may spend a long time with the one backup you made, the load on the old server is only as big as taking a single backup.
- This method is mostly just a backup-restore sequence (potentially with some spells, songs and drumming)
- This is the oldest way to upgrade and has been verified by MANY people
When you finally complete the upgrade, you either have to shut down the old server or accept some data loss (or alternatively replay the DML that happened on the old server while the backup was being restored to the new server). And the time spent doing that is proportional to the size of your database.
You can, of course, start “using” the new database before the restore has finished (especially before all indexes are built – often building the indexes is what takes the most time). But nevertheless such downtime is often unacceptable.
Traditional pg_upgrade
MacBook-Air:~ vao$ /usr/local/Cellar/postgresql/10.2/bin/initdb -D t10 >/tmp/suppressing_to_save_screen_space_read_it
WARNING: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
MacBook-Air:~ vao$ /usr/local/Cellar/postgresql/10.2/bin/pg_upgrade -b /usr/local/Cellar/postgresql/9.5.3/bin -B /usr/local/Cellar/postgresql/10.2/bin -d t -D t10 | tail
Creating script to delete old cluster ok
Upgrade Complete
----------------
Optimizer statistics are not transferred by pg_upgrade so,
once you start the new server, consider running:
./analyze_new_cluster.sh
Running this script will delete the old cluster’s data files:
./delete_old_cluster.sh
Traditional pg_upgrade was created to shorten the time it takes to upgrade to a major version. Depending on the number of relations you have it can be as fast as minutes (seconds in ridiculous cases, like a one-table database, and hours in the “opposite” cases), especially with the --link argument.
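For instance, the same command as in the session above with hard links (a sketch only, reusing the illustrative paths from that session) would be:
/usr/local/Cellar/postgresql/10.2/bin/pg_upgrade -b /usr/local/Cellar/postgresql/9.5.3/bin -B /usr/local/Cellar/postgresql/10.2/bin -d t -D t10 --link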
The preparation sequence differs slightly from the first upgrade method. In order to mock up the upgrade, and thus check whether it’s possible, you should build streaming replication or recover a standby server from WALs. Why is this so complicated? You want to be sure you test the upgrade on a database that is as close in state to the one you have originally. “Binary” replication or PITR will help us here. After the recovery finishes with recovery_target_action = promote (PITR), or after you promote the newly built slave (pg_ctl promote or placing a trigger file, for streaming replication), you can then try to run pg_upgrade. Checking pg_upgrade_internal.log will give you an idea whether the process was successful or not. From here you have the same try-and-fix approach as with the previous method. You save the actions taken against the test database in a script until you successfully pg_upgrade it. Then you can destroy the no longer needed test upgraded database, run the saved script to prepare the original database, and perform the upgrade.
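A rough sketch of that dry run (the directory names t_test and t10_test are purely illustrative) might be:
/usr/local/Cellar/postgresql/9.5.3/bin/pg_ctl -D t_test promote     # make the recovered test copy writable
/usr/local/Cellar/postgresql/10.2/bin/pg_upgrade -b /usr/local/Cellar/postgresql/9.5.3/bin -B /usr/local/Cellar/postgresql/10.2/bin -d t_test -D t10_test --check     # feasibility check only, data is not touched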
The advantages to using this method include…
- Shorter downtime than logical backup/restore
- A neat process – pg_upgrade upgrades the original database with existing data and structure
- Has been used a lot in the past and would still be the preferred choice for most DBAs running a version below 9.4 (which is when pglogical became available)
The disadvantages of using this method include…
- Requires downtime
Trigger Based Replication
Assuming version 10 is on port 5433 and has the same table prepared:
db=# create server upgrade_to_10 foreign data wrapper postgres_fdw options (port '5433', dbname 'db10');
CREATE SERVER
Time: 9.135 ms
db=# create user mapping for vao SERVER upgrade_to_10 options (user 'vao');
CREATE USER MAPPING
Time: 8.741 ms
db=# create foreign table r10 (pk int, t text) server upgrade_to_10 options (table_name 'r');
CREATE FOREIGN TABLE
Time: 9.358 ms
This is an extremely simplistic fn() and trigger for very basic logical replication. Such an approach is so primitive that it won’t work with foreign keys, but the code is short:
db=# create or replace function tf() returns trigger as $$
begin
  if TG_OP = 'INSERT' then
    insert into r10 select NEW.*;
  elsif TG_OP = 'UPDATE' then
    delete from r10 where pk = NEW.pk;
    insert into r10 select NEW.*;
  elsif TG_OP = 'DELETE' then
    delete from r10 where pk = OLD.pk;
  end if;
  return case when TG_OP in ('INSERT','UPDATE') then NEW else OLD end;
end;
$$ language plpgsql;
CREATE FUNCTION
Time: 8.531 ms
db=# create trigger t before insert or update or delete on r for each row execute procedure tf();
CREATE TRIGGER
Time: 8.813 ms
Example:
db=# insert into r(t) select chr(g) from generate_series(70,75) g;
INSERT 0 6
Time: 12.621 ms
db=# update r set t = 'updated' where pk=2;
UPDATE 1
Time: 10.398 ms
db=# delete from r where pk=1;
DELETE 1
Time: 9.634 ms
db=# select * from r;
pk | t
----+---------
3 | H
4 | I
5 | J
6 | K
2 | updated
(5 rows)
Time: 9.026 ms
db=# select * from r10;
pk | t
----+---------
3 | H
4 | I
5 | J
6 | K
2 | updated
(5 rows)
Time: 1.201 ms
Lastly, checking that we replicate to a different database:
db=# select *,current_setting('port') from dblink('upgrade_to_10','select setting from pg_settings where name=$$port$$') as t(setting_10 text);
 setting_10 | current_setting
------------+-----------------
 5433       | 5432
(1 row)
Time: 23.633 ms
I would call this method the most exotic, both because with streaming replication and later with pglogical the use of trigger based replication has become less popular, and because it puts a higher load on the master, adds complexity during setup and lacks well structured documentation. There’s no preparation (as such) of the process here; you just want to set up Slony across different major versions.
The advantages of using this method include…
- No backups need to be taken and no downtime is required (especially if you are behind some pgbouncer or haproxy).
The disadvantages of using this method include…
- High Complexity of setup
- Lack of structured documentation
- Not very popular – fewer use cases to study (and share)
Along the same lines, self-written trigger replication is another possible way to upgrade. While the idea is the same (you spin up a fresh higher version database and set up triggers on the lower version to send modified data to it), a self-written setup will be clear to you. You won’t have any need for support, and thus will potentially use fewer resources when running it. Of course, for the same reason you will probably end up with some features missing or not working as expected. If you only have a few tables to move to the new version, such an option will probably take you less time and, if done well, might be less resource consuming. As a bonus you can combine some ETL transformations with the upgrade, switching over to the new version without downtime.
Logical Replication With pglogical
This is a very promising new way of upgrading Postgres. The idea is to set up logical replication between different major versions and literally have a parallel, higher (or lower) version database running the same data. When you are ready, you just switch connections with your application from old to new.
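A minimal sketch of what that looks like with pglogical, assuming the extension is already installed on both clusters (node names, ports and connection strings here are purely illustrative):
-- on the old (provider) cluster
db=# create extension pglogical;
db=# select pglogical.create_node(node_name := 'provider', dsn := 'host=127.0.0.1 port=5432 dbname=db');
db=# select pglogical.replication_set_add_all_tables('default', ARRAY['public']);
-- on the new version 10 (subscriber) cluster
db=# create extension pglogical;
db=# select pglogical.create_node(node_name := 'subscriber', dsn := 'host=127.0.0.1 port=5433 dbname=db');
db=# select pglogical.create_subscription(subscription_name := 'upgrade_sub', provider_dsn := 'host=127.0.0.1 port=5432 dbname=db');
Once the subscriber has caught up, you point the application at the new cluster and drop the subscription.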
The advantages of using this method include…
- Basically no downtime
- Extremely promising feature, much less effort than trigger based replication
The disadvantages of using this method include…
- Still highly complex to set up (especially for older versions)
- Lack of structured documentation
- Not very popular – fewer use cases to study (and share)
Both the trigger-based and pglogical replication major version migrations can be used to downgrade the version (up to some reasonable value of course, e.g., pglogical is available from 9.4 only and trigger replication becomes harder and harder to set up as the version you want to downgrade to gets older).
Actions to be Taken Before the Upgrade
- Take a backup
- Make sure there is enough disk space
- Check your extensions (it is important that any external modules are also binary compatible, though this cannot be checked by pg_upgrade)
- Make sure to use the same datcollate and datctype and so on (check pg_database) on the new database (see the query after this list)
- Check (DDL + Drop) views, functions, extensions, types that could break the upgrade
- Use --check before really running pg_upgrade
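For the locale check mentioned above, a quick query run on both clusters is enough, for example:
db=# select datname, datcollate, datctype from pg_database;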
Actions to be Taken After the Upgrade
- Consult pg_upgrade_server.log (if you used pg_upgrade)
- Run analyze on upgraded databases (optional, as it would be done by autovacuum, but you can choose what relations should be analyzed first if you do it yourself)
- Prewarm popular pages (optional, but it could boost performance at the beginning)
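For prewarming, one possible sketch uses the pg_prewarm extension (the table name here is illustrative):
db=# create extension pg_prewarm;
db=# select pg_prewarm('my_hot_table');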
Conclusion
Here are some general notes that are good to know before you decide to go to PostgreSQL version 10…
- pg_sequences was introduced, changing the behaviour of the previously popular SELECT * FROM sequence_name – now only last_value | log_cnt | is_called are returned, hiding the “initial properties” from you (adjust any code that relies on the changed behaviour)
- pg_basebackup streams WAL by default. After the upgrade you might need to modify your scripts (option -x removed)
- All pg_ctl actions now wait for completion. Previously you had to add -w to avoid trying to connect to the database straight after pg_ctl start. Thus, if you still want an “async” start or stop, you have to mark it explicitly with -W. You might need to adjust your scripts so they behave as intended.
- All scripts for archiving WALs or monitoring/controlling streaming replication or PITR need to be reviewed and adjusted to the changed xlog names. E.g., select * from pg_is_xlog_replay_paused() will no longer show you the state of the slave’s WAL replay – you have to use select * from pg_is_wal_replay_paused() instead (as illustrated after this list). Also cp /blah/pg_xlog/* needs to be changed to /blah/pg_wal/* and so on, basically for all occurrences of pg_xlog. The reason behind such a massive, non backward compatible change is to address the case when a newbie removes write-ahead logs to “clean some space” and loses the database.
- Adjust scripts using pg_stat_replication for new names (location changed to lsn)
- Adjust queries with set returning functions if needed
- If you used pglogical as extension before version 10, you might need to adjust pg_hba.conf changing value between “columns”
- Adjust scripts for the new name of pg_log, which is now log, so that something like find /pg_data/log/postgresql-* -mmin +$((60*48)) -type f -exec bash /blah/moveto.s3.sh {} \; would work. Of course you can create a symbolic link instead, but action would need to be taken to find the logs in the default location. Another small change to the defaults is log_line_prefix – if your regular expression depended on a certain format, you need to adjust it.
- If you were still using unencrypted passwords in your Postgres databases, this release completely removes that possibility. So it’s time to sort things out for those who relied on --unencrypted…
- The rest of the incompatible changes with previous releases are either too fresh to be referenced in lots of code (min_parallel_relation_size), too ancient (external tsearch2) or too exotic (removal of floating-point timestamps support in the build), so we will skip them. Of course they are listed on the release page.
- As it was with 9.5 to 9.6, you might need to adjust your scripts for querying pg_stat_activity (one new column and new possible values)
- If you were saving/analyzing vacuum verbose output, you might need to adjust your code
- Also you might want to take a look at the new partitioning implementation – you might want to refactor your existing “set” to comply with new “standards”
- Check the timeline (it will be reset for the new database if you pg_upgrade)
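To illustrate the xlog-to-wal renames mentioned above, a few typical monitoring calls on a standby change roughly like this (old 9.x names in the comments):
-- pg_is_xlog_replay_paused()      ->  pg_is_wal_replay_paused()
-- pg_last_xlog_replay_location()  ->  pg_last_wal_replay_lsn()
-- pg_current_xlog_location()      ->  pg_current_wal_lsn()
db=# select pg_is_wal_replay_paused(), pg_last_wal_replay_lsn();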
Apart from these steps that you have to know about to upgrade to 10, there are plenty of things that make this release a highly anticipated one. Please read the section on changes in the release notes or the depesz blog.