Severalnines Blog
The automation and management blog for open source databases

PostgreSQL Audit Logging Best Practices

In every IT system where important business tasks take place, it is essential to have an explicit set of policies and practices, and to make sure those are respected and followed.

Introduction to Auditing

An Information Technology system audit is the examination of the policies, processes, procedures, and practices of an organization regarding IT infrastructure against a certain set of objectives. An IT audit may be of two generic types:

  • Checking against a set of standards on a limited subset of data
  • Checking the whole system

An IT audit may cover certain critical system parts, such as the ones related to financial data in order to support a specific set of regulations (e.g. SOX), or the entire security infrastructure against regulations such as the new EU GDPR regulation which addresses the need for protecting privacy and sets the guidelines for personal data management. The SOX example is of the former type described above whereas GDPR is of the latter.

The Audit Lifecycle


Scope

The scope of an audit depends on the audit objective. It may cover a specific application identified by a particular business activity, such as a financial activity, or the whole IT infrastructure, covering system security, data security and so forth. The scope must be correctly identified beforehand, as an early step in the initial planning phase. The organization is supposed to provide the auditor with all the necessary background information needed to plan the audit. This may be functional/technical specifications, system architecture diagrams or any other information requested.

Control Objectives

Based on the scope, the auditor forms a set of control objectives to be tested by the audit. Those control objectives are implemented via management practices that are supposed to be in place in order to achieve control to the extent described by the scope. The control objectives are associated with test plans and those together constitute the audit program. Based on the audit program the organization under audit allocates resources to facilitate the auditor.


Findings

The auditor tries to get evidence that all control objectives are met. If for some control objective no such evidence exists, the auditor first checks whether the company handles that objective in some alternative way; if such a way exists, the objective is marked as compensating and the auditor considers it met. If, however, there is no evidence at all that an objective is met, this is marked as a finding. Each finding consists of the condition, criteria, cause, effect and recommendation. The IT manager must stay in close contact with the auditor in order to be informed of all potential findings, and to make sure that all requested information is shared between management and the auditor, so as to assure that the control objective is met (and thus avoid the finding).

The Assessment Report

At the end of the audit process the auditor will write an assessment report as a summary covering all important parts of the audit, including any potential findings followed by a statement on whether the objective is adequately addressed and recommendations for eliminating the impact of the findings.

What is Audit Logging and Why Should You Do It?

The auditor wants to have full access to the changes on software, data and the security system. He/she not only wants to be able to track down any change to the business data, but also track changes to the organizational chart, the security policy, the definition of roles/groups and changes to role/group membership. The most common way to perform an audit is via logging. Although it was possible in the past to pass an IT audit without log files, today it is the preferred (if not the only) way.

Typically the average IT system comprises at least two layers:

  • Database
  • Application (possibly on top of an application server)

The application maintains its own logs covering user access and actions, and the database and possibly the application server maintain their own logs as well. Clean, readily usable information in the log files which has real business value from the auditor's perspective is called an audit trail. Audit trails differ from ordinary log files (sometimes called native logs) in that:

  • Log files are dispensable
  • Audit trails should be kept for longer periods
  • Log files add overhead to the system’s resources
  • Log files’ purpose is to help the system admin
  • Audit trails’ purpose is to help the auditor

We summarise the above in the following table:

Log type          App/System   Audit-trail friendly
App logs          App          Yes
App server logs   System       No
Database logs     System       No

App logs may easily be tailored for use as audit trails; system logs cannot be, at least not as easily, because:

  • They are limited in their format by the system software
  • They act globally on the whole system
  • They don’t have direct knowledge about specific business context
  • They usually require additional software for later offline parsing/processing in order to produce usable audit trails.

On the other hand, app logs place an additional software layer on top of the actual data, thus:

  • Making the audit system more vulnerable to application bugs/misconfiguration
  • Creating a potential hole in the logging process if someone tries to access data directly on the database bypassing the app logging system, such as a privileged user or a DBA
  • Making the audit system more complex and harder to manage and maintain in case we have many applications or many software teams.

So, ideally, we would be looking for the best of both worlds: usable audit trails with the greatest coverage of the whole system, including the database layer, and configurable in one place, so that the logging itself can easily be audited by means of other (system) logs.

Audit Logging with PostgreSQL

The options we have in PostgreSQL regarding audit logging are the following:

  • Exhaustive logging, by setting log_statement = all
  • Using the community-provided audit trigger (audit-trigger 91plus)
  • Using the pgaudit extension

Exhaustive logging, at least for standard usage in OLTP or OLAP workloads, should be avoided because it:

  • Produces huge files, increases load
  • Has no inner knowledge of the tables being accessed or modified; it just prints the statement, which might be a DO block containing a cryptic concatenated statement
  • Needs additional software/resources for offline parsing and processing (in order to produce the audit trails) which in turn must be included in the scope of the audit, to be considered trustworthy
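For reference, exhaustive logging in the sense above is typically enabled via the log_statement parameter in postgresql.conf (shown here only as a minimal sketch; the caveats above still apply):

```
# postgresql.conf — exhaustive statement logging (reference sketch)
logging_collector = on          # route server logs to files via the log collector
log_destination = 'stderr'
log_statement = 'all'           # log every statement: none | ddl | mod | all
```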

In the rest of this article we will try the tools provided by the community. Let’s suppose that we have this simple table that we want to audit:

myshop=# \d orders
                                       Table "public.orders"
   Column   |           Type           | Collation | Nullable |              Default               
------------+--------------------------+-----------+----------+------------------------------------
 id         | integer                  |           | not null | nextval('orders_id_seq'::regclass)
 customerid | integer                  |           | not null | 
 customer   | text                     |           | not null | 
 xtime      | timestamp with time zone |           | not null | now()
 productid  | integer                  |           | not null | 
 product    | text                     |           | not null | 
 quantity   | integer                  |           | not null | 
 unit_price | double precision         |           | not null | 
 cur        | character varying(20)    |           | not null | 'EUR'::character varying
Indexes:
    "orders_pkey" PRIMARY KEY, btree (id)
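A CREATE TABLE statement matching this definition might look as follows (a sketch reconstructed from the \d output above; the sequence default is expressed via serial, and your actual DDL may differ):

```sql
-- Sketch reconstructed from the \d output; constraint names may differ
CREATE TABLE orders (
    id         serial PRIMARY KEY,
    customerid integer NOT NULL,
    customer   text NOT NULL,
    xtime      timestamptz NOT NULL DEFAULT now(),
    productid  integer NOT NULL,
    product    text NOT NULL,
    quantity   integer NOT NULL,
    unit_price double precision NOT NULL,
    cur        varchar(20) NOT NULL DEFAULT 'EUR'
);
```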

audit-trigger 91plus

The docs about using the trigger can be found on the PostgreSQL wiki (Audit trigger 91plus page). First we download and install the provided DDL (functions, schema):

$ wget
$ psql myshop
psql (10.3 (Debian 10.3-1.pgdg80+1))
Type "help" for help.
myshop=# \i audit.sql

Then we define the triggers for our table orders using the basic usage:

myshop=# SELECT audit.audit_table('orders');

This will create two triggers on table orders: an insert/update/delete row trigger and a truncate statement trigger. Now let’s see what the trigger does:

myshop=# insert into orders (customer,customerid,product,productid,unit_price,quantity) VALUES('magicbattler',1,'some fn skin 2',2,5,2);      
myshop=# update orders set quantity=3 where id=2;
myshop=# delete from orders  where id=2;
myshop=# select table_name, action, session_user_name, action_tstamp_clk, row_data, changed_fields from audit.logged_actions;
-[ RECORD 1 ]-----+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
table_name        | orders
action            | I
session_user_name | postgres
action_tstamp_clk | 2018-05-20 00:15:10.887268+03
row_data          | "id"=>"2", "cur"=>"EUR", "xtime"=>"2018-05-20 00:15:10.883801+03", "product"=>"some fn skin 2", "customer"=>"magicbattler", "quantity"=>"2", "productid"=>"2", "customerid"=>"1", "unit_price"=>"5"
changed_fields    |
-[ RECORD 2 ]-----+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
table_name        | orders
action            | U
session_user_name | postgres
action_tstamp_clk | 2018-05-20 00:16:12.829065+03
row_data          | "id"=>"2", "cur"=>"EUR", "xtime"=>"2018-05-20 00:15:10.883801+03", "product"=>"some fn skin 2", "customer"=>"magicbattler", "quantity"=>"2", "productid"=>"2", "customerid"=>"1", "unit_price"=>"5"
changed_fields    | "quantity"=>"3"
-[ RECORD 3 ]-----+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
table_name        | orders
action            | D
session_user_name | postgres
action_tstamp_clk | 2018-05-20 00:16:24.944117+03
row_data          | "id"=>"2", "cur"=>"EUR", "xtime"=>"2018-05-20 00:15:10.883801+03", "product"=>"some fn skin 2", "customer"=>"magicbattler", "quantity"=>"3", "productid"=>"2", "customerid"=>"1", "unit_price"=>"5"
changed_fields    |

Note the changed_fields value on the Update (RECORD 2). There are more advanced uses of the audit trigger, like excluding columns, or using the WHEN clause as shown in the doc. The audit trigger sure seems to do the job of creating useful audit trails inside the audit.logged_actions table. However there are some caveats:
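For instance, per the audit trigger 91plus docs, audit_table() also accepts parameters controlling row-level auditing, query-text logging and ignored columns (a sketch; verify the exact signature against the audit.sql you installed):

```sql
-- Audit rows and record the query text, but ignore changes to the xtime column
SELECT audit.audit_table('orders', true, true, '{xtime}'::text[]);
```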

  • No SELECTs (triggers do not fire on SELECTs) or DDL are tracked
  • Changes by table owners and super users can be easily tampered
  • Best practices must be followed regarding the app user(s) and app schema and tables owners


pgaudit

Pgaudit is the newest addition to PostgreSQL as far as auditing is concerned. Pgaudit must be installed as an extension, as shown in the project’s GitHub page. Pgaudit logs to the standard PostgreSQL log. Pgaudit works by registering itself upon module load and providing hooks for executorStart, executorCheckPerms, processUtility and object_access. Therefore pgaudit (in contrast to trigger-based solutions such as audit-trigger discussed in the previous paragraphs) also supports READs (SELECT, COPY). Generally, with pgaudit we have two modes of operation, which can also be combined:

  • SESSION audit logging
  • OBJECT audit logging

Session audit logging supports most DML, DDL, privilege and misc commands via classes:

  • READ (select, copy from)
  • WRITE (insert, update, delete, truncate, copy to)
  • FUNCTION (function calls and DO blocks)
  • ROLE (grant, revoke, create/alter/drop role)
  • DDL (all DDL except those in ROLE)
  • MISC (discard, fetch, checkpoint, vacuum)

The metaclass “all” includes all classes, and prefixing a class with “-” excludes it. For instance, let us configure session audit logging for everything except MISC, with the following GUC parameters in postgresql.conf:

pgaudit.log_catalog = off
pgaudit.log = 'all, -misc'
pgaudit.log_relation = 'on'
pgaudit.log_parameter = 'on'
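Instead of editing postgresql.conf by hand, the same GUCs can be set as a superuser with ALTER SYSTEM; pgaudit’s settings take effect on reload, so no restart is needed (a sketch):

```sql
ALTER SYSTEM SET pgaudit.log = 'all, -misc';
ALTER SYSTEM SET pgaudit.log_relation = on;
ALTER SYSTEM SET pgaudit.log_parameter = on;
SELECT pg_reload_conf();  -- apply without a restart
```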

By giving the following commands (the same as in the trigger example)

myshop=# insert into orders (customer,customerid,product,productid,unit_price,quantity) VALUES('magicbattler',1,'some fn skin 2',2,5,2);
myshop=# update orders set quantity=3 where id=2;
myshop=# delete from orders  where id=2;

We get the following entries in PostgreSQL log:

% tail -f data/log/postgresql-22.log | grep AUDIT:
[local] [55035] 5b03e693.d6fb 2018-05-22 12:46:37.352 EEST psql postgres@testdb line:7 LOG:  AUDIT: SESSION,5,1,WRITE,INSERT,TABLE,public.orders,"insert into orders (customer,customerid,product,productid,unit_price,quantity) VALUES('magicbattler',1,'some fn skin 2',2,5,2);",<none>
[local] [55035] 5b03e693.d6fb 2018-05-22 12:46:50.120 EEST psql postgres@testdb line:8 LOG:  AUDIT: SESSION,6,1,WRITE,UPDATE,TABLE,public.orders,update orders set quantity=3 where id=2;,<none>
[local] [55035] 5b03e693.d6fb 2018-05-22 12:46:59.888 EEST psql postgres@testdb line:9 LOG:  AUDIT: SESSION,7,1,WRITE,DELETE,TABLE,public.orders,delete from orders  where id=2;,<none>
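Since the payload after AUDIT: is comma-separated with standard CSV quoting, it can be turned into structured records with any CSV reader when preparing trails for the auditor. A minimal Python sketch (the field names follow pgaudit’s documented session log entry format):

```python
import csv
import io

# Field order as documented by pgaudit for session/object log entries
PGAUDIT_FIELDS = [
    "audit_type", "statement_id", "substatement_id", "class",
    "command", "object_type", "object_name", "statement", "parameter",
]

def parse_pgaudit_line(log_line):
    """Extract the CSV payload following 'AUDIT: ' and map it to named fields."""
    payload = log_line.split("AUDIT: ", 1)[1]
    row = next(csv.reader(io.StringIO(payload)))
    return dict(zip(PGAUDIT_FIELDS, row))

line = ("2018-05-22 12:46:50.120 EEST psql postgres@testdb line:8 LOG:  "
        "AUDIT: SESSION,6,1,WRITE,UPDATE,TABLE,public.orders,"
        "update orders set quantity=3 where id=2;,<none>")
rec = parse_pgaudit_line(line)
print(rec["command"], rec["object_name"])  # UPDATE public.orders
```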

Note that the text after AUDIT: makes up a near-perfect audit trail, almost ready to ship to the auditor in spreadsheet-ready CSV format. Using session audit logging gives us audit log entries for all operations belonging to the classes defined by the pgaudit.log parameter, on all tables. However, there are cases where we wish to audit only a small subset of the data, i.e. only a few tables. In such cases we may prefer object audit logging, which gives us fine-grained criteria for selected tables/columns via PostgreSQL’s privilege system. In order to start using object audit logging we must first configure the pgaudit.role parameter, which defines the master role that pgaudit will use. It makes sense not to give this user any login rights.

CREATE ROLE auditor;

Then we set this value for pgaudit.role in postgresql.conf:

pgaudit.log = 'none' # no need for extensive SESSION logging
pgaudit.role = 'auditor'

Pgaudit OBJECT logging works by checking whether the user auditor is granted (directly or via inheritance) the privilege to execute the specified action on the relations/columns used in a statement. So, if we need to ignore all tables but keep detailed logging for table orders, this is the way to do it:

GRANT ALL ON orders TO auditor;

By the above grant we enable full SELECT, INSERT, UPDATE and DELETE logging on table orders. Let’s issue once again the INSERT, UPDATE and DELETE of the previous examples and watch the PostgreSQL log:

% tail -f data/log/postgresql-22.log | grep AUDIT:
[local] [60683] 5b040125.ed0b 2018-05-22 14:41:41.989 EEST psql postgres@testdb line:7 LOG:  AUDIT: OBJECT,2,1,WRITE,INSERT,TABLE,public.orders,"insert into orders (customer,customerid,product,productid,unit_price,quantity) VALUES('magicbattler',1,'some fn skin 2',2,5,2);",<none>
[local] [60683] 5b040125.ed0b 2018-05-22 14:41:52.269 EEST psql postgres@testdb line:8 LOG:  AUDIT: OBJECT,3,1,WRITE,UPDATE,TABLE,public.orders,update orders set quantity=3 where id=2;,<none>
[local] [60683] 5b040125.ed0b 2018-05-22 14:42:03.148 EEST psql postgres@testdb line:9 LOG:  AUDIT: OBJECT,4,1,WRITE,DELETE,TABLE,public.orders,delete from orders  where id=2;,<none>

We observe that the output is identical to that of the SESSION logging discussed above, with the difference that instead of SESSION as the audit type (the string next to AUDIT:) we now get OBJECT.
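Because OBJECT logging piggybacks on the privilege system, the granularity goes down to the column level; for example (a sketch) we could log only statements touching sensitive columns:

```sql
REVOKE ALL ON orders FROM auditor;
-- Log only UPDATEs that modify quantity or unit_price, and SELECTs that read them
GRANT UPDATE (quantity, unit_price), SELECT (quantity, unit_price)
   ON orders TO auditor;
```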

One caveat of OBJECT logging is that TRUNCATEs are not logged; we have to resort to SESSION logging for those. But in that case we end up getting all WRITE activity for all tables. There are discussions among the hackers involved about making each command a separate class.

Another thing to keep in mind is that in the case of inheritance if we GRANT access to the auditor on some child table, and not the parent, actions on the parent table which translate to actions on rows of the child table will not be logged.
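The inheritance caveat can be sketched as follows (the child table name here is hypothetical, introduced only for illustration):

```sql
-- Hypothetical child table inheriting from orders
CREATE TABLE orders_archive () INHERITS (orders);
GRANT ALL ON orders_archive TO auditor;  -- logs only statements naming orders_archive
GRANT ALL ON orders TO auditor;          -- needed as well, to log access via the parent
```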

In addition to the above, the IT people in charge of the integrity of the logs must document a strict and well-defined procedure covering the extraction of the audit trail from the PostgreSQL log files. Those logs might be streamed to an external secure syslog server in order to minimize the chances of any interference or tampering.
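Streaming to syslog can be configured directly in postgresql.conf (the facility and ident below are examples; forwarding to the remote host itself is done in the syslog daemon’s configuration):

```
# postgresql.conf — send server logs to the local syslog daemon
log_destination = 'syslog'
syslog_facility = 'LOCAL0'   # match the facility your syslog daemon forwards off-host
syslog_ident = 'postgres'
```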