How to Benchmark PostgreSQL Performance

The purpose of benchmarking a database is not only to check the capability of the database, but also to see how a particular database behaves with your application. Different hardware provides different results, depending on the benchmarking plan you set. It is very important to isolate the server being benchmarked from other elements like the servers driving the load, or the servers used to collect and store performance metrics. As part of the benchmarking exercise, you must also capture the application characteristics, such as: a) Is the application read or write intensive? b) What is the read/write split (e.g. 80:20)? c) How large is the dataset, and are the data and structure representative of the actual production database?

PostgreSQL is the world’s most advanced open source database. If an enterprise RDBMS customer wants to migrate their database to open source, PostgreSQL is usually the first option to evaluate.

This post covers the following:

  • How to benchmark PostgreSQL
  • What are the key performance factors in PostgreSQL
  • What levers you can pull to increase performance
  • What are performance pitfalls to avoid
  • What are common mistakes people make?
  • How do you know if your system is performing? What tools can you use?

How to Benchmark PostgreSQL

The standard tool to benchmark PostgreSQL is pgbench. By default, pgbench tests are based on TPC-B, involving five SELECT, INSERT, and UPDATE commands per transaction. However, depending on your application's behavior, you can write your own script files. Let us look at the default test and some script-oriented test results. We are going to use the latest version of PostgreSQL for these tests, which is PostgreSQL 10 at the time of writing. You can install it using ClusterControl, or using the instructions here: https://www.openscg.com/bigsql/package-manager/.

Specs of the machine

Version: RHEL 6 – 64 bit
Memory: 4 GB
Processors: 4
Storage: 50 GB
PostgreSQL version: 10.0
Database size: 15 GB

Before you run a benchmark with the pgbench tool, you need to initialize it with the command below:

-bash-4.1$ ./pgbench -i -p 5432 -d postgres
NOTICE:  table "pgbench_history" does not exist, skipping
NOTICE:  table "pgbench_tellers" does not exist, skipping
NOTICE:  table "pgbench_accounts" does not exist, skipping
NOTICE:  table "pgbench_branches" does not exist, skipping
creating tables...
100000 of 100000 tuples (100%) done (elapsed 0.18 s, remaining 0.00 s)
vacuum...
set primary keys...
done.

As shown in the NOTICE messages, it creates pgbench_history, pgbench_tellers, pgbench_accounts, and pgbench_branches tables to run the transactions for benchmarking.
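The size of these tables is controlled by the initialization scale factor (-s), which defaults to 1 (100,000 rows in pgbench_accounts). To benchmark against a dataset closer to your production size, you can initialize with a larger scale factor; for example, a scale factor of 1000 (used in the read-only tests later in this post) produces a database of roughly 15 GB:

-bash-4.1$ ./pgbench -i -s 1000 -p 5432 -d postgres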

Here is a simple test with 10 clients:

-bash-4.1$ ./pgbench -c 10
starting vacuum...end.
transaction type: <builtin: TPC-B (sort of)>
scaling factor: 1
query mode: simple
number of clients: 10
number of threads: 1
number of transactions per client: 10
number of transactions actually processed: 100/100
latency average = 13.516 ms
tps = 739.865020 (including connections establishing)
tps = 760.775629 (excluding connections establishing)

As you can see, it ran with 10 clients and 10 transactions per client, giving 739 transactions per second. If you want to run it for a specific amount of time, you can use the “-T” option. In general, a 15 or 30 minute run is sufficient.
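For example, the same simple test run for 15 minutes instead of a fixed number of transactions would look something like this:

-bash-4.1$ ./pgbench -c 10 -T 900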

So far we have talked about how to run pgbench, but not about how to choose its options. Before you start benchmarking, you should get the relevant details from the application team, such as:

  • What type of workload is it?
  • How many concurrent sessions are expected?
  • What is the average result set of the queries?
  • What is the expected tps (transactions per second)?

Here is an example for read-only workloads. You can use the “-S” option to run only SELECTs, which falls under read-only. Note that -n skips vacuuming of the tables.

-bash-4.1$ ./pgbench -c 100 -T 300 -S -n
transaction type: <builtin: select only>
scaling factor: 1000
query mode: simple
number of clients: 100
number of threads: 1
duration: 300 s
number of transactions actually processed: 15741
latency average = 1916.650 ms
tps = 52.174363 (including connections establishing)
tps = 52.174913 (excluding connections establishing)
-bash-4.1$

Latency here is the average elapsed time per transaction, measured across all clients. With the given hardware, it delivers 52 tps. As this benchmark is for a read-only environment, let us try tweaking the shared_buffers and effective_cache_size parameters in the postgresql.conf file and check the tps count. They were at their default values in the test above; try increasing them and check the results.
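As a rough sketch of such a change on this 4 GB machine, the relevant lines in postgresql.conf might look like the following (these particular values are assumptions rather than tested recommendations, and shared_buffers requires a server restart to take effect):

shared_buffers = 1GB             # assumed value, roughly 25% of RAM; default is 128MB
effective_cache_size = 3GB       # assumed value, roughly 75% of RAM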

-bash-4.1$ ./pgbench -c 100 -T 300 -S -n
transaction type: <builtin: select only>
scaling factor: 1000
query mode: simple
number of clients: 100
number of threads: 1
duration: 300 s
number of transactions actually processed: 15215
latency average = 1984.255 ms
tps = 68.396758 (including connections establishing)
tps = 68.397322 (excluding connections establishing)

Changing the parameters improved throughput from 52 tps to about 68 tps, roughly a 30% gain.

pgbench typically runs transactions on its own tables. If you have a workload of 50% reads and 50% writes (or a 60:40 environment), you can create a script file with a set of statements to achieve the expected workload.
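Note that pgbench does not create the test_bench table used by the script below, so it has to exist before the run; a minimal definition (the column names and types are assumptions chosen to match the statements in the script) could be:

CREATE TABLE test_bench (id int, name varchar(20));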

-bash-4.1$ cat /tmp/bench.sql 
INSERT INTO test_bench VALUES(1,'test');
INSERT INTO test_bench VALUES(1,'test');
SELECT * FROM test_bench WHERE id=1;
SELECT * FROM test_bench WHERE id=2;
-bash-4.1$ ./pgbench -c 100 -T 300 -S -n -f /tmp/bench.sql 
transaction type: multiple scripts
scaling factor: 1000
query mode: simple
number of clients: 100
number of threads: 1
duration: 300 s
number of transactions actually processed: 25436
latency average = 1183.093 ms
tps = 84.524217 (including connections establishing)
tps = 84.525206 (excluding connections establishing)
SQL script 1: <builtin: select only>
 - weight: 1 (targets 50.0% of total)
 - 12707 transactions (50.0% of total, tps = 42.225555)
 - latency average = 914.240 ms
 - latency stddev = 558.013 ms
SQL script 2: /tmp/bench.sql
 - weight: 1 (targets 50.0% of total)
 - 12729 transactions (50.0% of total, tps = 42.298662)
 - latency average = 1446.721 ms
 - latency stddev = 765.933 ms
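In the run above, because both -S and -f were specified, pgbench randomly chose between the built-in select-only script and the custom file with equal probability, which is why two scripts are reported. If you want a different mix, for instance the 60:40 split mentioned earlier, you can assign weights to the scripts with the @ syntax; a sketch with assumed file names:

-bash-4.1$ ./pgbench -c 100 -T 300 -n -f /tmp/read.sql@60 -f /tmp/write.sql@40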

What are the Key Performance Factors in PostgreSQL

If we consider a real production environment, it is made up of different components: the application, hardware like CPU and memory, and the underlying operating system. We install PostgreSQL on top of the operating system to communicate with the other components of the production environment. Every environment is different, and overall performance will degrade if it is not properly configured. In PostgreSQL, some queries run faster and some slower, depending on the configuration that has been set. The goal of database performance optimization is to maximize database throughput while minimizing contention and response time. Below are a few key performance factors that affect the database:

  • Workload
  • Resource
  • Optimization
  • Contention

Workload consists of batch jobs, dynamic queries for online transactions, and data analytics queries used for generating reports. The workload may differ by time of day, week, or month, and depends on the applications. Resources are the hardware and software capacity available to the database, such as CPU, memory, and disk I/O. Optimization of every database is unique; it can be database-level configuration or query-level optimization. We will be covering more about optimization in later sections of the post. Contention is the condition where two or more components of the workload attempt to use a single resource in a conflicting way. As contention increases, throughput decreases.

What are Tips and Best Practices

Here are a few tips and best practices you can follow to avoid performance issues:

  • You can consider running maintenance activities like VACUUM and ANALYZE after a large modification in your database. This helps the planner to come up with the best plan to execute queries.
  • Look for any need to index tables. It makes queries run much faster, rather than having to do full table scans.
  • To make an index traversal much faster, you can use CREATE TABLE AS or CLUSTER commands to cluster rows with similar key values.
  • When you see a performance problem, use the EXPLAIN command to look at the plan the optimizer has chosen to execute your query.
  • You can try changing the plan by influencing the optimizer through its planner settings (a short sketch of this appears after this list). For example, if you see a sequential scan for your query, you can discourage it with “SET enable_seqscan TO off”. There is no guarantee that the optimizer will avoid that operator if you disable it; it just considers the operator to be much more expensive. More details are here: https://www.postgresql.org/docs/current/static/runtime-config-query.html
  • You can also try changing cost parameters like cpu_operator_cost, cpu_index_tuple_cost, cpu_tuple_cost, random_page_cost, and effective_cache_size to influence the optimizer. More details are here: https://www.postgresql.org/docs/current/static/runtime-config-query.html#RUNTIME-CONFIG-QUERY-CONSTANTS
  • Always filter data on the server rather than in the client application. It minimizes network traffic and gives better performance.
  • To perform common operations, it is always recommended to use server-side procedures (triggers and functions). Server-side triggers or functions are parsed, planned, and optimized the first time they are used, not every time.
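As a small illustration of the EXPLAIN and enable_seqscan points above, a session could look like the sketch below (it reuses the test_bench example from earlier and assumes an index on id, which is created here for the example):

-- assumed example index on test_bench(id)
CREATE INDEX test_bench_id_idx ON test_bench (id);

-- look at the plan the optimizer chose
EXPLAIN SELECT * FROM test_bench WHERE id = 1;

-- make sequential scans look prohibitively expensive for this session
SET enable_seqscan TO off;
EXPLAIN SELECT * FROM test_bench WHERE id = 1;

-- restore the default; the setting only affects the current session
RESET enable_seqscan;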

What are Common Mistakes People Make

One of the common mistakes people make is running the database server with the default parameters. The PostgreSQL default configuration is tested in a few environments, but not every application will find those values optimal. So you need to understand your application's behavior and set your configuration parameters based on it. You can use the pgTune tool to get parameter values suited to the hardware you are using. Have a look at: http://pgtune.leopard.in.ua/. However, keep in mind that you will have to test your application with the changes you make, to see if there is any performance degradation.
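Once you have candidate values, you can inspect and change them from within PostgreSQL as well; a minimal sketch (the value shown is purely illustrative, and parameters such as shared_buffers only take effect after a restart):

-- check the current value
SHOW shared_buffers;

-- write the new value to postgresql.auto.conf (illustrative value only)
ALTER SYSTEM SET shared_buffers = '1GB';

-- a reload is enough for many parameters, but shared_buffers needs a server restart
SELECT pg_reload_conf();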

Another thing to consider is indexing the database. Indexes help fetch data faster, but too many indexes create issues when loading data. So always check for unused indexes in the database and get rid of them, to reduce the maintenance overhead of those indexes and improve data loading.
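One way to spot indexes that are never used is to look at the statistics collector; a query along these lines works on PostgreSQL 10 (note that indexes enforcing primary key or unique constraints will also show up here and should generally be kept):

SELECT schemaname, relname, indexrelname, idx_scan
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY schemaname, relname;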
