
Query observability and performance tuning with pg_stat_monitor and pg_stat_statements

Paul Namuag


With the rapid evolution of database technologies and the constant demand for quick solutions, being able to understand the internal state of a system is crucial to achieving success. Observability tools give you a comprehensive view of your database by collecting logs, metrics, and traces. This allows you to quickly get to the root cause of issues, making it much easier to diagnose and solve problems than relying on a limited set of health checks.

In PostgreSQL, slow queries are a common bottleneck for applications, leading to poor user experiences and increased operational costs. Fortunately, PostgreSQL offers powerful extensions like pg_stat_statements and pg_stat_monitor that provide deep insights into query performance. These tools allow DBAs and developers to track execution statistics, identify bottlenecks, and optimize queries effectively.

The evolution of PostgreSQL query monitoring

PostgreSQL’s journey in query monitoring started with pg_stat_activity, a built-in view that provides a real-time snapshot of what’s happening in your database right now: the current query, state, and other details for each active session. While useful for immediate troubleshooting, it offers no historical data or aggregate statistics; once a query finishes, its information disappears from the view.

pg_stat_monitor vs. pg_stat_statements

To address the shortcomings of pg_stat_activity, the community introduced pg_stat_statements, an extension that aggregates and normalizes query statistics, making it possible to analyze performance trends over time. While pg_stat_activity shows a snapshot of currently running queries, pg_stat_statements accumulates historical data, which is far more useful for identifying performance bottlenecks. This made pg_stat_statements a game-changer in the PostgreSQL world when it was introduced.

pg_stat_statements is a persistent, cumulative record of all executed queries. It aggregates statistics for identical queries, allowing you to see which ones are the most resource-intensive over the long run, even if each execution is individually fast. It normalizes queries by replacing literal values with placeholders (e.g., $1), grouping similar queries together and tracking their collective performance regardless of the specific data they ran against. This is crucial for identifying patterns and optimizing frequently executed queries.
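
To make normalization concrete, here’s a minimal sketch (the orders table and its literal values are hypothetical): two executions that differ only in their literals are tracked as a single normalized entry.

-- Hypothetical workload: two executions differing only in the literal
SELECT * FROM orders WHERE customer_id = 42;
SELECT * FROM orders WHERE customer_id = 99;

-- Both are aggregated under one normalized entry with calls = 2:
SELECT query, calls
FROM pg_stat_statements
WHERE query LIKE 'SELECT * FROM orders%';
-- query => SELECT * FROM orders WHERE customer_id = $1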

Building on that foundation, Percona introduced pg_stat_monitor, a newer, more feature-rich alternative offering richer metrics, time-based buckets, and greater flexibility for deeper query observability. It provides a more comprehensive view by also capturing query plans, command types, and other detailed metrics. Unlike pg_stat_statements, which only aggregates normalized queries, pg_stat_monitor can also track individual, un-normalized queries, offering a more granular look at performance issues. It also tracks additional metrics like query latency, I/O, and the number of rows returned, which are critical for deep performance analysis.

It is an open-source project and is not part of the standard PostgreSQL distribution (contrib modules), but it is widely available through various package repositories (like Percona’s own and the PostgreSQL Global Development Group’s PGDG yum/apt repos). Together, these advancements reflect the continuous evolution from basic monitoring to true database observability.

Comparing pg_stat_statements and pg_stat_monitor

| Feature | pg_stat_statements | pg_stat_monitor |
| --- | --- | --- |
| Data Collection | Cumulative, ever-increasing counters. | Time-based buckets: data is collected for a defined time interval, then aggregated into a new bucket. |
| Query Grouping | Groups by userid, dbid, and queryid. | Multi-dimensional grouping by userid, dbid, queryid, client_ip, and planid, providing finer-grained analysis. |
| Query Parameters | Normalizes queries by replacing literals with placeholders (e.g., $1). | Can be configured to show actual query parameters for easier debugging. |
| Histograms | Not supported; you have to manually calculate statistics to understand the distribution of query execution times. | Built-in histograms give a visual representation of the distribution of query execution times, making it easy to spot outliers or performance variability. |
| Query Counters | Basic counters like calls, total_time, min_time, max_time, and I/O stats. | A more comprehensive set of counters, including separate counters for successful and failed queries, and histogram counts. |
| Query Plan | Not supported; you must run EXPLAIN manually to get the query plan. | Can be configured to capture and store the query plan for each statement, a significant advantage for performance analysis. |
| Use Case | Identifying the most expensive queries over a long period; useful for high-level, long-term performance trends. | Detailed real-time and historical performance analysis; ideal for identifying short-term anomalies, debugging specific issues, and understanding query variability. |

Enabling pg_stat_statements and pg_stat_monitor

pg_stat_statements is a core contrib extension, so it already ships with your PostgreSQL server’s installation packages; enabling it is straightforward and takes just two steps.

First, modify the postgresql.conf file: you need to tell PostgreSQL to preload the extension, which requires a server restart. Make sure you know the location of your postgresql.conf file. Typically it is located in your data_directory on RHEL/CentOS/Rocky/Alma/Oracle Linux, or in /etc/postgresql/<version>/main on Ubuntu/Debian. You can also find its path by running SHOW config_file; through psql. Once you have it, add pg_stat_statements to the shared_preload_libraries parameter. If other libraries are already listed, separate them with a comma.

E.g.,

# Edit your postgresql.conf and locate shared_preload_libraries parameter 
shared_preload_libraries = 'pg_stat_statements'

It is also recommended to adjust a few other parameters for better tracking:

# Maximum number of distinct statements tracked
pg_stat_statements.max = 10000
# Also track utility commands (e.g., VACUUM, CREATE TABLE)
pg_stat_statements.track_utility = on
# Track all statements, including those nested inside functions
pg_stat_statements.track = all
# Time block reads/writes so per-query I/O stats are meaningful
track_io_timing = on

Lastly, restart PostgreSQL:

sudo systemctl restart postgresql
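
After the restart, a quick sanity check from psql confirms the extension’s settings are registered; this is a standard pg_settings lookup:

-- Confirm the module loaded and inspect its effective settings
SELECT name, setting
FROM pg_settings
WHERE name LIKE 'pg_stat_statements%';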

For pg_stat_monitor, you need to install the package from Percona, since this is a third-party extension. You can either install it from a package (easy and recommended) or build it from source. Note that pg_stat_monitor is tested with PostgreSQL 11–17 and Percona Distribution for PostgreSQL; ensure the package version matches your PostgreSQL version.

For RHEL/CentOS/Rocky/Alma/Oracle Linux, you can install with the following steps:

sudo dnf install -y https://repo.percona.com/yum/percona-release-latest.noarch.rpm

Enable the repository for the desired PostgreSQL version (replace <VersNum> with the version, e.g., 16 for PostgreSQL 16):

sudo percona-release setup ppg<VersNum>

For Ubuntu/Debian Linux, download the Percona release package and install it:

wget https://repo.percona.com/apt/percona-release_latest.generic_all.deb
sudo dpkg -i percona-release_latest.generic_all.deb

Then, enable the repository for the desired PostgreSQL version (replace <VersNum> with the version, e.g., 16 for PostgreSQL 16):

sudo percona-release setup ppg<VersNum>

Then refresh the package index:

sudo apt update

Once the Percona repository is set up, install the pg_stat_monitor package for your PostgreSQL version.

For RHEL/CentOS/Rocky/Alma/Oracle Linux, install the pg_stat_monitor package (replace <VersNum> with your PostgreSQL version, e.g., 16):

sudo dnf install -y percona-pg-stat-monitor<VersNum>

For Debian/Ubuntu, install the pg_stat_monitor package (replace <VersNum> with your PostgreSQL version, e.g., 16):

sudo apt install -y percona-pg-stat-monitor<VersNum>

Now that you have installed pg_stat_monitor, edit your postgresql.conf and load it by adding it to the shared_preload_libraries parameter:

# Edit your postgresql.conf and locate shared_preload_libraries parameter 
shared_preload_libraries = 'pg_stat_statements, pg_stat_monitor'

Then restart the PostgreSQL service:

sudo systemctl restart postgresql

Additionally, for the sake of query execution performance, Percona recommends disabling the pgsm_track_application_names feature. You can set it with an ALTER SYSTEM statement from the psql CLI:

ALTER SYSTEM SET pg_stat_monitor.pgsm_track_application_names = 'no';

Running the statement above will add the parameter to your data_directory/postgresql.auto.conf file:

## For example, in my environment:
root@primary-n1:~# cat /opt/postgresql/postgresql.auto.conf
# Do not edit this file manually!
# It will be overwritten by the ALTER SYSTEM command.
pg_stat_monitor.pgsm_track_application_names = 'no'

Now that pg_stat_monitor is installed, you can create the extension in your database. For example:

CREATE DATABASE s9s_db;
\c s9s_db;
CREATE EXTENSION pg_stat_monitor;

Then verify the version number,

SELECT pg_stat_monitor_version();
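
Similarly, you can inspect pg_stat_monitor’s effective configuration through the standard pg_settings catalog view:

-- List pg_stat_monitor's runtime settings (pgsm_bucket_time, pgsm_max, etc.)
SELECT name, setting
FROM pg_settings
WHERE name LIKE 'pg_stat_monitor%';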

Manual vs. ClusterControl

ClusterControl’s approach is more comprehensive, providing advanced management of database clusters through enterprise-grade features that alleviate the daily problems of database operations. It’s an automation platform that commonly serves as a control plane: the heart and brain of automation, management, and observability for your database clusters, as well as for the systems, network, and load balancers that comprise your production environment.

ClusterControl is essentially the orchestration layer here, as it automates the arrangement, coordination, and management of complex computer systems, middleware, and services. Where pg_stat_statements and pg_stat_monitor are merely monitoring tools, ClusterControl acts on what it observes to avoid cluster degradation and failure.

Mainly, what you get from pg_stat_statements and pg_stat_monitor is metrics and statistics for monitoring performance; they are aimed at DBAs and developers who want a more refined understanding of what’s going on and which queries the database is primarily digesting and processing. Let’s look at how a manual approach (i.e., using and relying on pg_stat_statements and/or pg_stat_monitor) compares with ClusterControl:

Manual Approach (pg_stat_statements and/or pg_stat_monitor)

  • Targeted to DBAs and developers with small Postgres deployments
  • For query performance monitoring, pg_stat_monitor is more advanced
  • Requires SQL-level understanding and expertise to extract and analyze the data
  • Requires manual setup and environment analysis as well to suit your requirements
  • Little to no visualization support. pg_stat_monitor provides a basic histogram feature through the resp_calls field that can help identify problematic queries

ClusterControl

  • Targeted to DBAs, DevOps, and enterprises with complex or multi-database setups
  • Provides comprehensive database management for a wide-array of database clusters, offering enhanced capabilities, e.g. deployment, load balancing, failover and recovery, backup management, monitoring, security and compliance, user management, etc.
  • Provides database, system and queries metrics, alerting, visualizations and reporting
  • User-friendly web interface and CLI; minimal SQL knowledge needed
  • Offers broader query insights than pg_stat_monitor
  • Installation can be completed in multiple ways

Collecting & analyzing data with pg_stat_statements and pg_stat_monitor

Having covered the installation of pg_stat_statements and pg_stat_monitor earlier, along with their side-by-side comparison, we’ll assume that both extensions are already enabled.

To ensure they’re active, verify that both extensions are listed under the shared_preload_libraries of your postgresql.conf file:

shared_preload_libraries = 'pg_stat_statements,pg_stat_monitor'
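
You can also confirm this at runtime from psql, without opening the file:

SHOW shared_preload_libraries;
-- Expected to include both: pg_stat_statements, pg_stat_monitor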

Using pg_stat_statements

pg_stat_statements aggregates the collected stats over time, which can affect performance as the dataset grows. If old data is no longer needed, you can reset the stats:

SELECT pg_stat_statements_reset();

This gives you a fresh, clean slate to measure performance over a specific time window. You can cap memory usage via the pg_stat_statements.max parameter in your postgresql.conf file, which limits the number of tracked unique queries. When the limit is reached, statistics for the least-executed queries are automatically discarded to make room for new ones.
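
On PostgreSQL 12 and later (pg_stat_statements 1.7+), the reset function also accepts userid, dbid, and queryid arguments, so you can clear a single statement’s stats instead of everything. A sketch, using a hypothetical query filter:

-- Clear stats for one statement only; 0 means "any" for userid/dbid
SELECT pg_stat_statements_reset(0, 0, s.queryid)
FROM pg_stat_statements s
WHERE s.query LIKE 'SELECT * FROM orders%';  -- hypothetical filter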

Make sure you also enable the extension within your database.

CREATE EXTENSION pg_stat_statements;

After enabling it, you can start analyzing queries. For example, you can find the top 10 queries by total execution time using the query below:

SELECT calls,
       total_exec_time AS total_time_ms,
       mean_exec_time AS mean_time_ms,
       query
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;

Analysis tips:

  • Identify slow queries: High total_exec_time indicates candidates for indexing or rewriting.
  • Spot I/O issues: Elevated shared_blks_read suggests disk-heavy operations; aim for more shared_blks_hit via caching (see the example after this list).
  • Group by user or database: Join with pg_stat_database for per-DB insights.
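
To make the I/O tip concrete, a query like this ranks statements by disk reads and derives a per-statement cache hit ratio from the standard pg_stat_statements columns:

-- Statements doing the most disk reads, with their cache hit ratio
SELECT query,
       calls,
       shared_blks_hit,
       shared_blks_read,
       round(100.0 * shared_blks_hit /
             nullif(shared_blks_hit + shared_blks_read, 0), 2) AS hit_pct
FROM pg_stat_statements
ORDER BY shared_blks_read DESC
LIMIT 10;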

Using pg_stat_monitor

pg_stat_monitor offers a more granular view, capturing statistics on a per-query basis with real-time snapshots. It provides a more comprehensive picture, including query plan details and I/O information.

Primarily, DBAs and developers leverage this extension over pg_stat_statements because it offers more powerful features:

  • Query Plan: Offers detailed stats that let DBAs easily spot queries performing sequential scans when they should instead be using an index. With this feature, each SQL statement is accompanied by the actual plan constructed for its execution, making it easier to understand why a particular query is slower.
  • Time Interval Grouping (buckets): With time interval grouping, you can look back and see how query performance or load changed over time, not just in aggregate. Where pg_stat_statements keeps one ever-increasing set of counters, pg_stat_monitor computes stats for a configured number of time intervals (buckets). This allows for much better data accuracy, especially at high resolution or over unreliable networks. With this feature, you can determine that a query spiked 1,000 times between 11am and 3pm; in pg_stat_statements you would only know that the query ran 10,000 times in total.
  • Multi-Dimensional Grouping: Instead of a flat list, you get a richer, more granular view. While pg_stat_statements groups counters by userid, dbid, and queryid, pg_stat_monitor groups by additional dimensions for higher precision, letting you drill down into query performance. With single-dimensional grouping, all you can determine is that a query is slow; you can’t tell whether it actually runs quickly when executed by a particular user or from a particular client (such as your app server). With multi-dimensional grouping, you can answer the specific question: this query is fast from app1, but slow from app2 — why?
  • Capture Actual Parameters in the Queries: This lets you choose whether to see queries with placeholders or with the actual parameter data, controlled by the configuration parameter pg_stat_monitor.pgsm_normalized_query. Seeing real parameters simplifies debugging and analysis, since you can re-execute the exact same query.
  • Table Access Statistics for a Statement: This allows you to easily identify all queries that accessed a given table. This information is on par with what pg_stat_statements provides.
  • Histogram: A visual representation is very helpful for spotting issues. Note that the histogram is exclusively tied to the resp_calls field in the pg_stat_monitor view. With the help of the histogram function, you can view a timing/calling histogram for an SQL query. And yes, it even works in psql (see the sketch after this list).
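
As a sketch of that histogram feature, pg_stat_monitor ships a histogram() set-returning function; the usage below follows Percona’s documented example, and the queryid is a placeholder you’d take from the pg_stat_monitor view:

-- Render the latency histogram for one query in bucket 0
SELECT * FROM histogram(0, 'F44CD1B4B33A47AF')
    AS t(range TEXT, freq INT, bar TEXT);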

As with pg_stat_statements, you also need to create the extension in your database:

CREATE EXTENSION pg_stat_monitor;

Once enabled, you can proceed with SQL-based analysis. For example, to find the top 10 queries with the highest blocks_fetched:

SELECT query, calls, plan_time, exec_time, blocks_fetched, blocks_hit, temp_bytes
FROM pg_stat_monitor_details
ORDER BY blocks_fetched DESC
LIMIT 10;

For example, to extract and analyze the histogram data:

SELECT 
    query,
    calls,
    total_plan_time,
    mean_plan_time,
    resp_calls,
    array_length(resp_calls, 1) AS bucket_count
FROM pg_stat_monitor
WHERE calls > 10
ORDER BY total_plan_time DESC
LIMIT 5;

This will give you sample output like the following:

-[ RECORD 1 ]---+------------------------------------------------------
query           | SELECT extname, extversion FROM pg_extension
calls           | 12
total_plan_time | 0
mean_plan_time  | 0
resp_calls      | {12,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0}
bucket_count    | 22
-[ RECORD 2 ]---+------------------------------------------------------
query           | select extract(epoch from pg_postmaster_start_time())
calls           | 12
total_plan_time | 0
mean_plan_time  | 0
resp_calls      | {12,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0}
bucket_count    | 22
-[ RECORD 3 ]---+------------------------------------------------------
query           | SELECT pg_database_size($1)
calls           | 16
total_plan_time | 0
mean_plan_time  | 0
resp_calls      | {15,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0}
bucket_count    | 22
-[ RECORD 4 ]---+------------------------------------------------------
query           | show all
calls           | 12
total_plan_time | 0
mean_plan_time  | 0
resp_calls      | {12,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0}
bucket_count    | 22
-[ RECORD 5 ]---+------------------------------------------------------
query           | SELECT 1
calls           | 18
total_plan_time | 0
mean_plan_time  | 0
resp_calls      | {18,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0}
bucket_count    | 22

Below is an example query you can use to pinpoint problematic queries that cause performance issues:

SELECT
    datname,
    queryid,
    query,
    calls,
    total_exec_time,
    total_plan_time,
    shared_blks_hit,
    shared_blks_read,
    temp_blks_read,
    temp_blks_written
FROM
    pg_stat_monitor
ORDER BY
    total_exec_time DESC
LIMIT 20;

What to analyze from the available columns:

  • Slow Queries: total_exec_time remains your go-to metric. A high value here means the query is a performance hog.
  • I/O Bottlenecks: Look at shared_blks_read, temp_blks_read, and temp_blks_written.
    • High shared_blks_read: Indicates the query is reading a lot of data from disk instead of from the shared buffer cache. This often points to a lack of a proper index for the query.
    • High temp_blks_read and temp_blks_written: This is a clear sign that a query is creating large temporary files on disk, which typically happens during large sorts or hash joins that can’t fit in memory.

By examining these metrics, you can still gain valuable insights and identify areas for optimization, such as adding indexes or rewriting queries to avoid large sorts.
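
For instance, a quick way to surface the statements spilling to temporary files, using the same pg_stat_monitor columns as above:

-- Queries writing temp blocks: candidates for more work_mem
-- or for rewriting the underlying sort/join
SELECT query, calls, temp_blks_read, temp_blks_written
FROM pg_stat_monitor
WHERE temp_blks_written > 0
ORDER BY temp_blks_written DESC
LIMIT 10;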

pg_stat_statements and pg_stat_monitor performance tuning 

These extensions introduce minimal overhead (typically <5% CPU), but tuning the following parameters on high-throughput systems is helpful:

  • pg_stat_statements.track: Set to ‘top’ (default) to track only top-level statements; use ‘all’ for nested queries if needed, but it increases overhead.
  • pg_stat_statements.max: Default 5000; reduce to 1000-2000 if memory is constrained (each entry ~1KB).
  • pg_stat_monitor.pgsm_bucket_time: Default 60s; increase to 300s for longer aggregation periods, reducing write frequency.
  • pg_stat_monitor.pgsm_max: Caps the shared memory pg_stat_monitor can use; lower it if performance dips.
  • track_activity_query_size: Increase from 1024 to 4096 bytes if queries are truncated in views.

If overhead becomes problematic (e.g., via pg_stat_activity showing high CPU on tracking), you can temporarily disable:

  • Remove from shared_preload_libraries and restart.
  • Or set pg_stat_statements.track = 'none' dynamically: ALTER SYSTEM SET pg_stat_statements.track = 'none'; SELECT pg_reload_conf();
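
Spelled out as a psql session, the dynamic approach looks like this; the same pattern re-enables tracking afterwards:

-- Pause statement tracking without a restart
ALTER SYSTEM SET pg_stat_statements.track = 'none';
SELECT pg_reload_conf();
SHOW pg_stat_statements.track;   -- confirm it now reports: none

-- Re-enable once the overhead investigation is done
ALTER SYSTEM SET pg_stat_statements.track = 'top';
SELECT pg_reload_conf();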

Regularly vacuum/analyze your database and reset stats periodically to keep data fresh. For production, implement tools like pgBadger for visualized reports. By leveraging these extensions, you can proactively tune your PostgreSQL instance, ensuring efficient query performance. And remember, experiment in a test environment first!

Proactive maintenance for pg_stat_statements and pg_stat_monitor

Both pg_stat_statements and pg_stat_monitor require some proactive maintenance to ensure optimal performance and manage resource usage effectively. Resetting the statistics cache is one such maintenance task, but it’s not automatic and depends on specific use cases. Both extensions provide unique methods for proactive maintenance. We will outline these below to help you better understand each one.

Using pg_stat_statements

  • Resetting the statistics: As mentioned earlier, resetting can be done through
SELECT pg_stat_statements_reset();

This is useful for clearing all data after a major application deployment, a performance tuning session, or a specific test run. It ensures you’re only looking at new, relevant data. A common practice is to schedule a weekly or monthly reset for databases with high query churn.
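
If you’d rather automate the scheduled reset inside the database, one option is the pg_cron extension; this is a sketch assuming pg_cron is installed and preloaded, which is not covered above:

-- Assumes the pg_cron extension is available in this database
CREATE EXTENSION IF NOT EXISTS pg_cron;

-- Reset pg_stat_statements every Sunday at 03:00
SELECT cron.schedule(
    'weekly-pgss-reset',                -- job name
    '0 3 * * 0',                        -- standard cron syntax
    'SELECT pg_stat_statements_reset()'
);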

  • Filtering out unwanted queries: You can manage the size of pg_stat_statements by excluding certain queries from being tracked. The pg_stat_statements.track parameter controls what gets recorded.

pg_stat_statements.track = top: This is the default and tracks only top-level statements.

pg_stat_statements.track = all: This tracks all statements, including those executed inside functions. This can significantly increase the size of the view.

pg_stat_statements.track = none: This disables tracking entirely.

You can also set pg_stat_statements.track_utility to off to prevent utility commands (like VACUUM or CREATE TABLE) from being tracked, which often don’t provide useful performance insights.
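
For example, to turn off utility-command tracking on a running server (a plain ALTER SYSTEM plus a configuration reload):

ALTER SYSTEM SET pg_stat_statements.track_utility = off;
SELECT pg_reload_conf();
SHOW pg_stat_statements.track_utility;   -- should now report: off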

  • Monitoring and analyzing the view

Regularly querying pg_stat_statements is a proactive way to monitor its size and contents. You can identify queries that are running frequently or have high execution times.

You can use a query like the one below to find the queries with the highest total execution time or call counts.

SELECT query, calls, total_exec_time, mean_exec_time
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;

By analyzing this data, you can identify long-running queries that may need optimization, or you can find queries that are executed very frequently, which may indicate an opportunity for caching or other performance improvements. This monitoring process helps you decide when it’s time to reset the view or adjust your tracking parameters.

Combining these strategies — scheduled resets, careful configuration, and regular monitoring — forms a robust proactive maintenance plan for pg_stat_statements.

Pro-Tip: On servers where memory is tight, don’t overlook the pg_stat_statements.max setting. The default of 5000 is generous, but at ~1KB per entry, memory usage can add up quickly. Lowering this to a more conservative 1000-2000 is a common and effective optimization.

Using pg_stat_monitor

  • Resetting the Cache: Similar to pg_stat_statements, pg_stat_monitor supports resetting statistics via pg_stat_monitor_reset(). This is useful for similar reasons as above, such as after maintenance or to manage memory usage (see the sketch after this list).
  • Configure Buckets: pg_stat_monitor organizes data into time-based buckets (controlled by pgsm_bucket_time). Adjust this setting to balance granularity and storage needs.
  • Tune Histogram Settings: Settings like pgsm_histogram_buckets, pgsm_histogram_max, and pgsm_histogram_min control histogram granularity for query latency. Adjust these to optimize data collection without overwhelming resources.
  • Manage Shared Memory: The pgsm_max parameter sets the maximum shared memory for pg_stat_monitor. If memory is constrained, consider resetting statistics or reducing tracked statements to prevent overflow into swap space (controlled by pgsm_enable_overflow).
  • Combine with Monitoring Tools: For long-term storage and analysis, integrate pg_stat_monitor with time-series databases (e.g., ClickHouse) to store bucketed data and avoid data loss on resets.
  • Permissions: Only superusers or users with the pg_read_all_stats role can view sensitive data (e.g., query text, client IP). Other users can see statistics if the extension is installed in their database.
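
A minimal maintenance sketch combining the first two points (the bucket value is illustrative, and bucket-related settings may require a server restart to take effect):

-- Clear pg_stat_monitor's accumulated statistics
SELECT pg_stat_monitor_reset();

-- Widen buckets to 5 minutes
ALTER SYSTEM SET pg_stat_monitor.pgsm_bucket_time = 300;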

Reset frequency

pg_stat_statements: There’s no strict rule, but resets are often performed monthly, weekly, or daily, depending on the system’s needs. For production systems with time-series monitoring (e.g., pgwatch2 with Grafana), resets may be unnecessary if snapshots are stored externally.

pg_stat_monitor: Resets may align with bucket intervals or maintenance schedules. Since pg_stat_monitor uses buckets, resets can be more targeted to specific time periods.

N.B. Frequent resets can disrupt query planning by removing historical data, so use them judiciously during maintenance or when old statistics are no longer relevant.

Wrapping up

While pg_stat_statements and pg_stat_monitor are invaluable tools for database professionals, providing essential insights into query performance, they represent just one part of a complete database management solution. pg_stat_monitor, as a more advanced version of pg_stat_statements, offers enhanced query performance insights through features like time-based buckets, multi-dimensional grouping, and the ability to capture actual query parameters and execution plans. This makes it a crucial tool for DBAs, developers, and DevOps engineers, as it simplifies the process of identifying and debugging poorly performing queries and helps to optimize database performance.

However, a comprehensive platform like ClusterControl elevates database management to a new level. While pg_stat_monitor focuses on a single aspect, query performance within PostgreSQL, ClusterControl provides a unified, full-lifecycle database orchestration platform that automates and streamlines everything from deployment and security to backup, scaling, and high availability across multiple database technologies. 

Its broader capabilities, including automated failover, comprehensive monitoring and alerting, and robust backup and recovery features, make it a superior choice for managing complex, mission-critical environments. Essentially, while pg_stat_monitor is a powerful lens for examining query performance, ClusterControl is the complete toolkit for managing and maintaining the entire database ecosystem.

Get started in minutes: Install ClusterControl directly from this post and secure your PostgreSQL deployments with a free 30-day Enterprise trial.

Script Installation Instructions

The installer script is the simplest way to get ClusterControl up and running. Run it on your chosen host, and it will take care of installing all required packages and dependencies.

Offline environments are supported as well. See the Offline Installation guide for more details.

On the ClusterControl server, run the following commands:

wget https://severalnines.com/downloads/cmon/install-cc
chmod +x install-cc

With your install script ready, run the command below. Replace S9S_CMON_PASSWORD and S9S_ROOT_PASSWORD placeholders with your choice password, or remove the environment variables from the command to interactively set the passwords. If you have multiple network interface cards, assign one IP address for the HOST variable in the command using HOST=<ip_address>.

S9S_CMON_PASSWORD=<your_password> S9S_ROOT_PASSWORD=<your_password> HOST=<ip_address> ./install-cc # as root or sudo user

After the installation is complete, open a web browser, navigate to https://<ClusterControl_host>/, and create the first admin user by entering a username (note that “admin” is reserved) and a password on the welcome page. Once you’re in, you can deploy a new database cluster or import an existing one.

The installer script supports a range of environment variables for advanced setup. You can define them using export or by prefixing the install command.

See the list of supported variables and example use cases to tailor your installation.

Other Installation Options

Helm Chart

Deploy ClusterControl on Kubernetes using our official Helm chart.

Ansible Role

Automate installation and configuration using our Ansible playbooks.

Puppet Module

Manage your ClusterControl deployment with the Puppet module.

ClusterControl on Marketplaces

Prefer to launch ClusterControl directly from the cloud? It’s available on the major cloud marketplaces.
