Download instructions

Choose your preferred installation method:

Want to see additional installation methods? Go here!

Thank you for choosing ClusterControl

Below, you’ll find step-by-step download instructions to guide you through the installation process. For your convenience, we recommend saving the instructions for your selected installation method. We’ll also email you a copy. Please keep them handy and secure for future reference.

Watch Now: Your ClusterControl Installation Walkthrough

Script Installation Instructions for ClusterControl

Welcome to ClusterControl! Before you get started, make sure you have a proper environment, such as a supported Linux OS, and that your ClusterControl host (node) meets the minimum system requirements: Arch: x86_64 only, RAM: > 2 GB, CPU: > 2 cores, Disk space: > 40 GB.

The full list of requirements can be found here.

Step 1: Download and run the ClusterControl install script

The script below installs all the components that make up ClusterControl – including the ClusterControl Controller (cmon), the GUI, and so on – along with all the required dependencies. The script is personalized with your registration details.

In a directory of your choice, download your unique ClusterControl install script with the following command:

wget -O install-cc

After the download is complete, make the script executable with the following command:

chmod +x install-cc

Running the install script

With your install script ready, run the command below. Replace the S9S_CMON_PASSWORD and S9S_ROOT_PASSWORD placeholders with passwords of your choice, or remove the environment variables from the command to set the passwords interactively.

If you have multiple network interface cards, assign one IP address to the HOST variable in the command using HOST=<ip_address>.

S9S_CMON_PASSWORD=<your_password> S9S_ROOT_PASSWORD=<your_password> HOST=<ip_address> ./install-cc # as root or sudo user
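If you are not sure which IP address to assign to the HOST variable, you can first list the host's IPv4 addresses. This sketch assumes the iproute2 tools found on most modern Linux distributions:

```shell
# List interface names and their IPv4 addresses (iproute2).
ip -4 -o addr show | awk '{print $2, $4}'
```

Pick the address on the interface your database nodes can reach.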

ClusterControl relies on a MySQL server as a data repository for the clusters it manages and an Apache server for the user interface. The installation script always installs an Apache server on the host. An existing MySQL server can be used, or a new MySQL server is installed and configured for the minimum system requirements. If you have a larger server, make the necessary changes to the my.cnf file and restart the MySQL server after the installation.
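For example, on a larger host you might raise the InnoDB buffer pool through a drop-in file. The file path, value, and service name below are assumptions – adapt them to your distribution and hardware:

```shell
# Hypothetical my.cnf drop-in for a host with 16 GB RAM; adjust to your hardware.
cat <<'EOF' | sudo tee /etc/my.cnf.d/clustercontrol-tuning.cnf
[mysqld]
innodb_buffer_pool_size = 8G
EOF
sudo systemctl restart mysqld   # the service may be named 'mysql' on some distros
```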

If you want to explicitly set the Apache web server's ServerName, set S9S_WEB_SERVER_HOST=<hostname>.

S9S_WEB_SERVER_HOST=cc.mylocaldomain.local ./install-cc

If the installer script fails to start the MySQL server on the ClusterControl host, refer to this troubleshooting guide.
Need additional help? Contact our support team.

Step 2: Create your default Admin user

At the end of the installation, you will see a URL – https://[ClusterControl IP/hostname] – to access your ClusterControl GUI, as in the image below.

Open your web browser to the URL, accept the “self-signed TLS/SSL certificate” and then create your default Admin user by entering a username and password.

Step 3: Set up passwordless SSH

If you are using cloud infrastructure, the nodes you create may already be configured with a key pair (private key), so you may skip this step.

You only need to upload the key pair to your ClusterControl host – for example, to /root/.ssh/id_rsa/clustercontrol.pem – so the CMON controller can use it to connect to the other nodes.

To set up passwordless SSH to all target nodes (ClusterControl and all database nodes), run the following commands on your ClusterControl host:

If you use a non-root user, it needs to be configured in the sudoers file on all nodes. See details here.
ssh-keygen -t rsa # press enter on all prompts
ssh-copy-id -i ~/.ssh/id_rsa [ClusterControl IP address]
ssh-copy-id -i ~/.ssh/id_rsa [Database nodes IP address] # repeat this to all target database nodes

The ssh-copy-id command simply copies the public key from the source server and appends it to the destination server's authorized-key list, by default ~/.ssh/authorized_keys of the authenticated SSH user. If password authentication is disabled, then a manual copy is required. On the ClusterControl node, copy the content of the SSH public key located under ~/.ssh/ and paste it into ~/.ssh/authorized_keys on all managed nodes (including the ClusterControl server).
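The manual copy described above can be done in a single command while password logins are still allowed. This is a sketch that assumes the key pair was generated at the default ~/.ssh/id_rsa path and that <node_ip> stands for each target node:

```shell
# Append the local public key to the remote node's authorized_keys
# (assumes the default ~/.ssh/id_rsa key pair generated earlier).
cat ~/.ssh/id_rsa.pub | ssh root@<node_ip> \
  'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys'
```

Repeat for every managed node, including the ClusterControl server itself.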

Docker Image Installation Instructions for ClusterControl

The Docker Image comes with ClusterControl installed and configured with all of its components, so you can immediately use it to manage and monitor your existing databases. The Docker Image for ClusterControl is a convenient way to quickly get up and running and it’s 100% reproducible. All you need to do is pull the image from the Docker Hub and then launch the software.

Image Description

To pull the ClusterControl image, simply run:

docker pull severalnines/clustercontrol

The image is based on CentOS 7 with Apache 2.4 and consists of the ClusterControl packages and prerequisite components:

  • ClusterControl controller, UI, cloud, notification and web ssh packages installed via Severalnines repository.
  • MySQL, CMON database, cmon user grant and dcps database for ClusterControl UI.
  • Apache, file and directory permission for ClusterControl UI with SSL installed.
  • SSH key for ClusterControl usage.

Run Container

To run a ClusterControl container, the simplest command would be:

docker run -d severalnines/clustercontrol

However, for production use, users are advised to run with sticky IP address/hostname and persistent volumes to survive across restarts, upgrades and rescheduling, as shown below:

# Create a Docker network
docker network create --subnet=<subnet_cidr> db-cluster
# Start the container
docker run -d --name clustercontrol \
  --network db-cluster \
  -h clustercontrol \
  -p 5000:80 \
  -p 5001:443 \
  -v /storage/clustercontrol/cmon.d:/etc/cmon.d \
  -v /storage/clustercontrol/datadir:/var/lib/mysql \
  -v /storage/clustercontrol/sshkey:/root/.ssh \
  -v /storage/clustercontrol/cmonlib:/var/lib/cmon \
  -v /storage/clustercontrol/backups:/root/backups \
  severalnines/clustercontrol

The recommended persistent volumes are:

  • /etc/cmon.d – ClusterControl configuration files.
  • /var/lib/mysql – MySQL datadir to host the cmon and dcps databases.
  • /root/.ssh – SSH private and public keys.
  • /var/lib/cmon – ClusterControl internal files.
  • /root/backups – Default backup directory, only if ClusterControl is the backup destination.

After a moment, you should be able to access the ClusterControl web UI at {host’s IP address}:{host’s port} – for example, port 5000 for HTTP or 5001 for HTTPS if you used the port mappings above.
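To quickly verify that the published ports respond, you can probe them from another host. The IP address 192.168.0.10 below is hypothetical; -k skips validation of the self-signed certificate:

```shell
# Check the HTTPS and HTTP endpoints mapped by the docker run command above.
curl -k -I https://192.168.0.10:5001
curl -I http://192.168.0.10:5000
```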



We have built a complementary image called centos-ssh to simplify database deployment with ClusterControl. It supports automatic deployment (Galera Cluster) and can also be used as a base image for database containers (all cluster types are supported). Details here.

Environment Variables

  • CMON_PASSWORD={string}
    • MySQL password for the `cmon` user. Defaults to `cmon`. Using a Docker secret is recommended.
    • Example: CMON_PASSWORD=cmonP4s5
  • MYSQL_ROOT_PASSWORD={string}
    • MySQL root password for the ClusterControl container. Defaults to `password`. Using a Docker secret is recommended.
    • Example: MYSQL_ROOT_PASSWORD=MyPassW0rd

GitHub Instructions

Service Management

Starting from version 1.4.2, ClusterControl requires a number of processes to be running:

  • sshd – SSH daemon. The main communication channel.
  • mysqld – MySQL backend runs on Percona Server 5.6.
  • httpd – Web server running on Apache 2.4.
  • cmon – ClusterControl backend daemon. The brain of ClusterControl. It depends on mysqld and sshd.
  • cmon-ssh – ClusterControl web-based SSH daemon, which depends on cmon and httpd.
  • cmon-events – ClusterControl notifications daemon, which depends on cmon and httpd.
  • cmon-cloud – ClusterControl cloud integration daemon, which depends on cmon and httpd.
  • cc-auto-deployment – ClusterControl automatic deployment script, running as a background process, which depends on cmon.

These processes are controlled by Supervisord, a process control system. To manage a process, use the supervisorctl client as shown in the following example:

[root@physical-host] docker exec -it clustercontrol /bin/bash
[root@clustercontrol /]# supervisorctl
cc-auto-deployment RUNNING pid 570, uptime 2 days, 19:11:54
cmon RUNNING pid 573, uptime 2 days, 19:11:54
cmon-events RUNNING pid 576, uptime 2 days, 19:11:54
cmon-ssh RUNNING pid 575, uptime 2 days, 19:11:54
httpd RUNNING pid 571, uptime 2 days, 19:11:54
mysqld RUNNING pid 577, uptime 2 days, 19:11:54
sshd RUNNING pid 572, uptime 2 days, 19:11:54
supervisor> restart cmon
cmon: stopped
cmon: started
supervisor> status cmon
cmon RUNNING pid 2838, uptime 0:11:12
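The same operations can also be run non-interactively from the Docker host, without opening a shell inside the container:

```shell
# Restart and check the cmon process from outside the container.
docker exec clustercontrol supervisorctl restart cmon
docker exec clustercontrol supervisorctl status cmon
```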




Please report bugs, improvements, or suggestions via our support channel.

If you have any questions, you are welcome to get in touch via our contact us page or email us at [email protected].

Desktop GUI Installation Instructions for ClusterControl

The ClusterControl GUI Installer is a cross-platform desktop web application for Linux, macOS, and Windows. It uses the Electron framework to create a native ClusterControl Installer application. The installer provides an easy-to-use graphical user interface for the bash script. Download the installer to your desktop and launch the application to get started.

Download the installer package for your desktop operating system of choice.

Linux distributions: Ubuntu 18.04|20.04, Debian 10|11, Red Hat 8, Rocky Linux 8, AlmaLinux 8

# download
# unpack with
tar -zxvf clustercontrol-installer-linux-x64-v2.0.2.tar.gz

macOS 11+

# download
# Double-click the .zip file. The unzipped item appears in the same folder as the .zip file

Windows 10+

# download
# Open File Explorer and find the zipped folder.
# To unzip the entire folder, right-click to select Extract All, and then follow the instructions.
# To unzip a single file or folder, double-click the zipped folder to open it. Then, drag or copy the item from the zipped folder to a new location

Start GUI Installer

Open the ClusterControl Installer by double-clicking the application.

ClusterControl will be installed on a host over an SSH connection from the Installer application. This requires that you have SSH credentials to connect from the Installer host to the server where ClusterControl will be installed. The ClusterControl server requires internet connectivity, and if you use a non-root user, that user must be in the sudoers file on the host. Check the full requirements here before you start.

SSH Configuration

You can use password authentication; however, we recommend a password-less connection with an SSH key.

  • ClusterControl Server
    This is the server hostname / IP address where ClusterControl will be installed.
  • SSH Port
    Default port 22.
  • SSH User
    Use the ‘root’ user or a user that has sudo privileges on the server.
  • SSH Password
    Enter a password if you have password-based login to the server. We recommend password-less login with an SSH key, in which case leave this field empty.
  • SSH Private Key
    The SSH key to use with password-less login to the server.

If you have issues connecting to the ClusterControl server then enable the ‘Debug Mode’ checkbox at the end of the form to capture more detailed logs.

ClusterControl Configuration

  • Email Address
    Enter the email address of the Admin user which will be created at first login.
  • MySQL CMON Password
    ClusterControl requires a database user ('cmon' by default); this sets the password for the 'cmon' database user.
  • MySQL Root Password
    The password for the ‘root’ user of the MySQL database server.
  • MySQL Port
    Default 3306, the local MySQL server port.
  • MySQL InnoDB Buffer Pool Size (MB)
    This setting depends on the ClusterControl server’s available memory. 256-512 MB is a good starting point for servers that have 2-4 GB of memory. You can later change this to 60-80% of the available memory on the server if you experience performance issues with the web application.
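As a rough sketch of the 60-80% guideline, here is the arithmetic for a hypothetical 4 GB server using the 70% midpoint:

```shell
# Compute 70% of total memory (in MB) as a buffer pool candidate.
total_mb=4096                    # hypothetical 4 GB server
bp_mb=$(( total_mb * 70 / 100 ))
echo "${bp_mb} MB"               # → 2867 MB
```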

Log Installation Information

By default, we capture and send some data (such as OS, memory, and install success) during the installation to help with troubleshooting and to improve our installation script. None of the collected data identifies you personally. You can opt out by disabling ‘Send the installation diagnostic information’.

ClusterControl v2 – Next Generation Web Application

The next generation of the ClusterControl web application is in active development, and you can opt to install it alongside the current ClusterControl (v1) web application. Opt out by disabling ‘Do you want to install CCv2’.

Next, click on ‘Install’ after you have filled in the SSH and ClusterControl configuration settings to start the installation process.

After the installation script has successfully completed, click on the ClusterControl (v1) web application’s URL – to finish setting up the installation by creating the first Admin user. You can also use the HTTPS endpoint; however, since we generate a temporary self-signed certificate by default, you will need to make a certificate exception to continue.

Note: For now, you cannot start by opening the ClusterControl (v2) web application for this initial Admin user creation. It needs to be done from the ClusterControl (v1) web application until it’s supported with upcoming releases.

Create the initial Admin User

Open the ClusterControl (v1) web application, enter a new password for your Admin user and click ‘Register’ to get started with ClusterControl!

Please check the requirements for password-less setup before you start to deploy databases – Passwordless SSH.

ClusterControl on Kubernetes Installation with Helm

The ClusterControl Helm chart is designed to provide everything you need to get ClusterControl running in a vanilla Kubernetes cluster. This includes dependencies like:

  • NGINX Ingress Controller
  • MySQL Operator and InnoDB Cluster
  • VictoriaMetrics

If you already have Oracle MySQL Operator or NGINX Ingress controller installed in your cluster, or wish to use your own VictoriaMetrics or other Prometheus compatible monitoring system, refer to the dependencies documentation.


Add the Severalnines Helm chart repository

The following commands add the Severalnines Helm chart repository to your Helm installation:

helm repo add severalnines
helm repo update

Create a namespace

You must create and run the ClusterControl installation in a custom namespace, as the MySQL Operator cannot be installed in the “default” namespace. Run the following commands to create a namespace called “clustercontrol” and make it the default for the installation:

kubectl create ns clustercontrol
kubectl config set-context --current --namespace=clustercontrol
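You can confirm the active namespace afterwards with a jsonpath filter on the current context:

```shell
# Print the namespace of the current context; it should output "clustercontrol".
kubectl config view --minify --output 'jsonpath={..namespace}'
```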

Installing ClusterControl with Helm

To install ClusterControl with Helm, it is highly recommended that you use a fully qualified domain name (FQDN) for external access instead of the default localhost. To install ClusterControl with your FQDN override, run the following command:

helm install clustercontrol severalnines/clustercontrol --debug --set fqdn=

After the installation is complete and all the pods are running, point your domain to the external IP address of the ingress, shown by running kubectl get ingress. See the example below.

kubectl get ingress

After that, you should be able to access your ClusterControl dashboard via the domain.

Create your default admin user

Open your web browser to the domain, accept the “self-signed TLS/SSL certificate” and then create your default admin user by entering a username, email, password, and other registration info.

Providing your own SSH keys for ClusterControl to use

The ClusterControl Helm chart provides an example SSH key for you to use. However, you should provide your own SSH keys for ClusterControl to use when connecting to your target database servers. The corresponding public keys should already be configured in the target servers’ ~/.ssh/authorized_keys file.

Create Kubernetes secrets with your SSH keys

kubectl create secret generic my-ssh-keys --from-file=key1=/path/to/my/.ssh/id_rsa

key1 is the filename of your SSH key in ClusterControl – it will be created under /root/.ssh-keys-user.

NOTE: You can use multiple --from-file arguments – be sure to provide unique key names: key1, key2, key3.
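For instance, to bundle two keys into one secret (the file paths and key names here are hypothetical):

```shell
kubectl create secret generic my-ssh-keys \
  --from-file=key1=/path/to/my/.ssh/id_rsa \
  --from-file=key2=/path/to/my/.ssh/id_ed25519
```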

Upgrade ClusterControl

Provide the cmon.sshKeysSecretName value with the name of the secret you created:

helm upgrade --install clustercontrol severalnines/clustercontrol --set cmon.sshKeysSecretName=my-ssh-keys

Next Steps

Deploy a new server/cluster or import an existing database server/cluster into ClusterControl via:

Start managing and monitoring your database instances:


ClusterControl Installation Resources

Need Additional Help? Contact Our Support Team