ClusterControl Documentation

5. User Guide

This documentation provides a detailed user guide for ClusterControl UI.

5.1. Dashboard

This page is the landing page once you are logged in. It provides a summary of database clusters monitored by ClusterControl.


Top Menu

ClusterControl’s top menu.

Sidebar

Left-side navigation provides quick access to the ClusterControl administration menu. See Sidebar.

Cluster List

List of database clusters managed by ClusterControl with summarized status. Database clusters deployed by (or imported into) ClusterControl are listed on this page. See Database Cluster List.

Cluster Actions

Provides shortcuts to the main cluster functionality. Every supported database cluster has its own set of menus:

5.1.1. Activity

Clicking on it expands the activity tab, which consists of Alarms, Jobs and Logs. Click once more to collapse the content. Hovering over the menu icon shows a counter summary for each component.

5.1.1.1. Alarms

Shows an aggregated view of alarms raised for all clusters monitored by ClusterControl. Each alarm entry has a header, details of the issue, severity level, category, corresponding cluster name, corresponding host and timestamp. All alarms listed here are also accessible directly under the individual cluster's main menu at Alarms > Alarms.

Click on an alarm entry to see its full details and recommendation. Furthermore, you can click on “Full Alarm Details” to see the full information and recommendation, and to send the alarm as an email to the recipients configured under Settings > CMON Settings > Email Notification Settings. Click “Ignore Alarm” to silence the respective alarm so it no longer appears in the list.

5.1.1.2. Jobs

Shows an aggregated view of jobs that have been initiated and performed by ClusterControl across clusters (e.g., deploying a new cluster, adding an existing cluster, cloning, creating a backup, etc.). Each job entry has a job status, a cluster name, the user who started the job and a timestamp. All jobs listed here are also accessible directly under the individual cluster's main menu at Logs > Jobs.

Click on a job entry to see its most recent job messages. Furthermore, you can click on Full Job Details to see the full job specification and messages. In the Full Job Details popup, you can see the full command sent to the controller service for that particular job by clicking on the Expand Job Specs button. Below it are the full job messages returned by the controller service in descending order (newest first). The Copy to clipboard button copies the job messages to the clipboard.

Note

Starting from v1.6, ClusterControl has better support for parallelism, allowing you to perform multiple deployments simultaneously.

The job status:

Job status Description
FINISHED The job completed successfully.
FAILED The job was executed but failed.
RUNNING The job has started and is in progress.
ABORTED The job was started but was terminated.
DEFINED The job is defined but has yet to start.

5.1.1.3. Logs

Shows an aggregated view of ClusterControl logs that require the user's attention across clusters (logs with severity WARNING and ERROR). Each log entry has a message subject, severity level, component, the corresponding cluster name and a timestamp. All logs listed here are also accessible directly under the individual cluster at Logs > CMON Logs.

5.1.2. Global Settings

Provides an interface to register clusters, repositories and subscriptions inside ClusterControl.

5.1.2.1. Repositories

Manages the provider's repositories for database servers and clusters. You can choose from three types of repository when deploying a database server/cluster using ClusterControl:

  1. Use Vendor Repositories
    • Provision software by setting up and using the database vendor’s preferred software repository. ClusterControl will always install the latest version of what is provided by database vendor repository.
  2. Do Not Setup Vendor Repositories
    • Provision software by using the pre-existing software repository already set up on the nodes. The user has to set up the software repository manually on each database node, and ClusterControl will use this repository for deployment. This is useful if the database nodes are running without an internet connection.
  3. Use Mirrored Repositories (Create new repository)
    • Create and mirror the current database vendor’s repository and then deploy using the local mirrored repository.
    • This allows you to “freeze” the current versions of the software packages used to provision a database cluster for a specific vendor and you can later use that mirrored repository to provision the same set of versions when adding more nodes or deploying other clusters.
    • ClusterControl sets up the mirrored repository under {wwwroot}/cmon-repos/, which is accessible via HTTP at http://ClusterControl_host/cmon-repos/.

Only Local Mirrored Repositories will be listed and manageable here.

  • Remove Repositories
    • Remove the selected repository.
  • Filter by cluster type
    • Filter the repository list by cluster type.

For reference purposes, the following is an example of a yum repository definition when a Local Mirrored Repository is configured on the database nodes:

$ cat /etc/yum.repos.d/clustercontrol-percona-5.6-yum-el7.repo
[percona-5.6-yum-el7]
name = percona-5.6-yum-el7
baseurl = http://10.0.0.10/cmon-repos/percona-5.6-yum-el7
enabled = 1
gpgcheck = 0
gpgkey = http://10.0.0.10/cmon-repos/percona-5.6-yum-el7/localrepo-gpg-pubkey.asc

5.1.2.2. Cluster Registrations

Registers a database cluster managed by the ClusterControl Controller (CMON) so it can be viewed by the ClusterControl UI. Each database cluster can be registered with the UI through this interface, or you may skip this and use the ClusterControl CLI instead. By default, all clusters deployed by or imported into ClusterControl through the web interface are automatically registered with the UI. This effectively establishes the communication between the UI and the controller.
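
The clusters known to the controller can also be inspected from the command line with the s9s client. This is a sketch; the exact flags may vary between s9s versions:

```shell
# List all clusters known to the CMON controller, in long format
# (assumes the s9s CLI is installed and configured against this controller)
s9s cluster --list --long
```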

5.1.2.3. Subscriptions

Attention

ClusterControl introduced a new license format in v1.7.1 (the new license key format contains only a long encrypted string). If you have the older format, contact your account manager or email our sales department at sales@severalnines.com to get a new license.

For users with a valid subscription (Advanced and Enterprise), enter your license key here to unlock additional features based on the subscription. The license string contains information about the license type, company/affiliation, email, expiration date and total number of licensed nodes.

The following example is the license information that one would get from us:

Email: test@severalnines.com
Company: Severalnines AB
Expiration Date: 2019-04-23
Licensed Nodes: 1
License Key: ewogICAgImNvbXBhbnkiOiAiU2V2ZXJhbG5pbmPkIEFCIiwKICAgICJlbWFpbF9hZGRyZXNzIjog
InRlc3RAc2VRZXJhbG5pbmVzLmNvbSIsCiAgICAiZXhwaXJhdGlvbl9kYXRlIjogIjIwMTktMDQt
MjNUMDA6MDA6MDAuMDAxWiIsCiAgICAibGljZW5zZWRfbm9kZXMiOiAtMSwKICAgICJ0eXBlIjog
IkVudGVycHJpc2UiCn0Ke3M5cy1zaWduYXR1cmUtc2VwYXJhdG9yfQp0RUew5OZKV8SqmmwiQEzT
P+qTNnmCphirIVm7MriA/aCdlJYQcr1NJr4nvTNcSmgu4uFVf3Ufv4olHr4wrBq0/Js9Rm8bJWZo
BO8svHzQhCmIVEWcTYub362krjRyREnOGXaqWwUnvkZ0uUCT+WDaM1P9qn/HawoNd0e8E0+7WiZK
CpwjH+ESqSEppeu/Ewzf3p0C0e8WZbwHtZ9UmX2qJNQq9NDlByrO8FtbVjOOL4zTbc8jV0W2DWzY
1swzOgeyk+7N2eGVRWfdUSzudQSXkT3LA4cdV2HAsU5QLnmKxSCgg+jq+RQJiPwdPXEr3gzjzJzV
+qhmOZ5tTN+WABPy9l3kpztlCbkfzO84/4lM7Z3c4rQ8snMTu6RvD2M+oh/4lhvR8M9RrQQcl8JF
RX2Ak1ZAKxAXkJ97Z5U7nIzuyUGuMTCXdKGEtQkBXzpIcYFvXDeWu0MUks+EULpqG+OFnl+rSZa0
nNTSW3mR/f9B+4e2mK4y2OpJhh4rWPXR1DLpLVLk/2p0o64aEizA+IPe0TP+ox7bFzEfAXirVWfC
/Ol7m1k6arRbl8PSV1DRRcefM+UsABa6jypoiit+JXNPOajdjY1WBgEekCn/jeXBBoPM2k26274u
br0BuHULLkxGSpC8I2/nW6s84E653FO1Kpbvyx+2SKJxwUxLiuEZ2g==

Only paste the license key string (the part after “License Key: ”) into the license text field. Once applied, restart the CMON service to load the new license information:

$ systemctl restart cmon # systemd
$ service cmon restart # SysVinit

When the license expires, ClusterControl defaults back to the Community Edition. For features comparison, please refer to ClusterControl product page.

If you would like to force the existing enterprise edition back to the community edition (commonly to test out and compare different editions during a trial), you can truncate the license table manually. On the ClusterControl server, run:

$ mysql -uroot -p cmon
mysql> TRUNCATE TABLE cmon.license;
$ systemctl restart cmon

Warning

Once a trial license is truncated and cmon is restarted, there is no way to re-activate the trial license; a trial license cannot be applied more than once. Only a working enterprise license will be accepted afterwards.

5.1.2.4. Configure Mail Server

Configures how email notifications are sent out. ClusterControl supports two options for sending email notifications: using local mail commands via a local MTA (Sendmail/Postfix/Exim), or using an external SMTP server. Make sure the local MTA is installed, and verify it using the Test Email button.

5.1.2.4.2. Use Sendmail
  • Use sendmail
    • Use this option to use sendmail to send notifications. See Installing Sendmail if you haven’t installed Sendmail. If you want to use Postfix, see Using Postfix.
  • Reply-to/From
    • Specify the sender of the email. This will appear in the ‘From’ field of the mail header.
5.1.2.4.2.1. Installing Sendmail

On ClusterControl server, install the following packages:

$ apt-get install sendmail mailutils #Debian/Ubuntu
$ yum install sendmail mailx #RHEL/CentOS

Start the sendmail service:

$ systemctl start sendmail #systemd
$ service sendmail start #sysvinit

Verify if it works:

$ echo "test message" | mail -s "test subject" myemail@example.com

Replace myemail@example.com with your email address.

5.1.2.4.2.2. Using Postfix

Many Linux distributions come with Sendmail as the default MTA. To replace Sendmail with another MTA, e.g., Postfix, you just need to uninstall Sendmail, install Postfix and start the service. The following example shows the commands that need to be executed on the ClusterControl node as the root user on RHEL:

$ service sendmail stop
$ yum remove sendmail -y
$ yum install postfix mailx cronie -y
$ chkconfig postfix on
$ service postfix start

5.1.2.5. Runtime Configurations

A shortcut to the ClusterControl Controller runtime configurations per cluster. The Runtime Configurations page shows the active ClusterControl Controller (CMON) runtime configuration parameters and displays the versions of the ClusterControl Controller and ClusterControl UI packages. All parameters listed are loaded directly from the cmon.cmon_configuration table, grouped by cluster ID.

Clicking on any entry in the list redirects you to the Runtime Configurations page for that particular cluster.

5.1.3. Database Cluster List

Each row represents the summarized status of a database cluster:

Field Description
Cluster Name The cluster name, configured under ClusterControl > Settings > CMON Settings > General Settings > Name.
ID The cluster identifier number.
Version Database server major version.
Database Vendor Database vendor icon.
Cluster Type

The database cluster type:

  • MYSQL_SERVER - Standalone MySQL server.
  • REPLICATION - MySQL/MariaDB Replication.
  • GALERA - MySQL Galera Cluster, Percona XtraDB Cluster, MariaDB Galera Cluster.
  • GROUP_REPLICATION - MySQL Group Replication.
  • MYSQLCLUSTER - MySQL Cluster (NDB).
  • MONGODB - MongoDB ReplicaSet, MongoDB Sharded Cluster, MongoDB Replicated Sharded Cluster.
  • POSTGRESQL - PostgreSQL Standalone or Replication.
Cluster Status

The cluster status:

  • ACTIVE (green) - The cluster is up and running. All cluster nodes are running normally.
  • DEGRADED (yellow) - The full set of nodes in the cluster is not available. One or more nodes are down or unreachable.
  • FAILURE (red) - The cluster is down. Most likely all or most of the nodes are down or unreachable, so the cluster fails to operate as expected.
Auto Recovery

The auto recovery status of Galera Cluster:

  • Cluster - If set to ON, ClusterControl will perform automatic recovery if it detects a cluster failure.
  • Node - If set to ON, ClusterControl will perform automatic recovery if it detects a node failure.
Node Type and Status See table on node status indicators below.

Node status indicator:

Indicator Description
Green (tick) OK: Indicates the node is working fine.
Yellow (exclamation) WARNING: Indicates the node is degraded and not fully performing as expected.
Red (wrench) MAINTENANCE: Indicates that maintenance mode is on for this node.
Dark red (cross) PROBLEMATIC: Indicates the node is down or unreachable.

5.1.4. Deploy Database Cluster

Opens a step-by-step modal dialog to deploy a new database cluster. The following database cluster types are supported:

  • MySQL Replication

  • MySQL Galera
    • Percona XtraDB Cluster
    • MariaDB Galera Cluster
  • MySQL Cluster (NDB)

  • TimeScaleDB (standalone or streaming replication)

  • PostgreSQL (standalone or streaming replication)

  • MongoDB ReplicaSet

  • MongoDB Sharded Cluster

There are prerequisites that need to be fulfilled prior to the deployment:

  • Passwordless SSH is configured from ClusterControl node to all database nodes. See Passwordless SSH.
  • Verify that sudo is working properly if you are using a non-root user. See Operating System User.

ClusterControl will trigger a deployment job and the progress can be monitored under ClusterControl > Activity > Jobs.
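
The first prerequisite can be sketched as follows on the ClusterControl node. The address 10.0.0.11 and the root user are placeholders; repeat the copy step for every database node:

```shell
# Generate a key pair on the ClusterControl node (skip if one already exists)
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# Copy the public key to a database node (placeholder address)
ssh-copy-id root@10.0.0.11

# Verify that passwordless login works
ssh root@10.0.0.11 "echo OK"
```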

5.1.4.1. MySQL Replication

Deploys a new MySQL Replication setup or a standalone MySQL server. The database cluster will be automatically added into ClusterControl once deployed. A minimum of two nodes is required for MySQL replication. If only one MySQL IP address or hostname is defined, ClusterControl will deploy it as a standalone MySQL server with binary logging enabled.

By default, ClusterControl deploys MySQL replication with the following configurations:

  • GTID with log_slave_updates enabled (MySQL and Percona only).
  • Start all database nodes with read_only=ON and super_read_only=ON (if supported). The chosen master will be promoted by disabling read-only dynamically.
  • PERFORMANCE_SCHEMA disabled.
  • ClusterControl will create and grant the necessary privileges for two MySQL users - cmon for monitoring and management, and backupuser for backup and restore purposes.
  • Generated account credentials are stored inside /etc/mysql/secrets-backup.cnf.
  • ClusterControl will configure semi-synchronous replication.

If you would like to customize the above configurations, modify the base template file to suit your needs before proceeding with the deployment. See Base Template Files for details.

Starting from version 1.4.0, it's possible to set up master-master replication from scratch under the ‘Define Topology’ tab. You can add more slaves after the deployment completes.

Caution

ClusterControl sets read_only=1 on all slaves, but a privileged user (with the SUPER privilege) can still write to a slave (except on MySQL versions that support super_read_only).
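
To see how a given slave is configured in this respect, both variables can be checked directly (a quick sanity check; the credentials are placeholders):

```shell
# On a slave node: 1 means the flag is enabled
mysql -uroot -p -e "SELECT @@read_only;"

# Only on MySQL/Percona 5.7 and later; the variable does not exist on MariaDB
mysql -uroot -p -e "SELECT @@super_read_only;"
```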

5.1.4.1.1. 1) General & SSH Settings
  • SSH User
    • Specify root if you have root credentials.
    • If you use ‘sudo’ to execute system commands, specify the name that you wish to use here. The user must exist on all nodes. See Operating System User.
  • SSH Key Path
    • Specify the full path of SSH key (the key must exist in ClusterControl node) that will be used by SSH User to perform passwordless SSH. See Passwordless SSH.
  • Sudo Password
    • If you use sudo with a password, specify it here. Ignore this if the SSH User is root or the sudo user does not require a password.
  • SSH Port
    • Specify the SSH port for the target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
  • Cluster Name
    • Specify a name for the cluster.
  • Install Software
    • Check the box if you use clean and minimal VMs. Existing MySQL dependencies will be removed. New packages will be installed and existing packages will be uninstalled when provisioning the node with the required software.
    • If unchecked, existing packages will not be uninstalled, and nothing will be installed. This requires that the instances already have the necessary software provisioned.

  • Disable Firewall
    • Check the box to disable firewall (recommended).
  • Disable AppArmor/SELinux
    • Check the box to let ClusterControl disable AppArmor (Ubuntu) or SELinux (RedHat/CentOS) if enabled (recommended).
5.1.4.1.2. 2) Define MySQL Servers
  • Vendor
    • Percona - Percona Server by Percona
    • MariaDB - MariaDB Server by MariaDB
    • Oracle - MySQL Server by Oracle
  • Version
    • Select the MySQL version for the new deployment. For Oracle, only 5.7 and 8.0 are supported. For Percona, 5.6, 5.7 and 8.0 are supported, while for MariaDB, 10.1, 10.2 and 10.3 are supported.
  • Server Data Directory
    • Location of MySQL data directory. Default is /var/lib/mysql.
  • Server Port
    • MySQL server port. Default is 3306.
  • my.cnf Template
    • MySQL configuration template file under /etc/cmon/templates or /usr/share/cmon/templates. See Base Template Files for details.
  • Admin/Root Password
    • Specify MySQL root password. ClusterControl will configure the same MySQL root password for all instances in the cluster.
  • Repository
    • Use Vendor Repositories - Provision software by setting up and using the database vendor’s preferred software repository. ClusterControl will always install the latest version of what is provided by database vendor repository.
    • Do Not Setup Vendor Repositories - Provision software using repositories already set up on the nodes. The user has to set up the software repository manually on each database node, and ClusterControl will use this repository for deployment. This is useful if the database nodes are running without an internet connection.
    • Use Mirrored Repositories - Create and mirror the current database vendor’s repository and then deploy using the local mirrored repository. This is a preferred option when you have to scale the cluster in the future, to ensure the newly provisioned node will always have the same version as the rest of the members.
5.1.4.1.3. 3) Define Topology
  • Master A - IP/Hostname
    • Specify the IP address of the primary MySQL master node.
  • Add slaves to master A
    • Add a slave node connected to master A. Press enter to add more slaves.
  • Add Second Master Node
    • Opens the add node wizard for secondary MySQL master node.
  • Master B - IP/Hostname
    • Only available if you click Add Second Master Node.
    • Specify the IP address of the other MySQL master node. ClusterControl will set up master-master replication between these nodes. Master B will be read-only once deployed (secondary master), letting Master A hold the writer role (primary master) in the replication chain.
  • Add slaves to master B
    • Only available if you click Add Second Master Node.
    • Add a slave node connected to master B. Press ‘Enter’ to add more slaves.
  • Deploy
    • Starts the MySQL Replication deployment.
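
Once the deployment job finishes, a quick way to confirm replication is healthy is to check the slave threads on each slave node (a sketch; adjust the credentials to your setup):

```shell
# Both Slave_IO_Running and Slave_SQL_Running should report "Yes"
mysql -uroot -p -e "SHOW SLAVE STATUS\G" | grep -E "Slave_(IO|SQL)_Running:"
```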

5.1.4.2. MySQL Galera

Deploys a new MySQL Galera Cluster. The database cluster will be automatically added into ClusterControl once deployed. A minimal setup comprises one Galera node (no high availability, but it can later be scaled out with more nodes). However, a minimum of three nodes is recommended for high availability. Garbd (an arbitrator) can be added after the deployment completes if needed.

By default, ClusterControl deploys MySQL Galera with the following configurations:

  • Use xtrabackup-v2 or mariabackup (depending on the vendor chosen) for wsrep_sst_method.
  • PERFORMANCE_SCHEMA disabled.
  • Binary logging disabled.
  • ClusterControl will create and grant the necessary privileges for two MySQL users - cmon for monitoring and management, and backupuser for backup and restore purposes.
  • Generated account credentials are stored inside /etc/mysql/secrets-backup.cnf.
5.1.4.2.1. 1) General & SSH Settings
  • SSH User
    • Specify root if you have root credentials.
    • If you use ‘sudo’ to execute system commands, specify the name that you wish to use here. The user must exist on all nodes. See Operating System User.
  • SSH Key Path
    • Specify the full path of SSH key (the key must exist in ClusterControl node) that will be used by SSH User to perform passwordless SSH. See Passwordless SSH.
  • Sudo Password
    • If you use sudo with a password, specify it here. Ignore this if the SSH User is root or the sudo user does not require a password.
  • SSH Port
    • Specify the SSH port for target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
  • Cluster Name
    • Specify a name for the cluster.
  • Install Software
    • Check the box if you use clean and minimal VMs. Existing MySQL dependencies will be removed. New packages will be installed and existing packages will be uninstalled when provisioning the node with the required software.
    • If unchecked, existing packages will not be uninstalled, and nothing will be installed. This requires that the instances already have the necessary software provisioned.

  • Disable Firewall
    • Check the box to disable firewall (recommended).
  • Disable AppArmor/SELinux
    • Check the box to let ClusterControl disable AppArmor (Ubuntu) or SELinux (RedHat/CentOS) if enabled (recommended).
5.1.4.2.2. 2) Define MySQL Servers
  • Vendor
    • Percona - Percona XtraDB Cluster by Percona
    • MariaDB - MariaDB Server (Galera embedded) by MariaDB
  • Version
    • Select the MySQL version for the new deployment. For Percona, 5.6 and 5.7 are supported, while for MariaDB, 10.1, 10.2 and 10.3 are supported.
  • Server Data Directory
    • Location of MySQL data directory. Default is /var/lib/mysql.
  • Server Port
    • MySQL server port. Default is 3306.
  • my.cnf Template
    • MySQL configuration template file under /etc/cmon/templates or /usr/share/cmon/templates. See Base Template Files for details.
  • Admin/Root Password
    • Specify MySQL root password. ClusterControl will configure the same MySQL root password for all instances in the cluster.
  • Repository
    • Use Vendor Repositories - Provision software by setting up and using the database vendor’s preferred software repository. ClusterControl will always install the latest version of what is provided by database vendor repository.
    • Do Not Setup Vendor Repositories - Provision software using repositories already set up on the nodes. The user has to set up the software repository manually on each database node, and ClusterControl will use this repository for deployment. This is useful if the database nodes are running without an internet connection.
    • Use Mirrored Repositories - Create and mirror the current database vendor’s repository and then deploy using the local mirrored repository. This is a preferred option when you have to scale the Galera Cluster in the future, to ensure the newly provisioned node will always have the same version as the rest of the members.
  • Add Node
    • Specify the IP address or hostname of the MySQL nodes. Press ‘Enter’ after each entry so ClusterControl can verify node reachability via passwordless SSH. A minimum of three nodes is recommended.
  • Deploy
    • Starts the Galera Cluster deployment.
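
After deployment, cluster membership can be verified from any node by checking the wsrep status variables (a sketch; wsrep_cluster_size should match the number of nodes you added):

```shell
mysql -uroot -p -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size';"

# A healthy node should report wsrep_local_state_comment = Synced
mysql -uroot -p -e "SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment';"
```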

5.1.4.3. MySQL Cluster (NDB)

Deploys a new MySQL Cluster (NDB) by Oracle. The cluster consists of management nodes, MySQL API nodes and data nodes. The database cluster will be automatically added into ClusterControl once deployed. A minimum of 4 nodes (2 SQL and management + 2 data nodes) is recommended.

Attention

Every data node must have at least 1.5 GB of RAM for the deployment to succeed.

5.1.4.3.1. 1) General & SSH Settings
  • SSH User
    • Specify root if you have root credentials.
    • If you use ‘sudo’ to execute system commands, specify the name that you wish to use here. The user must exist on all nodes. See Operating System User.
  • SSH Key Path
    • Specify the full path of SSH key (the key must exist in ClusterControl node) that will be used by SSH User to perform passwordless SSH. See Passwordless SSH.
  • Sudo Password
    • If you use sudo with a password, specify it here. Ignore this if the SSH User is root or the sudo user does not require a password.
  • SSH Port
    • Specify the SSH port for target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
  • Cluster Name
    • Specify a name for the cluster.
  • Install Software
    • Check the box if you use clean and minimal VMs. Existing MySQL dependencies will be removed. New packages will be installed and existing packages will be uninstalled when provisioning the node with the required software.
    • If unchecked, existing packages will not be uninstalled, and nothing will be installed. This requires that the instances already have the necessary software provisioned.

  • Disable Firewall
    • Check the box to disable firewall (recommended).
  • Disable AppArmor/SELinux
    • Check the box to let ClusterControl disable AppArmor (Ubuntu) or SELinux (RedHat/CentOS) if enabled (recommended).
5.1.4.3.2. 2) Define Management Servers
  • Server Port
    • MySQL Cluster management port. Default is 1186.
  • Server Data Directory
    • MySQL Cluster data directory for NDB. Default is /var/lib/mysql-cluster.
  • Management Server 1
    • Specify the IP address or hostname of the first management server.
  • Management Server 2
    • Specify the IP address or hostname of the second management server.
5.1.4.3.3. 3) Define Data Nodes
  • Server Port
    • MySQL Cluster data node port. Default is 2200.
  • Server Data Directory
    • MySQL Cluster data directory for NDB. Default is /var/lib/mysql-cluster.
  • Add Nodes
    • Specify the IP address or hostname of the MySQL Cluster data nodes. It's recommended to have data nodes in pairs. You can add up to 14 data nodes to your cluster. Every data node must have at least 1.5 GB of RAM.
5.1.4.3.4. 4) Define MySQL Servers
  • my.cnf Template
    • MySQL configuration template file under /etc/cmon/templates or /usr/share/cmon/templates. See Base Template Files for details.
  • Server Port
    • MySQL server port. Default is 3306.
  • Server Data Directory
    • MySQL data directory. Default is /var/lib/mysql.
  • Root Password
    • Specify MySQL root password. ClusterControl will configure the same MySQL root password for all nodes in the cluster.
  • Add Nodes
    • Specify the IP address or hostname of the MySQL Cluster API nodes. You can use the same IP address as a management node to co-locate both roles on the same host.
  • Deploy
    • Starts the MySQL Cluster deployment.
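
When the job completes, the overall NDB topology can be inspected from a management node with the ndb_mgm client (a sketch):

```shell
# Lists management, data and API (SQL) nodes with their connection status
ndb_mgm -e "show"
```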

5.1.4.4. TimeScaleDB

Deploys a new TimeScaleDB standalone or streaming replication cluster. Only TimeScaleDB 9.6 and later are supported. A minimum of two nodes is required for streaming replication.

5.1.4.4.1. 1) General & SSH Settings
  • SSH User
    • Specify root if you have root credentials.
    • If you use ‘sudo’ to execute system commands, specify the name that you wish to use here. The user must exist on all nodes. See Operating System User.
  • SSH Key Path
    • Specify the full path of SSH key (the key must exist in ClusterControl node) that will be used by SSH User to perform passwordless SSH. See Passwordless SSH.
  • Sudo Password
    • If you use sudo with a password, specify it here. Ignore this if the SSH User is root or the sudo user does not require a password.
  • SSH Port
    • Specify the SSH port for target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
  • Cluster Name
    • Specify a name for the database.
  • Install Software
    • Check the box if you use clean and minimal VMs. Existing PostgreSQL dependencies will be removed. New packages will be installed and existing packages will be uninstalled when provisioning the node with the required software.
    • If unchecked, existing packages will not be uninstalled, and nothing will be installed. This requires that the instances already have the necessary software provisioned.

  • Disable Firewall
    • Check the box to disable firewall (recommended).
  • Disable AppArmor/SELinux
    • Check the box to let ClusterControl disable AppArmor (Ubuntu) or SELinux (RedHat/CentOS) if enabled (recommended).
5.1.4.4.2. 2) Define PostgreSQL Servers
  • Server Port
    • PostgreSQL server port. Default is 5432.
  • User
    • Specify the PostgreSQL superuser, for example, postgres.
  • Password
    • Specify the password for User.
  • Version
    • Supported versions are 9.6 and 10.
  • Repository
    • Use Vendor Repositories - Provision software by setting up and using the database vendor’s preferred software repository. ClusterControl will always install the latest version of what is provided by database vendor repository.
    • Do Not Setup Vendor Repositories - Provision software using repositories already set up on the nodes. The user has to set up the software repository manually on each database node, and ClusterControl will use this repository for deployment. This is useful if the database nodes are running without an internet connection.
    • Create New Repositories - Create and mirror the current database vendor’s repository and then deploy using the local mirrored repository. This is a preferred option when you have to scale the PostgreSQL in the future, to ensure the newly provisioned node will always have the same version as the rest of the members.
5.1.4.4.3. 3) Define Topology
  • Master A - IP/Hostname
    • Specify the IP address of the TimeScaleDB master node. Press ‘Enter’ once specified so ClusterControl can verify the node reachability via passwordless SSH.
  • Add slaves to master A
    • Add a slave node connected to master A. Press ‘Enter’ to add more slaves.
5.1.4.4.4. 4) Deployment Summary
  • Synchronous Replication
    • Toggle on if you would like to use synchronous streaming replication between the master and the chosen slave. Synchronous replication can be enabled per individual slave node with considerable performance overhead.
  • Deploy
    • Starts the TimeScaleDB standalone or replication deployment.

5.1.4.5. PostgreSQL

Deploys a new PostgreSQL standalone or streaming replication cluster from ClusterControl. Only PostgreSQL 9.6 and later are supported.

5.1.4.5.1. 1) General & SSH Settings
  • SSH User
    • Specify root if you have root credentials.
    • If you use ‘sudo’ to execute system commands, specify the name that you wish to use here. The user must exist on all nodes. See Operating System User.
  • SSH Key Path
    • Specify the full path of SSH key (the key must exist in ClusterControl node) that will be used by SSH User to perform passwordless SSH. See Passwordless SSH.
  • Sudo Password
    • If you use sudo with a password, specify it here. Ignore this if the SSH User is root or the sudo user does not require a password.
  • SSH Port
    • Specify the SSH port for target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
  • Cluster Name
    • Specify a name for the database.
  • Install Software
    • Check the box if you use clean and minimal VMs. Existing PostgreSQL dependencies will be removed. New packages will be installed and existing packages will be uninstalled when provisioning the node with the required software.
    • If unchecked, existing packages will not be uninstalled, and nothing will be installed. This requires that the instances already have the necessary software provisioned.

  • Disable Firewall
    • Check the box to disable firewall (recommended).
  • Disable AppArmor/SELinux
    • Check the box to let ClusterControl disable AppArmor (Ubuntu) or SELinux (RedHat/CentOS) if enabled (recommended).
5.1.4.5.2. 2) Define PostgreSQL Servers
  • Server Port
    • PostgreSQL server port. Default is 5432.
  • User
    • Specify the PostgreSQL super user, for example, postgres.
  • Password
    • Specify the password for User.
  • Version
    • Supported versions are 9.6 and 10.
  • Repository
    • Use Vendor Repositories - Provision software by setting up and using the database vendor’s preferred software repository. ClusterControl will always install the latest version provided by the database vendor’s repository.
    • Do Not Setup Vendor Repositories - Provision software using repositories already set up on the nodes. The user has to set up the software repository manually on each database node, and ClusterControl will use this repository for deployment. This is good if the database nodes are running without internet connections.
    • Create New Repositories - Create and mirror the current database vendor’s repository and then deploy using the local mirrored repository. This is the preferred option when you have to scale PostgreSQL in the future, to ensure the newly provisioned node will always have the same version as the rest of the members.
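For the Do Not Setup Vendor Repositories option, each database node needs a working repository beforehand. A hedged sketch for RHEL/CentOS 7 (the repository URL and file name are assumptions; verify against the PostgreSQL download pages for your distribution):

```shell
# Writes a minimal PGDG repo definition. Run as root on every database node.
write_pgdg_repo() {
  local dest=${1:-/etc/yum.repos.d}   # destination overridable for testing
  cat > "$dest/pgdg-11.repo" <<'EOF'
[pgdg11]
name=PostgreSQL 11 - x86_64
baseurl=https://download.postgresql.org/pub/repos/yum/11/redhat/rhel-7-x86_64
enabled=1
gpgcheck=0
EOF
}

# write_pgdg_repo   # then: yum makecache
```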
5.1.4.5.3. 3) Define Topology
  • Master A - IP/Hostname
    • Specify the IP address of the PostgreSQL master node. Press ‘Enter’ once specified so ClusterControl can verify the node reachability via passwordless SSH.
  • Add slaves to master A
    • Add a slave node connected to master A. Press ‘Enter’ to add more slaves.
5.1.4.5.4. 4) Deployment Summary
  • Synchronous Replication
    • Toggle on if you would like to use synchronous streaming replication between the master and the chosen slave. Synchronous replication can be enabled per individual slave node, at the cost of considerable performance overhead.
  • Deploy
    • Starts the PostgreSQL standalone or replication deployment.
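Once the job completes, replication health can be checked from the master. A sketch assuming the default postgres superuser and port:

```shell
# Lists one row per connected standby; sync_state is 'sync' for slaves that
# had Synchronous Replication toggled on, 'async' otherwise.
replication_state() {
  psql -U postgres -p 5432 -At -c \
    "SELECT application_name, state, sync_state FROM pg_stat_replication;"
}
```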

5.1.4.6. MongoDB ReplicaSet

Deploys a new MongoDB Replica Set. The database cluster will be automatically added into ClusterControl once deployed. A minimum of three nodes (including a mongo arbiter) is recommended.

Attention

It is possible to deploy only two MongoDB nodes (without an arbiter). The caveat of this approach is that there is no automatic failover. If the primary node goes down, manual failover is required to promote the other server to primary. Automatic failover works fine with three nodes or more.

5.1.4.6.1. 1) General & SSH Settings
  • SSH User
    • Specify root if you have root credentials.
    • If you use ‘sudo’ to execute system commands, specify the name that you wish to use here. The user must exist on all nodes. See Operating System User.
  • SSH Key Path
    • Specify the full path of the SSH key (the key must exist on the ClusterControl node) that will be used by the SSH User to perform passwordless SSH. See Passwordless SSH.
  • Sudo Password
    • If you use sudo with a password, specify it here. Ignore this if the SSH User is root or the sudoer does not need a sudo password.
  • SSH Port
    • Specify the SSH port for target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
  • Cluster Name
    • Specify a name for the cluster.
  • Install Software
    • Check the box if you use clean and minimal VMs. Existing MongoDB dependencies will be removed. New packages will be installed and existing packages will be uninstalled when provisioning the node with the required software.
    • If unchecked, existing packages will not be uninstalled, and nothing will be installed. This requires that the instances have already been provisioned with the necessary software.

  • Disable Firewall
    • Check the box to disable firewall (recommended).
  • Disable AppArmor/SELinux
    • Check the box to let ClusterControl disable AppArmor (Ubuntu) or SELinux (RedHat/CentOS) if enabled (recommended).
5.1.4.6.2. 2) Define MongoDB Servers
  • Vendor
    • Percona - Percona Server for MongoDB by Percona.
    • MongoDB - MongoDB Server by MongoDB Inc.
  • Version
    • The supported MongoDB versions are 3.2, 3.4, 3.6 and 4.0 (MongoDB only).
  • Server Data Directory
    • Location of MongoDB data directory. Default is /var/lib/mongodb.
  • Admin User
    • MongoDB admin user. ClusterControl will create this user and enable authentication.
  • Admin Password
    • Password for MongoDB Admin User.
  • Server Port
    • MongoDB server port. Default is 27017.
  • mongodb.conf Template
    • MongoDB configuration template file under /etc/cmon/templates or /usr/share/cmon/templates. See Base Template Files for details.
  • ReplicaSet Name
    • Specify the name of the replica set, similar to replication.replSetName option in MongoDB.
  • Repository
    • Use Vendor Repositories - Provision software by setting up and using the database vendor’s preferred software repository. ClusterControl will always install the latest version provided by the database vendor’s repository.
    • Do Not Setup Vendor Repositories - Provision software using repositories already set up on the nodes. The user has to set up the software repository manually on each database node, and ClusterControl will use this repository for deployment. This is good if the database nodes are running without internet connections.
    • Use Mirrored Repositories - Create and mirror the current database vendor’s repository and then deploy using the local mirrored repository. This is the preferred option when you have to scale MongoDB in the future, to ensure the newly provisioned node will always have the same version as the rest of the members.
  • Add Nodes
    • Specify the IP address or hostname of the MongoDB nodes. A minimum of three nodes is required.
  • Deploy
    • Starts the MongoDB ReplicaSet deployment.
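After the deployment job finishes, the replica set can be sanity-checked with the mongo shell, using the admin credentials entered above (the password argument is a placeholder):

```shell
# Counts members reporting health == 1 in rs.status().
healthy_members() {
  mongo --quiet -u admin -p "$1" --authenticationDatabase admin \
    --eval 'rs.status().members.filter(function (m) { return m.health === 1; }).length'
}

# healthy_members 'MySecretPassword'   # expect the number of nodes deployed
```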

5.1.4.7. MongoDB Shards

Deploys a new MongoDB Sharded Cluster. The database cluster will be automatically added into ClusterControl once deployed. A minimum of three nodes (including a mongo arbiter) is recommended.

Warning

It is possible to deploy only two MongoDB nodes (without an arbiter). The caveat of this approach is that there is no automatic failover. If the primary node goes down, manual failover is required to promote the other server to primary. Automatic failover works fine with three nodes or more.

5.1.4.7.1. 1) General & SSH Settings
  • SSH User
    • Specify root if you have root credentials.
    • If you use ‘sudo’ to execute system commands, specify the name that you wish to use here. The user must exist on all nodes. See Operating System User.
  • SSH Key Path
    • Specify the full path of the SSH key (the key must exist on the ClusterControl node) that will be used by the SSH User to perform passwordless SSH. See Passwordless SSH.
  • Sudo Password
    • If you use sudo with a password, specify it here. Ignore this if the SSH User is root or the sudoer does not need a sudo password.
  • SSH Port
    • Specify the SSH port for target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
  • Cluster Name
    • Specify a name for the cluster.
  • Install Software
    • Check the box if you use clean and minimal VMs. Existing MongoDB dependencies will be removed. New packages will be installed and existing packages will be uninstalled when provisioning the node with the required software.
    • If unchecked, existing packages will not be uninstalled, and nothing will be installed. This requires that the instances have already been provisioned with the necessary software.

  • Disable Firewall
    • Check the box to disable firewall (recommended).
  • Disable AppArmor/SELinux
    • Check the box to let ClusterControl disable AppArmor (Ubuntu) or SELinux (RedHat/CentOS) if enabled (recommended).
5.1.4.7.2. 2) Configuration Servers and Routers

Configuration Server

  • Server Port
    • MongoDB config server port. Default is 27019.
  • Add Configuration Servers
    • Specify the IP address or hostname of the MongoDB config servers. A minimum of one node is required; three nodes are recommended.

Routers/Mongos

  • Server Port
    • MongoDB mongos server port. Default is 27017.
  • Add More Routers
    • Specify the IP address or hostname of the MongoDB mongos.
5.1.4.7.3. 3) Define Shards
  • Replica Set Name
    • Specify a name for this replica set shard.
  • Server Port
    • MongoDB shard server port. Default is 27018.
  • Add Node
    • Specify the IP address or hostname of the MongoDB shard servers. A minimum of one node is required; three nodes are recommended.
  • Advanced Options
    • Click on this to open a set of advanced options for this particular node in this shard:
      • Add slave delay - Specify the slave delay in milliseconds.
      • Act as an arbiter - Toggle to ‘Yes’ if the node is an arbiter node. Otherwise, choose ‘No’.
  • Add Another Shard
    • Create another shard. You can then specify the IP address or hostname of the MongoDB server that falls under this shard.

5.1.4.7.4. 4) Database Settings
  • Vendor
    • Percona - Percona Server for MongoDB by Percona
    • MongoDB - MongoDB Server by MongoDB Inc
  • Version
    • The supported MongoDB versions are 3.2, 3.4, 3.6 and 4.0 (MongoDB only).
  • Server Data Directory
    • Location of MongoDB data directory. Default is /var/lib/mongodb.
  • Admin User
    • MongoDB admin user. ClusterControl will create this user and enable authentication.
  • Admin Password
    • Password for MongoDB Admin User.
  • Server Port
    • MongoDB server port. Default is 27017.
  • mongodb.conf Template
    • MongoDB configuration template file under /etc/cmon/templates or /usr/share/cmon/templates. See Base Template Files for details.
  • Repository
    • Use Vendor Repositories - Provision software by setting up and using the database vendor’s preferred software repository. ClusterControl will always install the latest version provided by the database vendor’s repository.
    • Do Not Setup Vendor Repositories - Provision software using repositories already set up on the nodes. The user has to set up the software repository manually on each database node, and ClusterControl will use this repository for deployment. This is good if the database nodes are running without internet connections.
    • Use Mirrored Repositories - Create and mirror the current database vendor’s repository and then deploy using the local mirrored repository. This is the preferred option when you have to scale MongoDB in the future, to ensure the newly provisioned node will always have the same version as the rest of the members.
  • Deploy
    • Starts the MongoDB Sharded Cluster deployment.
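When the sharded cluster is up, any mongos can report how many shards registered. A sketch assuming the default mongos port:

```shell
# Counts entries in the config database's shards collection via a mongos.
shard_count() {
  mongo --quiet --port 27017 --eval \
    'db.getSiblingDB("config").shards.count()'
}

# shard_count   # should print the number of shards you defined
```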

5.1.5. Import Existing Server/Cluster

Opens a wizard to import the existing database setup into ClusterControl. The following database cluster types are supported:

  • MySQL Replication

  • MySQL Galera
    • Percona XtraDB Cluster
    • MariaDB Galera Cluster
  • MySQL Cluster (NDB)

  • MongoDB ReplicaSet

  • MongoDB Shards

  • PostgreSQL (standalone or streaming replication)

  • TimeScaleDB (standalone or streaming replication)

There are some prerequisites that need to be fulfilled prior to adding the existing setup:

  • Verify that sudo is working properly if you are using a non-root user. See Operating System User.
  • Passwordless SSH from the ClusterControl node to the database nodes must be configured correctly. See Passwordless SSH.
  • The target cluster must not be in a degraded state. For example, if you have a three-node Galera cluster, all nodes must be alive, accessible and in sync.

For more details, refer to the Requirements section. Each time you add an existing cluster or server, ClusterControl will trigger a job under ClusterControl > Settings > Cluster Jobs. You can see the progress and status under this page. A window will also appear with messages showing the progress.

5.1.5.1. Import Existing MySQL Replication

ClusterControl is able to manage and monitor an existing set of MySQL servers (standalone or replication). Individual hosts specified in the same list will be added to the same server group in the UI. ClusterControl assumes that you are using the same MySQL root password for all instances specified in the group and it will attempt to determine the server role as well (master, slave, multi or standalone).

When importing an existing MySQL Replication, ClusterControl will do the following:

  • Verify SSH connectivity to all nodes.
  • Detect the host environment and operating system.
  • Discover the database role of each node (master, slave, multi, standalone).
  • Pull the configuration files.
  • Generate the authentication key and register the node into ClusterControl.
5.1.5.1.1. 1) General & SSH Settings
  • SSH User
    • Specify root if you have root credentials.
    • If you use ‘sudo’ to execute system commands, specify the name that you wish to use here. The user must exist on all nodes. See Operating System User.
  • SSH Key Path
    • Specify the full path of the SSH key (the key must exist on the ClusterControl node) that will be used by the SSH User to perform passwordless SSH. See Passwordless SSH.
  • Sudo Password
    • Specify the password if the SSH user that you specified under SSH User requires a sudo password to run super-privileged commands. Ignore this if the SSH User is root or has no sudo password.
  • SSH Port
    • Specify the SSH port for target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
5.1.5.1.2. 2) Define MySQL Servers
  • Vendor
    • Percona for Percona Server
    • MariaDB for MariaDB Server
    • Oracle for MySQL Server
  • MySQL Version
    • Supported version:
      • Percona Server (5.5, 5.6, 5.7, 8.0)
      • MariaDB Server (10.1, 10.2, 10.3)
      • Oracle MySQL Server (5.7, 8.0)
  • Basedir
    • MySQL base directory. Default is /usr. ClusterControl assumes all MySQL nodes are using the same base directory.
  • Server Port
    • MySQL port on the target server/cluster. Default to 3306. ClusterControl assumes MySQL is running on the same port on all nodes.
  • Admin/Root User
    • MySQL user on the target server/cluster. This user must be able to perform GRANT statements. It is recommended to use the MySQL ‘root’ user.
  • Admin/Root Password
    • Password for MySQL User. ClusterControl assumes that you are using the same MySQL root password for all instances specified in the group.
  • “information_schema” Queries
    • Toggle on to enable information_schema queries to track database and table growth. Queries to the information_schema may not be suitable when there are many database objects (hundreds of databases, hundreds of tables in each database, triggers, users, events, stored procedures, etc.). If disabled, the query that would be executed will be logged so it can be determined whether the query is suitable for your environment.
    • This is not recommended for clusters with more than 2000 database objects.
  • Import as Standalone Nodes
    • Toggle on if you are only importing a standalone node (by specifying only one node under the ‘Add Nodes’ section).
  • Node AutoRecovery
    • ClusterControl will perform automatic recovery if it detects any of the nodes in the cluster is down.
  • Cluster AutoRecovery
    • ClusterControl will perform automatic recovery if it detects the cluster is down or degraded.
  • Add Nodes
    • Enter the MySQL single instances’ IP address or hostname that you want to group under this cluster.
  • Import
    • Click the button to start the import. ClusterControl will connect to the MySQL instances, import configurations and start managing them.
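If you prefer not to hand ClusterControl the root account, the Admin/Root User only needs to be able to issue GRANT statements. A hedged sketch creating such a user on each node (the name 'cmon_import' and the ClusterControl IP 10.0.0.5 are placeholders):

```shell
# Creates a dedicated import user with GRANT OPTION. Arguments:
#   $1 - current MySQL root password, $2 - password for the new user.
create_import_user() {
  mysql -uroot -p"$1" -e "
    CREATE USER 'cmon_import'@'10.0.0.5' IDENTIFIED BY '$2';
    GRANT ALL PRIVILEGES ON *.* TO 'cmon_import'@'10.0.0.5' WITH GRANT OPTION;
    FLUSH PRIVILEGES;"
}

# create_import_user 'MyRootPassword' 'ImportUserPassword'
```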

5.1.5.2. Import Existing MySQL Galera

5.1.5.2.1. 1) General & SSH Settings
  • SSH User
    • Specify root if you have root credentials.
    • If you use ‘sudo’ to execute system commands, specify the name that you wish to use here. The user must exist on all nodes. See Operating System User.
  • SSH Key Path
    • Specify the full path of the SSH key (the key must exist on the ClusterControl node) that will be used by the SSH User to perform passwordless SSH. See Passwordless SSH.
  • Sudo Password
    • Specify the password if the SSH user that you specified under SSH User requires a sudo password to run super-privileged commands. Ignore this if the SSH User is root or has no sudo password.
  • SSH Port
    • Specify the SSH port for target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
5.1.5.2.2. 2) Define MySQL Servers
  • Vendor
    • Percona XtraDB - Percona XtraDB Cluster by Percona
    • MariaDB - MariaDB Galera Cluster by MariaDB
  • Version
    • Supported version:
      • Percona Server (5.5, 5.6, 5.7)
      • MariaDB Server (10.1, 10.2, 10.3)
  • Basedir
    • MySQL base directory. Default is /usr. ClusterControl assumes MySQL uses the same base directory on all nodes.
  • Port
    • MySQL port on the target cluster. Default to 3306. ClusterControl assumes MySQL is running on the same port on all nodes.
  • Admin/Root User
    • MySQL user on the target cluster. This user must be able to perform GRANT statements. It is recommended to use the MySQL ‘root’ user.
  • Admin/Root Password
    • Password for MySQL User. The password must be the same on all nodes that you want to add into ClusterControl.
  • “information_schema” Queries
    • Toggle on to enable information_schema queries to track database and table growth. Queries to the information_schema may not be suitable when there are many database objects (hundreds of databases, hundreds of tables in each database, triggers, users, events, stored procedures, etc.). If disabled, the query that would be executed will be logged so it can be determined whether the query is suitable for your environment.
    • This is not recommended for clusters with more than 2000 database objects.
  • Node AutoRecovery
    • Toggle on so ClusterControl will perform automatic recovery if it detects any of the nodes in the cluster is down.
  • Cluster AutoRecovery
    • Toggle on so ClusterControl will perform automatic recovery if it detects the cluster is down or degraded.
  • Automatic Node Discovery
    • If toggled on, you only need to specify ONE Galera node and ClusterControl will discover the remaining nodes based on the hostname/IPs used for Galera’s intra-node communication. Replication slaves, load balancers, and other supported services connected to the Galera Cluster can be added after the import has finished.
  • Add Node
    • Specify the target node and press ‘Enter’ for each of them. If you have Automatic Node Discovery enabled, you need to specify only one node.
  • Import
    • Click the button to start the import. ClusterControl will connect to the Galera node, discover the configuration for the rest of the members and start managing/monitoring the cluster.
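The “in sync” prerequisite can be confirmed on any Galera node before the import; wsrep_cluster_size must equal the number of members. A sketch assuming the MySQL root credentials used above:

```shell
# Prints the number of nodes currently joined to the Galera cluster.
cluster_size() {
  mysql -uroot -p"$1" -N -e "SHOW STATUS LIKE 'wsrep_cluster_size';" \
    | awk '{print $2}'
}

# cluster_size 'MyRootPassword'   # a healthy three-node cluster prints 3
```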

5.1.5.3. Import Existing MySQL Cluster

ClusterControl is able to manage and monitor an existing production MySQL Cluster (NDB) deployment. A minimum of two management nodes and two data nodes is required.

5.1.5.3.1. 1) General & SSH Settings
  • SSH User
    • Specify root if you have root credentials.
    • If you use ‘sudo’ to execute system commands, specify the name that you wish to use here. The user must exist on all nodes. See Operating System User.
  • SSH Key Path
    • Specify the full path of the SSH key (the key must exist on the ClusterControl node) that will be used by the SSH User to perform passwordless SSH. See Passwordless SSH.
  • Sudo Password
    • Specify the password if the SSH user that you specified under SSH User requires a sudo password to run super-privileged commands. Ignore this if the SSH User is root or has no sudo password.
  • SSH Port
    • Specify the SSH port for target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
5.1.5.3.2. 2) Define Management Server
  • Management server 1
    • Specify the IP address or hostname of the first MySQL Cluster management node.
  • Management server 2
    • Specify the IP address or hostname of the second MySQL Cluster management node.
  • Server Port
    • MySQL Cluster management port. The default port is 1186.
5.1.5.3.3. 3) Define Data Nodes
  • Port
    • MySQL Cluster data node port. The default port is 2200.
  • Add Nodes
    • Specify the IP address or hostname of the MySQL Cluster data node.
5.1.5.3.4. 4) Define MySQL Servers
  • Root Password
    • MySQL root password.
  • Server Port
    • MySQL port. Default to 3306.
  • MySQL Installation Directory
    • MySQL server installation path where ClusterControl can find the mysql binaries.
  • Enable information_schema Queries
    • Toggle on to enable information_schema queries to track database and table growth. Queries to the information_schema may not be suitable when there are many database objects (hundreds of databases, hundreds of tables in each database, triggers, users, events, stored procedures, etc.). If disabled, the query that would be executed will be logged so it can be determined whether the query is suitable for your environment.
    • This is not recommended for clusters with more than 2000 database objects.
  • Enable Node AutoRecovery
    • ClusterControl will perform automatic recovery if it detects any of the nodes in the cluster is down.
  • Enable Cluster AutoRecovery
    • ClusterControl will perform automatic recovery if it detects the cluster is down or degraded.
  • Add Nodes
    • Specify the IP address or hostname of the MySQL Cluster API/SQL node.
  • Import
    • Click the button to start the import. ClusterControl will connect to the MySQL Cluster nodes, discover the configuration for the rest of the nodes and start managing/monitoring the cluster.
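The management nodes can be queried before the import to confirm all data and API nodes are connected. A sketch using the ndb_mgm client that ships with MySQL Cluster (the management host is a placeholder):

```shell
# Prints the cluster topology as seen by a management node on port 1186.
ndb_show() {
  ndb_mgm -c "$1:1186" -e SHOW
}

# ndb_show 10.0.0.21   # all ndbd/mysqld slots should show as connected
```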

5.1.5.4. Import Existing MongoDB ReplicaSet

ClusterControl is able to manage and monitor an existing MongoDB/Percona Server for MongoDB 3.x and 4.0 replica set.

5.1.5.4.1. 1) General & SSH Settings
  • SSH User
    • Specify root if you have root credentials.
    • If you use ‘sudo’ to execute system commands, specify the name that you wish to use here. The user must exist on all nodes. See Operating System User.
  • SSH Key Path
    • Specify the full path of the SSH key (the key must exist on the ClusterControl node) that will be used by the SSH User to perform passwordless SSH. See Passwordless SSH.
  • Sudo Password
    • Specify the password if the SSH user that you specified under SSH User requires a sudo password to run super-privileged commands. Ignore this if the SSH User is root or has no sudo password.
  • SSH Port
    • Specify the SSH port for target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
5.1.5.4.2. 2) Define MongoDB Servers
  • Vendor
    • Percona - Percona Server for MongoDB by Percona.
    • MongoDB - MongoDB Server by MongoDB Inc (formerly 10gen).
  • Version
    • The supported MongoDB versions are 3.2, 3.4 and 3.6.
  • Server Port
    • MongoDB server port. Default is 27017.
  • Admin User
    • MongoDB admin user.
  • Admin Password
    • Password for MongoDB Admin User.
  • MongoDB Auth DB
    • MongoDB database to authenticate against. Default is admin.
  • Hostname
    • Specify one IP address or hostname of the MongoDB replica set member. ClusterControl will automatically discover the rest.
  • Import
    • Click the button to start the import. ClusterControl will connect to the specified MongoDB node, discover the configuration for the rest of the nodes and start managing/monitoring the cluster.

5.1.5.5. Import Existing MongoDB Shards

ClusterControl is able to manage and monitor an existing MongoDB/Percona Server for MongoDB 3.x and 4.0 sharded cluster setup.

5.1.5.5.1. 1) General & SSH Settings
  • SSH User
    • Specify root if you have root credentials.
    • If you use ‘sudo’ to execute system commands, specify the name that you wish to use here. The user must exist on all nodes. See Operating System User.
  • SSH Key Path
    • Specify the full path of the SSH key (the key must exist on the ClusterControl node) that will be used by the SSH User to perform passwordless SSH. See Passwordless SSH.
  • Sudo Password
    • If you use sudo with a password, specify it here. Ignore this if the SSH User is root or the sudoer does not need a sudo password.
  • SSH Port Number
    • Specify the SSH port for target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
5.1.5.5.2. 2) Set Routers/Mongos

Routers/Mongos

  • Server Port
    • MongoDB mongos server port. Default is 27017.
  • Add More Routers
    • Specify the IP address or hostname of the MongoDB mongos.
5.1.5.5.3. 3) Database Settings
  • Vendor
    • Percona - Percona Server for MongoDB by Percona
    • MongoDB - MongoDB Server by MongoDB Inc
  • Version
    • The supported MongoDB versions are 3.2, 3.4 and 3.6.
  • Admin User
    • MongoDB admin user.
  • Admin Password
    • Password for MongoDB Admin User.
  • MongoDB Auth DB
    • MongoDB database to authenticate against. Default is admin.
  • Import
    • Click the button to start the import. ClusterControl will connect to the specified MongoDB mongos, discover the configuration for the rest of the members and start managing/monitoring the cluster.

5.1.5.6. Import Existing PostgreSQL/TimeScaleDB

ClusterControl is able to manage and monitor an existing set of PostgreSQL/TimeScaleDB servers (version 9.6 and later). Individual hosts specified in the same list will be added to the same server group in the UI. ClusterControl assumes that you are using the same database admin password for all instances specified in the group.

5.1.5.6.1. 1) General & SSH Settings
  • SSH User
    • Specify root if you have root credentials.
    • If you use ‘sudo’ to execute system commands, specify the name that you wish to use here. The user must exist on all nodes. See Operating System User.
  • SSH Key Path
    • Specify the full path of the SSH key (the key must exist on the ClusterControl node) that will be used by the SSH User to perform passwordless SSH. See Passwordless SSH.
  • Sudo Password
    • Specify the password if the SSH user that you specified under SSH User requires a sudo password to run super-privileged commands. Ignore this if the SSH User is root or has no sudo password.
  • SSH Port
    • Specify the SSH port for target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
5.1.5.6.2. 2) Define PostgreSQL Servers
  • Server Port
    • PostgreSQL port on the target server/cluster. Default to 5432. ClusterControl assumes PostgreSQL/TimeScaleDB is running on the same port on all nodes.
  • User
    • PostgreSQL user on the target server/cluster. It is recommended to use the PostgreSQL/TimeScaleDB ‘postgres’ user.
  • Password
    • Password for User. ClusterControl assumes that you are using the same admin password for all instances under this group.
  • Version
    • PostgreSQL/TimeScaleDB server version on the target server/cluster. Supported versions are 9.6, 10.x and 11.x.
  • Basedir
    • PostgreSQL/TimeScaleDB base directory. Default is /usr. ClusterControl assumes all PostgreSQL/TimeScaleDB nodes are using the same base directory.
  • Add Node
    • Specify all PostgreSQL/TimeScaleDB instances that you want to group under this cluster.
  • Import
    • Click the button to start the import. ClusterControl will connect to the PostgreSQL/TimeScaleDB instances, import configurations and start managing them.
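Before importing, you can see each node’s role yourself, since a standby reports true for pg_is_in_recovery(). A sketch assuming the postgres user and passwordless access from the ClusterControl node:

```shell
# Prints 'standby' if the node is in recovery (a replica), 'master' otherwise.
node_role() {
  local host=$1
  case $(psql -h "$host" -U postgres -At -c 'SELECT pg_is_in_recovery();') in
    t) echo standby ;;
    *) echo master ;;
  esac
}

# node_role 10.0.0.31
```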

5.1.6. Deploy in the Cloud

Opens a step-by-step modal dialog to deploy a new database cluster in the cloud. Supported cloud providers are:

  • Amazon Web Services
  • Google Cloud Platform
  • Microsoft Azure

The following database cluster types are supported:

  • MySQL Replication:
    • Percona Server 8.0
    • Oracle MySQL Server 8.0
    • MariaDB Server 10.3
  • MySQL Galera:
    • Percona XtraDB Cluster 5.7
    • MariaDB 10.2
    • MariaDB 10.3
  • MongoDB ReplicaSet:
    • Percona Server for MongoDB 3.6
    • MongoDB 3.6
    • MongoDB 4.0
  • PostgreSQL 11 Streaming Replication

  • TimeScaleDB 11 Streaming Replication

There are prerequisites that need to be fulfilled prior to the deployment:

  • A working cloud credential profile on the supported cloud platform. See Cloud Providers.
  • The date and time of the ClusterControl node must be synchronized with an NTP server. See Timezone.
  • If the cloud instance is inside a private network, the network must support auto-assigning a public IP address. ClusterControl only connects to the created cloud instance via the public network.

Under the hood, the deployment process does the following:

  1. Create cloud instances.
  2. Configure security groups and networking.
  3. Verify the SSH connectivity from ClusterControl to all created instances.
  4. Deploy database on every instance.
  5. Configure the clustering or replication links.
  6. Register the deployment into ClusterControl.

ClusterControl will trigger a deployment job and the progress can be monitored under ClusterControl > Activity > Jobs.

Attention

This feature is still in beta. See Known Limitations for details.

5.1.6.1. Cluster Details

  • Select Cluster Type
    • Choose a cluster.
  • Select Vendor and Version
    • MySQL Replication Cluster - Percona Server 8.0, Oracle MySQL Server 8.0, MariaDB Server 10.3
    • MySQL Galera - Percona XtraDB Cluster 5.7, MariaDB 10.2, MariaDB 10.3
    • MongoDB Replica Set - MongoDB 3.4 and MongoDB 4.0 by MongoDB, Inc and Percona Server for MongoDB 3.4 by Percona (replica set only).
    • PostgreSQL Streaming Replication - PostgreSQL 11.0 (streaming replication only).
    • TimeScaleDB - TimeScaleDB 11.0 (streaming replication only).

5.1.6.2. Configure Cluster

5.1.6.2.1. MySQL Replication
  • Select Number of Nodes
    • The number of nodes in the database cluster. You can start with one, but two is recommended.
  • Cluster Name
    • This value will be used as the instance name or tag. No space is allowed.
  • MySQL Server Port
    • MySQL server port. Default is 3306.
  • MySQL Root Password
    • Specify MySQL root password. ClusterControl will configure the same MySQL root password for all instances in the cluster.
  • my.cnf Template
    • MySQL configuration template file under /etc/cmon/templates or /usr/share/cmon/templates. See Base Template Files for details.
  • MySQL Server Data Directory
    • Location of MySQL data directory. Default is /var/lib/mysql.
5.1.6.2.2. MySQL Galera Cluster
  • Select Number of Nodes
    • The number of nodes in the database cluster. You can start with one, but three (or a larger odd number) is recommended.
  • Cluster Name
    • This value will be used as the instance name or tag. No space is allowed.
  • MySQL Server Port
    • MySQL server port. Default is 3306.
  • MySQL Root Password
    • Specify MySQL root password. ClusterControl will configure the same MySQL root password for all instances in the cluster.
  • my.cnf Template
    • MySQL configuration template file under /etc/cmon/templates or /usr/share/cmon/templates. See Base Template Files for details.
  • MySQL Server Data Directory
    • Location of MySQL data directory. Default is /var/lib/mysql.
5.1.6.2.3. MongoDB Replica Set
  • Select Number of Nodes
    • The number of nodes in the database cluster. You can start with one, but three (or a larger odd number) is recommended.
  • Cluster Name
    • This value will be used as the instance name or tag. No space is allowed.
  • Admin User
    • MongoDB admin user. ClusterControl will create this user and enable authentication.
  • Admin Password
    • Password for MongoDB Admin User.
  • Server Data Directory
    • Location of MongoDB data directory. Default is /var/lib/mongodb.
  • Server Port
    • MongoDB server port. Default is 27017.
  • mongodb.conf Template
    • MongoDB configuration template file under /etc/cmon/templates or /usr/share/cmon/templates. See Base Template Files for details.
  • ReplicaSet Name
    • Specify the name of the replica set, similar to replication.replSetName option in MongoDB.
5.1.6.2.4. PostgreSQL Streaming Replication
  • Select Number of Nodes
    • The number of nodes in the database cluster. You can start with one, but two or more is recommended.

Note

The first virtual machine that comes up will be configured as a master.

  • Cluster Name
    • This value will be used as the instance name or tag. No space is allowed.
  • User
    • Specify the PostgreSQL super user, for example, postgres.
  • Password
    • Specify the password for User.
  • Server Port
    • PostgreSQL server port. Default is 5432.
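
On the primary, streaming replication relies on a handful of PostgreSQL settings. An illustrative postgresql.conf fragment, assuming the default port above (ClusterControl generates the actual configuration; the values here are typical, not ClusterControl's exact defaults):

```ini
port = 5432            # "Server Port" field
wal_level = replica    # WAL must contain enough data for standbys
max_wal_senders = 10   # maximum number of concurrent standby connections
hot_standby = on       # allow read-only queries on standbys
```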
5.1.6.2.5. TimeScaleDB Streaming Replication
  • Select Number of Nodes
    • How many nodes for the database cluster. You can start with one, but two or more are recommended.

Note

The first virtual machine that comes up will be configured as a master.

  • Cluster Name
    • This value will be used as the instance name or tag. Spaces are not allowed.
  • User
    • Specify the TimeScaleDB superuser, for example, postgres.
  • Password
    • Specify the password for User.
  • Server Port
    • TimeScaleDB server port. Default is 5432.
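
Since TimeScaleDB runs as a PostgreSQL extension, the configuration mirrors the PostgreSQL case, with one addition: the extension library must be preloaded. An illustrative fragment (ClusterControl generates the actual configuration):

```ini
port = 5432                               # "Server Port" field
shared_preload_libraries = 'timescaledb'  # required before CREATE EXTENSION timescaledb
wal_level = replica                       # required for streaming replication
```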

5.1.6.3. Select Credential

Select one of the existing cloud credentials, or create a new one by clicking the Add New Credential button.

  • Add New Credential

5.1.6.4. Select Virtual Machine

Most of the settings in this step are dynamically populated from the cloud provider, based on the chosen credentials.

  • Operating System
    • Choose a supported operating system from the dropdown.
  • Instance Size
    • Choose an instance size for the cloud instance.
  • Virtual Private Cloud (VPC)
    • Exclusive to AWS. Choose a virtual private cloud network for the cloud instance.
  • Add New
    • Opens the Add VPC wizard. Specify the tag name and IP address block.
  • SSH Key
    • SSH key location on the ClusterControl host. This key must be able to authenticate to the created cloud instances passwordlessly.
  • Storage Type
    • Choose the storage type for the cloud instance.
  • Allocate Storage
    • Specify the storage size for the cloud instance in GB.

5.1.6.5. Deployment Summary

  • Subnet
    • Choose one existing subnet for the selected network.
  • Add New Subnet
    • Opens the Add Subnet wizard. Specify the subnet name, availability zone, and IP CIDR block, e.g., 10.0.10.0/24.
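
To sanity-check a CIDR block before entering it in the wizard, its size and address range can be computed with Python's standard ipaddress module (a quick verification sketch, not part of ClusterControl):

```python
import ipaddress

# The example block from above: a /24 spans 256 addresses
net = ipaddress.ip_network("10.0.10.0/24")
print(net.num_addresses)       # 256
print(net.network_address)     # 10.0.10.0
print(net.broadcast_address)   # 10.0.10.255
```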

5.1.6.6. Known Limitations

There are known limitations for the cloud deployment feature:

  • There is currently no ‘accounting’ in place for the cloud instances. You will need to remove the created cloud instances manually.
  • You cannot deploy a load balancer automatically with a cloud instance.

We appreciate your feedback, feature requests, and bug reports. Contact us via the support channel or create a feature request. See FAQ for details.

5.3. User Guide for MySQL

This user guide covers ClusterControl with MySQL-based clusters, namely:

  • Galera Cluster for MySQL
    • Percona XtraDB Cluster
    • MariaDB Galera Cluster
  • MySQL Cluster (NDB)

  • MySQL/MariaDB Replication

  • MySQL/MariaDB single instance

  • MySQL Group Replication

Contents: