5. User Guide

This documentation provides a detailed user guide for the ClusterControl UI.

5.1. Dashboard

This page is the landing page once you have logged in. It provides a summary of the database clusters monitored by ClusterControl.

../_images/cc_cluster_list_140.png

Section 1

ClusterControl’s top menu.

  • Activity
    • Provides a global list of activities that have been performed across clusters (e.g., deploying a new cluster, adding an existing cluster, cloning). Pick a job to see its running messages or job details. It also shows an aggregated view of all alarms raised and ClusterControl logs (with severity) for all clusters monitored by ClusterControl.
    • Job status indicator:
Job status   Description
FINISHED     The job executed successfully.
FAILED       The job executed but failed.
RUNNING      The job has started and is in progress.
ABORTED      The job started but was terminated.
DEFINED      The job is defined but has not yet started.

Section 2

List of database clusters managed by ClusterControl with a summarized status. Database clusters deployed by (or added into) ClusterControl are listed on this page. See the Database Cluster List section.

5.1.1. Deploy Database Cluster

Opens a wizard to deploy a new database cluster. The following database cluster types are supported:

  • MySQL Replication

  • MySQL Galera
    • MySQL Galera Cluster
    • Percona XtraDB Cluster
    • MariaDB Galera Cluster
  • MySQL Group Replication (beta)

  • MySQL Cluster (NDB)

  • MongoDB ReplicaSet

  • MongoDB Shards

There are some prerequisites that need to be fulfilled prior to the deployment:

  • Verify that sudo is working properly if you are using a non-root user.
  • Passwordless SSH is configured correctly from the ClusterControl node to all database nodes.

For more details, refer to the Requirements section. Each time you create a database cluster, ClusterControl will trigger a job under ClusterControl > Activity > Jobs. You can see the progress and status on this page.
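Both prerequisites can be sanity-checked from the ClusterControl node before launching the wizard. A minimal sketch, assuming placeholder node addresses and SSH user (substitute your own):

```bash
#!/bin/bash
# Sketch: verify passwordless SSH and passwordless sudo from the
# ClusterControl node. NODES and SSH_USER are placeholders.
NODES="192.168.1.11 192.168.1.12"
SSH_USER="root"

check_node() {
    # BatchMode=yes fails instead of prompting for a password, so a
    # non-zero exit means passwordless SSH (or sudo) is not ready yet.
    if ssh -o BatchMode=yes -o ConnectTimeout=5 "${SSH_USER}@$1" "sudo -n true" >/dev/null 2>&1; then
        echo "$1: OK"
    else
        echo "$1: FAILED (check SSH key and sudo configuration)"
    fi
}

for node in $NODES; do
    check_node "$node"
done
```

A node that prints FAILED here would also fail during deployment, so it is cheaper to fix it first.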

5.1.1.1. MySQL Replication

Deploys a new MySQL Replication setup. The database cluster will be automatically added into ClusterControl once deployed. A minimum of two nodes is required. The first node in the list is the master. You can add more slaves after the deployment completes. Starting from version 1.4.0, it is possible to set up master-master replication from scratch under the ‘Define Topology’ tab.

Caution

ClusterControl sets read_only=1 on all slaves, but a privileged user (with the SUPER privilege) can still write to a slave.
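The flag can be confirmed per slave after deployment. A minimal sketch with placeholder host and credentials; note that MySQL 5.7.8+ also provides super_read_only, which blocks writes from SUPER accounts as well:

```bash
# Sketch: report whether a slave has read_only enabled.
# Host and credentials are placeholders -- adjust for your environment.
check_read_only() {
    mysql -h "$1" -u root -p"${ROOT_PASSWORD}" -N \
        -e "SELECT IF(@@global.read_only, 'read_only', 'WRITABLE');"
}

# Example call (uncomment with a real slave address):
# check_read_only 192.168.1.12
```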

5.1.1.1.1. 1) SSH Settings
  • SSH User
    • Specify root if you have root credentials.
    • If you use ‘sudo’ to execute system commands, specify the username that you wish to use here. The user must exist on all nodes. See Operating System User.
  • SSH Key Path
    • Specify the full path of the SSH key (the key must exist on the ClusterControl node) that will be used by the SSH User to perform passwordless SSH. See Passwordless SSH.
  • Sudo Password
    • If you use sudo with a password, specify it here. Ignore this if the SSH User is root or the sudo user does not need a sudo password.
  • SSH Port Number
    • Specify the SSH port of the target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
  • Cluster Name
    • Specify a name for the cluster.
  • Install Software
    • Check the box if you are using clean and minimal VMs. Existing MySQL dependencies will be removed. New packages will be installed and existing packages will be uninstalled when provisioning the node with the required software.
    • If unchecked, existing packages will not be uninstalled, and nothing will be installed. This requires that the instances already have the necessary software provisioned.
  • Disable Firewall
    • Check the box to disable the firewall (recommended).
  • Disable AppArmor/SELinux
    • Check the box to let ClusterControl disable AppArmor (Ubuntu) or SELinux (Red Hat/CentOS) if enabled (recommended).
5.1.1.1.2. 2) Define MySQL Servers
  • Vendor
    • Percona XtraDB - Percona Server by Percona
    • MariaDB - MariaDB Server by MariaDB
    • Oracle - MySQL Server by Oracle
  • Version
    • Select the MySQL version. For Oracle, only 5.7 is supported. For Percona, 5.6 and 5.7 are available. If you choose MariaDB, only 10.1 is supported.
  • Server Data Directory
    • Location of MySQL data directory. Default is /var/lib/mysql.
  • Server Port
    • MySQL server port. Default is 3306.
  • my.cnf Template
    • MySQL configuration template file under /usr/share/cmon/templates. Default is my.cnf.repl[version]. Keeping the default is recommended.
  • Root Password
    • Specify MySQL root password. ClusterControl will configure the same MySQL root password for all instances in the cluster.
  • Repository
    • Use Vendor Repositories - Provision software by setting up and using the database vendor’s preferred software repository. ClusterControl will always install the latest version provided by the vendor’s repository.
    • Do Not Setup Vendor Repositories - Provision software using repositories already set up on the nodes. The user has to set up the software repository manually on each database node, and ClusterControl will use this repository for deployment. This is useful if the database nodes are running without an internet connection.
    • Use Mirrored Repositories - Create and mirror the current database vendor’s repository, then deploy using the local mirrored repository. This is the preferred option when you need to scale the cluster in the future, as it ensures that newly provisioned nodes always run the same version as the rest of the members.
5.1.1.1.3. 3) Define Topology
  • Master A - IP/Hostname
    • Specify the IP address of the MySQL master node.
  • Add slaves to master A
    • Add a slave node connected to master A. Press Enter to add more slaves.
  • Add Second Master Node
    • Opens the add node wizard for a secondary master.
  • Master B - IP/Hostname
    • Only available if you click Add Second Master Node.
    • Specify the IP address of the other MySQL master node. ClusterControl will set up master-master replication between these nodes. Master B will be read-only once deployed (secondary master), letting Master A hold the write role (primary master) for the replication chain.
  • Add slaves to master B
    • Only available if you click Add Second Master Node.
    • Add a slave node connected to master B. Press Enter to add more slaves.
  • Deploy
    • Starts the MySQL Replication deployment.

5.1.1.2. MySQL Galera

Deploys a new MySQL Galera Cluster. The database cluster will be automatically added into ClusterControl once deployed. A minimal setup consists of one Galera node (no high availability, but it can later be scaled out with more nodes). However, a minimum of three nodes is recommended for high availability. Garbd (an arbitrator) can be added later, after the deployment completes, if needed.

5.1.1.2.1. 1) SSH Settings
  • SSH User
    • Specify root if you have root credentials.
    • If you use ‘sudo’ to execute system commands, specify the username that you wish to use here. The user must exist on all nodes. See Operating System User.
  • SSH Key Path
    • Specify the full path of the SSH key (the key must exist on the ClusterControl node) that will be used by the SSH User to perform passwordless SSH. See Passwordless SSH.
  • Sudo Password
    • If you use sudo with a password, specify it here. Ignore this if the SSH User is root or the sudo user does not need a sudo password.
  • SSH Port Number
    • Specify the SSH port of the target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
  • Cluster Name
    • Specify a name for the cluster.
  • Install Software
    • Check the box if you are using clean and minimal VMs. Existing MySQL dependencies will be removed. New packages will be installed and existing packages will be uninstalled when provisioning the node with the required software.
    • If unchecked, existing packages will not be uninstalled, and nothing will be installed. This requires that the instances already have the necessary software provisioned.
  • Disable Firewall
    • Check the box to disable the firewall (recommended).
  • Disable AppArmor/SELinux
    • Check the box to let ClusterControl disable AppArmor (Ubuntu) or SELinux (Red Hat/CentOS) if enabled (recommended).
5.1.1.2.2. 2) Define MySQL Servers
  • Vendor
    • Percona XtraDB - Percona XtraDB Cluster by Percona
    • MariaDB - MariaDB Galera Cluster by MariaDB
    • Codership - MySQL Galera Cluster by Codership
  • Version
    • Select the MySQL version. For Codership, 5.5 and 5.6 are available, while Percona supports 5.5, 5.6 and 5.7. If you choose MariaDB, 5.5 and 10.1 are available.
  • Server Data Directory
    • Location of MySQL data directory. Default is /var/lib/mysql.
  • Server Port
    • MySQL server port. Default is 3306.
  • my.cnf Template
    • MySQL configuration template file under /usr/share/cmon/templates. Default is my.cnf.galera. Keeping the default is recommended.
  • Root Password
    • Specify MySQL root password. ClusterControl will configure the same MySQL root password for all instances in the cluster.
  • Repository
    • Use Vendor Repositories - Provision software by setting up and using the database vendor’s preferred software repository. ClusterControl will always install the latest version provided by the vendor’s repository.
    • Do Not Setup Vendor Repositories - Provision software using repositories already set up on the nodes. The user has to set up the software repository manually on each database node, and ClusterControl will use this repository for deployment. This is useful if the database nodes are running without an internet connection.
    • Use Mirrored Repositories - Create and mirror the current database vendor’s repository, then deploy using the local mirrored repository. This is the preferred option when you need to scale the Galera Cluster in the future, as it ensures that newly provisioned nodes always run the same version as the rest of the members.
  • Add Nodes
    • Specify the IP address or hostname of the MySQL nodes. A minimum of three nodes is recommended.
  • Deploy
    • Starts the Galera Cluster deployment.

5.1.1.3. MySQL Group Replication

Deploys a new MySQL Group Replication cluster by Oracle. This is a beta feature introduced in version 1.4.0. The database cluster will be added into ClusterControl automatically once deployed. A minimum of three nodes is required.

5.1.1.3.1. 1) SSH Settings
  • SSH User
    • Specify root if you have root credentials.
    • If you use ‘sudo’ to execute system commands, specify the username that you wish to use here. The user must exist on all nodes. See Operating System User.
  • SSH Key Path
    • Specify the full path of the SSH key (the key must exist on the ClusterControl node) that will be used by the SSH User to perform passwordless SSH. See Passwordless SSH.
  • Sudo Password
    • If you use sudo with a password, specify it here. Ignore this if the SSH User is root or the sudo user does not need a sudo password.
  • SSH Port Number
    • Specify the SSH port of the target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
  • Cluster Name
    • Specify a name for the cluster.
  • Install Software
    • Check the box if you are using clean and minimal VMs. Existing MySQL dependencies will be removed. New packages will be installed and existing packages will be uninstalled when provisioning the node with the required software.
    • If unchecked, existing packages will not be uninstalled, and nothing will be installed. This requires that the instances already have the necessary software provisioned.
  • Disable Firewall
    • Check the box to disable the firewall (recommended).
  • Disable AppArmor/SELinux
    • Check the box to let ClusterControl disable AppArmor (Ubuntu) or SELinux (Red Hat/CentOS) if enabled (recommended).
5.1.1.3.2. 2) Define MySQL Servers
  • Vendor
    • Oracle - MySQL Group Replication by Oracle.
  • Version
    • Select the MySQL version. Group Replication is only available on MySQL 5.7.
  • Server Data Directory
    • Location of MySQL data directory. Default is /var/lib/mysql.
  • Server Port
    • MySQL server port. Default is 3306.
  • my.cnf Template
    • MySQL configuration template file under /usr/share/cmon/templates. Default is my.cnf.grouprepl. Keeping the default is recommended.
  • Root Password
    • Specify MySQL root password. ClusterControl will configure the same MySQL root password for all instances in the cluster.
  • Repository
    • Use Vendor Repositories - Provision software by setting up and using the database vendor’s preferred software repository. ClusterControl will always install the latest version provided by the vendor’s repository.
    • Do Not Setup Vendor Repositories - Provision software using repositories already set up on the nodes. The user has to set up the software repository manually on each database node, and ClusterControl will use this repository for deployment. This is useful if the database nodes are running without an internet connection.
    • Use Mirrored Repositories - Create and mirror the current database vendor’s repository, then deploy using the local mirrored repository. This is the preferred option when you need to scale the cluster in the future, as it ensures that newly provisioned nodes always run the same version as the rest of the members.
  • Add Nodes
    • Specify the IP address or hostname of the MySQL nodes. A minimum of three nodes is recommended.
  • Deploy
    • Starts the MySQL Group Replication deployment.

5.1.1.4. MySQL/NDB Cluster

Deploys a new MySQL (NDB) Cluster by Oracle. The cluster consists of management nodes, MySQL API nodes, and data nodes. The database cluster will be automatically added into ClusterControl once deployed. A minimum of four nodes (2 API/mgmd + 2 data nodes) is recommended.

5.1.1.4.1. 1) SSH Settings
  • SSH User
    • Specify root if you have root credentials.
    • If you use ‘sudo’ to execute system commands, specify the username that you wish to use here. The user must exist on all nodes. See Operating System User.
  • SSH Key Path
    • Specify the full path of the SSH key (the key must exist on the ClusterControl node) that will be used by the SSH User to perform passwordless SSH. See Passwordless SSH.
  • Sudo Password
    • If you use sudo with a password, specify it here. Ignore this if the SSH User is root or the sudo user does not need a sudo password.
  • SSH Port Number
    • Specify the SSH port of the target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
  • Cluster Name
    • Specify a name for the cluster.
  • Install Software
    • Check the box if you are using clean and minimal VMs. Existing MySQL dependencies will be removed. New packages will be installed and existing packages will be uninstalled when provisioning the node with the required software.
    • If unchecked, existing packages will not be uninstalled, and nothing will be installed. This requires that the instances already have the necessary software provisioned.
  • Disable Firewall
    • Check the box to disable the firewall (recommended).
  • Disable AppArmor/SELinux
    • Check the box to let ClusterControl disable AppArmor (Ubuntu) or SELinux (Red Hat/CentOS) if enabled (recommended).
5.1.1.4.2. 2) Define Management Servers
  • Server Port
    • MySQL Cluster management port. Default is 1186.
  • Server Data Directory
    • MySQL Cluster data directory for NDB. Default is /var/lib/mysql-cluster.
  • Management Server 1
    • Specify the IP address or hostname of the first management server.
  • Management Server 2
    • Specify the IP address or hostname of the second management server.
5.1.1.4.3. 3) Define Data Nodes
  • Server Port
    • MySQL Cluster data node port. Default is 2200.
  • Server Data Directory
    • MySQL Cluster data directory for NDB. Default is /var/lib/mysql-cluster.
  • Add Nodes
    • Specify the IP address or hostname of the MySQL Cluster data nodes. It is recommended to add data nodes in pairs. You can add up to 14 data nodes to your cluster.
5.1.1.4.4. 4) Define MySQL Servers
  • my.cnf Template
    • MySQL configuration template file under /usr/share/cmon/templates. The default is my.cnf.mysqlcluster. Keeping the default is recommended.
  • Server Port
    • MySQL server port. Default is 3306.
  • Server Data Directory
    • MySQL data directory. Default is /var/lib/mysql.
  • Root Password
    • Specify MySQL root password. ClusterControl will configure the same MySQL root password for all nodes in the cluster.
  • Add Nodes
    • Specify the IP address or hostname of the MySQL Cluster API node. You can use the same IP address as a management node to co-locate both roles on the same host.
  • Deploy
    • Starts the MySQL Cluster deployment.

5.1.1.5. MongoDB ReplicaSet

Deploys a new MongoDB Replica Set. The database cluster will be automatically added into ClusterControl once deployed. A minimum of three nodes (including a mongo arbiter) is recommended.

Warning

It is possible to deploy only two MongoDB nodes (without an arbiter), but this is highly discouraged. The caveat of this approach is that there is no automatic failover. If the primary node goes down, a manual failover is required to promote the other server to primary. Automatic failover works fine with three or more nodes.
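If you did deploy only two data-bearing nodes, adding an arbiter later restores the voting majority needed for automatic failover. A minimal sketch using the mongo shell from the command line, with placeholder hostnames:

```bash
# Sketch: register an arbiter with an existing replica set so elections
# can reach a majority. Hostnames and ports are placeholders.
add_arbiter() {
    # Run against the current primary; rs.addArb() adds the arbiter member.
    mongo --host "$1" --eval "rs.addArb('$2')"
}

# Example call (uncomment with real hosts):
# add_arbiter 192.168.1.11:27017 192.168.1.13:27017
```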

5.1.1.5.1. 1) SSH Settings
  • SSH User
    • Specify root if you have root credentials.
    • If you use ‘sudo’ to execute system commands, specify the username that you wish to use here. The user must exist on all nodes. See Operating System User.
  • SSH Key Path
    • Specify the full path of the SSH key (the key must exist on the ClusterControl node) that will be used by the SSH User to perform passwordless SSH. See Passwordless SSH.
  • Sudo Password
    • If you use sudo with a password, specify it here. Ignore this if the SSH User is root or the sudo user does not need a sudo password.
  • SSH Port Number
    • Specify the SSH port of the target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
  • Cluster Name
    • Specify a name for the cluster.
  • Install Software
    • Check the box if you are using clean and minimal VMs. Existing MongoDB dependencies will be removed. New packages will be installed and existing packages will be uninstalled when provisioning the node with the required software.
    • If unchecked, existing packages will not be uninstalled, and nothing will be installed. This requires that the instances already have the necessary software provisioned.
  • Disable Firewall
    • Check the box to disable the firewall (recommended).
  • Disable AppArmor/SELinux
    • Check the box to let ClusterControl disable AppArmor (Ubuntu) or SELinux (Red Hat/CentOS) if enabled (recommended).
5.1.1.5.2. 2) Define MongoDB Servers
  • Vendor
    • Percona - Percona Server for MongoDB by Percona.
    • MongoDB - MongoDB Server by MongoDB Inc.
  • Version
    • The supported version is 3.2.
  • Server Data Directory
    • Location of MongoDB data directory. Default is /var/lib/mongodb.
  • Admin User
    • MongoDB admin user. ClusterControl will create this user and enable authentication.
  • Admin Password
    • Password for MongoDB Admin User.
  • Server Port
    • MongoDB server port. Default is 27017.
  • mongodb.conf Template
    • MongoDB configuration template file under /usr/share/cmon/templates. Default is mongodb.conf.[vendor]. Keeping the default is recommended.
  • ReplicaSet Name
    • Specify the name of the replica set, similar to replSet option in MongoDB.
  • Repository
    • Use Vendor Repositories - Provision software by setting up and using the database vendor’s preferred software repository. ClusterControl will always install the latest version provided by the vendor’s repository.
    • Do Not Setup Vendor Repositories - Provision software using repositories already set up on the nodes. The user has to set up the software repository manually on each database node, and ClusterControl will use this repository for deployment. This is useful if the database nodes are running without an internet connection.
    • Use Mirrored Repositories - Create and mirror the current database vendor’s repository, then deploy using the local mirrored repository. This is the preferred option when you need to scale MongoDB in the future, as it ensures that newly provisioned nodes always run the same version as the rest of the members.
  • Add Nodes
    • Specify the IP address or hostname of the MongoDB nodes. A minimum of three nodes is required.
  • Deploy
    • Starts the MongoDB ReplicaSet deployment.

5.1.1.6. MongoDB Shards

Deploys a new MongoDB Sharded Cluster. The database cluster will be automatically added into ClusterControl once deployed. A minimum of three nodes (including a mongo arbiter) is recommended.

Warning

It is possible to deploy only two MongoDB nodes (without an arbiter), but this is highly discouraged. The caveat of this approach is that there is no automatic failover. If the primary node goes down, a manual failover is required to promote the other server to primary. Automatic failover works fine with three or more nodes.

5.1.1.6.1. 1) SSH Settings
  • SSH User
    • Specify root if you have root credentials.
    • If you use ‘sudo’ to execute system commands, specify the username that you wish to use here. The user must exist on all nodes. See Operating System User.
  • SSH Key Path
    • Specify the full path of the SSH key (the key must exist on the ClusterControl node) that will be used by the SSH User to perform passwordless SSH. See Passwordless SSH.
  • Sudo Password
    • If you use sudo with a password, specify it here. Ignore this if the SSH User is root or the sudo user does not need a sudo password.
  • SSH Port Number
    • Specify the SSH port of the target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
  • Cluster Name
    • Specify a name for the cluster.
  • Install Software
    • Check the box if you are using clean and minimal VMs. Existing MongoDB dependencies will be removed. New packages will be installed and existing packages will be uninstalled when provisioning the node with the required software.
    • If unchecked, existing packages will not be uninstalled, and nothing will be installed. This requires that the instances already have the necessary software provisioned.
  • Disable Firewall
    • Check the box to disable the firewall (recommended).
  • Disable AppArmor/SELinux
    • Check the box to let ClusterControl disable AppArmor (Ubuntu) or SELinux (Red Hat/CentOS) if enabled (recommended).
5.1.1.6.2. 2) Configuration Servers and Routers

Configuration Server

  • Server Port
    • MongoDB config server port. Default is 27019.
  • Add Configuration Servers
    • Specify the IP address or hostname of the MongoDB config servers. A minimum of one node is required; three nodes are recommended.

Routers/Mongos

  • Server Port
    • MongoDB mongos server port. Default is 27017.
  • Add More Routers
    • Specify the IP address or hostname of the MongoDB mongos.
5.1.1.6.3. 3) Define Shards
  • Replica Set Name
    • Specify a name for this replica set shard.
  • Server Port
    • MongoDB shard server port. Default is 27018.
  • Add Node
    • Specify the IP address or hostname of the MongoDB shard servers. A minimum of one node is required; three nodes are recommended.
  • Advanced Options
    • Click on this to open a set of advanced options for this particular node in this shard:
      • Add slave delay - Specify the amount of slave delay in milliseconds.
      • Act as an arbiter - Toggle to ‘Yes’ if the node is an arbiter node. Otherwise, choose ‘No’.
  • Add Another Shard - Create another shard. You can then specify the IP address or hostname of the MongoDB servers that fall under this shard.

5.1.1.6.4. 4) Database Settings
  • Vendor
    • Percona - Percona Server for MongoDB by Percona
    • MongoDB - MongoDB Server by MongoDB Inc
  • Version
    • The supported version is 3.2.
  • Server Data Directory
    • Location of MongoDB data directory. Default is /var/lib/mongodb.
  • Admin User
    • MongoDB admin user. ClusterControl will create this user and enable authentication.
  • Admin Password
    • Password for MongoDB Admin User.
  • Server Port
    • MongoDB server port. Default is 27017.
  • mongodb.conf Template
    • MongoDB configuration template file under /usr/share/cmon/templates. Default is mongodb.conf.[vendor]. Keeping the default is recommended.
  • Repository
    • Use Vendor Repositories - Provision software by setting up and using the database vendor’s preferred software repository. ClusterControl will always install the latest version provided by the vendor’s repository.
    • Do Not Setup Vendor Repositories - Provision software using repositories already set up on the nodes. The user has to set up the software repository manually on each database node, and ClusterControl will use this repository for deployment. This is useful if the database nodes are running without an internet connection.
    • Use Mirrored Repositories - Create and mirror the current database vendor’s repository, then deploy using the local mirrored repository. This is the preferred option when you need to scale MongoDB in the future, as it ensures that newly provisioned nodes always run the same version as the rest of the members.
  • Deploy
    • Starts the MongoDB Sharded Cluster deployment.

5.1.2. Import Existing Server/Database Cluster

Opens a single-page wizard to import the existing database setup into ClusterControl. The following database cluster types are supported:

  • MySQL Replication

  • MySQL Galera
    • MySQL Galera Cluster
    • Percona XtraDB Cluster
    • MariaDB Galera Cluster
  • MySQL Cluster (NDB)

  • MongoDB ReplicaSet

  • MongoDB Shards

  • PostgreSQL (single-instance)

There are some prerequisites that need to be fulfilled prior to adding the existing setup:

  • Verify that sudo is working properly if you are using a non-root user.
  • Passwordless SSH from the ClusterControl node to the database nodes must be configured correctly.
  • The target server/cluster must not be in a degraded state. For example, if you have a three-node Galera cluster, all nodes must be alive and synced.

For more details, refer to the Requirements section. Each time you add an existing cluster or server, ClusterControl will trigger a job under ClusterControl > Settings > Cluster Jobs. You can see the progress and status on this page. A window will also appear with messages showing the progress.
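For a Galera cluster, the degraded-state prerequisite can be checked per node before importing; every node should report Synced. A minimal sketch with placeholder host and credentials:

```bash
# Sketch: verify a Galera node is in the Synced state before import.
# Host and credentials are placeholders.
galera_state() {
    mysql -h "$1" -u root -p"${ROOT_PASSWORD}" -N \
        -e "SHOW STATUS LIKE 'wsrep_local_state_comment';"
}

# Example (uncomment with real hosts):
# for node in 192.168.1.11 192.168.1.12 192.168.1.13; do galera_state "$node"; done
```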

5.1.2.1. Add Existing MySQL Replication

ClusterControl is able to manage/monitor an existing set of MySQL servers (standalone or replication). Individual hosts specified in the same list will be added to the same server group in the UI. ClusterControl assumes that you are using the same MySQL root password for all instances specified in the group and it will attempt to determine the server role as well (master, slave, multi or standalone).
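As a rough illustration of the role detection (a simplification for this guide, not ClusterControl's actual implementation): a node returning rows from SHOW SLAVE STATUS is replicating as a slave, while a non-empty SHOW SLAVE HOSTS suggests registered slaves, i.e. a master. Hosts and credentials are placeholders:

```bash
# Sketch: roughly classify a MySQL node's replication role.
# Host and credentials are placeholders; this is a simplified heuristic.
detect_role() {
    local slave_rows master_rows
    slave_rows=$(mysql -h "$1" -u root -p"${ROOT_PASSWORD}" -N -e "SHOW SLAVE STATUS;" | wc -l)
    master_rows=$(mysql -h "$1" -u root -p"${ROOT_PASSWORD}" -N -e "SHOW SLAVE HOSTS;" | wc -l)
    if [ "$slave_rows" -gt 0 ] && [ "$master_rows" -gt 0 ]; then echo "multi"
    elif [ "$slave_rows" -gt 0 ]; then echo "slave"
    elif [ "$master_rows" -gt 0 ]; then echo "master"
    else echo "standalone"
    fi
}
```

Note that SHOW SLAVE HOSTS only lists slaves that registered themselves (report_host), so an empty result does not strictly prove a node is standalone.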

Choose MySQL Replication as the database type. Fill in all required information.

5.1.2.1.1. 1) SSH Settings
  • SSH User
    • Specify root if you have root credentials.
    • If you use ‘sudo’ to execute system commands, specify the username that you wish to use here. The user must exist on all nodes. See Operating System User.
  • SSH Key Path
    • Specify the full path of the SSH key (the key must exist on the ClusterControl node) that will be used by the SSH User to perform passwordless SSH. See Passwordless SSH.
  • Sudo Password
    • Specify the password if the SSH user that you specified under SSH User requires a sudo password to run super-privileged commands. Ignore this if the SSH User is root or has no sudo password.
  • SSH Port
    • Specify the SSH port of the target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
5.1.2.1.2. 2) Define MySQL Servers
  • Vendor
    • Percona for Percona Server
    • MariaDB for MariaDB Server
    • Oracle for MySQL Server
  • MySQL Version
    • Supported version:
      • Percona Server (5.5, 5.6, 5.7)
      • MariaDB Server (10.1)
      • MySQL Server (5.7)
  • Basedir
    • MySQL base directory. Default is /usr. ClusterControl assumes all MySQL nodes are using the same base directory.
  • Port
    • MySQL port on the target server/cluster. Default is 3306. ClusterControl assumes MySQL is running on the same port on all nodes.
  • User
    • MySQL user on the target server/cluster. This user must be able to perform GRANT statements. Using the MySQL ‘root’ user is recommended.
  • Root Password
    • Password for MySQL User. ClusterControl assumes that you are using the same MySQL root password for all instances specified in the group.
  • Add Nodes
    • Enter the IP address or hostname of the MySQL single instances that you want to group under this cluster.
  • Import
    • Click the button to start the import. ClusterControl will connect to the MySQL instances, import configurations and start managing them.

5.1.2.2. Import Existing MySQL Galera

Choose MySQL Galera Cluster as the database type. Fill in all required information.

5.1.2.2.1. 1) SSH Settings
  • SSH User
    • Specify root if you have root credentials.
    • If you use ‘sudo’ to execute system commands, specify the username that you wish to use here. The user must exist on all nodes. See Operating System User.
  • SSH Key Path
    • Specify the full path of the SSH key (the key must exist on the ClusterControl node) that will be used by the SSH User to perform passwordless SSH. See Passwordless SSH.
  • Sudo Password
    • Specify the password if the SSH user that you specified under SSH User requires a sudo password to run super-privileged commands. Ignore this if the SSH User is root or has no sudo password.
  • SSH Port
    • Specify the SSH port of the target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
5.1.2.2.2. 2) Define MySQL Servers
  • Vendor
    • Percona XtraDB - Percona XtraDB Cluster by Percona
    • MariaDB - MariaDB Galera Cluster by MariaDB
    • Codership - MySQL Galera Cluster by Codership
  • MySQL Version
    • Select MySQL version of the target cluster.
  • Basedir
    • MySQL base directory. Default is /usr. ClusterControl assumes MySQL uses the same base directory on all nodes.
  • Port
    • MySQL port on the target server/cluster. Default is 3306. ClusterControl assumes MySQL is running on the same port on all nodes.
  • User
    • MySQL user on the target server/cluster. This user must be able to perform GRANT statements. Using the MySQL ‘root’ user is recommended.
  • Password
    • Password for MySQL User. The password must be the same on all nodes that you want to add into ClusterControl.
  • Enable information_schema Queries
    • Use information_schema to query MySQL statistics. This is not recommended for clusters with more than 2000 tables/databases.
  • Enable Node AutoRecovery
    • ClusterControl will perform automatic recovery if it detects that any of the nodes in the cluster is down.
  • Enable Cluster AutoRecovery
    • ClusterControl will perform automatic recovery if it detects the cluster is down or degraded.
  • Hostname
    • Please note that you only need to specify ONE Galera node and ClusterControl will discover the rest based on wsrep_cluster_address.
  • Import
    • Click the button to start the import. ClusterControl will connect to the Galera node, discover the configuration for the rest of the nodes and start managing/monitoring the cluster.
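The discovery is driven by the node's own Galera peer list; you can inspect it beforehand. A minimal sketch with placeholder host and credentials:

```bash
# Sketch: show the peer list (wsrep_cluster_address) that ClusterControl
# uses to discover the remaining Galera members. Placeholders throughout.
cluster_address() {
    mysql -h "$1" -u root -p"${ROOT_PASSWORD}" -N \
        -e "SHOW VARIABLES LIKE 'wsrep_cluster_address';"
}

# Example (uncomment with a real host):
# cluster_address 192.168.1.11
```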

5.1.2.3. Import Existing MySQL Cluster

ClusterControl is able to manage and monitor an existing production MySQL Cluster (NDB) deployment. A minimum of two management nodes and two data nodes is required.

Choose MySQL/NDB Cluster as the database type. Fill in all required information.

5.1.2.3.1. 1) SSH Settings
  • SSH User
    • Specify root if you have root credentials.
    • If you use ‘sudo’ to execute system commands, specify the name that you wish to use here. The user must exists on all nodes. See Operating System User.
  • SSH Key Path
    • Specify the full path of SSH key (the key must exist in ClusterControl node) that will be used by SSH User to perform passwordless SSH. See Passwordless SSH.
  • Sudo Password
    • Specify the password if the SSH user that you specified under SSH User requires a sudo password to run super-privileged commands. Ignore this if the SSH User is root or has no sudo password.
  • SSH Port
    • Specify the SSH port for target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
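
A minimal sketch of preparing passwordless SSH from the ClusterControl node, assuming a hypothetical target host 192.168.1.101 and example key path (the remote steps are shown commented out, since they require the target node):

```shell
# Generate a key pair without a passphrase on the ClusterControl node
# (/tmp/cc_id_rsa is an example path; the usual default is ~/.ssh/id_rsa)
ssh-keygen -q -t rsa -N "" -f /tmp/cc_id_rsa

# Copy the public key to each database node, then verify passwordless SSH
# and passwordless sudo work (example host and user):
# ssh-copy-id -i /tmp/cc_id_rsa.pub root@192.168.1.101
# ssh -i /tmp/cc_id_rsa -p 22 root@192.168.1.101 "sudo -n true" && echo "OK"
```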
5.1.2.3.2. 2) Define Management Server
  • Management server 1
    • Specify the IP address or hostname of the first MySQL Cluster management node.
  • Management server 2
    • Specify the IP address or hostname of the second MySQL Cluster management node.
  • Server Port
    • MySQL Cluster management port. The default port is 1186.
5.1.2.3.3. 3) Define Data Nodes
  • Port
    • MySQL Cluster data node port. The default port is 2200.
  • Add Nodes
    • Specify the IP address or hostname of the MySQL Cluster data node.
5.1.2.3.4. 4) Define MySQL Servers
  • Root Password
    • MySQL root password.
  • Server Port
    • MySQL port. Defaults to 3306.
  • MySQL Installation Directory
    • MySQL server installation path where ClusterControl can find the mysql binaries.
  • Enable information_schema Queries
    • Use information_schema to query MySQL statistics. This is not recommended for clusters with more than 2000 tables or databases.
  • Enable Node AutoRecovery
    • ClusterControl will perform automatic recovery if it detects that any node in the cluster is down.
  • Enable Cluster AutoRecovery
    • ClusterControl will perform automatic recovery if it detects the cluster is down or degraded.
  • Add Nodes
    • Specify the IP address or hostname of the MySQL Cluster API/SQL node.
  • Import
    • Click the button to start the import. ClusterControl will connect to the MySQL Cluster nodes, discover the configuration for the rest of the nodes and start managing/monitoring the cluster.

5.1.2.4. Import Existing MongoDB ReplicaSet

ClusterControl is able to manage and monitor an existing MongoDB/Percona Server for MongoDB 3.x replica set.

5.1.2.4.1. 1) SSH Settings
  • SSH User
    • Specify root if you have root credentials.
    • If you use ‘sudo’ to execute system commands, specify the name that you wish to use here. The user must exist on all nodes. See Operating System User.
  • SSH Key Path
    • Specify the full path of SSH key (the key must exist in ClusterControl node) that will be used by SSH User to perform passwordless SSH. See Passwordless SSH.
  • Sudo Password
    • Specify the password if the SSH user that you specified under SSH User requires a sudo password to run super-privileged commands. Ignore this if the SSH User is root or has no sudo password.
  • SSH Port
    • Specify the SSH port for target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
5.1.2.4.2. 2) Define MongoDB Servers
  • Vendor
    • Percona - Percona Server for MongoDB by Percona (formerly Tokutek).
    • MongoDB - MongoDB Server by MongoDB, Inc. (formerly 10gen).
  • Version
    • The supported version is 3.2.
  • Server Port
    • MongoDB server port. Default is 27017.
  • Admin User
    • MongoDB admin user.
  • Admin Password
    • Password for MongoDB Admin User.
  • Hostname
    • Specify one IP address or hostname of the MongoDB replica set member. ClusterControl will automatically discover the rest.
  • Import
    • Click the button to start the import. ClusterControl will connect to the specified MongoDB node, discover the configuration for the rest of the nodes and start managing/monitoring the cluster.

5.1.2.5. MongoDB Shards

ClusterControl is able to manage and monitor an existing MongoDB/Percona Server for MongoDB 3.x sharded cluster setup.

5.1.2.5.1. 1) SSH Settings
  • SSH User
    • Specify root if you have root credentials.
    • If you use ‘sudo’ to execute system commands, specify the name that you wish to use here. The user must exist on all nodes. See Operating System User.
  • SSH Key Path
    • Specify the full path of SSH key (the key must exist in ClusterControl node) that will be used by SSH User to perform passwordless SSH. See Passwordless SSH.
  • Sudo Password
    • If you use sudo with password, specify it here. Ignore this if SSH User is root or sudoer does not need a sudo password.
  • SSH Port Number
    • Specify the SSH port for target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
5.1.2.5.2. 2) Set Router/Mongos

Configuration Server

  • Server Port
    • MongoDB mongos server port. Default is 27017.
  • Add More Routers
    • Specify the IP address or hostname of the MongoDB mongos.
5.1.2.5.3. 3) Database Settings
  • Vendor
    • Percona - Percona Server for MongoDB by Percona
    • MongoDB - MongoDB Server by MongoDB Inc
  • Version
    • The supported version is 3.2.
  • Admin User
    • MongoDB admin user.
  • Admin Password
    • Password for MongoDB Admin User.
  • Import
    • Click the button to start the import. ClusterControl will connect to the specified MongoDB node, discover the configuration for the rest of the nodes and start managing/monitoring the cluster.

5.1.2.6. Add existing PostgreSQL servers

ClusterControl is able to manage/monitor an existing set of PostgreSQL 9.x servers (standalone). Individual hosts specified in the same list will be added to the same server group in the UI. ClusterControl assumes that you are using the same postgres password for all instances specified in the group.

Choose Postgres Server as the database type. Fill in all required information.

5.1.2.6.1. 1) SSH Settings
  • SSH User
    • Specify root if you have root credentials.
    • If you use ‘sudo’ to execute system commands, specify the name that you wish to use here. The user must exist on all nodes. See Operating System User.
  • SSH Key Path
    • Specify the full path of SSH key (the key must exist in ClusterControl node) that will be used by SSH User to perform passwordless SSH. See Passwordless SSH.
  • Sudo Password
    • Specify the password if the SSH user that you specified under SSH User requires a sudo password to run super-privileged commands. Ignore this if the SSH User is root or has no sudo password.
  • SSH Port
    • Specify the SSH port for target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
5.1.2.6.2. 2) Define PostgreSQL Server
  • Server Port
    • PostgreSQL port on the target server/cluster. Defaults to 5432. ClusterControl assumes PostgreSQL is running on the same port on all nodes.
  • User
    • PostgreSQL user on the target server/cluster. It is recommended to use the PostgreSQL ‘postgres’ user.
  • Password
    • Password for the Postgres User. ClusterControl assumes that you are using the same postgres password for all instances specified in the group.
  • Basedir
    • PostgreSQL base directory. Default is /usr. ClusterControl assumes all PostgreSQL nodes are using the same base directory.
  • Add Nodes
    • Specify all PostgreSQL single instances that you want to group under this cluster.
  • Import
    • Click the button to start the import. ClusterControl will connect to the PostgreSQL instances, import configurations and start managing them.

5.1.3. Deploy Database Node

This page provides the ability to deploy a new single node of the following database type in your environment:

  • PostgreSQL

Once a single node is deployed, it can be managed from the ClusterControl interface. A single node can be scaled into a cluster with a single click of a button. You can scale PostgreSQL into master-slave replication at a later stage via Add Node.

5.1.3.1. PostgreSQL

Deploys a new PostgreSQL standalone server or replication cluster from ClusterControl. Start by creating a PostgreSQL master node under this tab. Only PostgreSQL 9.x is supported in this version.

5.1.3.1.1. 1) SSH Settings
  • SSH User
    • Specify root if you have root credentials.
    • If you use ‘sudo’ to execute system commands, specify the name that you wish to use here. The user must exist on all nodes. See Operating System User.
  • SSH Key Path
    • Specify the full path of SSH key (the key must exist in ClusterControl node) that will be used by SSH User to perform passwordless SSH. See Passwordless SSH.
  • Sudo Password
    • If you use sudo with password, specify it here. Ignore this if SSH User is root or sudoer does not need a sudo password.
  • SSH Port Number
    • Specify the SSH port for target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
  • Cluster Name
    • Specify a name for the database cluster.
  • Install Software
    • Check the box if you are using clean and minimal VMs. Existing MySQL dependencies will be removed; new packages will be installed and conflicting packages will be uninstalled when provisioning the node with the required software.
    • If unchecked, existing packages will not be uninstalled and nothing will be installed. This requires that the necessary software has already been provisioned on the instances.
5.1.3.1.2. 2) Define PostgreSQL Servers
  • Server Port
    • PostgreSQL server port. Default is 5432.
  • User
    • Specify the PostgreSQL root user, for example ‘postgres’.
  • Password
    • Specify the PostgreSQL root password.
  • Repository
    • Use Vendor Repositories - Provision software by setting up and using the database vendor’s preferred software repository. ClusterControl will always install the latest version of what is provided by database vendor repository.
    • Do Not Setup Vendor Repositories - Provision software using repositories already set up on the nodes. The user has to set up the software repository manually on each database node, and ClusterControl will use this repository for deployment. This is useful if the database nodes are running without internet connections.
    • Use Mirrored Repositories - Create and mirror the current database vendor’s repository, then deploy using the local mirrored repository. This is the preferred option when you have to scale PostgreSQL in the future, ensuring that a newly provisioned node always has the same version as the rest of the members.
  • Hostname
    • Specify the IP address or hostname of the PostgreSQL node.
  • Create
    • Starts the PostgreSQL deployment for single instance.

5.1.4. Database Cluster List

Each row represents the summarized status of a database cluster:

Field Description
Cluster Name The cluster name, configured under ClusterControl > Settings > General Settings > Cluster Settings > Cluster Name
ID The cluster identifier number
Version Database server major version
Database Vendor Database vendor icon
Cluster Type

The database cluster type:

  • MYSQL_SERVER - Standalone MySQL server
  • REPLICATION - MySQL replication
  • GALERA - MySQL Galera Cluster, Percona XtraDB Cluster, MariaDB Galera Cluster
  • GROUP REPLICATION - MySQL Group Replication
  • MYSQL CLUSTER - MySQL Cluster
  • MONGODB - MongoDB Replica Set, MongoDB Sharded Cluster, MongoDB Replicated Sharded Cluster
  • POSTGRESQL - Standalone or Replicated PostgreSQL server
Cluster Status

The cluster status:

  • ACTIVE - The cluster is up and running. All cluster nodes are running normally.
  • DEGRADED - The full set of nodes in the cluster is not available. One or more nodes are down or unreachable.
  • FAILURE - The cluster is down. Most likely all or most of the nodes are down or unreachable, causing the cluster to fail to operate as expected.
Auto Recovery

The auto recovery status of Galera Cluster:

  • Cluster - If set to ON, ClusterControl will perform automatic recovery if it detects a cluster failure.
  • Node - If set to ON, ClusterControl will perform automatic recovery if it detects a node failure.
Node Type and Status See table on node status indicators further below.

Node status indicator:

Indicator Description
Green (tick) OK: Indicates the node is working fine.
Yellow (exclamation) WARNING: Indicates the node is degraded and not fully performing as expected.
Red (wrench) MAINTENANCE: Indicates that maintenance mode is on for this node.
Dark red (cross) PROBLEMATIC: Indicates the node is down or unreachable.

5.2. Settings

The Admin page provides an interface to manage clusters, organizations, users, roles and authentication options inside ClusterControl.

../_images/admin_index2.png

5.2.1. Clusters

Manage database clusters inside ClusterControl.

  • Delete
    • Unregister selected database cluster from the ClusterControl UI. This action will NOT delete the actual database cluster.
  • Change Organization
    • Change organization for selected database cluster from the organization list created in Organizations/Users tab.

5.2.2. User Management

5.2.2.1. Organizations/Users

Manage organizations and users under ClusterControl. Note that only the first user created with ClusterControl is able to create organizations. You can have one or more organizations, and each organization consists of zero or more clusters and users. You can define many roles under ClusterControl, and each user must be assigned one role.

As a roundup, here is how the different entities relate to each other:

../_images/cc_erd.png

Note

ClusterControl creates ‘Admin’ organization by default.

5.2.2.1.1. Users

A user belongs to one organization and is assigned a role. Users created here will be able to log in and see specific cluster(s), depending on their organization and the clusters they have been assigned to.

Each role is defined with specific privileges under Access Control. ClusterControl default roles are Super Admin, Admin and User:

Role Description
Super Admin Able to see all clusters that are registered in the UI. The Super Admin can also create organizations and users. Only the Super Admin can transfer a cluster from one organization to another.
Admin Belongs to a specific organization, and is able to see all clusters registered in that organization.
User Belongs to a specific organization, and is only able to see the cluster(s) that he/she registered.

5.2.2.2. Access Control

ClusterControl uses Role-Based Access Control (RBAC) to restrict access to clusters and their respective deployment, management and monitoring features. This ensures that only authorized user requests are allowed. Access to functionality is fine-grained, allowing access to be defined by organization or user. ClusterControl uses a permissions framework to define how a user may interact with the management and monitoring functionality, after they have been authorized to do so.

You can create a custom role with its own set of access levels. Assign the role to specific user under Organizations/Users tab.

Note

The Super Admin role is not listed since it is a default role and has the highest level of privileges in ClusterControl.

5.2.2.2.1. Privileges
Privilege Description
Allow Allow access without modification. Similar to read-only mode.
Deny Deny access. The selected feature will not appear in the UI.
Manage Allow access with modification.
Modify Similar to Manage, for certain features that require modification.
5.2.2.2.2. Feature Description
Feature Description
Overview Overview tab - ClusterControl > Overview
Nodes Nodes tab - ClusterControl > Nodes
Configuration Management Configuration management page - ClusterControl > Manage > Configurations
Query Monitor Query Monitor tab - ClusterControl > Query Monitor
Performance Performance tab - ClusterControl > Performance
Backup Backup tab - ClusterControl > Backup
Manage Manage tab - ClusterControl > Manage
Alarms Alarms tab - ClusterControl > Alarms
Jobs Jobs tab - ClusterControl > Jobs
Settings Settings tab - ClusterControl > Settings
Add Existing Cluster Add Existing Cluster button and page - ClusterControl > Add Existing Server/Cluster
Create Cluster Create Database Cluster button and page - ClusterControl > Create Database Cluster
Add Load Balancer Add Load Balancer page - ClusterControl > Actions > Add Load Balancer and ClusterControl > Manage > Load Balancer
Clone Clone Cluster page (Galera) - ClusterControl > Actions > Clone Cluster
Access All Clusters Access all clusters registered under the same organization.
Cluster Registrations Cluster Registrations page - ClusterControl > Settings (top-menu) > Cluster Registrations
Service Providers Service Providers page - ClusterControl > Settings (top-menu) > Service Providers
Search Search button and page - ClusterControl > Search
Create Database Node Create Database Node button and page - ClusterControl > Create Database Node
Developer Studio Developer Studio page - ClusterControl > Manage > Developer Studio
MySQL User Management MySQL user management sections - ClusterControl > Settings (top-menu) > MySQL User Management and ClusterControl > Manage > Schema and Users
Operation Reports Operational reports page - ClusterControl > Settings (top-menu) > Operational Reports
Custom Advisors Custom Advisors page - ClusterControl > Manage > Custom Advisors
Manage SSL Key Management page - ClusterControl > Settings (top-menu) > Key Management

5.2.2.3. LDAP Access

ClusterControl supports Active Directory, FreeIPA and LDAP authentication. This allows users to log into ClusterControl using their corporate credentials instead of a separate password. LDAP groups can be mapped onto ClusterControl user groups to apply roles to the entire group. It supports the LDAPv3 protocol based on RFC 2307.

When authenticating, ClusterControl will first bind to the directory tree server (‘LDAP Host’) using the specified ‘Login DN’ user and password, then it will check if the username you entered exists in the form of uid, cn or sAMAccountName of the ‘User DN’. If it exists, it will then use the username to bind against the LDAP server to check whether it has the configured group as in ‘LDAP Group Name’ in ClusterControl. If it has, ClusterControl will then map the user to the appropriate ClusterControl role and grant access to the UI.

The following flowchart summarizes the workflow:

../_images/ipaad_flowchart.png

You can map the LDAP group to corresponding ClusterControl role created under Access Control tab. This would ensure that ClusterControl authorizes the logged-in user based on the role assigned.

Once the LDAP settings are verified, log into ClusterControl using the LDAP credentials (uid, cn or sAMAccountName with the respective password). The user will be authenticated and redirected to the ClusterControl dashboard page based on the assigned role. From this point, both ClusterControl and LDAP authentication will work.

5.2.2.3.1. Users and Groups

If LDAP authentication is enabled, you need to map ClusterControl roles with their respective LDAP groups. You can configure this by clicking on ‘+’ icon to add a group:

Field Description Example
Organization The organization that you want the LDAP group to be assigned to. Admin
LDAP Group Name The distinguished name of the LDAP group. cn=Database Administrator,ou=group
Role User role in ClusterControl. Please refer to Organization/User section. SuperAdmin
5.2.2.3.2. Settings
  • Enable LDAP Authentication
    • Choose whether to enable or disable LDAP authentication.
  • LDAP Host
    • The LDAP server hostname or IP address. To use LDAP over SSL/TLS, specify LDAP URI, ldaps://[hostname/IP address]
  • LDAP Port
    • Default is 389, or 636 for LDAP over SSL. Make sure to allow connections from the ClusterControl host for both TCP and UDP protocols.
  • Base DN
    • The root LDAP node under which all other nodes exist in the directory structure.
  • Login DN
    • The distinguished name used to bind to the LDAP server. This is often the administrator or manager user. It can also be a dedicated login with minimal access that is able to return the DN of the authenticating users. ClusterControl must do an LDAP search using this DN before any user can log in. This field is case-sensitive.
  • Password
    • The password for the binding user specified in ‘Login DN’.
  • User DN
    • The user’s relative distinguished name (RDN) used to bind to the LDAP server. For example, if the LDAP/AD user DN is CN=userA,OU=People,DC=ldap,DC=domain,DC=com, specify “OU=People,DC=ldap,DC=domain,DC=com”. This field is case-sensitive.
  • Group DN
    • The group’s relative distinguished name (RDN) used to bind to the LDAP server. For example, if the LDAP/AD group DN is CN=DBA,OU=Group,DC=ldap,DC=domain,DC=com, specify “OU=Group,DC=ldap,DC=domain,DC=com”. This field is case-sensitive.

Attention

ClusterControl does not support binding against a nested directory group. Ensure each LDAP user that authenticates to ClusterControl has a direct membership to the LDAP group.
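
The bind-then-search flow described above can be checked from the command line before saving the settings. The host, DNs and username below are hypothetical examples mirroring the LDAP Settings fields; the script prints the ldapsearch invocation you would run against your own directory:

```shell
# Example values mirroring the ClusterControl LDAP Settings fields
BASE_DN="DC=ldap,DC=domain,DC=com"        # Base DN
USER_DN="OU=People,${BASE_DN}"            # User DN (relative part plus Base DN)
LOGIN_DN="cn=admin,${BASE_DN}"            # Login DN used for the initial bind

# ClusterControl looks the user up by uid, cn or sAMAccountName
FILTER="(|(uid=userA)(cn=userA)(sAMAccountName=userA))"

# Print the ldapsearch command that verifies the Login DN can find the user
echo "ldapsearch -x -H ldap://ldap.example.com:389 -D \"${LOGIN_DN}\" -W -b \"${USER_DN}\" \"${FILTER}\" dn"
```

If the printed search (run against your directory) returns the user’s DN, the Login DN, User DN and base are consistent with what ClusterControl expects.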

5.2.2.3.3. FreeIPA

ClusterControl is able to bind to a FreeIPA server and perform lookups on compatible schema. Once the DN for that user is retrieved, it tries to bind using the full DN (in standard tree) with the entered password to verify the LDAP group of that user.

Thus, for FreeIPA, the user’s and group’s DNs should use the compat schema, with cn=compat replacing the default cn=accounts in the ClusterControl LDAP Settings (except for the ‘Login DN’), as shown in the following screenshot:

../_images/ipaad_set_ipa.png
5.2.2.3.4. Active Directory

Please make sure Active Directory runs with ‘Identity Management for UNIX’ enabled. You can enable this under Server Manager > Roles > Active Directory Domain Services > Add Role Services. Detailed instructions on how to do this are explained in this article.

Once enabled, ensure that each group you want to authenticate from ClusterControl has a Group ID, and each user you want to authenticate from ClusterControl has a UID and is assigned a GID.

Attention

For Active Directory, ensure you configure the exact distinguished name (with proper capitalization) since the LDAP interchange format (LDIF) fields are returned in capital letters.

For an example of how to set up OpenLDAP authentication with ClusterControl, please refer to this blog post.

For an example of integrating ClusterControl with FreeIPA and Windows Active Directory, please refer to this blog post.

5.2.3. MySQL User Management

Provides a global MySQL user management interface across all MySQL-based clusters. Users and privileges can be set directly on, and retrieved from, the clusters, so ClusterControl is always in sync with the managed MySQL databases. Users can be created across more than one cluster at once.

You can choose an individual node by clicking on the respective node, or all nodes in the cluster by clicking on the respective cluster in the side menu.

5.2.3.1. Active Accounts

Shows all accounts across clusters that are currently active or have connected since the last server restart.

5.2.3.2. Inactive Accounts

Shows all accounts across clusters that have not been used since the last server restart. The server must have been running for at least 8 hours before it is checked for inactive accounts.

You can drop particular accounts by selecting the corresponding checkboxes and clicking the ‘Drop User’ button.

5.2.3.3. Create Accounts

Creates a new MySQL user for the chosen MySQL node or cluster.

Field Description
Server Hostname of the user. Wildcard (%) is permitted.
Username Specify the username.
Password Specify the password for Username.
Verify Password Re-enter the same password for Username.
All Privileges Allow all privileges, similar to ‘ALL PRIVILEGES’ option.
Database Specify the database or table name. It can be in ‘*.*’, ‘db_name’, ‘db_name.*’ or ‘db_name.tbl_name’ format.

5.2.4. Email Notifications

Configures email notifications across clusters.

  • Save To
    • Save the settings to individual or all clusters.
  • Send digests at
    • Send a digested (summary) email at this time every day for the selected recipient.
  • Timezone
    • Timezone for the selected recipient.
  • Daily limit for non-digest email as
    • The maximum number of non-digest email notifications that should be sent per day for the selected recipient. Use -1 for unlimited.
  • Alarm/Event Category
    Event Description
    Network Network related messages, e.g. host unreachable, SSH issues.
    CmonDatabase Internal CMON database related messages.
    Mail Mail system related messages.
    Cluster Cluster related messages, e.g. cluster failed.
    ClusterConfiguration Cluster configuration messages, e.g. software configuration messages.
    ClusterRecovery Recovery messages, e.g. cluster or node recovery failures.
    Node Node related messages, e.g. node disconnected, missing GRANT, failed to start HAProxy, failed to start NDB cluster nodes.
    Host Host related messages, e.g. CPU/disk/RAM/swap alarms.
    DbHealth Database health related messages, e.g. memory usage of MySQL servers, connections.
    DbPerformance Alarms for long-running transactions and deadlocks.
    SoftwareInstallation Software installation related messages.
    Backup Messages about backups.
    Unknown Other uncategorized messages.
  • Select how you want alarms/events delivered
    Action Description
    Ignore Ignore the alarm when it is raised.
    Deliver Send a notification email immediately once an alarm is raised.
    Digest Send a summary of raised alarms every day at the time set under ‘Send digests at’.

5.2.5. Operational Report

Generates operational reports or creates schedules for them. The current default report shows the cluster’s health and performance at the time it was generated, compared to 1 day ago.

The report provides information about:
  • Cluster Information
    • Cluster
    • Nodes
    • Backup Summary
    • Top Queries Summary
  • Node Status Overview
    • CPU Usage
    • Data Throughput
    • Load Average
    • Free Disk Space
    • RAM Usage
    • Network Throughput
    • Server Load
    • Handler
  • Package Upgrade Report (lists available software and security package upgrades)

5.2.5.1. Operational Reports

Provides a list of generated operational reports. Clicking on any of them opens the operational report in a new tab.

  • Create
    • Create an operational report immediately.
    • Specify the cluster name and operational type. Optionally, you can click on ‘Add Email’ button to add recipients into the list.
  • Delete
    • Delete the selected operational report.
  • Refresh
    • Refresh the operational report list.

5.2.5.2. Schedules

List of scheduled operational reports.

  • Schedule
    • Schedule an operational report at an interval. You can schedule it daily, weekly and monthly. Optionally, you can click on ‘Add Email’ button to add recipients into the list.
  • Edit
    • Edit the selected schedule.
  • Delete
    • Delete the selected schedule.
  • Refresh
    • Refresh the schedule list.

5.2.6. Key Management

Key Management allows you to manage a set of SSL certificates and keys that can be provisioned on your clusters. This feature allows you to create Certificate Authority (CA) and/or self-signed certificates and keys. They can then easily be enabled and disabled for MySQL and PostgreSQL client-server connections using the SSL encryption feature. See Enable SSL Encryption for details.

5.2.6.1. Manage

Manage existing keys and certificates generated by ClusterControl.

  • Revoke
    • Revoke the selected certificate. This will put an end to the validity of the certificate.
  • Generate
    • Re-generate an invalid or expired certificate. ClusterControl will generate a new key and certificate using the same information that was used when it was first generated.
  • Move
    • Move the selected certificate to another location. Clicking on this will open another dialog box where you can create/delete a directory under /var/lib/cmon/ca. Use this feature to organize and categorize the generated certificate per directory.

5.2.6.2. Generate

By default, generated keys and certificates are created under the default repository at /var/lib/cmon/ca.

  • New Folder
    • Create a new directory under the default repository.
  • Delete Folder
    • Delete the selected directory.
  • Refresh
    • Refresh the list.
5.2.6.2.1. Self-signed Certificate Authority and Key

Generate a self-signed Certificate Authority and Key. You can use this Certificate Authority (CA) to sign your client and server certificates.

  • Path
    • Certification repository path. To change the path, click on the file browser left-side menu. Default value is /var/lib/cmon/ca.
  • Certificate Authority and Key Name
    • Enter a name without extension, for example MyCA or ca-cert.
  • Description
    • Provide a description for the certificate authority.
  • Country
    • Choose a country name from the dropdown menu.
  • State
    • State or province name.
  • Locality
    • City name.
  • Organization
    • Organization name.
  • Organization unit
    • Unit or department name.
  • Common name
    • Specify server fully-qualified domain name (FQDN) or your name.
    • The Common Name values used for the server and client certificates/keys must each differ from the Common Name value used for the CA certificate. Otherwise, the certificate and key files will not work for servers compiled using OpenSSL.
  • Email
    • Email address.
  • Key length (bits)
    • The key length in bits. 2048 and higher is recommended. The larger the public and private key length, the harder it is to crack.
  • Expiration Date (days)
    • Certificate expiration in days.
  • Generate
    • Generate certificate and key.
  • Reset
    • Reset the form.
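
For reference, generating a self-signed CA amounts to the following openssl steps. This is a sketch: the name MyCA, the subject fields and the /tmp paths are examples (ClusterControl stores its own output under /var/lib/cmon/ca):

```shell
# Generate a 2048-bit CA private key (example output path)
openssl genrsa -out /tmp/MyCA.key 2048

# Create a self-signed CA certificate valid for 365 days, with example subject fields
openssl req -x509 -new -key /tmp/MyCA.key -days 365 \
  -subj "/C=SE/ST=Stockholm/L=Stockholm/O=Example/OU=DBA/CN=MyCA" \
  -out /tmp/MyCA.crt

# Inspect the resulting certificate's subject and expiry
openssl x509 -in /tmp/MyCA.crt -noout -subject -enddate
```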
5.2.6.2.2. Client/Server Certificates and Key

Sign with an existing CA or generate a self-signed certificate. ClusterControl generates certificate and key depending on the type, server or client. The generated server’s key and certificate can then be used by Enable SSL Encryption feature.

  • Certificate Authority
    • Select an existing CA (by clicking on any existing CA on the left-hand side menu) or leave it empty to generate a self-signed certificate.
  • Type
    • server - Generate certificate for server usage.
    • client - Generate certificate for client usage.
  • Certificate and Key Name
    • Enter the certificate and key name. The same name will be used by ClusterControl for the generated certificate and key. For example, if you specify the name ‘severalnines’, ClusterControl will generate severalnines.key and severalnines.crt respectively.
  • Description
    • Provide a description for the certificate and key.
  • Country
    • Choose a country name from the dropdown menu.
  • State
    • State or province name.
  • Locality
    • City name.
  • Organization
    • Organization name.
  • Organization unit
    • Unit or department name.
  • Common name
    • Specify server fully-qualified domain name (FQDN) or your name.
    • The Common Name values used for the server and client certificates/keys must each differ from the Common Name value used for the CA certificate. Otherwise, the certificate and key files will not work for servers compiled using OpenSSL.
  • Email
    • Email address.
  • Key length (bits)
    • The key length in bits. 2048 and higher is recommended.
  • Expiration Date (days)
    • Certificate expiration in days.
  • Generate
    • Generate certificate and key.
  • Reset
    • Reset the form.
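
As an illustration, signing a server certificate with an existing CA corresponds to the openssl flow below. The names (severalnines, db1.example.com) and the /tmp paths are examples, and a throwaway CA is generated so the sketch is self-contained:

```shell
# A throwaway CA for this example (ClusterControl would use one from its repository)
openssl genrsa -out /tmp/ca.key 2048
openssl req -x509 -new -key /tmp/ca.key -days 365 -subj "/CN=ExampleCA" -out /tmp/ca.crt

# Generate the server key and a certificate signing request (CSR)
openssl genrsa -out /tmp/severalnines.key 2048
openssl req -new -key /tmp/severalnines.key -subj "/CN=db1.example.com" -out /tmp/severalnines.csr

# Sign the CSR with the CA; note the server CN differs from the CA's CN
openssl x509 -req -in /tmp/severalnines.csr -CA /tmp/ca.crt -CAkey /tmp/ca.key \
  -CAcreateserial -days 365 -out /tmp/severalnines.crt

# Verify the certificate chains back to the CA
openssl verify -CAfile /tmp/ca.crt /tmp/severalnines.crt
```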

5.2.6.3. Import

Import keys and certificates into ClusterControl’s certificate repository. The imported keys and certificates can then be used to enable SSL encryption for server-client connections or Galera replication at a later stage. Before you perform the import, bear in mind the following:

  1. Upload your certificate and key to a directory on the ClusterControl Controller host
  2. Uncheck the self-signed certificate checkbox if the certificate is not self-signed
  3. You need to also provide a CA certificate if the certificate is not self-signed
  4. Duplicate certificates will not be created
  • Destination Path - Where you want the certificate to be imported to. Click on the file explorer window on the left to change the path.
  • Save As - Certificate name.
  • Certificate File - Physical path to the certificate file. For example: /home/user/ssl/file.crt.
  • Private Key File - Physical path to the key file. For example: /home/user/ssl/file.key.
  • Self-signed certificate - Uncheck the self-signed certificate checkbox if the certificate is not self-signed.
  • Import - Start the import process.
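Before importing, it is worth verifying that the certificate and key actually belong together. A common pre-check is to compare their RSA moduli; the sketch below generates a throwaway sample pair for illustration (on a real host, point the commands at your own files instead):

```shell
# Generate a sample pair purely for illustration
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=example" \
  -keyout /tmp/file.key -out /tmp/file.crt -days 1 2>/dev/null

# The certificate and private key match when their moduli are identical
crt_mod=$(openssl x509 -noout -modulus -in /tmp/file.crt | openssl md5)
key_mod=$(openssl rsa  -noout -modulus -in /tmp/file.key | openssl md5)
[ "$crt_mod" = "$key_mod" ] && echo "certificate and key match"
```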

5.2.7. Notification Services

Provides an interface to manage notification services for Custom Advisors. At the moment, only external mail servers and PagerDuty are supported. The list of supported plugins can be found on the NPM page here.

This feature requires the ClusterControl NodeJS package to be installed. If you have not configured this, run the following command on the ClusterControl node:

/usr/local/clustercontrol-nodejs/bin/install-cc-nodejs.sh

The above command installs and configures all dependencies, including the NPM plugins. Note that your repository must provide the NPM package for the script to install all dependencies successfully. On RHEL 7, you might need to configure the EPEL repository first:

rpm -Uhv http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm

Please refresh the ClusterControl UI after the installation is completed.

5.2.7.1. Create Service Config

  • Name
    • The configuration name.
  • Organization
    • The organization that the service belongs to.
  • Plugin Type
    • Choose a plugin type. Details are explained in the next subsections.
  • Enabled
    • Check the box to make this service available for Custom Advisors. If unchecked, the service will not appear when setting up Custom Advisors.

5.2.7.1.1. Mail Notification Settings

Forwards the alarms raised by Custom Advisors to an email address. You can add multiple email addresses to this setting; once a defined Custom Advisor exceeds its threshold, all recipients will be notified.

5.2.7.1.2. PagerDuty Notification Settings

Forwards the alarms raised by Custom Advisors to PagerDuty notification service. ClusterControl connects through PagerDuty API via NodeJS.

  • PagerDuty Token
    • Log into your PagerDuty account and create a service for ClusterControl Custom Advisors. Go to PagerDuty > Services > Add New Service. Specify a service name and choose “Use our API directly” as the “Integration Type”. Note the “Service API Key” from the Service summary page and specify it in this field.
  • PagerDuty Domain
    • PagerDuty domain name. For example, severalnines.pagerduty.com.
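Under the hood, the plugin posts events to PagerDuty’s integration API. As an illustration only (not ClusterControl’s exact payload), a trigger event sent to PagerDuty’s generic events endpoint looks roughly like the following JSON, where service_key is the Service API Key noted above and the other values are hypothetical:

```json
{
  "service_key": "YOUR_SERVICE_API_KEY",
  "event_type": "trigger",
  "description": "Custom Advisor threshold exceeded on db1",
  "details": {
    "cluster": "galera-prod",
    "advisor": "connections_used"
  }
}
```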

5.2.8. Repositories

Manages the provider’s repository for database servers and clusters. You can choose from three repository options when deploying a database server/cluster using ClusterControl:

  1. Use Vendor Repositories
    • Provision software by setting up and using the database vendor’s preferred software repository. ClusterControl will always install the latest version of what is provided by database vendor repository.
  2. Do Not Setup Vendor Repositories
    • Provision software by using the pre-existing software repository already set up on the nodes. You have to set up the software repository manually on each database node, and ClusterControl will use it for deployment. This is useful if the database nodes are running without an internet connection.
  3. Use Mirrored Repositories (Create new repository)
    • Create and mirror the current database vendor’s repository and then deploy using the local mirrored repository.
    • This allows you to “freeze” the current versions of the software packages used to provision a database cluster for a specific vendor and you can later use that mirrored repository to provision the same set of versions when adding more nodes or deploying other clusters.
    • ClusterControl sets up the mirrored repository under [Apache Document root]/cmon-repos/, which is accessible via HTTP at http://[ClusterControl IP address]/cmon-repos/ .

Only Local Mirrored Repository will be listed and manageable here.

  • Remove Repositories
    • Remove the selected repository.
  • Filter by cluster type
    • Filter the repository list by cluster type.

For reference, the following is an example yum definition when a Local Mirrored Repository is configured on the database nodes:

$ cat /etc/yum.repos.d/clustercontrol-percona-5.6-yum-el7.repo
[percona-5.6-yum-el7]
name = percona-5.6-yum-el7
baseurl = http://10.0.0.10/cmon-repos/percona-5.6-yum-el7
enabled = 1
gpgcheck = 0
gpgkey = http://10.0.0.10/cmon-repos/percona-5.6-yum-el7/localrepo-gpg-pubkey.asc

5.2.9. Cluster Registrations

Enables the user to register a database cluster managed by ClusterControl from the ClusterControl UI. For each cluster, you need to provide the ClusterControl API URL and token, which establishes communication between the UI and the controller. The ClusterControl UI can connect to multiple CMON Controller servers (via the CMON REST API) and provide a centralized view of all databases.

Note

The CMONAPI token is critical and hidden under asterisk values. This token provides authentication access for ClusterControl UI to communicate with the CMON backend services directly. Please keep this token in a safe place.

You can retrieve the CMONAPI token manually from [wwwroot]/cmonapi/config/bootstrap.php, on the line containing the CMON_TOKEN value, where [wwwroot] is the location of the Apache document root.
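As a sketch, the token value can be extracted from that line with sed. The /tmp sample file below stands in for the real bootstrap.php, and the token string is made up; on a real host, point sed at [wwwroot]/cmonapi/config/bootstrap.php instead:

```shell
# Create a sample bootstrap.php fragment purely for illustration;
# on a real host, read [wwwroot]/cmonapi/config/bootstrap.php instead.
cat > /tmp/bootstrap_sample.php <<'EOF'
<?php
define('CMON_TOKEN', 'abc123def456');
EOF

# Extract the value on the line containing CMON_TOKEN
sed -n "s/.*CMON_TOKEN', '\([^']*\)'.*/\1/p" /tmp/bootstrap_sample.php
# → abc123def456
```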

5.2.10. Service Providers

Manages resources and credentials for service providers.

5.2.10.1. AWS Credentials

Manage your AWS credentials under this tab. Fully working AWS credentials require more than just a keypair. ClusterControl uses the stored AWS credentials to list your available Amazon instances, spin up new instances when deploying a cluster, upload backups to S3 or Glacier, and so on.

Field Description
Keypair Name Keypair name.
Access Key ID Your AWS Access Key ID as described on this page. You can get this from AWS IAM Management console.
Secret Access Key Your AWS Secret Access Key as described on this page. You can get this from AWS IAM Management console.
Private Key File Upload the private keypair file.
Comment (Optional) Description of the keypair.

To edit, double click on an item from the list. To remove the credential, choose an item and click on the ‘-’ icon.

Note

The saved key name must match the AWS keypair name in order to deploy on AWS. For example, if the keypair file is ‘severalnines.pem’, use ‘severalnines’ as the keypair name.

5.2.10.1.1. Adding your AWS Credentials to ClusterControl

From AWS IAM Management Console, click Create User button. Enter the user name:

../_images/cc_aws_cre1.png

It will prompt for Security Credentials. Copy the Access Key ID and Secret Access Key; ClusterControl’s AWS Credentials require these values. You can also download them by clicking the Download Credentials button.

../_images/cc_aws_cre2.png

Next, select the created user and go to Permissions > Attach User Policy.

../_images/cc_aws_cre3.png

Choose Power User Access and click Apply Policy.

../_images/cc_aws_cre4.png

You should see the policy has been assigned correctly under Permissions tab:

../_images/cc_aws_cre5.png
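For reference, the Power User Access managed policy of that IAM console generation was essentially “allow everything except IAM”. Its JSON looked roughly like the following; this is shown for illustration only, and the exact policy AWS ships may differ:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "NotAction": "iam:*",
      "Resource": "*"
    }
  ]
}
```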

Go back to AWS EC2 Dashboard and create a Key Pair by clicking Create Key Pair button:

../_images/cc_aws_cre6.png

Enter the key pair name and click Create. The keypair file will be downloaded automatically:

../_images/cc_aws_cre7.png

Specify all of the above information in the AWS Credentials window in ClusterControl. Make sure you use the same key pair name as created in the previous step, and upload the key file using the Browse button. The Comment field is optional.

../_images/cc_aws_cre8.png

5.2.10.2. AWS Instances

Lists your AWS instances. You can perform simple AWS instance management directly from ClusterControl, which uses your defined AWS credentials to connect to AWS API.

Field Description
KeyPair Choose which keypair to use to access your AWS resources.
Stop Shut down the instance.
Reboot Restart the instance.
Terminate Shut down and terminate the instance.

5.2.10.3. AWS VPC

This allows you to conveniently manage your VPC from ClusterControl, which uses your defined AWS credentials to connect to AWS VPC. Most of the functionalities are integrated and have the same look and feel as the AWS VPC console. Thus, you may refer to VPC User Guide for details on how to manage AWS VPC.

Field Description
Start VPC Wizard Open the VPC creation wizard. Please refer to Getting Started Guide for details on how to start creating a VPC.
KeyPair Choose which keypair to use to access your AWS resources.
Region Choose the AWS region for the VPC.
VPC

List of VPCs created under the selected region.

  • Create VPC - Create a new VPC.
  • Delete - Delete selected VPC.
  • DHCP Options Set - Specify the DHCP options for your VPC.
Subnet

List of VPC subnets created under the selected region.

  • Create - Create a new VPC subnet.
  • Delete - Delete selected subnet.
Route Tables List of routing tables created under the selected region.
Internet Gateway List of Internet gateways created under the selected region.
Network ACL List of network Access Control Lists created under the selected region.
Security Group List of security groups created under the selected region.
Running Instances List of all running instances under the selected region.

5.2.10.4. On-Premise Credentials

When deploying on-premises, ClusterControl uses your credentials to provision the necessary resources for the database nodes. The following options are available when you click the ‘+’ button:

Field Description
Keypair Name Key file name.
Private key File Upload the private key pair file.
Comment (Optional) Description of the key pair.
Cluster Name Assign this key to a specific cluster.

Note

Leave the cluster unspecified for keys that you will use for new installations.

5.2.11. Subscriptions

For users with a valid subscription (Standalone, Advanced, Enterprise), enter your license information here to unlock additional features based on the subscription.

This functionality is also accessible per individual cluster under Settings > Subscription. The following screenshot shows an example of filling in the license information:

../_images/subscription.png

Attention

Make sure you copy the subscription information exactly as provided, with no leading or trailing spaces.

The license key is validated during runtime. Reload your web browser after registering the license.

Note

When the license expires, ClusterControl defaults back to the Community Edition. For features comparison, please refer to ClusterControl product page.

5.3. User Guide for MySQL

This user guide covers ClusterControl with MySQL-based clusters, namely:

  • Galera Cluster for MySQL
    • MySQL Galera Cluster
    • Percona XtraDB Cluster
    • MariaDB Galera Cluster
  • MySQL Cluster

  • MySQL replication

  • MySQL single instance

Contents: