ClusterControl Documentation

5. User Guide

This documentation provides a detailed user guide for ClusterControl UI.

5.1. Dashboard

This page is the landing page once you are logged in. It provides a summary of database clusters monitored by ClusterControl.

[Image: database cluster list]

Top Menu

ClusterControl’s top menu.

Sidebar

Left-side navigation provides quick access to the ClusterControl administration menu. See Sidebar.

Cluster List

List of database clusters managed by ClusterControl with summarized status. Database clusters deployed by (or imported into) ClusterControl are listed on this page. See Database Cluster List.

Cluster Actions

Provides shortcuts to the main cluster functionality. Every supported database cluster has its own set of menus.

5.1.1. Activity

Clicking on it expands the activity tab, which consists of Alarms, Jobs and Logs. Click once more to collapse the content. If you hover over the menu icon, you will see a counter summary for every component.

5.1.1.1. Alarms

Shows an aggregated view of alarms raised for all clusters monitored by ClusterControl. Each alarm entry has a header, details on the issue, severity level, category, corresponding cluster name, corresponding host and timestamp. All the alarms listed here are also accessible directly under the individual cluster's main menu at Alarms > Alarms.

Click on the alarm entry itself to see the full details and recommendation. Furthermore, you can click on “Full Alarm Details” to see the full information and recommendation, and to send the alarm as an email to the recipients configured under Settings > CMON Settings > Email Notification Settings. Click “Ignore Alarm” to silence the respective alarm so it no longer appears in the list.

5.1.1.2. Jobs

Shows an aggregated view of jobs that have been initiated and performed by ClusterControl across clusters (e.g., deploying a new cluster, adding an existing cluster, cloning, creating a backup, etc.). Each job entry has a job status, a cluster name, the user who started the job, and the timestamp. All the jobs listed here are also accessible directly under the individual cluster's main menu at Logs > Jobs.

Click on the job entry itself to see its most recent job messages. Furthermore, you can click on Full Job Details to see the full job specification and messages. In the Full Job Details popup, you can view the full command sent to the controller service for that particular job by clicking the Expand Job Specs button. Underneath it are the full job messages returned by the controller service, in descending order (newest first). The Copy to clipboard button copies the content of the job messages to the clipboard.

Note

Starting from v1.6, ClusterControl has better support for parallelism, allowing you to perform multiple deployments simultaneously.

The job statuses:

Job Status   Description
FINISHED     The job has been executed successfully.
FAILED       The job was executed but failed.
RUNNING      The job has started and is in progress.
ABORTED      The job was started but was terminated.
DEFINED      The job has been defined but has not yet started.

5.1.1.3. Logs

Shows an aggregated view of ClusterControl logs that require the user's attention across clusters (logs with severity WARNING and ERROR). Each log entry has a message subject, severity level, component, the corresponding cluster name and the timestamp. All the logs listed here are also accessible directly under the individual cluster at Logs > CMON Logs.

5.1.2. Global Settings

Provides an interface to register clusters, repositories and subscriptions inside ClusterControl.

5.1.2.1. Repositories

Manages the providers' repositories for database servers and clusters. You can choose from three types of repository when deploying a database server/cluster using ClusterControl:

  1. Use Vendor Repositories
    • Provision software by setting up and using the database vendor’s preferred software repository. ClusterControl will always install the latest version of what the vendor repository provides.
  2. Do Not Setup Vendor Repositories
    • Provision software by using the pre-existing software repository already set up on the nodes. The user has to set up the software repository manually on each database node, and ClusterControl will use this repository for deployment. This is useful if the database nodes have no internet connection.
  3. Use Mirrored Repositories (Create new repository)
    • Create and mirror the current database vendor’s repository, then deploy using the local mirrored repository.
    • This allows you to “freeze” the current versions of the software packages used to provision a database cluster for a specific vendor, so you can later use the mirrored repository to provision the same set of versions when adding more nodes or deploying other clusters.
    • ClusterControl sets up the mirrored repository under {wwwroot}/cmon-repos/, which is accessible via HTTP at http://ClusterControl_host/cmon-repos/.

Only local mirrored repositories will be listed and manageable here.

  • Remove Repositories
    • Remove the selected repository.
  • Filter by cluster type
    • Filter the repository list by cluster type.

For reference, the following is an example yum definition when a local mirrored repository is configured on the database nodes:

$ cat /etc/yum.repos.d/clustercontrol-percona-5.6-yum-el7.repo
[percona-5.6-yum-el7]
name = percona-5.6-yum-el7
baseurl = http://10.0.0.10/cmon-repos/percona-5.6-yum-el7
enabled = 1
gpgcheck = 0
gpgkey = http://10.0.0.10/cmon-repos/percona-5.6-yum-el7/localrepo-gpg-pubkey.asc

5.1.2.2. Cluster Registrations

From a ClusterControl UI instance, this enables the user to register a database cluster managed by ClusterControl. For each cluster, you need to provide a ClusterControl API URL and token, which effectively establishes the communication between the UI and the controller. The ClusterControl UI can connect to multiple CMON Controller servers (via the CMON REST API) and provide a centralized view of all databases.

Note

The CMONAPI token is critical and is masked with asterisks. This token provides authentication access for the ClusterControl UI to communicate with the CMON backend services directly. Please keep this token in a safe place.

You can retrieve the CMONAPI token manually from {wwwroot}/cmonapi/config/bootstrap.php on the line containing the CMON_TOKEN value, where {wwwroot} is the location of the Apache document root.
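For example, assuming the default Apache document root on a RHEL-based host (the exact line format may vary between versions, and the token shown is a placeholder):

$ grep CMON_TOKEN /var/www/html/cmonapi/config/bootstrap.php
define('CMON_TOKEN', 'b7e33ab7611d2b7c2f2a69a18d67a9c2c2f2a69a');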

5.1.2.3. Subscriptions

For users with a valid subscription (Standalone, Pro, Advanced, Enterprise), enter your license information here to unlock additional features based on the subscription.

The following screenshot shows an example of filling in the license information:

[Image: subscription license form]

Attention

Make sure to copy the subscription information exactly as it is, with no leading or trailing spaces.

The license key is validated during runtime. Reload your web browser after registering a new license.

Note

When the license expires, ClusterControl defaults back to the Community Edition. For a feature comparison, please refer to the ClusterControl product page.

5.1.3. Database Cluster List

Each row represents the summarized status of a database cluster:

Field Description
Cluster Name The cluster name, configured under ClusterControl > Settings > General Settings > Cluster Settings > Cluster Name
ID The cluster identifier number
Version Database server major version
Database Vendor Database vendor icon
Cluster Type

The database cluster type:

  • MYSQL_SERVER - Standalone MySQL server
  • REPLICATION - MySQL Replication
  • GALERA - MySQL Galera Cluster, Percona XtraDB Cluster, MariaDB Galera Cluster
  • GROUP REPLICATION - MySQL Group Replication
  • MYSQL CLUSTER - MySQL Cluster (NDB)
  • MONGODB - MongoDB ReplicaSet, MongoDB Sharded Cluster, MongoDB Replicated Sharded Cluster
  • POSTGRESQL - PostgreSQL Standalone or Replication
Cluster Status

The cluster status:

  • ACTIVE - The cluster is up and running. All cluster nodes are running normally.
  • DEGRADED - The full set of nodes in the cluster is not available. One or more nodes are down or unreachable.
  • FAILURE - The cluster is down. Most likely all or most of the nodes are down or unreachable, causing the cluster to fail to operate as expected.
Auto Recovery

The auto recovery status of Galera Cluster:

  • Cluster - If set to ON, ClusterControl will perform automatic recovery if it detects a cluster failure.
  • Node - If set to ON, ClusterControl will perform automatic recovery if it detects a node failure.
Node Type and Status See the table of node status indicators further down.

Node status indicator:

Indicator Description
Green (tick) OK: Indicates the node is working fine.
Yellow (exclamation) WARNING: Indicates the node is degraded and not fully performing as expected.
Red (wrench) MAINTENANCE: Indicates that maintenance mode is on for this node.
Dark red (cross) PROBLEMATIC: Indicates the node is down or unreachable.

5.1.4. Deploy Database Cluster

Opens a step-by-step modal dialog to deploy a new database cluster. The following database cluster types are supported:

  • MySQL Replication

  • MySQL Galera
    • MySQL Galera Cluster
    • Percona XtraDB Cluster
    • MariaDB Galera Cluster
  • MySQL Group Replication (beta)

  • MySQL Cluster (NDB)

  • MongoDB ReplicaSet

  • MongoDB Sharded Cluster

  • PostgreSQL Replication

There are prerequisites that need to be fulfilled prior to the deployment:

  • Verify that sudo is working properly if you are using a non-root user. See Operating System User.
  • Passwordless SSH is configured from the ClusterControl node to all database nodes. See Passwordless SSH; a setup sketch follows below.
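As a minimal sketch, passwordless SSH can be set up from the ClusterControl node as root like this (key path and IP address are examples):

$ ssh-keygen -t rsa  # press 'Enter' on all prompts
$ ssh-copy-id -i ~/.ssh/id_rsa root@10.0.0.11  # repeat for every database node
$ ssh root@10.0.0.11 "sudo whoami"  # should return without prompting for a password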

ClusterControl will trigger a deployment job and the progress can be monitored under ClusterControl > Activity > Jobs.

5.1.4.1. MySQL Replication

Deploys a new MySQL Replication setup. The database cluster will be automatically added into ClusterControl once deployed. A minimum of two nodes is required. If only one MySQL IP address or hostname is defined, ClusterControl will deploy it as a standalone MySQL server.

By default, ClusterControl deploys MySQL replication with the following configurations:

  • GTID enabled (MySQL and Percona only).
  • Start all database nodes with read_only=ON. The chosen master will be promoted by disabling read-only dynamically.
  • PERFORMANCE_SCHEMA is disabled.
  • Generated account credentials are stored inside /etc/mysql/secrets-backup.cnf.

If you would like to customize the above configurations, modify the base template file to suit your needs before proceeding with the deployment. See Base Template Files for details.

Starting from version 1.4.0, it’s possible to set up master-master replication from scratch under the ‘Define Topology’ tab. You can add more slaves later, after the deployment completes.

Caution

ClusterControl sets read_only=1 on all slaves, but a user with the SUPER privilege can still write to a slave (except on MySQL versions that support super_read_only). See the verification example below.
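You can verify the effective settings on a slave with a quick check (credentials are placeholders):

$ mysql -uroot -p -e "SHOW GLOBAL VARIABLES LIKE '%read_only%'"

On slaves, read_only should report ON; super_read_only appears only on versions that support it.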

5.1.4.1.1. 1) General & SSH Settings
  • SSH User
    • Specify root if you have root credentials.
    • If you use ‘sudo’ to execute system commands, specify the user name that you wish to use here. The user must exist on all nodes. See Operating System User.
  • SSH Key Path
    • Specify the full path of the SSH key (the key must exist on the ClusterControl node) that will be used by the SSH User to perform passwordless SSH. See Passwordless SSH.
  • Sudo Password
    • If you use sudo with a password, specify it here. Ignore this if the SSH User is root or the sudo user does not require a password.
  • SSH Port
    • Specify the SSH port for the target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
  • Cluster Name
    • Specify a name for the cluster.
  • Install Software
    • Check the box if you use clean and minimal VMs. Existing MySQL dependencies will be removed, and the required packages will be installed when provisioning the node.
    • If unchecked, existing packages will not be uninstalled and nothing new will be installed. This requires that the necessary software has already been provisioned on the instances.
  • Disable Firewall
    • Check the box to disable the firewall (recommended).
  • Disable AppArmor/SELinux
    • Check the box to let ClusterControl disable AppArmor (Ubuntu) or SELinux (RedHat/CentOS) if enabled (recommended).
5.1.4.1.2. 2) Define MySQL Servers
  • Vendor
    • Percona - Percona Server by Percona
    • MariaDB - MariaDB Server by MariaDB
    • Oracle - MySQL Server by Oracle
  • Version
    • Select the MySQL version for the new deployment. For Oracle, only 5.7 is supported; for Percona, 5.6 and 5.7; for MariaDB, 10.1 and 10.2.
  • Server Data Directory
    • Location of MySQL data directory. Default is /var/lib/mysql.
  • Server Port
    • MySQL server port. Default is 3306.
  • my.cnf Template
    • MySQL configuration template file under /usr/share/cmon/templates. Default is my.cnf.{version}. Keeping the default is recommended.
  • Admin/Root Password
    • Specify the MySQL root password. ClusterControl will configure the same MySQL root password for all instances in the cluster.
  • Repository
    • Use Vendor Repositories - Provision software by setting up and using the database vendor’s preferred software repository. ClusterControl will always install the latest version of what the vendor repository provides.
    • Do Not Setup Vendor Repositories - Provision software by using repositories already set up on the nodes. The user has to set up the software repository manually on each database node, and ClusterControl will use this repository for deployment. This is useful if the database nodes have no internet connection.
    • Use Mirrored Repositories - Create and mirror the current database vendor’s repository, then deploy using the local mirrored repository. This is the preferred option when you have to scale the cluster in the future, to ensure the newly provisioned nodes will always have the same version as the rest of the members.
5.1.4.1.3. 3) Define Topology
  • Master A - IP/Hostname
    • Specify the IP address of the primary MySQL master node.
  • Add slaves to master A
    • Add a slave node connected to master A. Press enter to add more slaves.
  • Add Second Master Node
    • Opens the add node wizard for secondary MySQL master node.
  • Master B - IP/Hostname
    • Only available if you click Add Second Master Node.
    • Specify the IP address of the other MySQL master node. ClusterControl will set up master-master replication between these nodes. Master B will be read-only once deployed (secondary master), letting Master A hold the write role (primary master) for the replication chain.
  • Add slaves to master B
    • Only available if you click Add Second Master Node.
    • Add a slave node connected to master B. Press ‘Enter’ to add more slaves.
  • Deploy
    • Starts the MySQL Replication deployment.

5.1.4.2. MySQL Galera

Deploys a new MySQL Galera Cluster. The database cluster will be automatically added into ClusterControl once deployed. A minimal setup comprises one Galera node (no high availability, but it can later be scaled out with more nodes). However, a minimum of three nodes is recommended for high availability. Garbd (an arbitrator) can be added later, after the deployment completes, if needed.

By default, ClusterControl deploys MySQL Galera with the following configurations:

  • Use xtrabackup-v2 or mariabackup for wsrep_sst_method (see the verification example after this list).
  • PERFORMANCE_SCHEMA is disabled.
  • Generated account credentials are stored inside /etc/mysql/secrets-backup.cnf.
  • Binary logging is disabled.
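Once the deployment completes, a quick way to verify the cluster size and the SST method in use (credentials are placeholders):

$ mysql -uroot -p -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size'"
$ mysql -uroot -p -e "SHOW GLOBAL VARIABLES LIKE 'wsrep_sst_method'"

wsrep_cluster_size should match the number of nodes you deployed.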
5.1.4.2.1. 1) General & SSH Settings
  • SSH User
    • Specify root if you have root credentials.
    • If you use ‘sudo’ to execute system commands, specify the user name that you wish to use here. The user must exist on all nodes. See Operating System User.
  • SSH Key Path
    • Specify the full path of the SSH key (the key must exist on the ClusterControl node) that will be used by the SSH User to perform passwordless SSH. See Passwordless SSH.
  • Sudo Password
    • If you use sudo with a password, specify it here. Ignore this if the SSH User is root or the sudo user does not require a password.
  • SSH Port
    • Specify the SSH port for the target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
  • Cluster Name
    • Specify a name for the cluster.
  • Install Software
    • Check the box if you use clean and minimal VMs. Existing MySQL dependencies will be removed, and the required packages will be installed when provisioning the node.
    • If unchecked, existing packages will not be uninstalled and nothing new will be installed. This requires that the necessary software has already been provisioned on the instances.
  • Disable Firewall
    • Check the box to disable the firewall (recommended).
  • Disable AppArmor/SELinux
    • Check the box to let ClusterControl disable AppArmor (Ubuntu) or SELinux (RedHat/CentOS) if enabled (recommended).
5.1.4.2.2. 2) Define MySQL Servers
  • Vendor
    • Percona - Percona XtraDB Cluster by Percona
    • MariaDB - MariaDB Server (Galera embedded) by MariaDB
    • Codership - MySQL Galera Cluster by Codership
  • Version
    • Select the MySQL version for the new deployment. For Codership, 5.7 is available; Percona supports 5.6 and 5.7; for MariaDB, 10.1 and 10.2 are supported.
  • Server Data Directory
    • Location of MySQL data directory. Default is /var/lib/mysql.
  • Server Port
    • MySQL server port. Default is 3306.
  • my.cnf Template
    • MySQL configuration template file under /usr/share/cmon/templates. Default is my.cnf.galera. Keeping the default is recommended.
  • Admin/Root Password
    • Specify the MySQL root password. ClusterControl will configure the same MySQL root password for all instances in the cluster.
  • Repository
    • Use Vendor Repositories - Provision software by setting up and using the database vendor’s preferred software repository. ClusterControl will always install the latest version of what the vendor repository provides.
    • Do Not Setup Vendor Repositories - Provision software by using repositories already set up on the nodes. The user has to set up the software repository manually on each database node, and ClusterControl will use this repository for deployment. This is useful if the database nodes have no internet connection.
    • Use Mirrored Repositories - Create and mirror the current database vendor’s repository, then deploy using the local mirrored repository. This is the preferred option when you have to scale the Galera Cluster in the future, to ensure the newly provisioned nodes will always have the same version as the rest of the members.
  • Add Node
    • Specify the IP address or hostname of the MySQL nodes. Press ‘Enter’ to add more nodes. A minimum of three nodes is recommended.
  • Deploy
    • Starts the Galera Cluster deployment.

5.1.4.3. MySQL Cluster (NDB)

Deploys a new MySQL Cluster (NDB) by Oracle. The cluster consists of management nodes, MySQL API nodes and data nodes. The database cluster will be automatically added into ClusterControl once deployed. A minimum of 4 nodes (2 SQL and management + 2 data nodes) is recommended.

Attention

Every data node must have at least 1.5 GB of RAM for the deployment to succeed.

5.1.4.3.1. 1) General & SSH Settings
  • SSH User
    • Specify root if you have root credentials.
    • If you use ‘sudo’ to execute system commands, specify the user name that you wish to use here. The user must exist on all nodes. See Operating System User.
  • SSH Key Path
    • Specify the full path of the SSH key (the key must exist on the ClusterControl node) that will be used by the SSH User to perform passwordless SSH. See Passwordless SSH.
  • Sudo Password
    • If you use sudo with a password, specify it here. Ignore this if the SSH User is root or the sudo user does not require a password.
  • SSH Port
    • Specify the SSH port for the target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
  • Cluster Name
    • Specify a name for the cluster.
  • Install Software
    • Check the box if you use clean and minimal VMs. Existing MySQL dependencies will be removed, and the required packages will be installed when provisioning the node.
    • If unchecked, existing packages will not be uninstalled and nothing new will be installed. This requires that the necessary software has already been provisioned on the instances.
  • Disable Firewall
    • Check the box to disable the firewall (recommended).
  • Disable AppArmor/SELinux
    • Check the box to let ClusterControl disable AppArmor (Ubuntu) or SELinux (RedHat/CentOS) if enabled (recommended).
5.1.4.3.2. 2) Define Management Servers
  • Server Port
    • MySQL Cluster management port. Default is 1186.
  • Server Data Directory
    • MySQL Cluster data directory for NDB. Default is /var/lib/mysql-cluster.
  • Management Server 1
    • Specify the IP address or hostname of the first management server.
  • Management Server 2
    • Specify the IP address or hostname of the second management server.
5.1.4.3.3. 3) Define Data Nodes
  • Server Port
    • MySQL Cluster data node port. Default is 2200.
  • Server Data Directory
    • MySQL Cluster data directory for NDB. Default is /var/lib/mysql-cluster.
  • Add Nodes
    • Specify the IP address or hostname of the MySQL Cluster data nodes. It’s recommended to have data nodes in pairs. You can add up to 14 data nodes to your cluster. Every data node must have at least 1.5 GB of RAM.
5.1.4.3.4. 4) Define MySQL Servers
  • my.cnf Template
    • MySQL configuration template file under /usr/share/cmon/templates. The default is my.cnf.mysqlcluster. Keeping the default is recommended.
  • Server Port
    • MySQL server port. Default is 3306.
  • Server Data Directory
    • MySQL data directory. Default is /var/lib/mysql.
  • Root Password
    • Specify the MySQL root password. ClusterControl will configure the same MySQL root password for all nodes in the cluster.
  • Add Nodes
    • Specify the IP address or hostname of the MySQL Cluster API nodes. You can use the same IP address as a management node, co-locating both roles on the same host.
  • Deploy
    • Starts the MySQL Cluster deployment.
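For reference, a MySQL Cluster deployment is driven by a cluster configuration file on the management nodes. Below is a minimal config.ini sketch; the file location, IP addresses and co-location choices are illustrative, not necessarily what ClusterControl generates:

$ cat /var/lib/mysql-cluster/config.ini
[ndbd default]
NoOfReplicas=2                  # replicas per node group
DataDir=/var/lib/mysql-cluster

[ndb_mgmd]
HostName=10.0.0.11              # management server 1

[ndb_mgmd]
HostName=10.0.0.12              # management server 2

[ndbd]
HostName=10.0.0.13              # data node 1

[ndbd]
HostName=10.0.0.14              # data node 2

[mysqld]
HostName=10.0.0.11              # SQL/API node, co-located with management server 1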

5.1.4.4. MySQL Group Replication (beta)

Deploys a new MySQL Group Replication cluster by Oracle. This is a beta feature introduced in version 1.4.0. The database cluster will be added into ClusterControl automatically once deployed. A minimum of three nodes is required.

5.1.4.4.1. 1) General & SSH Settings
  • SSH User
    • Specify root if you have root credentials.
    • If you use ‘sudo’ to execute system commands, specify the user name that you wish to use here. The user must exist on all nodes. See Operating System User.
  • SSH Key Path
    • Specify the full path of the SSH key (the key must exist on the ClusterControl node) that will be used by the SSH User to perform passwordless SSH. See Passwordless SSH.
  • Sudo Password
    • If you use sudo with a password, specify it here. Ignore this if the SSH User is root or the sudo user does not require a password.
  • SSH Port
    • Specify the SSH port for the target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
  • Cluster Name
    • Specify a name for the cluster.
  • Install Software
    • Check the box if you use clean and minimal VMs. Existing MySQL dependencies will be removed, and the required packages will be installed when provisioning the node.
    • If unchecked, existing packages will not be uninstalled and nothing new will be installed. This requires that the necessary software has already been provisioned on the instances.
  • Disable Firewall
    • Check the box to disable the firewall (recommended).
  • Disable AppArmor/SELinux
    • Check the box to let ClusterControl disable AppArmor (Ubuntu) or SELinux (RedHat/CentOS) if enabled (recommended).
5.1.4.4.2. 2) Define MySQL Servers
  • Vendor
    • Oracle - MySQL Group Replication by Oracle.
  • Version
    • Select the MySQL version. Group Replication is only available on MySQL 5.7+.
  • Server Data Directory
    • Location of MySQL data directory. Default is /var/lib/mysql.
  • Server Port
    • MySQL server port. Default is 3306.
  • my.cnf Template
    • MySQL configuration template file under /usr/share/cmon/templates. Default is my.cnf.grouprepl. Keeping the default is recommended.
  • Root Password
    • Specify the MySQL root password. ClusterControl will configure the same MySQL root password for all instances in the cluster.
  • Repository
    • Use Vendor Repositories - Provision software by setting up and using the database vendor’s preferred software repository. ClusterControl will always install the latest version of what the vendor repository provides.
    • Do Not Setup Vendor Repositories - Provision software by using repositories already set up on the nodes. The user has to set up the software repository manually on each database node, and ClusterControl will use this repository for deployment. This is useful if the database nodes have no internet connection.
    • Use Mirrored Repositories - Create and mirror the current database vendor’s repository, then deploy using the local mirrored repository. This is the preferred option when you have to scale the cluster in the future, to ensure the newly provisioned nodes will always have the same version as the rest of the members.
  • Add Nodes
    • Specify the IP address or hostname of the MySQL nodes. A minimum of three nodes is recommended.
  • Deploy
    • Starts the MySQL Group Replication deployment.

5.1.4.5. MongoDB ReplicaSet

Deploys a new MongoDB Replica Set. The database cluster will be automatically added into ClusterControl once deployed. A minimum of three nodes (including a mongo arbiter) is recommended.

Attention

It is possible to deploy only 2 MongoDB nodes (without an arbiter). The caveat of this approach is that there is no automatic failover: if the primary node goes down, manual failover is required to promote the other server to primary. Automatic failover works fine with 3 or more nodes.

5.1.4.5.1. 1) General & SSH Settings
  • SSH User
    • Specify root if you have root credentials.
    • If you use ‘sudo’ to execute system commands, specify the user name that you wish to use here. The user must exist on all nodes. See Operating System User.
  • SSH Key Path
    • Specify the full path of the SSH key (the key must exist on the ClusterControl node) that will be used by the SSH User to perform passwordless SSH. See Passwordless SSH.
  • Sudo Password
    • If you use sudo with a password, specify it here. Ignore this if the SSH User is root or the sudo user does not require a password.
  • SSH Port
    • Specify the SSH port for the target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
  • Cluster Name
    • Specify a name for the cluster.
  • Install Software
    • Check the box if you use clean and minimal VMs. Existing database dependencies will be removed, and the required packages will be installed when provisioning the node.
    • If unchecked, existing packages will not be uninstalled and nothing new will be installed. This requires that the necessary software has already been provisioned on the instances.
  • Disable Firewall
    • Check the box to disable the firewall (recommended).
  • Disable AppArmor/SELinux
    • Check the box to let ClusterControl disable AppArmor (Ubuntu) or SELinux (RedHat/CentOS) if enabled (recommended).
5.1.4.5.2. 2) Define MongoDB Servers
  • Vendor
    • Percona - Percona Server for MongoDB by Percona.
    • MongoDB - MongoDB Server by MongoDB Inc.
  • Version
    • Supported versions are 3.2 and 3.4 for Percona, with 3.6 additionally available for MongoDB.
  • Server Data Directory
    • Location of MongoDB data directory. Default is /var/lib/mongodb.
  • Admin User
    • MongoDB admin user. ClusterControl will create this user and enable authentication.
  • Admin Password
    • Password for MongoDB Admin User.
  • Server Port
    • MongoDB server port. Default is 27017.
  • mongodb.conf Template
    • MongoDB configuration template file under /usr/share/cmon/templates. Default is mongodb.conf.[vendor]. Keeping the default is recommended.
  • ReplicaSet Name
    • Specify the name of the replica set, corresponding to the replication.replSetName option in MongoDB (see the configuration snippet after this list).
  • Repository
    • Use Vendor Repositories - Provision software by setting up and using the database vendor’s preferred software repository. ClusterControl will always install the latest version of what the vendor repository provides.
    • Do Not Setup Vendor Repositories - Provision software by using repositories already set up on the nodes. The user has to set up the software repository manually on each database node, and ClusterControl will use this repository for deployment. This is useful if the database nodes have no internet connection.
    • Use Mirrored Repositories - Create and mirror the current database vendor’s repository, then deploy using the local mirrored repository. This is the preferred option when you have to scale MongoDB in the future, to ensure the newly provisioned nodes will always have the same version as the rest of the members.
  • Add Nodes
    • Specify the IP address or hostname of the MongoDB nodes. A minimum of three nodes is required.
  • Deploy
    • Starts the MongoDB ReplicaSet deployment.
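For reference, the ReplicaSet Name ends up in the MongoDB configuration file as replication.replSetName; a minimal snippet (the file path is the common default, not necessarily the one ClusterControl writes):

$ grep -A1 '^replication' /etc/mongod.conf
replication:
  replSetName: my_mongodb_rs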

5.1.4.6. MongoDB Shards

Deploys a new MongoDB Sharded Cluster. The database cluster will be automatically added into ClusterControl once deployed. A minimum of three nodes (including a mongo arbiter) is recommended.

Warning

It is possible to deploy only 2 MongoDB nodes (without an arbiter). The caveat of this approach is that there is no automatic failover: if the primary node goes down, manual failover is required to promote the other server to primary. Automatic failover works fine with 3 or more nodes.

5.1.4.6.1. 1) General & SSH Settings
  • SSH User
    • Specify root if you have root credentials.
    • If you use ‘sudo’ to execute system commands, specify the user name that you wish to use here. The user must exist on all nodes. See Operating System User.
  • SSH Key Path
    • Specify the full path of the SSH key (the key must exist on the ClusterControl node) that will be used by the SSH User to perform passwordless SSH. See Passwordless SSH.
  • Sudo Password
    • If you use sudo with a password, specify it here. Ignore this if the SSH User is root or the sudo user does not require a password.
  • SSH Port
    • Specify the SSH port for the target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
  • Cluster Name
    • Specify a name for the cluster.
  • Install Software
    • Check the box if you use clean and minimal VMs. Existing database dependencies will be removed, and the required packages will be installed when provisioning the node.
    • If unchecked, existing packages will not be uninstalled and nothing new will be installed. This requires that the necessary software has already been provisioned on the instances.
  • Disable Firewall
    • Check the box to disable the firewall (recommended).
  • Disable AppArmor/SELinux
    • Check the box to let ClusterControl disable AppArmor (Ubuntu) or SELinux (RedHat/CentOS) if enabled (recommended).
5.1.4.6.2. 2) Configuration Servers and Routers

Configuration Server

  • Server Port
    • MongoDB config server port. Default is 27019.
  • Add Configuration Servers
    • Specify the IP address or hostname of the MongoDB config servers. A minimum of one node is required; three nodes are recommended.

Routers/Mongos

  • Server Port
    • MongoDB mongos server port. Default is 27017.
  • Add More Routers
    • Specify the IP address or hostname of the MongoDB mongos.
5.1.4.6.3. 3) Define Shards
  • Replica Set Name
    • Specify a name for this replica set shard.
  • Server Port
    • MongoDB shard server port. Default is 27018.
  • Add Node
    • Specify the IP address or hostname of the MongoDB shard servers. A minimum of one node is required; three nodes are recommended.
  • Advanced Options
    • Click on this to open a set of advanced options for this particular node in this shard:
      • Add slave delay - Specify the replication delay for the delayed slave, in milliseconds.
      • Act as an arbiter - Toggle to ‘Yes’ if the node is an arbiter node. Otherwise, choose ‘No’.
  • Add Another Shard
    • Create another shard. You can then specify the IP address or hostname of the MongoDB servers that fall under this shard.

5.1.4.6.4. 4) Database Settings
  • Vendor
    • Percona - Percona Server for MongoDB by Percona
    • MongoDB - MongoDB Server by MongoDB Inc
  • Version
    • The supported version is 3.2.
  • Server Data Directory
    • Location of MongoDB data directory. Default is /var/lib/mongodb.
  • Admin User
    • MongoDB admin user. ClusterControl will create this user and enable authentication.
  • Admin Password
    • Password for MongoDB Admin User.
  • Server Port
    • MongoDB server port. Default is 27017.
  • mongodb.conf Template
    • MongoDB configuration template file under /usr/share/cmon/templates. Default is mongodb.conf.[vendor]. Keeping the default is recommended.
  • Repository
    • Use Vendor Repositories - Provision software by setting up and using the database vendor’s preferred software repository. ClusterControl will always install the latest version of what the vendor repository provides.
    • Do Not Setup Vendor Repositories - Provision software by using repositories already set up on the nodes. The user has to set up the software repository manually on each database node, and ClusterControl will use this repository for deployment. This is useful if the database nodes have no internet connection.
    • Use Mirrored Repositories - Create and mirror the current database vendor’s repository, then deploy using the local mirrored repository. This is the preferred option when you have to scale MongoDB in the future, to ensure the newly provisioned nodes will always have the same version as the rest of the members.
  • Deploy
    • Starts the MongoDB Sharded Cluster deployment.

5.1.4.7. PostgreSQL

Deploys a new PostgreSQL standalone or streaming replication cluster from ClusterControl. Only PostgreSQL 9.x and 10 are supported.

5.1.4.7.1. 1) General & SSH Settings
  • SSH User
    • Specify root if you have root credentials.
    • If you use ‘sudo’ to execute system commands, specify the user name that you wish to use here. The user must exist on all nodes. See Operating System User.
  • SSH Key Path
    • Specify the full path of the SSH key (the key must exist on the ClusterControl node) that will be used by the SSH User to perform passwordless SSH. See Passwordless SSH.
  • Sudo Password
    • If you use sudo with a password, specify it here. Ignore this if the SSH User is root or the sudo user does not require a password.
  • SSH Port
    • Specify the SSH port for the target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
  • Cluster Name
    • Specify a name for the database.
  • Install Software
    • Check the box if you use clean and minimal VMs. Existing PostgreSQL dependencies will be removed, and the required packages will be installed when provisioning the node.
    • If unchecked, existing packages will not be uninstalled and nothing new will be installed. This requires that the necessary software has already been provisioned on the instances.
  • Disable Firewall
    • Check the box to disable the firewall (recommended).
  • Disable AppArmor/SELinux
    • Check the box to let ClusterControl disable AppArmor (Ubuntu) or SELinux (RedHat/CentOS) if enabled (recommended).
5.1.4.7.2. 2) Define PostgreSQL Servers
  • Server Port
    • PostgreSQL server port. Default is 5432.
  • User
    • Specify the PostgreSQL superuser, for example postgres.
  • Password
    • Specify the password for User.
  • Version
    • Supported versions are 9.6 and 10.
  • Repository
    • Use Vendor Repositories - Provision software by setting up and using the database vendor’s preferred software repository. ClusterControl will always install the latest version of what the vendor repository provides.
    • Do Not Setup Vendor Repositories - Provision software by using repositories already set up on the nodes. The user has to set up the software repository manually on each database node, and ClusterControl will use this repository for deployment. This is useful if the database nodes have no internet connection.
    • Create New Repositories - Create and mirror the current database vendor’s repository, then deploy using the local mirrored repository. This is the preferred option when you have to scale PostgreSQL in the future, to ensure the newly provisioned nodes will always have the same version as the rest of the members.
5.1.4.7.3. 3) Define Topology
  • Master A - IP/Hostname
    • Specify the IP address of the PostgreSQL master node. Press ‘Enter’ once specified so ClusterControl can verify the reachability via passwordless SSH.
  • Add slaves to master A
    • Add a slave node connected to master A. Press ‘Enter’ to add more slaves.
5.1.4.7.4. 4) Deployment Summary
  • Synchronous Replication
    • Toggle on if you would like to use synchronous streaming replication between the master and the chosen slave. Synchronous replication can be enabled per individual slave node, at a considerable performance cost (see the configuration sketch after this list).
  • Deploy
    • Starts the PostgreSQL standalone or replication deployment.
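For background, synchronous streaming replication on the PostgreSQL master is governed by parameters like the ones below; the file path, standby name and value syntax are illustrative, not necessarily what ClusterControl writes:

$ grep synchronous /var/lib/pgsql/10/data/postgresql.conf
synchronous_commit = on
synchronous_standby_names = '1 (pgsql_slave_1)'  # matches the standby's application_name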

5.1.5. Import Existing Server/Cluster

Opens a wizard to import an existing database setup into ClusterControl. The following database cluster types are supported:

  • MySQL Replication

  • MySQL Galera
    • MySQL Galera Cluster
    • Percona XtraDB Cluster
    • MariaDB Galera Cluster
  • MySQL Cluster (NDB)

  • MongoDB ReplicaSet

  • MongoDB Shards

  • PostgreSQL (standalone or replication)

There are some prerequisites that need to be fulfilled prior to importing the existing setup:

  • Verify that sudo is working properly if you are using a non-root user. See Operating System User.
  • Passwordless SSH from the ClusterControl node to the database nodes has been configured correctly. See Passwordless SSH.
  • The target cluster must not be in a degraded state. For example, if you have a three-node Galera cluster, all nodes must be alive, accessible and in sync; you can verify this as shown below.
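For a Galera cluster, the sync state can be checked on each node before importing (credentials are placeholders):

$ mysql -uroot -p -e "SHOW STATUS LIKE 'wsrep_local_state_comment'"

The value should be ‘Synced’ on every node.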

For more details, refer to the Requirements section. Each time you add an existing cluster or server, ClusterControl will trigger a job under ClusterControl > Settings > Cluster Jobs. You can see the progress and status on this page. A window will also appear with messages showing the progress.

5.1.5.1. Import Existing MySQL Replication

ClusterControl is able to manage/monitor an existing set of MySQL servers (standalone or replication). Individual hosts specified in the same list will be added to the same server group in the UI. ClusterControl assumes that you are using the same MySQL root password for all instances specified in the group, and it will attempt to determine each server's role (master, slave, multi or standalone).

When importing an existing MySQL Replication, ClusterControl will do the following:

  • Verify SSH connectivity to all nodes.
  • Detect the host environment and operating system.
  • Discover the database role of each node (master, slave, multi, standalone).
  • Pull the configuration files.
  • Generate the authentication key and register the node into ClusterControl.
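Before starting the import, you can verify the SSH and sudo prerequisites manually from the ClusterControl node (user, key path and IP address are examples):

$ ssh -i /root/.ssh/id_rsa ubuntu@10.0.0.21 "sudo whoami"
root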
5.1.5.1.1. 1) General & SSH Settings
  • SSH User
    • Specify root if you have root credentials.
    • If you use ‘sudo’ to execute system commands, specify the user name that you wish to use here. The user must exist on all nodes. See Operating System User.
  • SSH Key Path
    • Specify the full path of the SSH key (the key must exist on the ClusterControl node) that will be used by the SSH User to perform passwordless SSH. See Passwordless SSH.
  • Sudo Password
    • Specify the password if the SSH user that you specified under SSH User requires a sudo password to run super-privileged commands. Ignore this if the SSH User is root or has no sudo password.
  • SSH Port
    • Specify the SSH port for the target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
5.1.5.1.2. 2) Define MySQL Servers
  • Vendor
    • Percona for Percona Server
    • MariaDB for MariaDB Server
    • Oracle for MySQL Server
  • MySQL Version
    • Supported versions:
      • Percona Server (5.5, 5.6, 5.7)
      • MariaDB Server (10.1, 10.2)
      • MySQL Server (5.7)
  • Basedir
    • MySQL base directory. Default is /usr. ClusterControl assumes all MySQL nodes are using the same base directory.
  • Server Port
    • MySQL port on the target server/cluster. Default is 3306. ClusterControl assumes MySQL is running on the same port on all nodes.
  • Admin/Root User
    • MySQL user on the target server/cluster. This user must be able to perform GRANT statements. It is recommended to use the MySQL ‘root’ user.
  • Admin/Root Password
    • Password for MySQL User. ClusterControl assumes that you are using the same MySQL root password for all instances specified in the group.
  • “information_schema” Queries
    • Toggle on to enable information_schema queries to track database and table growth. Queries to the information_schema may not be suitable when there are many database objects (hundreds of databases, hundreds of tables in each database, triggers, users, events, stored procedures, etc.). If disabled, the query that would be executed will be logged, so it can be determined whether the query is suitable for your environment.
    • This is not recommended for clusters with more than 2000 database objects.
  • Node AutoRecovery
    • ClusterControl will perform automatic recovery if it detects any of the nodes in the cluster is down.
  • Cluster AutoRecovery
    • ClusterControl will perform automatic recovery if it detects the cluster is down or degraded.
  • Import as Standalone Nodes
    • Toggle on if you are only importing a standalone node (by specifying only one node under the ‘Add Nodes’ section).
  • Add Nodes
    • Enter the IP addresses or hostnames of the standalone MySQL instances that you want to group under this cluster.
  • Import
    • Click the button to start the import. ClusterControl will connect to the MySQL instances, import configurations and start managing them.

5.1.5.2. Import Existing MySQL Galera

5.1.5.2.1. 1) General & SSH Settings
  • SSH User
    • Specify root if you have root credentials.
    • If you use ‘sudo’ to execute system commands, specify the user name that you wish to use here. The user must exist on all nodes. See Operating System User.
  • SSH Key Path
    • Specify the full path of the SSH key (the key must exist on the ClusterControl node) that will be used by the SSH User to perform passwordless SSH. See Passwordless SSH.
  • Sudo Password
    • Specify the password if the SSH user that you specified under SSH User requires a sudo password to run super-privileged commands. Ignore this if the SSH User is root or has no sudo password.
  • SSH Port
    • Specify the SSH port for the target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
5.1.5.2.2. 2) Define MySQL Servers
  • Vendor
    • Percona XtraDB - Percona XtraDB Cluster by Percona
    • MariaDB - MariaDB Galera Cluster by MariaDB
    • Codership - MySQL Galera Cluster by Codership
  • MySQL Version
    • Select MySQL version of the target cluster.
  • Basedir
    • MySQL base directory. Default is /usr. ClusterControl assumes MySQL uses the same base directory on all nodes.
  • Port
    • MySQL port on the target cluster. Default is 3306. ClusterControl assumes MySQL is running on the same port on all nodes.
  • Admin/Root User
    • MySQL user on the target cluster. This user must be able to perform GRANT statements. It is recommended to use the MySQL ‘root’ user.
  • Admin/Root Password
    • Password for MySQL User. The password must be the same on all nodes that you want to add into ClusterControl.
  • “information_schema” Queries
    • Toggle on to enable information_schema queries to track databases and tables growth. Queries to the information_schema may not be suitable when having many database objects (hundreds of databases, hundreds of tables in each database, triggers, users, events, stored procedures, etc). If disabled, the query that would be executed will be logged so it can be determined if the query is suitable in your environment.
    • This is not recommended for clusters with more than 2000 database objects.
  • Node AutoRecovery
    • Toggle on so ClusterControl will perform automatic recovery if it detects any of the nodes in the cluster is down.
  • Cluster AutoRecovery
    • Toggle on so ClusterControl will perform automatic recovery if it detects the cluster is down or degraded.
  • Automatic Node Discovery
    • If toggled on, you only need to specify ONE Galera node and ClusterControl will discover the remaining nodes based on the hostname/IPs used for Galera’s intra-node communication. Replication slaves, load balancers, and other supported services connected to the Galera Cluster can be added after the import has finished.
  • Add Node
    • Specify the target node and press ‘Enter’ for each of them. If you have Automatic Node Discovery enabled, you need to specify only one node.
  • Import
    • Click the button to start the import. ClusterControl will connect to the Galera node, discover the configuration for the rest of the members and start managing/monitoring the cluster.

5.1.5.3. Import Existing MySQL Cluster

ClusterControl is able to manage and monitor an existing production MySQL Cluster (NDB) deployment. A minimum of 2 management nodes and 2 data nodes is required.

5.1.5.3.1. 1) General & SSH Settings
  • SSH User
    • Specify root if you have root credentials.
    • If you use ‘sudo’ to execute system commands, specify the user name that you wish to use here. The user must exist on all nodes. See Operating System User.
  • SSH Key Path
    • Specify the full path of the SSH key (the key must exist on the ClusterControl node) that will be used by the SSH User to perform passwordless SSH. See Passwordless SSH.
  • Sudo Password
    • Specify the password if the SSH user that you specified under SSH User requires a sudo password to run super-privileged commands. Ignore this if the SSH User is root or has no sudo password.
  • SSH Port
    • Specify the SSH port for the target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
5.1.5.3.2. 2) Define Management Server
  • Management server 1
    • Specify the IP address or hostname of the first MySQL Cluster management node.
  • Management server 2
    • Specify the IP address or hostname of the second MySQL Cluster management node.
  • Server Port
    • MySQL Cluster management port. The default port is 1186.
5.1.5.3.3. 3) Define Data Nodes
  • Port
    • MySQL Cluster data node port. The default port is 2200.
  • Add Nodes
    • Specify the IP address or hostname of the MySQL Cluster data node.
5.1.5.3.4. 4) Define MySQL Servers
  • Root Password
    • MySQL root password.
  • Server Port
    • MySQL port. Default is 3306.
  • MySQL Installation Directory
    • MySQL server installation path where ClusterControl can find the mysql binaries.
  • Enable information_schema Queries
    • Toggle on to enable information_schema queries to track database and table growth. Queries to the information_schema may not be suitable when there are many database objects (hundreds of databases, hundreds of tables in each database, triggers, users, events, stored procedures, etc.). If disabled, the query that would be executed will be logged, so it can be determined whether the query is suitable for your environment.
    • This is not recommended for clusters with more than 2000 database objects.
  • Enable Node AutoRecovery
    • ClusterControl will perform automatic recovery if it detects any of the nodes in the cluster is down.
  • Enable Cluster AutoRecovery
    • ClusterControl will perform automatic recovery if it detects the cluster is down or degraded.
  • Add Nodes
    • Specify the IP address or hostname of the MySQL Cluster API/SQL node.
  • Import
    • Click the button to start the import. ClusterControl will connect to the MySQL Cluster nodes, discover the configuration for the rest of the nodes and start managing/monitoring the cluster.

5.1.5.4. Import Existing MongoDB ReplicaSet

ClusterControl is able to manage and monitor an existing MongoDB/Percona Server for MongoDB 3.x replica set.

5.1.5.4.1. 1) General & SSH Settings
  • SSH User
    • Specify root if you have root credentials.
    • If you use ‘sudo’ to execute system commands, specify the user name that you wish to use here. The user must exist on all nodes. See Operating System User.
  • SSH Key Path
    • Specify the full path of the SSH key (the key must exist on the ClusterControl node) that will be used by the SSH User to perform passwordless SSH. See Passwordless SSH.
  • Sudo Password
    • Specify the password if the SSH user that you specified under SSH User requires a sudo password to run super-privileged commands. Ignore this if the SSH User is root or has no sudo password.
  • SSH Port
    • Specify the SSH port for the target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
5.1.5.4.2. 2) Define MongoDB Servers
  • Vendor
    • Percona - Percona Server for MongoDB by Percona (formerly Tokutek).
    • MongoDB - MongoDB Server by MongoDB Inc (formerly 10gen).
  • Version
    • The supported version is 3.2.
  • Server Port
    • MongoDB server port. Default is 27017.
  • Admin User
    • MongoDB admin user.
  • Admin Password
    • Password for MongoDB Admin User.
  • MongoDB Auth DB
    • MongoDB database to authenticate against. Default is admin (see the connection example after this list).
  • Hostname
    • Specify one IP address or hostname of the MongoDB replica set member. ClusterControl will automatically discover the rest.
  • Import
    • Click the button to start the import. ClusterControl will connect to the specified MongoDB node, discover the configuration for the rest of the nodes and start managing/monitoring the cluster.
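To confirm the admin credentials and the authentication database beforehand, a manual connection test can help (host and credentials are placeholders):

$ mongo --host 10.0.0.31 --port 27017 -u admin -p mysecret --authenticationDatabase admin --eval "rs.status().ok"
1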

5.1.5.5. Import Existing MongoDB Shards

ClusterControl is able to manage and monitor an existing MongoDB/Percona Server for MongoDB 3.x sharded cluster setup.

5.1.5.5.1. 1) General & SSH Settings
  • SSH User
    • Specify root if you have root credentials.
    • If you use ‘sudo’ to execute system commands, specify the user name that you wish to use here. The user must exist on all nodes. See Operating System User.
  • SSH Key Path
    • Specify the full path of the SSH key (the key must exist on the ClusterControl node) that will be used by the SSH User to perform passwordless SSH. See Passwordless SSH.
  • Sudo Password
    • If you use sudo with a password, specify it here. Ignore this if the SSH User is root or the sudo user does not require a password.
  • SSH Port Number
    • Specify the SSH port for the target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
5.1.5.5.2. 2) Set Routers/Mongos

Routers/Mongos

  • Server Port
    • MongoDB mongos server port. Default is 27017.
  • Add More Routers
    • Specify the IP address or hostname of the MongoDB mongos.
5.1.5.5.3. 3) Database Settings
  • Vendor
    • Percona - Percona Server for MongoDB by Percona
    • MongoDB - MongoDB Server by MongoDB Inc
  • Version
    • The supported version is 3.2.
  • Admin User
    • MongoDB admin user.
  • Admin Password
    • Password for MongoDB Admin User.
  • MongoDB Auth DB
    • MongoDB database to authenticate against. Default is admin.
  • Import
    • Click the button to start the import. ClusterControl will connect to the specified MongoDB mongos, discover the configuration for the rest of the members and start managing/monitoring the cluster.

5.1.5.6. Import Existing PostgreSQL

ClusterControl is able to manage/monitor an existing set of PostgreSQL 9.x servers. Individual hosts specified in the same list will be added to the same server group in the UI. ClusterControl assumes that you are using the same postgres password for all instances specified in the group.

5.1.5.6.1. 1) General & SSH Settings
  • SSH User
    • Specify root if you have root credentials.
    • If you use ‘sudo’ to execute system commands, specify the user name that you wish to use here. The user must exist on all nodes. See Operating System User.
  • SSH Key Path
    • Specify the full path of the SSH key (the key must exist on the ClusterControl node) that will be used by the SSH User to perform passwordless SSH. See Passwordless SSH.
  • Sudo Password
    • Specify the password if the SSH user that you specified under SSH User requires a sudo password to run super-privileged commands. Ignore this if the SSH User is root or has no sudo password.
  • SSH Port
    • Specify the SSH port for the target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
5.1.5.6.2. 2) Define PostgreSQL Servers
  • Server Port
    • PostgreSQL port on the target server/cluster. Default is 5432. ClusterControl assumes PostgreSQL is running on the same port on all nodes.
  • User
    • PostgreSQL user on the target server/cluster. It is recommended to use the PostgreSQL ‘postgres’ user (see the connection example after this list).
  • Password
    • Password for User. ClusterControl assumes that you are using the same postgres password for all instances under this group.
  • Version
    • PostgreSQL server version on the target server/cluster.
  • Basedir
    • PostgreSQL base directory. Default is /usr. ClusterControl assumes all PostgreSQL nodes are using the same base directory.
  • Add Node
    • Specify all PostgreSQL instances that you want to group under this cluster.
  • Import
    • Click the button to start the import. ClusterControl will connect to the PostgreSQL instances, import configurations and start managing them.
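A quick way to confirm the port, user and password before importing (host and credentials are placeholders):

$ PGPASSWORD=mysecret psql -h 10.0.0.41 -p 5432 -U postgres -c "SELECT version();"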

5.1.6. Deploy in the Cloud

Opens a step-by-step modal dialog to deploy a new database cluster in the cloud. Supported cloud providers are:

  • Amazon Web Services
  • Google Cloud Platform
  • Microsoft Azure

The following database cluster types are supported:

  • MySQL Galera
    • Percona XtraDB Cluster
    • MariaDB Galera Cluster
  • MongoDB ReplicaSet

  • PostgreSQL Replication

There are prerequisites that need to be fulfilled prior to the deployment:

  • A working cloud credential profile on the supported cloud platform. See Cloud Providers.
  • If the cloud instance is inside a private network, the network must support auto-assigning public IP addresses. ClusterControl only connects to the created cloud instance via the public network.

Under the hood, the deployment process does the following:

  1. Create cloud instances.
  2. Configure security groups and networking.
  3. Verify the SSH connectivity from ClusterControl to all created instances.
  4. Deploy database on every instance.
  5. Configure the clustering or replication links.
  6. Register the deployment into ClusterControl.

ClusterControl will trigger a deployment job and the progress can be monitored under ClusterControl > Activity > Jobs.

Attention

This feature is still in beta. See Known Limitations for details.

5.1.6.1. Cluster Details

  • Select Cluster Type
    • Choose a cluster.
  • Select Vendor and Version
    • MySQL Galera Cluster - Percona XtraDB Cluster 5.7, MariaDB 10.2
    • MongoDB Cluster - MongoDB 3.4 by MongoDB, Inc and Percona Server for MongoDB 3.4 by Percona (replica set only).
    • PostgreSQL Cluster - PostgreSQL 10.0 (streaming replication only).

5.1.6.2. Configure Cluster

5.1.6.2.1. MySQL Galera Cluster
  • Select Number of Nodes
    • Specify the number of nodes for the database cluster. You can start with one, but three (or a bigger odd number) is recommended.
  • Cluster Name
    • This value will be used as the instance name or tag. No spaces are allowed.
  • MySQL Server Port
    • MySQL server port. Default is 3306.
  • MySQL Root Password
    • Specify MySQL root password. ClusterControl will configure the same MySQL root password for all instances in the cluster.
  • my.cnf Template
    • The template configuration file that ClusterControl will use to deploy the cluster. It must be located under /usr/share/cmon/templates on the ClusterControl host.
  • MySQL Server Data Directory
    • Location of MySQL data directory. Default is /var/lib/mysql.
5.1.6.2.2. MongoDB Replica Set
  • Select Number of Nodes
    • How many nodes for the database cluster. You can start with one, but three (or a larger odd number) is recommended.
  • Cluster Name
    • This value will be used as the instance name or tag. Spaces are not allowed.
  • Admin User
    • MongoDB admin user. ClusterControl will create this user and enable authentication.
  • Admin Password
    • Password for MongoDB Admin User.
  • Server Data Directory
    • Location of MongoDB data directory. Default is /var/lib/mongodb.
  • Server Port
    • MongoDB server port. Default is 27017.
  • mongodb.conf Template
    • The template configuration file that ClusterControl will use to deploy the cluster. It must be located under /usr/share/cmon/templates on the ClusterControl host.
  • ReplicaSet Name
    • Specify the name of the replica set, similar to replication.replSetName option in MongoDB.
5.1.6.2.3. PostgreSQL Streaming Replication
  • Select Number of Nodes
    • How many nodes for the database cluster. You can start with one, but two or more are recommended.

Note

The first virtual machine that comes up will be configured as the master.

  • Cluster Name
    • This value will be used as the instance name or tag. Spaces are not allowed.
  • User
    • Specify the PostgreSQL superuser, for example, postgres.
  • Password
    • Specify the password for User.
  • Server Port
    • PostgreSQL server port. Default is 5432.

5.1.6.3. Select Credential

Select one of the existing cloud credentials, or create a new one by clicking on the Add New Credential button. See Cloud Providers.

5.1.6.4. Select Virtual Machine

Most of the settings in this step are dynamically populated from the cloud provider based on the chosen credentials.

  • Operating System
    • Choose a supported operating system from the dropdown.
  • Instance Size
    • Choose an instance size for the cloud instance.
  • Virtual Private Cloud (VPC)
    • Exclusive to AWS. Choose a virtual private cloud network for the cloud instance.
  • Add New
    • Opens the Add VPC wizard. Specify the tag name and IP address block.
  • SSH Key
    • SSH key location on the ClusterControl host. This key must be able to authenticate to the created cloud instances passwordlessly. An example of generating one is sketched after this list.
  • Storage Type
    • Choose the storage type for the cloud instance.
  • Allocate Storage
    • Specify the storage size for the cloud instance in GB.
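
If no suitable key exists yet on the ClusterControl host, one can be generated beforehand. A minimal sketch, with an example path:

    # Example only: create an unencrypted 2048-bit RSA key pair for automation.
    ssh-keygen -t rsa -b 2048 -f /root/.ssh/id_rsa -N ""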

5.1.6.5. Deployment Summary

  • Subnet
    • Choose one existing subnet for the selected network.
  • Add New Subnet
    • Opens the Add Subnet wizard. Specify the subnet name, availability zone and IP CIDR block, e.g., 10.0.10.0/24 (a /24 block provides 256 addresses, of which AWS reserves 5 per subnet).

5.1.6.6. Known Limitations

There are known limitations for the cloud deployment feature:

  • There is currently no ‘accounting’ in place for the cloud instances; you will need to remove created cloud instances manually.
  • You cannot add or remove a node automatically with cloud instances.
  • You cannot deploy a load balancer automatically with a cloud instance.
  • Scaling out works as with standard hosts: create the cloud instance manually and specify the host under the scaling options (Add Node or Add Replication Slave).

We appreciate your feedback, feature requests and bug reports. Contact us via the support channel or create a feature request. See FAQ.

5.2. Sidebar

Left-side navigation menu provides shortcuts to manage clusters, roles, users, notifications, integration, reporting, authentication, keys and certificates.

5.2.1. Clusters

List of database clusters managed by ClusterControl with summarized status. Database cluster deployed by (or added into) ClusterControl will be listed in this page. See Database Cluster List section.

5.2.2. Operational Reports

Generates operational reports on demand or on a schedule. The current default report shows a cluster’s health and performance at the time it was generated, compared to one day ago.

The report provides information about:

  • Cluster Information
    • Cluster
    • Nodes
    • Backup summary
    • Top queries summary
  • Node Status Overview
    • CPU usage
    • Data throughput
    • Load average
    • Free disk space
    • RAM usage
    • Network throughput
    • Server load
    • Handler
  • Schema Change Report
    • Detects schema changes (CREATE TABLE and ALTER TABLE; DROP TABLE is not supported yet)
    • Requires schema_change_detection_address=1 inside /etc/cmon.d/cmon_X.cnf, as sketched after this list.
  • Availability (All clusters)
    • Node availability summary
    • Cluster availability summary
    • Total uptime
    • Total downtime
    • Last state change
  • Backup (All clusters)
    • Backup list
    • Backup details
    • Backup policy
  • Package Upgrade Report (lists available software and security package upgrades)

  • Database Growth Report (beta)
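
For example, to enable the schema change report for the cluster with ID 1, the option mentioned above would be added to that cluster’s CMON configuration. A sketch, assuming the controller service is restarted afterwards for the change to take effect:

    # /etc/cmon.d/cmon_1.cnf -- cluster ID 1 used as an example
    schema_change_detection_address=1

    # restart the controller so the new option takes effect (assumed step)
    systemctl restart cmon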

5.2.2.1. Generated Reports

Provides a list of generated operational reports. Clicking on any of the entries will open the operational report in a new window.

  • Create
    • Create an operational report immediately.
    • Specify the cluster name and report type. Optionally, click on the ‘Add Email’ button to add recipients to the list.
  • Delete
    • Delete the selected operational report.
  • Refresh
    • Refresh the operational report list.

5.2.2.2. Schedules

List of scheduled operational reports.

  • Schedule
    • Schedule an operational report at an interval: daily, weekly or monthly. Optionally, click on the ‘Add Email’ button to add recipients to the list.
  • Edit
    • Edit the selected schedule.
  • Delete
    • Delete the selected schedule.
  • Refresh
    • Refresh the schedule list.

5.2.3. Email Notifications

Configures global email notifications across clusters.

  • Add Recipient
    • Creates a new recipient by specifying an email address. A newly created recipient will be listed under the ‘External’ organization.
  • Delete Recipient
    • Removes an existing recipient.
  • Save
    • Saves the settings for the individual cluster.
  • Remove
    • Unassigns the settings of the individual cluster from the selected recipient.
  • Save to all Clusters
    • Save the settings to all clusters.
  • Send digests at
    • Send a digested (summary) email at this time every day to the selected recipient.
  • Time-zone
    • Timezone for the selected recipient.
  • Daily limit for non-digest email
    • The maximum number of non-digest email notifications to send per day for the selected recipient. Use -1 for unlimited.
  • Alarm/Event Category
    Event Description
    All Event Categories All events.
    Network Network related messages, e.g. host unreachable, SSH issues.
    CmonDatabase Internal CMON database related messages.
    Mail Mail system related messages.
    Cluster Cluster related messages, e.g. cluster failed.
    ClusterConfiguration Cluster configuration messages, e.g. software configuration messages.
    ClusterRecovery Recovery messages like cluster or node recovery failures.
    Node Message related to nodes, e.g. node disconnected, missing GRANT, failed to start HAProxy, failed to start NDB cluster nodes.
    Host Host related messages, e.g. CPU/disk/RAM/swap alarms.
    DbHealth Database health related messages, e.g. memory usage of mysql servers, connections.
    DbPerformance Alarms for long running transactions and deadlocks.
    SoftwareInstallation Software installation related messages.
    Backup Messages about backups.
    Unknown Other uncategorized messages.
  • Select how you want alarms/events delivered
    Action Description
    Ignore Ignore the alarm when it is raised.
    Deliver Send a notification immediately via email once an alarm is raised.
    Digest Send a summary of the alarms raised every day at the time configured under Send digests at.

5.2.4. Integrations

Manages ClusterControl integration modules. Starting from version 1.5.0, there are two modules available:

  • 3rd Party Notifications via clustercontrol-notifications package.
  • Cloud Provider integration via clustercontrol-cloud and clustercontrol-clud packages.

5.2.4.1. 3rd Party Notifications

Configures third-party notifications on events triggered by ClusterControl.

Supported services are:

  • Incident management services: PagerDuty, VictorOps, OpsGenie, ServiceNow
  • Chat services: Slack, Telegram
  • Others: Webhook
  • Add new integration
    • Opens the service integration configuration wizard.
  • Select Service
    • Pick a service that you want to configure. Different services require different sets of options.
  • Service Configuration
    • Specify a name for this integration together with the corresponding service key. The service key can be retrieved from the provider’s website. Click on the “Test” button to verify that ClusterControl is able to connect to the service provider.
  • Notification Configuration
    • Specify the cluster name together with the ClusterControl events that you would like to trigger an incident for. You can define multiple values for both fields. Details on the events are described in the following table:
    Event Description
    All Events All ClusterControl events including warning and critical events.
    All Warning Events All ClusterControl warning events, e.g. cluster degradation, network glitch. See Warning Events.
    All Critical Events All ClusterControl critical events, e.g. cluster failed, host failed. See Critical Events.
    Network Network related events, e.g. host unreachable, SSH issues.
    CMON Database Internal CMON database related events, e.g. unable to connect to CMON database, datadir mounted as read-only.
    Mail Mail system related events, e.g. unable to send mail, mail server unreachable.
    Cluster Cluster related events, e.g. cluster failed, cluster degradation, time drifting.
    Cluster Configuration Cluster configuration events, e.g. SST account mismatch.
    Cluster Recovery Recovery events, e.g. cluster or node recovery failures.
    Node Node related events, e.g. node disconnected, missing GRANT, failed to start HAProxy, failed to start NDB cluster nodes.
    Host Host related messages, e.g. CPU/disk/RAM/swap exceeds thresholds, memory full.
    Database Health Database health related events, e.g. memory usage of mysql servers, connections, missing primary key.
    Database Performance Alarms for long running transactions, replication lag and deadlocks.
    Software Installation Software installation related events, e.g. license expiration.
    Backup Backups related events, e.g. backup failed.
  • Edit
    • Edit the selected integration.
  • Delete
    • Remove the selected integration.
5.2.4.1.1. Warning Events
Area Alarms Severity Description
Node MySqlReplicationLag Warning MySQL replication slave lag, default 10 seconds.
MySqlReplicationBroken Warning The SQL thread has stopped.
CertificateExpiration Warning SSL certificate expiration time (<=31 days, >7 days).
MySqlAdvisor Warning Raised by wsrep_sst_method.js and wsrep_node_name.js advisors.
MySqlTableAnalyzer Warning Raised by schema_check_nopk.js advisor.
StorageMyIsam Warning Raised by schema_check_myisam.js advisor.
MySqlIndexAnalyzer Warning Raised by schema_check_dupl_index.js advisor.
Host HostSwapV2 Warning If a configurable number of pages has been swapped in/out during a configurable period of time. Default 20 pages in 10 minutes.
HostSwapping Warning >5% swap space has been used.
HostCpuUsage Warning >80%, <90% CPU used.
HostRamUsage Warning >80%, <90% RAM used.
HostDiskUsage Warning >80%, <90% disk space used on a monitored_mountpoint.
ProcessCpuUsage Warning >95% CPU used on average by a process for 15 minutes.
Backup BackupFailed Warning Backup job fails.
Recovery GaleraWsrepMissing Warning wsrep_cluster_address or wsrep_provider is missing.
GaleraSstAuth Warning SST settings (user/password) are wrong.
Network HostFirewall Warning Host is not responding to ping after 3 cycles.
HostSshSlow Warning It takes 6-12 seconds to SSH into a host.
Cluster ClusterTimeDrift Warning Time drift between ClusterControl and database nodes.
ClusterLicenseExpire Warning License is about to expire.
5.2.4.1.2. Critical Events
Area Alarms Severity Description
Node MySqlDisconnected Critical Node has disconnected.
MySqlGrantMissing Critical Node does not have the correct privileges set for the cmon user.
MySqlLongRunningQuery Critical A query has been running for too long. Only raised if configured; by default it is not.
ProcFailedRestart Critical A process (HAProxy, ProxySQL, Garbd, MaxScale) could not be restarted after failure.
CertificateExpiration Critical SSL certificate expiration time (<=7 days).
Host HostSwapV2 Critical If a configurable number of pages has been swapped in/out during a configurable period of time. Default 20 pages in 10 minutes.
HostSwapping Critical >20% swap space has been used.
HostCpuUsage Critical >90% CPU used.
HostRamUsage Critical >90% RAM used.
HostDiskUsage Critical >90% disk space used on a monitored_mountpoint.
ProcessCpuUsage Critical >99% CPU used on average by a process for 15 minutes.
Backup BackupVerificationFailed Critical Backup verification fails.
Recovery GaleraWsrepMissing Critical wsrep_cluster_address or wsrep_provider is missing, and still missing after 20 sample cycles which is ~ 100 seconds in this case)
GaleraClusterSplit Critical There is a split brain.
ClusterRecoveryFail Critical Recovery has failed.
GaleraConfigProblem1 Critical A configuration issue is preventing the node from starting.
GaleraNodeRecoveryFail Critical Automatic recovery has failed 3 consecutive times.
Network HostUnreachable Critical Host is not responding to ping after 3 cycles.
HostSshFailed Critical SSH to the host failed. Check SSH access to the host; it may also be down.
HostSshAuth Critical SSH authentication failed. Check whether the configured SSH key is authorized on the host.
HostSudoError Critical sudo command error on host.
HostSshSlow Critical It takes >12 seconds to SSH into a host.
Cluster ClusterFailure Critical Cluster has failed.
ClusterLicenseExpire Critical License is expired.

5.2.4.2. Cloud Providers

Manages resources and credentials for cloud providers. Note that this feature requires two modules called clustercontrol-cloud and clustercontrol-clud. The former is a helper daemon which extends CMON’s capability for cloud communication, while the latter is a file manager client to upload and download files on cloud instances. Both packages are dependencies of the clustercontrol UI package and will be installed automatically if they do not exist.

The credentials that have been set up here can be used to:

  • Manage cloud resources (instances, virtual network, subnet)
  • Deploy databases in the cloud
  • Upload backup to cloud storage

To create a cloud profile, click on Add Cloud Credentials and follow the wizard accordingly. Supported cloud providers are:

  • Amazon Web Service
  • Google Cloud Platform
  • Microsoft Azure
5.2.4.2.1. Amazon Web Services Credential

The stored AWS credential will be used by ClusterControl to list Amazon EC2 instances, spin up new instances when deploying a cluster, and upload/download backups to AWS S3. A quick way to verify a key pair with the AWS CLI is sketched after the field table below.

To create an access key for your AWS account root user:

  1. Use your AWS account email address and password to sign in to the AWS Management Console as the AWS account root user.
  2. On the IAM Dashboard page, choose your account name in the navigation bar, and then choose “My Security Credentials”.
  3. If you see a warning about accessing the security credentials for your AWS account, choose “Continue to Security Credentials”.
  4. Expand the Access keys (access key ID and secret access key) section.
  5. Choose “Create New Access Key”. Then choose “Download Key File” to save the access key ID and secret access key to a file on your computer. After you close the dialog box, you can’t retrieve this secret access key again.
Field Description
Name Credential name.
AWS Key ID Your AWS Access Key ID. You can get this from the AWS IAM Management Console.
AWS Key Secret Your AWS Secret Access Key. You can get this from the AWS IAM Management Console.
Default Region Choose the default AWS region for this credential.
Comment (Optional) Description of the credential.
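
Before saving the key pair in ClusterControl, you can optionally confirm it is valid with the AWS CLI. A sketch, where the key ID and secret are placeholders:

    # Example only: the key ID/secret below are placeholders.
    export AWS_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
    export AWS_SECRET_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    aws sts get-caller-identity    # returns the account and user ARN if the key works
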
5.2.4.2.1.1. AWS Instances

Lists your AWS instances. You can perform simple AWS instance management tasks directly from ClusterControl, which uses your defined AWS credentials to connect to the AWS API.

Field Description
AWS Credentials Choose which credential to use to access your AWS resources.
Stop Shut down the instance.
Reboot Restart the instance.
Terminate Shut down and terminate the instance.
5.2.4.2.1.2. AWS VPC

This allows you to conveniently manage your VPC from ClusterControl, which uses your defined AWS credentials to connect to AWS VPC. Most of the functionalities are dynamically populated and integrated to have the same look and feel as the AWS VPC console. Thus, you may refer to VPC User Guide for details on how to manage AWS VPC.

Field Description
Start VPC Wizard Open the VPC creation wizard. Please refer to Getting Started Guide for details on how to start creating a VPC.
AWS Credentials Choose which credentials to use to access your AWS resources.
Region Choose the AWS region for the VPC.
VPC

List of VPCs created under the selected region.

  • Create VPC - Create a new VPC.
  • Delete - Delete selected VPC.
  • DHCP Options Set - Specify the DHCP options for your VPC.
Subnet

List of VPC subnets created under the selected region.

  • Create - Create a new VPC subnet.
  • Delete - Delete selected subnet.
Route Tables List of routing tables created under the selected region.
Internet Gateway List of Internet gateways created under the selected region.
Network ACL List of network Access Control Lists created under the selected region.
Security Group List of security groups created under the selected region.
Running Instances List of all running instances under the selected region.
5.2.4.2.2. Google Cloud Platform Credentials

To create a service account (a gcloud CLI alternative for creating the JSON key is sketched after the field table below):

  1. Open the “Service Accounts” page in the Cloud Platform Console.
  2. Select your project and click “Continue”.
  3. In the left navigation, click “Service accounts”.
  4. Look for the service account for which you wish to create a key, click on the vertical ellipses button in that row, and click “Create key”.
  5. Select JSON as the “Key type” and click “Create”.
Field Description
Name Credential name.
Read from JSON The service account definition in JSON format.
Comment (Optional) Description of the credential.
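
As an alternative to the console steps above, the JSON key can also be created with the gcloud CLI. A sketch, where the key file name and service account email are placeholders:

    # Example only: creates a JSON key file for an existing service account.
    gcloud iam service-accounts keys create key.json \
        --iam-account=my-sa@my-project.iam.gserviceaccount.com
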
5.2.4.2.3. Microsoft Azure Credentials

Field Description
Name Credential name.
Read from JSON The service account definition in JSON format.
Comment (Optional) Description of the credential.

5.2.5. Key Management

Key Management allows you to manage a set of SSL certificates and keys that can be provisioned on your clusters. This feature allows you to create a Certificate Authority (CA) and/or self-signed certificates and keys. These can then be easily enabled and disabled for MySQL and PostgreSQL client-server connections using the SSL encryption feature. See Enable SSL Encryption for details.

5.2.5.1. Manage

Manage existing keys and certificates generated by ClusterControl.

  • Revoke
    • Revoke the selected certificate. This ends the validity of the certificate.
  • Generate
    • Regenerate an invalid or expired certificate. ClusterControl will generate a new key and certificate using the same information that was used when it was first generated.
  • Move
    • Move the selected certificate to another location. Clicking on this opens another dialog box where you can create/delete a directory under /var/lib/cmon/ca. Use this feature to organize and categorize the generated certificates per directory.

5.2.5.2. Generate

By default, the generated keys and certificates will be created under the default repository at /var/lib/cmon/ca.

  • New Folder
    • Create a new directory under the default repository.
  • Delete Folder
    • Delete the selected directory.
  • Refresh
    • Refresh the list.
5.2.5.2.1. Self-signed Certificate Authority and Key

Generate a self-signed Certificate Authority and key. You can use this Certificate Authority (CA) to sign your client and server certificates. A roughly equivalent manual OpenSSL command is sketched after this list.

  • Path
    • Certification repository path. To change the path, click on the file browser left-side menu. Default value is /var/lib/cmon/ca.
  • Certificate Authority and Key Name
    • Enter a name without extension, for example, MyCA or ca-cert.
  • Description
    • Provide a description for the certificate authority.
  • Country
    • Choose a country name from the dropdown menu.
  • State
    • State or province name.
  • Locality
    • City name.
  • Organization
    • Organization name.
  • Organization unit
    • Unit or department name.
  • Common name
    • Specify the server’s fully-qualified domain name (FQDN) or your name.
    • The Common Name value used for the server and client certificates/keys must each differ from the Common Name value used for the CA certificate. Otherwise, the certificate and key files will not work for servers compiled using OpenSSL.
  • Email
    • Email address.
  • Key length (bits)
    • The key length in bits. 2048 or higher is recommended. The longer the public and private keys, the harder they are to crack.
  • Expiration Date (days)
    • Certificate expiration in days.
  • Generate
    • Generate certificate and key.
  • Reset
    • Reset the form.
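
For comparison, a roughly equivalent self-signed CA can be produced manually with OpenSSL. This is a sketch with illustrative subject fields, not the exact commands ClusterControl runs:

    # Example only: 2048-bit CA key, 10-year validity, example subject fields.
    openssl genrsa -out MyCA.key 2048
    openssl req -x509 -new -key MyCA.key -days 3650 -out MyCA.crt \
        -subj "/C=SE/ST=Stockholm/L=Stockholm/O=Example/OU=DBA/CN=MyCA/emailAddress=admin@example.com"
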
5.2.5.2.2. Client/Server Certificates and Key

Sign with an existing CA or generate a self-signed certificate. ClusterControl generates the certificate and key depending on the type, server or client. The generated server key and certificate can then be used by the Enable SSL Encryption feature. A manual OpenSSL equivalent is sketched after this list.

  • Certificate Authority
    • Select an existing CA (by clicking on any existing CA on the left-hand side menu) or leave it empty to generate a self-signed certificate.
  • Type
    • server - Generate certificate for server usage.
    • client - Generate certificate for client usage.
  • Certificate and Key Name
    • Enter the certificate and key name. The same name will be used by ClusterControl to generate the certificate and key. For example, if you specify the name “severalnines”, ClusterControl will generate severalnines.key and severalnines.crt respectively.
  • Description
    • Provide a description for the certificate and key.
  • Country
    • Choose a country name from the dropdown menu.
  • State
    • State or province name.
  • Locality
    • City name.
  • Organization
    • Organization name.
  • Organization unit
    • Unit or department name.
  • Common name
    • Specify the server’s fully-qualified domain name (FQDN) or your name.
    • The Common Name value used for the server and client certificates/keys must each differ from the Common Name value used for the CA certificate. Otherwise, the certificate and key files will not work for servers compiled using OpenSSL.
  • Email
    • Email address.
  • Key length (bits)
    • The key length in bits. 2048 or higher is recommended.
  • Expiration Date (days)
    • Certificate expiration in days.
  • Generate
    • Generate certificate and key.
  • Reset
    • Reset the form.
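
Again for comparison only, signing a server certificate with the CA from the previous sketch could look as follows with OpenSSL (file names and subject fields are illustrations):

    # Example only: server key, CSR, and a certificate signed by MyCA.
    openssl genrsa -out severalnines.key 2048
    openssl req -new -key severalnines.key -out severalnines.csr \
        -subj "/C=SE/ST=Stockholm/L=Stockholm/O=Example/OU=DBA/CN=db1.example.com"
    openssl x509 -req -in severalnines.csr -CA MyCA.crt -CAkey MyCA.key \
        -CAcreateserial -days 365 -out severalnines.crt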

5.2.5.3. Import

Import keys and certificates into ClusterControl’s certificate repository. The imported keys and certificates can then be used to enable SSL encryption for server-client connections, replication or backup at a later stage. Before you perform the import, bear in mind the following (a quick certificate/key match check is sketched at the end of this section):

  1. Upload your certificate and key to a directory on the ClusterControl Controller host.
  2. Uncheck the Self-signed Certificate checkbox if the certificate is not self-signed.
  3. Provide a CA certificate if the certificate is not self-signed.
  4. Duplicate certificates will not be created.
  • Destination Path - Where you want the certificate to be imported to. Click on the file explorer window on the left to change the path.
  • Save As - Certificate name.
  • Certificate File - Physical path to the certificate file. For example: /home/user/ssl/file.crt.
  • Private Key File - Physical path to the key file. For example: /home/user/ssl/file.key.
  • Self-signed Certificate - Uncheck the checkbox if the certificate is not self-signed.
  • Import - Start the import process.
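
Before importing, it is worth confirming that the certificate and key actually belong together. A common OpenSSL check, using the example paths from above:

    # Example only: both commands must print the same MD5 hash.
    openssl x509 -noout -modulus -in /home/user/ssl/file.crt | openssl md5
    openssl rsa  -noout -modulus -in /home/user/ssl/file.key | openssl md5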

5.2.6. User Management

5.2.6.1. Teams

Manage teams (organizations) and users under ClusterControl. Note that only the first user created in ClusterControl will be able to create teams. You can have one or more teams, and each team consists of zero or more clusters and users. You can define many roles under ClusterControl, and a user must be assigned one role.

As a roundup, here is how the different entities relate to each other:

../_images/cc_erd.png

Note

ClusterControl creates the ‘Admin’ team by default.

5.2.6.2. Users

A user belongs to one team and is assigned a role. Users created here will be able to log in and see specific cluster(s), depending on their team and the clusters they have been assigned to.

Each role is defined with specific privileges under Access Control. ClusterControl default roles are Super Admin, Admin and User:

Role Description
Super Admin Able to see all clusters that are registered in the UI. The Super Admin can also create organizations and users. Only the Super Admin can transfer a cluster from one organization to another.
Admin Belongs to a specific organization, and is able to see all clusters registered in that organization.
User Belongs to a specific organization, and is only able to see the cluster(s) that he/she registered.

To create a custom role, see Access Control.

5.2.6.3. Access Control

ClusterControl uses Role-Based Access Control (RBAC) to restrict access to clusters and their respective deployment, management and monitoring features. This ensures that only authorized user requests are allowed. Access to functionality is fine-grained, allowing access to be defined by organization or user. ClusterControl uses a permissions framework to define how a user may interact with the management and monitoring functionality, after they have been authorized to do so.

You can create a custom role with its own set of access levels. Assign the role to specific user under Teams tab.

Note

The Super Admin role is not listed since it is a default role and has the highest level of privileges in ClusterControl.

5.2.6.3.1. Privileges
Privilege Description
Allow Allow access without modification. Similar to read-only mode.
Deny Deny access. The selected feature will not appear in the UI.
Manage Allow access with modification.
Modify Similar to Manage, for certain features that require modification.
5.2.6.3.2. Features Description
Feature Description
Overview Overview tab - ClusterControl > Overview
Nodes Nodes tab - ClusterControl > Nodes
Configuration Management Configuration management page - ClusterControl > Manage > Configurations
Query Monitor Query Monitor tab - ClusterControl > Query Monitor
Performance Performance tab - ClusterControl > Performance
Backup Backup tab - ClusterControl > Backup
Manage Manage tab - ClusterControl > Manage
Alarms Alarms tab - ClusterControl > Alarms
Jobs Jobs tab - ClusterControl > Jobs
Settings Settings tab - ClusterControl > Settings
Add Existing Cluster Add Existing Cluster button and page - ClusterControl > Add Existing Server/Cluster
Create Cluster Create Database Cluster button and page - ClusterControl > Create Database Cluster
Add Load Balancer Add Load Balancer page - ClusterControl > Actions > Add Load Balancer and ClusterControl > Manage > Load Balancer
Clone Clone Cluster page (Galera only) - ClusterControl > Actions > Clone Cluster
Access All Clusters Access all clusters registered under the same organization.
Cluster Registrations Cluster Registrations page - ClusterControl > Settings (top-menu) > Cluster Registrations
Cloud Providers Cloud Providers page - ClusterControl > Settings (top-menu) > Integrations > Cloud Providers
Search Search button and page - ClusterControl > Search
Create Database Node Create Database Node button and page - ClusterControl > Create Database Node
Developer Studio Developer Studio page - ClusterControl > Manage > Developer Studio
MySQL User Management MySQL user management sections - ClusterControl > Settings (top-menu) > MySQL User Management and ClusterControl > Manage > Schema and Users
Operational Reports Operational reports page - ClusterControl > Settings (top-menu) > Operational Reports
Integrations Integrations page - ClusterControl > Settings (top-menu) > Integrations
Web SSH Web-based SSH on every managed node - ClusterControl > Nodes > Node Actions > SSH Console
Custom Advisor Custom Advisors page - ClusterControl > Manage > Custom Advisors
SSL Key Management Key Management page - ClusterControl > Settings (top-menu) > Key Management

5.2.6.4. LDAP Settings

ClusterControl supports Active Directory, FreeIPA and LDAP authentication. This allows users to log in to ClusterControl using their corporate credentials instead of a separate password. LDAP groups can be mapped onto ClusterControl user groups to apply roles to an entire group. ClusterControl supports up to the LDAPv3 protocol, based on RFC 2307.

When authenticating, ClusterControl will first bind to the directory tree server (‘LDAP Host’) using the specified ‘Login DN’ user and password, then check whether the username you entered exists as a uid, cn or sAMAccountName under the ‘User DN’. If it exists, ClusterControl then uses that username to bind against the LDAP server and checks whether the user belongs to the group configured as ‘LDAP Group Name’ in ClusterControl. If it does, ClusterControl maps the user to the appropriate ClusterControl role and grants access to the UI.

The following flowchart summarizes the workflow:

../_images/ipaad_flowchart.png

You can map the LDAP group to corresponding ClusterControl role created under Access Control tab. This would ensure that ClusterControl authorizes the logged-in user based on the role assigned.

Once the LDAP settings are verified, log in to ClusterControl using the LDAP credentials (uid, cn or sAMAccountName with the respective password). The user will be authenticated and redirected to the ClusterControl dashboard page based on the assigned role. From this point, both ClusterControl and LDAP authentication will work.

Attention

For Active Directory, ensure you configure the exact distinguished name (with proper capitalization) since the LDAP interchange format (LDIF) fields are returned in capital letters.

For an example of how to set up OpenLDAP authentication with ClusterControl, please refer to this blog post, How to Setup Centralized Authentication of ClusterControl Users with LDAP.

5.2.6.4.1. LDAP Group

If LDAP authentication is enabled, you need to map ClusterControl roles to their respective LDAP groups. You can configure this by clicking on the ‘+’ icon to add an LDAP group:

Field Description Example
Team The organization that you want the LDAP group to be assigned to. Admin
LDAP Group Name The distinguished name of the LDAP group, relative to the Group DN cn=Database Administrator,ou=group
Role User role in ClusterControl. See Teams. Super Admin
5.2.6.4.2. Settings
  • Enable LDAP Authentication
    • Choose whether to enable or disable LDAP authentication.
  • LDAP Host
    • The LDAP server hostname or IP address. To use LDAP over SSL/TLS, specify LDAP URI instead, for example ldaps://LDAP_host.
  • LDAP Port
    • Default is 389, or 636 for LDAP over SSL. Make sure to allow connections from the ClusterControl host for both TCP and UDP protocols.
  • Base DN
    • The root LDAP node under which all other nodes exist in the directory structure.
  • Login DN
    • The distinguished name used to bind to the LDAP server. This is often the administrator or manager user. It can also be a dedicated login with minimal access that should be able to return the DN of the authenticating users. ClusterControl must do an LDAP search using this DN before any user can log in. This field is case-sensitive. A way to verify the bind with ldapsearch is sketched after this list.
  • Password
    • The password for the binding user specified in Login DN.
  • User DN
    • The user’s relative distinguished name (RDN) used to bind to the LDAP server. For example, if the LDAP user DN is CN=userA,OU=People,DC=ldap,DC=domain,DC=com, specify OU=People,DC=ldap,DC=domain,DC=com. This field is case-sensitive.
  • Group DN
    • The group’s relative distinguished name (RDN) used to bind to the LDAP server. For example, if the LDAP group DN is CN=DBA,OU=Group,DC=ldap,DC=domain,DC=com, specify OU=Group,DC=ldap,DC=domain,DC=com. This field is case-sensitive.
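
The bind credentials and user lookup can be verified outside ClusterControl with ldapsearch. A sketch, where the host, DNs and uid are placeholders matching the example fields above:

    # Example only: search for userA using the Login DN bind.
    ldapsearch -x -H ldap://ldap.example.com:389 \
        -D "cn=admin,dc=ldap,dc=domain,dc=com" -w 'BindPassword' \
        -b "OU=People,DC=ldap,DC=domain,DC=com" "(uid=userA)"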

Attention

ClusterControl does not support binding against a nested directory group. Ensure each LDAP user that authenticates to ClusterControl has direct membership in the LDAP group.

5.2.6.4.3. FreeIPA

ClusterControl is able to bind to a FreeIPA server and perform lookups on the compat schema. Once the DN for the user is retrieved, it tries to bind using the full DN (in the standard tree) with the entered password to verify the LDAP group of that user.

Thus, for FreeIPA, the user’s and group’s DNs should use the compat schema, with cn=compat replacing the default cn=accounts in the ClusterControl LDAP Settings, except for the ‘Login DN’, as shown in the following screenshot (example DNs are sketched below the screenshot):

../_images/ipaad_set_ipa.png
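
For illustration, assuming the domain example.com, the compat-tree DNs would look like the following, while the Login DN keeps using the standard accounts tree:

    # Example DNs only.
    User DN : cn=users,cn=compat,dc=example,dc=com
    Group DN: cn=groups,cn=compat,dc=example,dc=com
    Login DN: uid=admin,cn=users,cn=accounts,dc=example,dc=com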

For an example of integrating ClusterControl with FreeIPA and Windows Active Directory, please refer to this blog post, Integrating ClusterControl with FreeIPA and Windows Active Directory for Authentication.

5.2.6.5. Clusters

Manage database clusters inside ClusterControl.

  • Delete
    • Unregister the selected database cluster from the ClusterControl UI. This action will NOT delete the actual database cluster.
  • Change Team
    • Move the selected database cluster to another organization created under Teams.

5.2.7. Documentation

Opens ClusterControl online documentation page.

5.2.8. Give us Feedback

Opens a feedback form which you can use to send feedback, report bugs, submit feature requests or ask us questions. The submitted form will be sent directly to our support system and you will receive the response in your email inbox.

5.2.9. What’s New?

Opens the What’s New popup. This popup also appears the first time a user logs in after a new installation or upgrade.

5.2.10. Support Forum

Opens the Severalnines community support forum. Community users are encouraged to use this support channel. Licensed users should raise a support ticket instead.

5.2.11. Switch Theme

A switcher for a dark or light colour background of the side menu.

5.2.12. Close Menu

Collapses and expands the side menu.

5.3. User Guide for MySQL

This user guide covers ClusterControl with MySQL-based clusters, namely:

  • Galera Cluster for MySQL
    • MySQL Galera Cluster
    • Percona XtraDB Cluster
    • MariaDB Galera Cluster
  • MySQL Cluster
  • MySQL/MariaDB Replication
  • MySQL/MariaDB single instance
  • MySQL Group Replication

Contents: