This section provides detailed information on ClusterControl's requirements for hardware, system environment and security policy.
3.1. Hardware

The following is the expected system environment for the ClusterControl host:
- Architecture: x86_64 only
- RAM: >2 GB
- CPU: >2 cores
- Disk space: >20 GB
- Network: conventional network interface (eth, en, em)
- Tested cloud platforms:
  - AWS EC2
  - Rackspace Cloud
  - Digital Ocean
- Internet connection (for selected cluster deployments)
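As a quick sanity check (a sketch only; not part of ClusterControl), you can compare a host against these minimums from the shell:

```shell
# Print this host's resources next to the recommended minimums above.
cores=$(nproc)                                             # CPU core count
mem_gb=$(( $(awk '/MemTotal/ {print $2}' /proc/meminfo) / 1024 / 1024 ))
echo "CPU cores: ${cores} (recommended: >2)"
echo "RAM:       ${mem_gb} GB (recommended: >2)"
df -h / | awk 'NR==2 {print "Disk free on /: " $4 " (recommended: >20 GB)"}'
```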
3.2. Operating system
ClusterControl has been tested on the following operating systems:
- RedHat/CentOS 6.x/7.x
- Ubuntu 12.04/14.04/16.04 LTS
- Debian 7.x/8.x
The following do not work:
- CentOS 5.4 and earlier
- Fedora Core 16 and earlier
3.3. Software Dependencies

The following software is required by ClusterControl:
- MySQL server (5.1 or later, preferably 5.5 or later)
- Apache web server (2.2 or later)
  - allow .htaccess override
- PHP (5 or later)
  - RHEL: php, php-mysql, php-gd, php-ldap, php-curl
  - Debian: php5-common, php5-mysql, php5-gd, php5-ldap, php5-curl, php5-json
- Linux kernel security (SELinux or AppArmor) - must be disabled or set to permissive mode
- BASH (recommended: version 4 or later)
- NTP server - all servers' time must be synced within one time zone
- netcat - for streaming backups
If ClusterControl is installed via the installation script (install-cc) or a package manager (yum/apt), all dependencies are automatically satisfied.
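As an illustration of the kernel security requirement above (assuming an RHEL-family host with SELinux), the mode can be switched to permissive. The sketch below edits a copy of the config file so the change can be reviewed before applying it to /etc/selinux/config itself:

```shell
# Runtime change (lasts until reboot, run as root): setenforce 0
# Persistent change: set SELINUX=permissive in /etc/selinux/config.
# Work on a copy first; fall back to a stub if the file does not exist here.
cp /etc/selinux/config /tmp/selinux-config 2>/dev/null \
  || printf 'SELINUX=enforcing\n' > /tmp/selinux-config
sed -i 's/^SELINUX=.*/SELINUX=permissive/' /tmp/selinux-config
grep '^SELINUX=' /tmp/selinux-config
```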
3.4. Supported Browsers

We highly recommend using the following web browsers when accessing the ClusterControl UI:
- Google Chrome
- Mozilla Firefox

Keep these browsers up to date, as ClusterControl is very likely to take advantage of new features available in their latest versions. ClusterControl is built and tested only on the web browsers mentioned above. Other major web browsers like Safari, Opera and Internet Explorer could also work.
3.5. Supported Databases

The following table shows the supported database clusters with the recommended minimum number of nodes:

| Database type | Cluster type | Version | Minimum recommended nodes |
| --- | --- | --- | --- |
| MySQL/MariaDB | MySQL Cluster (NDB) | 7.1 and later | 5 hosts (2 data nodes + 2 API/mgmd nodes + 1 ClusterControl node) |
| MySQL/MariaDB | MySQL replication | 5.5 and later | 3 hosts (1 master node + 1 standby master/slave + 1 ClusterControl node) |
| MySQL/MariaDB | Galera Cluster | 5.5 and later | 4 hosts (3 master nodes + 1 ClusterControl node) |
| MySQL/MariaDB | Single instance | 5.5 and later | 2 hosts (1 MySQL node + 1 ClusterControl node) |
| MongoDB/Percona Server for MongoDB | Sharded cluster | 3.2 and later | 4 hosts (3 config servers/2 mongos/2 shard servers + 1 ClusterControl node) |
| MongoDB/Percona Server for MongoDB | Replica set | 3.2 and later | 3 hosts (2 replica servers + 1 ClusterControl node) |
| PostgreSQL | Single instance | 9.x | 2 hosts (1 PostgreSQL node + 1 ClusterControl node) |
| PostgreSQL | Replication | 9.x | 3 hosts (1 master node + 1 slave node + 1 ClusterControl node) |
3.6. Firewall and Security Groups

It is important to secure the ClusterControl node and the database cluster. We recommend isolating the database infrastructure from the public Internet and whitelisting only known hosts or networks to connect to the database cluster.
ClusterControl requires ports used by the following services to be opened/enabled:
- ICMP (echo reply/request)
- SSH (default is 22)
- HTTP (default is 80)
- HTTPS (default is 443)
- MySQL (default is 3306)
- CMON RPC (default is 9500)
- CMON RPC TLS (default is 9501)
- CMON Events (default is 9510)
- CMON SSH (default is 9511)
- CMON Cloud (default is 9518)
- Streaming port for backups through netcat (default is 9999)
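For illustration (assuming a firewalld-based host; adapt this to your firewall of choice), the TCP ports above can be turned into rules. The loop below only prints the commands so they can be reviewed before being run as root, followed by firewall-cmd --reload; ICMP echo is typically allowed by default and is not covered here:

```shell
# Print one firewalld rule per ClusterControl TCP port listed above.
for port in 22 80 443 3306 9500 9501 9510 9511 9518 9999; do
  echo "firewall-cmd --permanent --add-port=${port}/tcp"
done
```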
ClusterControl supports various database and application vendors, and each has its own set of standard ports that need to be reachable. The following ports and services need to be reachable by ClusterControl on the managed database nodes:
| Database Cluster (Vendor) | Port/Service |
| --- | --- |
| MySQL/MariaDB (single instance and replication) | |
| MySQL Cluster (NDB) | |
| MongoDB replica set | |
| MongoDB sharded cluster | |
| Galera Arbitrator (garbd) | |
3.7. Hostnames and IP addresses

It is recommended to set up a proper host definition file in /etc/hosts. The file should be identical on all servers in your cluster; otherwise, your database cluster might not work as expected with ClusterControl. Below is an example of a host definition file:
127.0.0.1       localhost.localdomain localhost
10.0.1.10       clustercontrol clustercontrol.example.com
10.0.1.11       server1 server1.example.com
10.0.1.12       server2 server2.example.com
You need to keep the 127.0.0.1 entry separate from your real hostname, mapping it only to localhost.localdomain. To verify that you have set up the hostname correctly, ensure the following command returns the primary IP address:
$ hostname -I
10.0.1.10 # This is good. The IP address returned is neither 127.0.0.1 nor 127.0.1.1
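This check can also be scripted; the helper below (the function name is our own, not a ClusterControl tool) flags a loopback result:

```shell
# Print "loopback" if the address belongs to 127.0.0.0/8 (a sign of a
# misconfigured /etc/hosts), otherwise print "ok".
check_primary_ip() {
  case "$1" in
    127.*) echo "loopback" ;;
    *)     echo "ok" ;;
  esac
}
check_primary_ip "$(hostname -I | awk '{print $1}')"
```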
3.8. Operating System User

The ClusterControl controller (cmon) process requires a dedicated operating system user to perform various management and monitoring commands on the managed nodes. This user, defined as sshuser in the CMON configuration file, must exist on all managed nodes and must be able to perform super-user commands.

We recommend installing ClusterControl as 'root'; running as root is the easiest option. If you perform the installation as a user other than 'root', the following must be true:
- The OS user must exist on all nodes
- The OS user must not be ‘mysql’
- ‘sudo’ program must be installed on all hosts
- The OS user must be allowed to do 'sudo', i.e., it must be in sudoers
- The OS user must be configured with a proper PATH environment variable
For sudoers, using passwordless sudo is recommended. To set up a passwordless sudo user, edit the sudoers file with the following command (as root):

$ visudo

Then add the following line at the end, replacing [OS user] with the sudo username of your choice:

[OS user] ALL=(ALL) NOPASSWD: ALL
Open a new terminal to verify it works. You should now be able to run the command below without entering a password:
$ sudo ls /usr
You can also verify this with the SSH command line used by CMON (assuming passwordless SSH has been set up correctly):
$ ssh -qt [OS user]@[IP address/hostname] "sudo ls /usr"
Here, [OS user] is the name of the user you intend to use during the installation, and [IP address/hostname] is the IP address or hostname of a node in your cluster.
3.9. Passwordless SSH

Proper passwordless SSH setup from the ClusterControl node to all nodes (including the ClusterControl node itself) is mandatory. When adding a new node, the node must be accessible via passwordless SSH from ClusterControl beforehand.
3.9.1. Setting up passwordless SSH

To set up passwordless SSH, generate an SSH key as the designated user on the ClusterControl host and copy it to the target host. Note that ClusterControl also requires passwordless SSH to itself, so do not forget to set this up as described in the example below.

Most of the controller's sampling tasks are performed locally, but some tasks require working passwordless SSH to the node itself, e.g. starting netcat when performing a backup (to stream the created backup to another node). There are also various places where ClusterControl performs execution uniformly, regardless of the node's role or type. Setting this up is therefore required, and failing to do so will cause ClusterControl to raise an alarm.

It is NOT necessary to set up two-way passwordless SSH, e.g. from the managed database node back to the ClusterControl node.
The example below shows how a root user on the ClusterControl host generates and copies an SSH key to a database host, 192.168.0.10:

$ whoami
root
$ ssh-keygen -t rsa # press Enter on all prompts
$ ssh-copy-id 192.168.0.10 # insert the root password of 192.168.0.10 if prompted

Repeat the ssh-copy-id command for all nodes (including the ClusterControl node itself).
If you are running as a sudo user, e.g. sysadmin, here is an example:

$ whoami
sysadmin
$ ssh-keygen -t rsa # press Enter on all prompts
$ ssh-copy-id 192.168.0.10 # insert the sysadmin password of 192.168.0.10 if prompted

Repeat the ssh-copy-id command for all nodes (including the ClusterControl node itself).
You should now be able to SSH from the ClusterControl host to the other server(s) without a password:
$ ssh [username]@[server IP address]
If it does not work, check the permissions of the .ssh directory and the files in it (typically the directory should be mode 700 and the key files mode 600). Some users also need to adjust their SSH daemon configuration; do not forget to restart the SSH daemon if you make changes to it.
To prevent a long-running SSH connection from being terminated by a firewall or switch, you may also want to set the following in /etc/ssh/ssh_config on the ClusterControl node:
ServerAliveInterval 30
ServerAliveCountMax 10
For AWS cloud users, you can use the corresponding key pair by uploading it onto the ClusterControl host and specifying its physical location under ssh_identity in the CMON configuration file.
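For example (the key filename below is only an assumption for illustration), the ssh_identity entry in the CMON configuration file might look like:

```
ssh_identity=/root/.ssh/aws-keypair.pem
```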
If you use DSA (CMON defaults to RSA), then you need to follow these instructions.
3.9.2. Sudo password

Sudoers with or without a password are both possible via the sudo configuration option. If undefined, CMON escalates to super-user without a password. To specify the sudo password, add the following option to the CMON configuration file:
sudo="echo 'thesudopassword' | sudo -S 2>/dev/null"
The 2>/dev/null in the sudo command is compulsory, to exclude stderr from the response. Do not forget to restart the cmon service to load the option.
3.9.3. Encrypted home directory

If the sudo user's home directory is encrypted, you might face the following scenarios:
- The first SSH login will require a password, even though you have copied the public key to the remote host.
- If you open another SSH session while the first session is still active, you will be able to authenticate without a password, and key authentication succeeds.
Encrypted home directories are not decrypted until login succeeds, and your SSH keys are stored in your home directory. The first SSH connection you make will therefore require a password, while subsequent connections no longer need one, since the SSH service can then read authorized_keys (inside the user's home directory) in the decrypted environment.
To solve this, you need to follow these instructions.
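One common workaround (our assumption here, not necessarily the procedure behind the linked instructions) is to keep the authorized keys outside the encrypted home directory by pointing the SSH daemon at another location in /etc/ssh/sshd_config, placing each user's public keys there, and restarting sshd:

```
AuthorizedKeysFile /etc/ssh/authorized_keys/%u
```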
3.10. Timezone

ClusterControl requires all servers' time to be synchronized and running within the same time zone. Verify this with the following command:
$ date
Mon Sep 17 22:59:24 UTC 2013
To change the time zone, e.g. from UTC to Pacific time:

$ rm /etc/localtime
$ ln -sf /usr/share/zoneinfo/US/Pacific /etc/localtime
UTC is however recommended. Configure an NTP client on each host with a working time server to avoid time drift between hosts, which could cause inaccurate reporting or incorrect graph plotting. To immediately sync a server's time with a time server, use the following command:

$ ntpdate -u [NTP server, e.g. europe.pool.ntp.org]
3.11. ClusterControl Editions

ClusterControl comes in four editions - Community, Standalone, Advanced and Enterprise - within the same binary. Please review the ClusterControl product page for a feature comparison between these editions. To upgrade from Community to Standalone, Advanced or Enterprise, you need a valid software license. When the license expires, ClusterControl defaults back to the Community Edition.

All installation methods automatically configure ClusterControl with a 30-day fully functional trial license. For commercial information, please contact us.