6. Components

Starting from v1.2.12, ClusterControl consists of four components:

  • ClusterControl Controller (cmon), package clustercontrol-controller: The brain of ClusterControl. A backend service performing automation, management, monitoring and scheduling tasks. All the collected data is stored directly inside the CMON database.
  • ClusterControl REST API [1], package clustercontrol-cmonapi: Interprets request and response data between the ClusterControl UI and the CMON database.
  • ClusterControl UI, package clustercontrol: A modern web user interface to visualize and manage the cluster. It interacts with the CMON controller via the remote procedure call (RPC) or REST API interface.
  • ClusterControl NodeJS, package clustercontrol-nodejs: This optional package is introduced in ClusterControl version 1.2.12 to provide an interface for notification services and integration with 3rd party tools.

6.1. ClusterControl Controller (CMON)

ClusterControl Controller (CMON) is the core backend process that performs all automation and management procedures. It is usually installed as /usr/sbin/cmon. It comes with a collection of helper scripts in the /usr/bin directory (prefixed with s9s_) to complete specific tasks. However, some of the scripts have been deprecated because the corresponding tasks are now handled by the CMON core process.

The CMON controller package is available at the Severalnines download site. Redhat-based systems should download and install the RPM package, while Debian-based systems should download and install the DEB package. The package name is formatted as follows:

  • RPM package (Redhat-based systems): clustercontrol-controller-[version]-[build number]-[architecture].rpm
  • DEB package (Debian-based systems): clustercontrol-controller-[version]-[build number]-[architecture].deb
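As an illustration, a downloaded package can be installed with the distribution's package manager (the bracketed placeholders stand for the actual version, build number and architecture):

rpm -Uvh clustercontrol-controller-[version]-[build number]-[architecture].rpm  # Redhat-based systems
dpkg -i clustercontrol-controller-[version]-[build number]-[architecture].deb   # Debian-based systems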

A configuration file /etc/cmon.cnf is required to initially set up the CMON Controller. It is possible to have several configuration files, one per cluster, as described in the next sections.

6.1.1. Command Line Arguments

If you just run cmon without any arguments, it runs in the background by default. The ClusterControl Controller (cmon) supports several command line options as shown below:

-h, --help

  • Print the help.

--help-config

  • Prints the help for the configuration file parameters.

-v, --version

  • Prints out the version number and build info.

--logfile=[filepath]

  • The path of the log file to be used.

-d, --nodaemon

  • Run in foreground. Ctrl + C to exit.

-r, --directory=[directory]

  • Running directory.

-p, --rpc-port=[integer]

  • Listen on RPC port. Default is 9500.

-t, --rpc-tls=<bool>

  • Enable TLS on RPC port. Default is false.

-b, --bind-addr='ip1,ip2..'

  • Bind the Remote Procedure Call (RPC) interface to the given IP addresses. By default, cmon binds to ‘127.0.0.1’ and ‘::1’. If other bind addresses are needed, they can be defined in the file /etc/default/cmon; the CMON init script translates the RPC_* variables into command line options.
  • Example
RPC_PORT=9500
RPC_BIND_ADDRESSES="10.10.10.13,192.168.33.1"

-u, --upgrade-schema

  • Try to upgrade the CMON schema (Supported from CMON version 1.2.12 and later).

-U, --cmondb-user=USERNAME

  • Sets the user name to access the CMON database.

-P, --cmondb-password=PASSWORD

  • Sets the password used to access the CMON database.

-H, --cmondb-host=HOSTNAME

  • Access the CMON database on the given host.

-D, --cmondb-name=DATABASE

  • Sets the CMON database name.

-e, --events-client=URL

  • Additional RPC URL where backend sends events.
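As an example, the following command runs the controller in the foreground with an explicit log file and the default RPC port (the values are illustrative):

$ cmon --nodaemon --logfile=/var/log/cmon.log --rpc-port=9500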

6.1.2. Configuration File

A single CMON Controller process is able to monitor one or more clusters. Each cluster requires one exclusive configuration file residing in the /etc/cmon.d/ directory. The default CMON configuration file is located at /etc/cmon.cnf and is commonly used to store the default (minimal) configuration for the CMON process to run.

Example of the CMON main configuration file located at /etc/cmon.cnf:

mysql_port=3306
mysql_hostname=127.0.0.1
mysql_password=cm0nP4ss
mysql_basedir=/usr
hostname=10.0.0.196
logfile=/var/log/cmon.log
rpc_key=390faeffb8166277a4f25336a69efa50915635a7

For the first cluster (cluster_id=1), the configuration options should be stored inside /etc/cmon.d/cmon_1.cnf. For the second cluster, it would be /etc/cmon.d/cmon_2.cnf with cluster_id=2, and so on. The following shows example content of a CMON cluster configuration file located at /etc/cmon.d/cmon_1.cnf:

cluster_id=1
cmon_user=cmon
created_by_job=1
db_stats_collection_interval=30
enable_query_monitor=1
galera_vendor=codership
galera_version=3.x
group_owner=1
host_stats_collection_interval=60
hostname=10.0.0.196
logfile=/var/log/cmon_1.log
mode=controller
monitored_mountpoints=/var/lib/mysql/
monitored_mysql_port=3306
monitored_mysql_root_password=7XU@Wy4nqL9
mysql_bindir=/usr/bin/
mysql_hostname=127.0.0.1
mysql_password=cm0nP4ss
mysql_port=3306
mysql_server_addresses=10.0.0.99:3306,10.0.0.253:3306,10.0.0.181:3306
mysql_version=5.6
name='Galera Cluster'
os=redhat
osuser=root
owner=1
pidfile=/var/run
basedir=/usr
repl_password=9hHRgQLSsZz3Vd4a
repl_user=rpl_user
rpc_key=3V0RaV6dE8KSyClE
ssh_identity=/root/ashrafawskey.pem
ssh_port=22
type=galera
vendor=codership

An example of CMON configuration file hierarchy is as follows:

  • Default configuration: /etc/cmon.cnf, no cluster identifier, logfile=/var/log/cmon.log
  • Cluster #1 (Galera): /etc/cmon.d/cmon_1.cnf, cluster_id=1, logfile=/var/log/cmon_1.log
  • Cluster #2 (MySQL Cluster): /etc/cmon.d/cmon_2.cnf, cluster_id=2, logfile=/var/log/cmon_2.log
  • Cluster #N (cluster type): /etc/cmon.d/cmon_N.cnf, cluster_id=N, logfile=/var/log/cmon_N.log

Note

It’s highly recommended to separate CMON logging for each cluster into its own log file. In the above example, we can see that cluster_id and logfile are two important configuration options to distinguish between clusters.

The CMON Controller imports the configuration options defined in each configuration file into the CMON database when the process starts up. Once loaded, CMON then uses all the loaded information to manage clusters based on the cluster_id value.

6.1.3. Configuration Options

The options and values described below must not be separated by whitespace around the equals sign. Any change to the CMON configuration file requires a CMON service restart before it is applied.
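For example, on a typical installation the restart can be done with one of the following commands, depending on the init system in use:

service cmon restart      # SysV init
systemctl restart cmon    # systemd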

The configuration options can be divided into a number of types:

  1. General
  2. CMON
  3. Operating system
  4. SSH
  5. Nodes (MySQL, MongoDB, PostgreSQL)
  6. Monitoring
  7. Management
  8. Security & Encryption

The following is a list of common configuration options for the CMON Controller configuration file. You can also see them by using the --help-config parameter in the terminal:

$ cmon --help-config

6.1.3.1. General

cluster_id=<integer>

  • Cluster identifier. This will be used by CMON to indicate which cluster to provision. It must be unique; two clusters cannot share the same ID.
  • Example: cluster_id=1

name=<string>

  • Cluster name. The cluster name configured under ClusterControl > DB cluster > Settings > General Settings > Cluster Name takes precedence over this.
  • Example: name=cluster_1

cluster_name=<string>

  • Alias to name.

type=<string>

  • Cluster type. Supported values are galera, mysql_single, mysqlcluster, mongodb, postgresql_single, replication, group_replication.
  • Example: type=galera

cluster_type

  • Alias to type.

created_by_job=<integer>

  • The ID of the job that created this cluster. This is usually generated automatically by ClusterControl.
  • Example: created_by_job=13

6.1.3.2. CMON

mode=<string>

  • CMON role. Supported values are controller, dual, agent, hostonly.
  • Example: mode=controller

agentless=<boolean integer>

  • CMON controller mode (deprecated). Agents are no longer supported. 0 for agentful or 1 for agentless (default).
  • Example: agentless=1

logfile=<path to log file>

  • CMON log file location. This is where CMON logs its activity. The file will be automatically generated if it doesn’t exist. CMON will write to syslog by default.
  • Example: logfile=/var/log/cmon.log

pidfile=<path to PID directory>

  • CMON process identifier file directory. Keeping the default value is recommended.
  • Example: pidfile=/var/run

mysql_hostname=<string>

  • The MySQL hostname or IP address where the CMON database resides. Using an IP address is recommended.
  • Example: mysql_hostname=192.168.0.10

mysql_password=<string>

  • The MySQL password for user cmon to connect to CMON database. Alphanumeric values only.
  • Example: mysql_password=cMonP4ss

mysql_port=<integer>

  • The MySQL port used by CMON to connect to the CMON database.
  • Example: mysql_port=3306

6.1.3.3. Operating system

os=<string>

  • The operating system running across the cluster, including the ClusterControl host. Use ‘redhat’ for Redhat-based distributions (CentOS/Red Hat Enterprise Linux/Oracle Linux) or ‘debian’ for Debian-based distributions (Debian/Ubuntu).
  • Example: os=redhat

osuser=<string>

  • Operating system user that will be used by CMON to perform automation tasks like cluster recovery, backups and upgrades. This user must be able to perform super-user activities. Using root is recommended.
  • Example: osuser=root

os_user=<string>

  • Alias to osuser.

sshuser=<string>

  • Alias to osuser.

sudo="echo '<sudo password>' | sudo -S 2>/dev/null"

  • The command used to obtain superuser permissions. If the sudo user requires a password, specify the sudo command together with the sudo password here. The sudo output must be kept clean by redirecting stderr elsewhere; therefore, it is compulsory to append -S 2>/dev/null to the sudo command.
  • Example: sudo="echo 'My5ud0' | sudo -S 2>/dev/null"

sudo_opts=<command>

  • Alias to sudo.

hostname=<string>

  • Hostname or IP address of the ClusterControl Controller (cmon) host.
  • Example: hostname=192.168.0.10

wwwroot=<path to CMONAPI and ClusterControl UI>

  • Path to CMONAPI and ClusterControl UI. If not set, it defaults to ‘/var/www/html’ for Redhat-based distributions or ‘/var/www’ for Debian-based distributions.
  • Example: wwwroot=/var/www/html

vendor=<string>

  • Database vendor name. ClusterControl needs to know this in order to distinguish the vendor’s relevant naming conventions, especially for package names, daemon names, deployment steps, recovery procedures and more. Supported values at the moment are percona, codership, mariadb, mongodb, and oracle.
  • Example: vendor=codership

6.1.3.4. SSH

ssh_identity=<path to SSH key or key pair>

  • The SSH key or key pair file that will be used by CMON to connect to managed nodes (including the ClusterControl node) passwordlessly. If undefined, CMON will look under the home directory of os_user for the .ssh/id_rsa file.
  • Example: ssh_identity=/root/.ssh/id_rsa

ssh_keypath=<path to SSH key or key pair>

  • Alias to ssh_identity.

identity_file=<path to SSH key or key pair>

  • Alias to ssh_identity.

ssh_port=<integer>

  • The SSH port used by CMON to connect to managed nodes. If undefined, CMON will use port 22.
  • Example: ssh_port=22

ssh_options=<string>

  • The SSH options used by CMON to connect to managed nodes. See the SSH manual page for details.
  • Example: ssh_options='-nqtt'

ssh_acquire_tty=<boolean integer>

  • Setting for libssh: should it acquire a remote tty. Default is 1 (true).
  • Example: ssh_acquire_tty=1

ssh_password=<string>

  • The SSH password used for connection to nodes.
  • Example: ssh_password=P4ssw0rd123

ssh_timeout=<integer>

  • Network timeout value in seconds for SSH connections. Default is 30.
  • Example: ssh_timeout=30

libssh_timeout=<integer>

  • Alias to ssh_timeout.

libssh_loglevel=<integer>

  • Setting for libssh logging verbosity to stdout. Accepted values are 0 (NONE), 1 (WARN), 2 (INFO), 3 (DEBUG), 4 (TRACE).
  • Example: libssh_loglevel=2

6.1.3.5. Monitoring

monitored_mountpoints=<list of paths to be monitored>

  • Comma separated list of MySQL/MongoDB/TokuMX/PostgreSQL data directories (mount points) on the database nodes to be monitored for disk performance.
  • Example: monitored_mountpoints=/var/lib/mysql,/mnt/data/mysql

monitored_nics=<list of NICs to be monitored>

  • Comma separated list of network interface cards (NICs) to be monitored for network performance.
  • Example: monitored_nics=eth1,eth2

db_stats_collection_interval=<integer>

  • Database statistics collection interval in seconds. The lowest value is 1. Default is 30 seconds.
  • Example: db_stats_collection_interval=30

host_stats_collection_interval=<integer>

  • Host statistics collection interval in seconds. The lowest value is 1. Default is 30 seconds.
  • Example: host_stats_collection_interval=30

lb_stats_collection_interval=<integer>

  • Load balancer stats collection interval. Default is 30.
  • Example: lb_stats_collection_interval=30

db_schema_stats_collection_interval=<integer>

  • How often database growth and table checks are performed in seconds. This translates to information_schema queries. Default is 10800 seconds (3 hours). 0 means disabled.
  • Example: db_schema_stats_collection_interval=10800

db_log_collection_interval=<integer>

  • Database log files collection interval. Default is 600.
  • Example: db_log_collection_interval=600

db_long_query_time_alarm=<integer>

  • If a query takes longer than db_long_query_time_alarm to execute, an alarm will be raised containing detailed information about blocked and long running transactions. Default is 10 seconds.
  • Example: db_long_query_time_alarm=5

db_schema_max_objects=<integer>

  • Maximum number of database objects that ClusterControl will pull from monitored database nodes.
  • Example: db_schema_max_objects=500

db_hourly_stats_collection_interval=<integer>

  • Database statistics collection interval in seconds. Default is 5.
  • Example: db_hourly_stats_collection_interval=5

enable_mysql_timemachine=<boolean integer>

  • This determines whether ClusterControl should enable the MySQL time machine status and variable collections. The status time machine allows you to select a status variable for a time range and compare the values at the start and end of that range from the ClusterControl UI. Default is 0, meaning it is disabled.
  • Example: enable_mysql_timemachine=1

6.1.3.6. Management

enable_cluster_autorecovery=<boolean integer>

  • If undefined, CMON defaults to 0 (false) and will NOT perform automatic recovery if it detects cluster failure. Supported value is 1 (cluster recovery is enabled) or 0 (cluster recovery is disabled).
  • Example: enable_cluster_autorecovery=1

enable_node_autorecovery=<boolean integer>

  • If undefined, CMON defaults to 0 (false) and will NOT perform automatic recovery if it detects node failure. Supported value is 1 (node recovery is enabled) or 0 (node recovery is disabled).
  • Example: enable_node_autorecovery=1

enable_autorecovery=<boolean integer>

  • If undefined, CMON defaults to 0 (false) and will NOT perform automatic recovery if it detects node or cluster failure. Supported value is 0 (cluster and node recovery are disabled) or 1 (cluster and node recovery are enabled). This setting will internally set enable_node_autorecovery and enable_cluster_autorecovery to the specified value.
  • Example: enable_autorecovery=1

netcat_port=<integer>

  • The netcat port used to stream backups. Default is 9999.
  • Example: netcat_port=9999
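As an illustration, a cluster configuration file might enable automatic recovery and keep the default backup streaming port with the following lines (adapted from the examples above; adjust to your environment):

enable_cluster_autorecovery=1
enable_node_autorecovery=1
netcat_port=9999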

6.1.3.7. Nodes (MySQL)

mysql_server_addresses=<string>

  • Comma separated list of MySQL hostnames or IP addresses (with or without port). For MySQL Cluster, this should be the list of MySQL API node IP addresses.
  • Example: mysql_server_addresses=192.168.0.11:3306,192.168.0.12:3306,192.168.0.13:3306

datanode_addresses=<string>

  • Exclusive for MySQL Cluster. Comma separated list of data node hostnames or IP addresses.
  • Example: datanode_addresses=192.168.0.41,192.168.0.42

mgmnode_addresses=<string>

  • Exclusive for MySQL Cluster. Comma separated list of management node hostnames or IP addresses.
  • Example: mgmnode_addresses=192.168.0.51,192.168.0.52

ndb_connectstring=<string>

  • Exclusive for MySQL Cluster. NDB connection string for the cluster.
  • Example: ndb_connectstring=192.168.0.51:1186,192.168.0.52:1186

ndb_binary=<string>

  • Exclusive for MySQL Cluster. NDB binary for data node. Supported values are ndbd or ndbmtd.
  • Example: ndb_binary=ndbmtd

db_configdir=<string>

  • Exclusive for MySQL Cluster. Directory where the configuration files (my.cnf/config.ini) of the cluster are stored.
  • Example: db_configdir=/etc/mysql

monitored_mysql_port=<integer>

  • MySQL port for the managed cluster. ClusterControl assumes all DB nodes are running on the same MySQL port.
  • Example: monitored_mysql_port=3306

monitored_mysql_root_password=<string>

  • MySQL root password for the managed cluster. ClusterControl assumes all DB nodes are using the same root password. This is required when you want to scale your cluster by adding a new DB node or replication slave.
  • Example: monitored_mysql_root_password=r00tPassword

mysql_basedir=<MySQL base directory location>

  • The MySQL base directory used by CMON to find MySQL client related binaries.
  • Example: mysql_basedir=/usr

mysql_bindir=<MySQL binary directory location>

  • The MySQL binary directory used by CMON to find MySQL client related binaries.
  • Example: mysql_bindir=/usr/bin

repl_user=<string>

  • The MySQL replication user.
  • Example: repl_user=repluser

repl_password=<string>

  • Password for repl_user.
  • Example: repl_password=ZG04Z2Jnk0MUWAZK

auto_manage_readonly=<boolean integer>

  • Enable/Disable automatic management of the MySQL server read_only variable. Default is 1 (true), which means ClusterControl will set the read_only=ON if the MySQL replication role is slave.
  • Example: auto_manage_readonly=0

galera_port=<integer>

  • The galera port to be used. Default is 4567.
  • Example: galera_port=5555

replication_failover_whitelist=<string>

  • Comma separated list of MySQL slaves which should be used as potential master candidates. If this variable is set, only those hosts will be considered. This parameter takes precedence over replication_failover_blacklist.
  • Example: replication_failover_whitelist=192.168.1.11,192.168.1.12

replication_failover_blacklist=<string>

  • Comma separated list of MySQL slaves which will never be considered a master candidate. You can use it to list slaves that are used for backups or analytical queries. If the hardware varies between slaves, you may want to put here the slaves which use slower hardware. replication_failover_whitelist takes precedence over this parameter if it is set.
  • Example: replication_failover_blacklist=192.168.1.101,192.168.1.102

replication_skip_apply_missing_txs=<boolean integer>

  • Default is 0. If enabled, the check for additional missing transactions is skipped before promoting a slave to a master, and the most advanced slave is used as-is. This may result in a serious problem though: if an errant transaction is found, replication may break.
  • Example: replication_skip_apply_missing_txs=1

replication_stop_on_error=<boolean integer>

  • Default is 1. ClusterControl will attempt the MySQL master switchover only once and will abort immediately if the switchover fails; no further attempt is made until the controller is restarted or this variable is set to 0.
  • Example: replication_stop_on_error=0

replication_failover_wait_to_apply_timeout=<integer>

  • Default is -1, which means that failover won’t happen if a master candidate is lagging; ClusterControl will wait indefinitely for it to apply all missing transactions from its relay logs. This is safe, but if for some reason the most up-to-date slave is lagging badly, failover may take hours to complete. If set to 0, failover happens immediately, no matter if the master candidate is lagging or not.
  • Example: replication_failover_wait_to_apply_timeout=0
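As an illustration, the failover-related options might be combined in a cluster configuration file as follows (the IP addresses are placeholders taken from the examples above):

replication_failover_whitelist=192.168.1.11,192.168.1.12
replication_failover_blacklist=192.168.1.101,192.168.1.102
replication_stop_on_error=1
replication_failover_wait_to_apply_timeout=-1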

6.1.3.8. Nodes (MongoDB)

mongodb_server_addresses=<string>

  • Comma separated list of MongoDB/TokuMX shard or replica IP addresses with port.
  • Example: mongodb_server_addresses=192.168.0.11:27017,192.168.0.12:27017,192.168.0.13:27017

mongoarbiter_server_addresses=<string>

  • Comma separated list of MongoDB/TokuMX arbiter IP addresses with port.
  • Example: mongoarbiter_server_addresses=192.168.0.11:27019,192.168.0.12:27019,192.168.0.13:27019

mongocfg_server_addresses=<string>

  • Comma separated list of MongoDB/TokuMX config server IP addresses with port.
  • Example: mongocfg_server_addresses=192.168.0.11:27019,192.168.0.12:27019,192.168.0.13:27019

mongos_server_addresses=<string>

  • Comma separated list of MongoDB/TokuMX mongos IP addresses with port.
  • Example: mongos_server_addresses=192.168.0.11:27017,192.168.0.12:27017,192.168.0.13:27017

mongodb_basedir=<location MongoDB base directory>

  • The MongoDB base directory used by CMON to find mongodb client related binaries.
  • Example: mongodb_basedir=/usr

mongodb_user=<string>

  • MongoDB admin/root username.
  • Example: mongodb_user=root

mongodb_password=<string>

  • Password for mongodb_user.
  • Example: mongodb_password=kPo123^^#*

mongodb_authdb=<string>

  • The database containing user information to use for authentication. Default is admin.
  • Example: mongodb_authdb=admin

mongodb_cluster_key=<path>

  • The key file that the cluster’s nodes use to authenticate to each other.
  • Example: mongodb_cluster_key=/etc/repl.key
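As an illustration, the MongoDB node options might appear together in a cluster configuration file as follows (addresses, credentials and paths are placeholders based on the examples above):

mongodb_server_addresses=192.168.0.11:27017,192.168.0.12:27017,192.168.0.13:27017
mongocfg_server_addresses=192.168.0.11:27019,192.168.0.12:27019,192.168.0.13:27019
mongos_server_addresses=192.168.0.11:27017,192.168.0.12:27017
mongodb_basedir=/usr
mongodb_user=root
mongodb_password=kPo123^^#*
mongodb_authdb=admin
mongodb_cluster_key=/etc/repl.key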

6.1.3.9. Nodes (PostgreSQL)

postgresql_server_addresses=<string>

  • The PostgreSQL node instances.
  • Example: postgresql_server_addresses=192.168.10.100

postgre_server_addresses=<string>

  • Alias to postgresql_server_addresses.

postgresql_user=<string>

  • The PostgreSQL admin user name. Default is postgres.
  • Example: postgresql_user=postgres

postgre_user=<string>

  • Alias to postgresql_user.

postgresql_password=<string>

  • The password for postgresql_user.
  • Example: postgresql_password=p4ssw0rd123

postgre_password=<string>

  • Alias to postgresql_password.
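As an illustration, the PostgreSQL node options might appear together in a cluster configuration file as follows (the values are placeholders based on the examples above):

postgresql_server_addresses=192.168.10.100
postgresql_user=postgres
postgresql_password=p4ssw0rd123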

6.1.3.10. Encryption and Security

cmondb_ssl_key=<file path>

  • Path to SSL key, for SSL encryption between CMON process and the CMON database.
  • Example: cmondb_ssl_key=/etc/ssl/mysql/client-key.pem

cmondb_ssl_cert=<file path>

  • Path to SSL certificate, for SSL encryption between CMON process and the CMON database.
  • Example: cmondb_ssl_cert=/etc/ssl/mysql/client-cert.pem

cmondb_ssl_ca=<file path>

  • Path to SSL CA, for SSL encryption between CMON process and the CMON database.
  • Example: cmondb_ssl_ca=/etc/ssl/mysql/ca-cert.pem

cluster_ssl_key=<file path>

  • Path to SSL key, for SSL encryption between CMON process and managed MySQL Servers.
  • Example: cluster_ssl_key=/etc/ssl/mysql/client-key.pem

cluster_ssl_cert=<file path>

  • Path to SSL cert, for SSL encryption between CMON process and managed MySQL Servers.
  • Example: cluster_ssl_cert=/etc/ssl/mysql/client-cert.pem

cluster_ssl_ca=<file path>

  • Path to SSL CA, for SSL encryption between CMON and managed MySQL Servers.
  • Example: cluster_ssl_ca=/etc/ssl/mysql/ca-cert.pem

cluster_certs_store=<directory path>

  • Path to the storage location of SSL related files. This is required when you want to add a new node to an encrypted Galera cluster.
  • Example: cluster_certs_store=/etc/ssl/galera/cluster_1
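As an illustration, SSL encryption between CMON and the managed MySQL servers might be configured with the following lines (the paths are placeholders based on the examples above):

cluster_ssl_key=/etc/ssl/mysql/client-key.pem
cluster_ssl_cert=/etc/ssl/mysql/client-cert.pem
cluster_ssl_ca=/etc/ssl/mysql/ca-cert.pem
cluster_certs_store=/etc/ssl/galera/cluster_1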

rpc_key=<string>

  • Unique secret token for authentication. To interact with an individual cluster via the CMON RPC interface (port 9500), one must use this key, otherwise the request is rejected with ‘HTTP/1.1 403 Access denied’.
  • The ClusterControl UI needs this key, stored as the RPC API Token, to communicate with the CMON RPC interface. Each cluster should be configured with a different rpc_key value. This value is automatically generated when a new cluster/server is created or added into ClusterControl.
  • Example: rpc_key=VJZKhr5CvEGI32dP

6.1.4. Agentless

Starting from version 1.2.5, ClusterControl introduces an agentless mode of operation. There is no longer any need to install agents on the managed nodes. Users only need to install the CMON controller package on the ClusterControl host, and make sure that passwordless SSH and the CMON database user GRANTs are properly set up on each of the managed hosts.

The agentless mode is the default and recommended type of setup. Starting from version 1.2.9, an agentful setup is no longer supported.

6.1.5. CMON database

The CMON Controller requires a MySQL database running on mysql_hostname as defined in the CMON configuration file. The database name and user are both ‘cmon’ and cannot be changed.

The CMON database is the persistent store for all monitoring data collected from the managed nodes, as well as all ClusterControl meta data (e.g. what jobs there are in the queue, backup schedules, backup statuses, etc.). The ClusterControl CMONAPI contains logic to query the CMON DB, e.g. for cluster statistics that are presented in the ClusterControl UI.

The CMON database dump files are shipped with the CMON Controller package and can be found under /usr/share/cmon once it is installed. When performing a manual upgrade from an older version, it is compulsory to apply the SQL modification files relevant to the upgrade. For example, when upgrading from version 1.2.0 to version 1.2.5, apply all SQL modification files between those versions in sequential order:

  1. cmon_db_mods-1.2.0-1.2.1.sql
  2. cmon_db_mods-1.2.3-1.2.4.sql
  3. cmon_db_mods-1.2.4-1.2.5.sql
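As an illustration, each modification file can be applied with the same MySQL import pattern used for the full dump files below; replace the bracketed variables with the values defined in the CMON configuration file:

mysql -f -ucmon -p[cmon_password] -h[mysql_hostname] -P[mysql_port] cmon < /usr/share/cmon/cmon_db_mods-1.2.0-1.2.1.sql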

Note that there is no 1.2.1 to 1.2.2 SQL modification file; that means there are no changes to the CMON database structure between those versions. The database upgrade procedure will not remove any of the existing data inside the CMON database. You can just use a simple MySQL import command as follows:

mysql -f -ucmon -p[cmon_password] -h[mysql_hostname] -P[mysql_port] cmon < /usr/share/cmon/cmon_db.sql
mysql -f -ucmon -p[cmon_password] -h[mysql_hostname] -P[mysql_port] cmon < /usr/share/cmon/cmon_data.sql

Note

Replace the variables in square brackets with respective values defined in CMON configuration file.

The MySQL user ‘cmon’ needs proper access to the CMON DB, which is provided by the following grants:

Grant all privileges to ‘cmon’ at hostname value (as defined in CMON configuration file) on ClusterControl host:

GRANT ALL PRIVILEGES ON *.* TO 'cmon'@'[hostname]' IDENTIFIED BY '[mysql_password]' WITH GRANT OPTION;

Grant all privileges for ‘cmon’ at 127.0.0.1 on ClusterControl host:

GRANT ALL PRIVILEGES ON *.* TO 'cmon'@'127.0.0.1' IDENTIFIED BY '[mysql_password]' WITH GRANT OPTION;

On each managed database server, grant all privileges to ‘cmon’ at the controller’s hostname value (as defined in the CMON configuration file):

GRANT ALL PRIVILEGES ON *.* TO 'cmon'@'[hostname]' IDENTIFIED BY '[mysql_password]' WITH GRANT OPTION;

Don’t forget to run FLUSH PRIVILEGES after each of the above statements so the grants persist after a restart. If users deploy using the deployment package generated from the Severalnines Cluster Configurator and installer script, this should be configured correctly.
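For example, on the MySQL prompt after running the grant statement:

FLUSH PRIVILEGES;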

6.1.6. Database Client

For MySQL-based clusters, the CMON Controller requires a MySQL client to connect to the CMON database. This package usually comes by default when installing the MySQL server required by the CMON database.

For MongoDB/TokuMX clusters, the CMON Controller requires both MySQL and MongoDB client packages to be installed, with their locations correctly defined in the CMON configuration file via the mysql_basedir and mongodb_basedir options.

For PostgreSQL, the CMON controller doesn’t require any PostgreSQL clients installed on the node. All PostgreSQL commands will be executed locally on the managed PostgreSQL node via SSH.

If users deploy using the deployment package generated from the Severalnines Cluster Configurator, this should be configured automatically.
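As an illustration, the client packages can usually be installed from the distribution repositories; the package names below are assumptions that may differ per distribution release and database vendor (shown here for the MySQL client only):

yum install mysql                  # Redhat-based systems (package name may differ)
sudo apt-get install mysql-client  # Debian-based systems (package name may differ)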

6.2. ClusterControl REST API (CMONAPI)

The CMONAPI is a RESTful interface that exposes all ClusterControl functionality as well as monitoring data stored in the CMON DB. Each CMONAPI connects to one CMON DB instance. Several instances of the ClusterControl UI can connect to one CMONAPI as long as they use the correct CMONAPI token and URL. The CMONAPI token is automatically generated during installation and is stored inside config/bootstrap.php.

You can generate the CMONAPI token manually by using the following command:

python -c 'import uuid; print uuid.uuid4()' | sha1sum | cut -f1 -d' '

By default, the CMONAPI runs on Apache and is located under /var/www/html/cmonapi (Redhat/CentOS/Ubuntu >14.04) or /var/www/cmonapi (Debian/Ubuntu <14.04). The location is relative to the wwwroot value defined in the CMON configuration file. The web server must support the rule-based rewrite engine and be able to follow symlinks.

The CMONAPI page can be accessed through the following URL:

http|https://[ClusterControl IP address or hostname]/cmonapi

Both ClusterControl CMONAPI and UI must be running on the same version to avoid misinterpretation of request and response data. For instance, ClusterControl UI version 1.2.6 needs to connect to the CMONAPI version 1.2.6.

Attention

We are gradually migrating all functionality of the REST API to the RPC interface. Expect the REST API to become obsolete in the near future.

6.3. ClusterControl UI

ClusterControl UI provides a modern web user interface to visualize the cluster and perform tasks like backup scheduling, configuration changes, adding nodes, rolling upgrades, etc. It requires a MySQL database called ‘dcps’ to store cluster information, users, roles and settings. It interacts with the CMON controller via the remote procedure call (RPC) or REST API interface.

You can install the ClusterControl UI independently on another server by running the following command:

yum install clustercontrol # RHEL/CentOS
sudo apt-get install clustercontrol # Debian/Ubuntu

Note

Omit ‘sudo’ if you are running as root.

The ClusterControl UI can connect to multiple CMON Controller servers (if they have installed the CMONAPI) and provides a centralized view of the entire database infrastructure. Users just need to register the CMONAPI token and URL for a specific cluster on the Cluster Registrations page.

The ClusterControl UI will load the cluster in the database cluster list, similar to the screenshot below:

[Screenshot: ClusterControl UI database cluster list]

Similar to the CMONAPI, the ClusterControl UI runs on Apache and is located under /var/www/html/clustercontrol (Redhat/CentOS/Ubuntu >14.04) or /var/www/clustercontrol (Debian <8/Ubuntu <14.04). The web server must support the rule-based rewrite engine and be able to follow symlinks.

The ClusterControl UI page can be accessed through the following URL:

http|https://[ClusterControl IP address or hostname]/clustercontrol

Please refer to User Guide for more details on the functionality available in the ClusterControl UI.

6.4. ClusterControl NodeJS

This optional package is introduced in ClusterControl version 1.2.12 to provide an interface for notification services and integration with 3rd party tools like PagerDuty or an external mail system. It allows NodeJS to be triggered as part of the pseudo-JavaScript from Developer Studio when the values for Custom Advisors meet the actual system values.

At the time of this writing, Severalnines contributes two NodeJS plugins, available on the NPM page.

This package works differently from the ClusterControl plugin interface, where ClusterControl executes a plugin script only when an alarm is raised or closed. Alarm rules are hardcoded in ClusterControl and are not as dynamic as Advisors. Advisors extend ClusterControl’s health check and notification capabilities and are built on top of the ClusterControl Domain Specific Language (DSL). Each Advisor has to be compiled and scheduled directly from ClusterControl’s Developer Studio. The list of scheduled Custom Advisors is available at ClusterControl > Performance > Advisors.

We plan to push alarms to the NodeJS interface in the future, so that NodeJS can push them into a web socket and all subscribers (clients) will receive them instantly.

Footnotes

[1] We are gradually migrating all functionality of the REST API to the RPC interface. Expect the REST API to become obsolete in the near future.