ClusterControl Documentation

5. User Guide

ClusterControl provides two user interfaces to interact with ClusterControl Controller (CMON) service:

  1. ClusterControl GUI - Web application
  2. ClusterControl CLI - Command line client called “s9s”

Not all functionality is available on both user interfaces. For instance, it is not practical for the command line client (ClusterControl CLI) to offer all the advanced monitoring features of the web application client (ClusterControl GUI). The command line client is heavily focused on automation and is well suited to management, deployment and scaling operations, while the web UI focuses more on structural visualization with a guided approach. Occasionally, new management features are introduced in the CLI before being incorporated into the ClusterControl GUI as polished, finalized features.

Starting from ClusterControl 1.4.2, installation using the installer script includes the ClusterControl CLI as well. You can verify this by running the following commands on the ClusterControl server after the installation process completes:

$ s9s --version | grep Version
s9s Version 1.7.6
$ s9s cluster --ping
PING Ok  11 ms
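
Once the CLI is working, it can be used for quick checks alongside the web UI. For example, listing the clusters and the most recent jobs known to the controller (see the ClusterControl CLI guide for the full set of commands):

$ s9s cluster --list --long
$ s9s job --list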

For more details on how to install, configure and manage these two clients, see the following guides:

5.1. ClusterControl GUI

5.1.1. Dashboard

This page is the landing page once you are logged in. It provides a summary of database clusters monitored by ClusterControl.


Top Menu

ClusterControl’s top menu.

Sidebar

Left-side navigation provides quick access to ClusterControl administration menu. See Sidebar.

Cluster List

List of database clusters managed by ClusterControl with summarized status. Database cluster deployed by (or imported into) ClusterControl will be listed in this page. See Database Cluster List.

Cluster Actions

Provides shortcuts to the main cluster functionality. Every supported database cluster has its own set of menus:

5.1.1.1. Activity

Clicking on it expands the activity tab, which consists of Alarms, Jobs and Logs. Click once more to collapse the content. If you hover over the menu icon, you will see a counter summary for each component.

5.1.1.1.1. Alarms

Shows an aggregated view of alarms raised for all clusters monitored by ClusterControl. Each alarm entry has a header, details of the issue, severity level, category, the corresponding cluster name, the corresponding host and a timestamp. All the alarms listed here are also accessible directly under the individual cluster's main menu at Alarms > Alarms.

Click on the alarm entry itself to see the full details and recommendation. Furthermore, you can click on “Full Alarm Details” to see the full information and recommendation, and to send the alarm out as an email to the recipients configured under Settings > CMON Settings > Email Notification Settings. Click “Ignore Alarm” to silence the respective alarm so it no longer appears in the list.

5.1.1.1.2. Jobs

Shows an aggregated view of jobs that have been initiated and performed by ClusterControl across clusters (e.g., deploying a new cluster, adding an existing cluster, cloning, creating a backup, etc.). Each job entry has a job status, a cluster name, the user who started the job and a timestamp. All the jobs listed here are also accessible directly under the individual cluster's main menu at Logs > Jobs.

Click on the job entry itself to see its most recent job messages. Furthermore, you can click on Full Job Details to see the full job specification and messages. In the Full Job Details popup, you can see the full command sent to the controller service for that particular job by clicking on the Expand Job Specs button. Underneath it are the full job messages, in descending order (newest first), returned by the controller service. The Copy to Clipboard button copies the content of the job messages to the clipboard.

Note

Starting from v1.6, ClusterControl has better support for parallelism, allowing you to perform multiple deployments simultaneously.

The job statuses:

Job status Description
FINISHED The job was executed successfully.
FAILED The job was executed but failed.
RUNNING The job has started and is in progress.
ABORTED The job was started but terminated.
DEFINED The job is defined but has yet to start.
5.1.1.1.3. Logs

Shows an aggregated view of ClusterControl logs which require the user's attention across clusters (logs with severity WARNING and ERROR). Each log entry has a message subject, severity level, component, the corresponding cluster name and a timestamp. All the logs listed here are also accessible directly under the individual cluster at Logs > CMON Logs.

5.1.1.2. Global Settings

Provides an interface to register clusters, repositories and subscriptions inside ClusterControl.

5.1.1.2.1. Repositories

Manages the database vendors' repositories for database servers and clusters. You can use three types of repository when deploying a database server/cluster using ClusterControl:

  1. Use Vendor Repositories
    • Provision software by setting up and using the database vendor’s preferred software repository. ClusterControl will always install the latest version of what is provided by database vendor repository.
  2. Do Not Setup Vendor Repositories
    • Provision software by using the pre-existing software repository already set up on the nodes. The user has to set up the software repository manually on each database node, and ClusterControl will use this repository for deployment. This is good if the database nodes are running without an internet connection.
  3. Use Mirrored Repositories (Create new repository)
    • Create and mirror the current database vendor’s repository and then deploy using the local mirrored repository.
    • This allows you to “freeze” the current versions of the software packages used to provision a database cluster for a specific vendor and you can later use that mirrored repository to provision the same set of versions when adding more nodes or deploying other clusters.
    • ClusterControl sets up the mirrored repository under {wwwroot}/cmon-repos/, which is accessible via HTTP at http://ClusterControl_host/cmon-repos/.

Only Local Mirrored Repository will be listed and manageable here.

  • Remove Repositories
    • Remove the selected repository.
  • Filter by cluster type
    • Filter the repository list by cluster type.

For reference, the following is an example of the yum repository definition when a Local Mirrored Repository is configured on the database nodes:

$ cat /etc/yum.repos.d/clustercontrol-percona-5.6-yum-el7.repo
[percona-5.6-yum-el7]
name = percona-5.6-yum-el7
baseurl = http://10.0.0.10/cmon-repos/percona-5.6-yum-el7
enabled = 1
gpgcheck = 0
gpgkey = http://10.0.0.10/cmon-repos/percona-5.6-yum-el7/localrepo-gpg-pubkey.asc
5.1.1.2.2. Cluster Registrations

Registers a database cluster managed by the ClusterControl Controller (CMON) so it can be viewed by the ClusterControl UI. Each database cluster can be registered with the UI through this interface, or you may skip doing this and use the ClusterControl CLI instead. By default, all clusters deployed by or imported into ClusterControl through the web interface are automatically registered with the UI. This effectively establishes the communication between the UI and the controller.

5.1.1.2.3. Subscriptions

Attention

ClusterControl introduced a new license format in v1.7.1 (the new license key format contains only a long encrypted string). If you have the older format, contact your account manager or email our sales department at sales@severalnines.com to get a new license.

For users with a valid subscription (Advanced and Enterprise), enter your license key here to unlock additional features based on the subscription. The license string contains information about the license type, company/affiliation, email, expiration date and total number of licensed nodes.

The following example is the license information that one would get from us:

Email: test@severalnines.com
Company: Severalnines AB
Expiration Date: 2019-04-23
Licensed Nodes: 1
License Key: ewogICAgImNvbXBhbnkiOiAiU2V2ZXJhbG5pbmPkIEFCIiwKICAgICJlbWFpbF9hZGRyZXNzIjog
InRlc3RAc2VRZXJhbG5pbmVzLmNvbSIsCiAgICAiZXhwaXJhdGlvbl9kYXRlIjogIjIwMTktMDQt
MjNUMDA6MDA6MDAuMDAxWiIsCiAgICAibGljZW5zZWRfbm9kZXMiOiAtMSwKICAgICJ0eXBlIjog
IkVudGVycHJpc2UiCn0Ke3M5cy1zaWduYXR1cmUtc2VwYXJhdG9yfQp0RUew5OZKV8SqmmwiQEzT
P+qTNnmCphirIVm7MriA/aCdlJYQcr1NJr4nvTNcSmgu4uFVf3Ufv4olHr4wrBq0/Js9Rm8bJWZo
BO8svHzQhCmIVEWcTYub362krjRyREnOGXaqWwUnvkZ0uUCT+WDaM1P9qn/HawoNd0e8E0+7WiZK
CpwjH+ESqSEppeu/Ewzf3p0C0e8WZbwHtZ9UmX2qJNQq9NDlByrO8FtbVjOOL4zTbc8jV0W2DWzY
1swzOgeyk+7N2eGVRWfdUSzudQSXkT3LA4cdV2HAsU5QLnmKxSCgg+jq+RQJiPwdPXEr3gzjzJzV
+qhmOZ5tTN+WABPy9l3kpztlCbkfzO84/4lM7Z3c4rQ8snMTu6RvD2M+oh/4lhvR8M9RrQQcl8JF
RX2Ak1ZAKxAXkJ97Z5U7nIzuyUGuMTCXdKGEtQkBXzpIcYFvXDeWu0MUks+EULpqG+OFnl+rSZa0
nNTSW3mR/f9B+4e2mK4y2OpJhh4rWPXR1DLpLVLk/2p0o64aEizA+IPe0TP+ox7bFzEfAXirVWfC
/Ol7m1k6arRbl8PSV1DRRcefM+UsABa6jypoiit+JXNPOajdjY1WBgEekCn/jeXBBoPM2k26274u
br0BuHULLkxGSpC8I2/nW6s84E653FO1Kpbvyx+2SKJxwUxLiuEZ2g==

Paste only the license key string (the part after “License Key: ”) into the license text field. Once applied, restart the CMON service to load the new license information:

$ systemctl restart cmon # systemd
$ service cmon restart # SysVinit
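
If you want to verify what a key encodes before applying it — assuming it is base64-encoded as in the example above — you can decode the readable part locally. The file name license.key below is a placeholder for a file containing only the long key string; the trailing signature portion is binary and is filtered out by strings:

$ base64 -d < license.key | strings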

When the license expires, ClusterControl defaults back to the Community Edition. For features comparison, please refer to ClusterControl product page.

If you would like to force the existing Enterprise edition back to the Community edition (commonly to test and compare different editions during a trial), you can truncate the license table on the ClusterControl host manually. On the ClusterControl server, run:

$ mysql -uroot -p cmon
mysql> truncate table cmon.license;
mysql> exit
$ systemctl restart cmon

Warning

Once a trial license is truncated and cmon is restarted, the trial license cannot be re-activated. Only a working enterprise license will be accepted, as a trial license cannot be applied more than once.

5.1.1.2.4. Configure Mail Server

Configures how email notifications should be sent out. ClusterControl supports two options for sending email notifications: using local mail commands via a local MTA (Sendmail/Postfix/Exim), or using an external SMTP server. Make sure the local MTA is installed, and verify it using the Test Email button.

5.1.1.2.4.2. Use Sendmail
  • Use sendmail
    • Use this option to use sendmail to send notifications. See Installing Sendmail if you haven’t installed Sendmail. If you want to use Postfix, see Using Postfix.
  • Reply-to/From
    • Specify the sender of the email. This will appear in the ‘From’ field of mail header.
5.1.1.2.4.2.1. Installing Sendmail

On ClusterControl server, install the following packages:

$ apt-get install sendmail mailutils #Debian/Ubuntu
$ yum install sendmail mailx #RHEL/CentOS

Start the sendmail service:

$ systemctl start sendmail #systemd
$ service sendmail start #sysvinit

Verify if it works:

$ echo "test message" | mail -s "test subject" myemail@example.com

Replace myemail@example.com with your email address.

5.1.1.2.4.2.2. Using Postfix

Many Linux distributions come with Sendmail as the default MTA. To replace Sendmail with another MTA, e.g., Postfix, you just need to uninstall Sendmail, install Postfix and start the service. The following example shows the commands that need to be executed on the ClusterControl node as the root user on RHEL:

$ service sendmail stop
$ yum remove sendmail -y
$ yum install postfix mailx cronie -y
$ chkconfig postfix on
$ service postfix start
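
If notifications should be relayed through an external SMTP server at the MTA level (as opposed to configuring an SMTP server directly in ClusterControl), a minimal Postfix relay setup could look like the following; the hostname and port are placeholders:

$ postconf -e "relayhost = [smtp.example.com]:587"
$ systemctl restart postfix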
5.1.1.2.5. Runtime Configurations

A shortcut to the ClusterControl Controller runtime configuration of each cluster. The Runtime Configurations page shows the active ClusterControl Controller (CMON) runtime configuration parameters and displays the versions of the ClusterControl Controller and ClusterControl UI packages. All parameters listed are loaded directly from the cmon.cmon_configuration table, grouped by cluster ID.

Clicking on any entry in the list redirects you to the Runtime Configurations page of that particular cluster.
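
The same parameters can also be inspected directly on the ClusterControl server with read-only queries against the CMON database; the exact column layout may differ between versions, so check it with DESCRIBE first:

$ mysql -uroot -p cmon
mysql> DESCRIBE cmon_configuration;
mysql> SELECT * FROM cmon_configuration LIMIT 10;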

5.1.1.3. Database Cluster List

Each row represents the summarized status of a database cluster:

Field Description
Cluster Name The cluster name, configured under ClusterControl > Settings > CMON Settings > General Settings > Name.
ID The cluster identifier number.
Version Database server major version.
Database Vendor Database vendor icon.
Cluster Type

The database cluster type:

  • MYSQL_SERVER - Standalone MySQL server.
  • REPLICATION - MySQL/MariaDB Replication.
  • GALERA - MySQL Galera Cluster, Percona XtraDB Cluster, MariaDB Galera Cluster.
  • GROUP_REPLICATION - MySQL Group Replication.
  • MYSQLCLUSTER - MySQL Cluster (NDB).
  • MONGODB - MongoDB ReplicaSet, MongoDB Sharded Cluster, MongoDB Replicated Sharded Cluster.
  • POSTGRESQL - PostgreSQL Standalone or Replication.
Cluster Status

The cluster status:

  • ACTIVE (green) - The cluster is up and running. All cluster nodes are running normally.
  • DEGRADED (yellow) - The full set of nodes in the cluster is not available. One or more nodes are down or unreachable.
  • FAILURE (red) - The cluster is down. Most likely all or most of the nodes are down or unreachable, and as a result the cluster fails to operate as expected.
Auto Recovery

The auto recovery status of Galera Cluster:

  • Cluster - If set to ON, ClusterControl will perform automatic recovery if it detects a cluster failure.
  • Node - If set to ON, ClusterControl will perform automatic recovery if it detects a node failure.
Node Type and Status See table on node status indicators below.

Node status indicator:

Indicator Description
Green (tick) OK: Indicates the node is working fine.
Yellow (exclamation) WARNING: Indicates the node is degraded and not fully performing as expected.
Red (wrench) MAINTENANCE: Indicates that maintenance mode is on for this node.
Dark red (cross) PROBLEMATIC: Indicates the node is down or unreachable.

5.1.2. Deploy Database Cluster

Opens a step-by-step modal dialog to deploy a new database cluster. The following database cluster types are supported:

  • MySQL Replication

  • MySQL Galera
    • Percona XtraDB Cluster
    • MariaDB Galera Cluster
  • MySQL Cluster (NDB)

  • TimescaleDB (standalone or streaming replication)

  • PostgreSQL (standalone or streaming replication)

  • MongoDB ReplicaSet

  • MongoDB Sharded Cluster

There are prerequisites that need to be fulfilled prior to the deployment:

  • Passwordless SSH is configured from the ClusterControl node to all database nodes (see the example below). See Passwordless SSH.
  • Verify that sudo is working properly if you are using a non-root user. See Operating System User.
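
A minimal way to satisfy both prerequisites from the ClusterControl node (the IP address and user name are placeholders):

$ ssh-keygen -t rsa                     # generate a key pair if one does not exist yet
$ ssh-copy-id root@10.0.0.11            # repeat for every database node
$ ssh root@10.0.0.11 "sudo -n true" && echo "sudo OK"   # only relevant for non-root users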

ClusterControl will trigger a deployment job and the progress can be monitored under ClusterControl > Activity > Jobs.

5.1.2.1. MySQL Replication

Deploys a new MySQL Replication setup or a standalone MySQL server. The database cluster will be automatically added into ClusterControl once deployed. A minimum of two nodes is required for MySQL replication. If only one MySQL IP address or hostname is defined, ClusterControl will deploy it as a standalone MySQL server with the binary log enabled.

By default, ClusterControl deploys MySQL replication with the following configurations:

  • GTID with log_slave_updates enabled (MySQL and Percona only).
  • Start all database nodes with read_only=ON and super_read_only=ON (if supported). The chosen master will be promoted by disabling read-only dynamically.
  • PERFORMANCE_SCHEMA disabled.
  • ClusterControl will create and grant necessary privileges for 2 MySQL users - cmon for monitoring and management and backupuser for backup and restore purposes.
  • Generated account credentials are stored inside /etc/mysql/secrets-backup.cnf.
  • ClusterControl will configure semi-synchronous replication.

If you would like to customize the above configurations, modify the base template file to suit your needs before proceeding with the deployment. See Base Template Files for details.

Starting from version 1.4.0, it is possible to set up master-master replication from scratch under the ‘Define Topology’ tab. You can add more slaves later after the deployment completes.

Caution

ClusterControl sets read_only=1 on all slaves but a privileged user (SUPER) can still write to a slave (except for MySQL versions that support super_read_only).

5.1.2.1.1. 1) General & SSH Settings
  • SSH User
    • Specify root if you have root credentials.
    • If you use ‘sudo’ to execute system commands, specify the name that you wish to use here. The user must exist on all nodes. See Operating System User.
  • SSH Key Path
    • Specify the full path of SSH key (the key must exist in ClusterControl node) that will be used by SSH User to perform passwordless SSH. See Passwordless SSH.
  • Sudo Password
    • If you use sudo with password, specify it here. Ignore this if SSH User is root or sudoer does not need a sudo password.
  • SSH Port
    • Specify the SSH port for the target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
  • Cluster Name
    • Specify a name for the cluster.
  • Install Software
    • Check the box if you use clean and minimal VMs. Existing MySQL dependencies will be removed. New packages will be installed and existing packages will be uninstalled when provisioning the node with the required software.
    • If unchecked, existing packages will not be uninstalled and nothing will be installed. This requires that the necessary software has already been provisioned on the instances.

  • Disable Firewall
    • Check the box to disable firewall (recommended).
  • Disable AppArmor/SELinux
    • Check the box to let ClusterControl disable AppArmor (Ubuntu) or SELinux (RedHat/CentOS) if enabled (recommended).
5.1.2.1.2. 2) Define MySQL Servers
  • Vendor
    • Percona - Percona Server by Percona
    • MariaDB - MariaDB Server by MariaDB
    • Oracle - MySQL Server by Oracle
  • Version
    • Select the MySQL version for the new deployment. For Oracle, only 5.7 and 8.0 are supported. For Percona, 5.6, 5.7 and 8.0 are supported, while for MariaDB, 10.1, 10.2, 10.3 and 10.4 are supported.
  • Server Data Directory
    • Location of MySQL data directory. Default is /var/lib/mysql.
  • Server Port
    • MySQL server port. Default is 3306.
  • my.cnf Template
    • MySQL configuration template file under /etc/cmon/templates or /usr/share/cmon/templates. See Base Template Files for details.
  • Admin/Root Password
    • Specify MySQL root password. ClusterControl will configure the same MySQL root password for all instances in the cluster.
  • Repository
    • Use Vendor Repositories - Provision software by setting up and using the database vendor’s preferred software repository. ClusterControl will always install the latest version of what is provided by database vendor repository.
    • Do Not Setup Vendor Repositories - Provision software by using repositories already setup on the nodes. The User has to set up the software repository manually on each database node and ClusterControl will use this repository for deployment. This is good if the database nodes are running without internet connections.
    • Use Mirrored Repositories - Create and mirror the current database vendor’s repository and then deploy using the local mirrored repository. This is a preferred option when you have to scale the cluster in the future, to ensure the newly provisioned node will always have the same version as the rest of the members.
5.1.2.1.3. 3) Define Topology
  • Master A - IP/Hostname
    • Specify the IP address of the primary MySQL master node.
  • Add slaves to master A
    • Add a slave node connected to master A. Press enter to add more slaves.
  • Add Second Master Node
    • Opens the add node wizard for secondary MySQL master node.
  • Master B - IP/Hostname
    • Only available if you click Add Second Master Node.
    • Specify the IP address of the other MySQL master node. ClusterControl will set up master-master replication between these nodes. Master B will be read-only once deployed (secondary master), letting Master A hold the write role (primary master) for the replication chain.
  • Add slaves to master B
    • Only available if you click Add Second Master Node.
    • Add a slave node connected to master B. Press ‘Enter’ to add more slaves.
  • Deploy
    • Starts the MySQL Replication deployment.
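
The same topology can also be deployed from the ClusterControl CLI. The command below is an illustrative sketch only — the node list, password and version are placeholders, and the exact flags should be verified with s9s cluster --help for your version:

$ s9s cluster --create \
    --cluster-type=mysqlreplication \
    --nodes="10.0.0.11;10.0.0.12;10.0.0.13" \
    --vendor=percona \
    --provider-version=8.0 \
    --db-admin-passwd="MySQLRootPassword" \
    --os-user=root \
    --os-key-file=/root/.ssh/id_rsa \
    --cluster-name="MySQL Replication" \
    --wait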

5.1.2.2. MySQL Galera

Deploys a new MySQL Galera Cluster. The database cluster will be automatically added into ClusterControl once deployed. A minimal setup consists of one Galera node (no high availability, but it can later be scaled out with more nodes). However, a minimum of three nodes is recommended for high availability. Garbd (an arbitrator) can be added later after the deployment completes, if needed.

By default, ClusterControl deploys MySQL Galera with the following configurations:

  • Use xtrabackup-v2 or mariabackup (depending on the vendor chosen) for wsrep_sst_method.
  • PERFORMANCE_SCHEMA disabled.
  • Binary logging disabled.
  • ClusterControl will create and grant necessary privileges for 2 MySQL users - cmon for monitoring and management and backupuser for backup and restore purposes.
  • Generated account credentials are stored inside /etc/mysql/secrets-backup.cnf.
5.1.2.2.1. 1) General & SSH Settings
  • SSH User
    • Specify root if you have root credentials.
    • If you use ‘sudo’ to execute system commands, specify the name that you wish to use here. The user must exist on all nodes. See Operating System User.
  • SSH Key Path
    • Specify the full path of SSH key (the key must exist in ClusterControl node) that will be used by SSH User to perform passwordless SSH. See Passwordless SSH.
  • Sudo Password
    • If you use sudo with password, specify it here. Ignore this if SSH User is root or sudoer does not need a sudo password.
  • SSH Port
    • Specify the SSH port for target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
  • Cluster Name
    • Specify a name for the cluster.
  • Install Software
    • Check the box if you use clean and minimal VMs. Existing MySQL dependencies will be removed. New packages will be installed and existing packages will be uninstalled when provisioning the node with the required software.
    • If unchecked, existing packages will not be uninstalled and nothing will be installed. This requires that the necessary software has already been provisioned on the instances.

  • Disable Firewall
    • Check the box to disable firewall (recommended).
  • Disable AppArmor/SELinux
    • Check the box to let ClusterControl disable AppArmor (Ubuntu) or SELinux (RedHat/CentOS) if enabled (recommended).
5.1.2.2.2. 2) Define MySQL Servers
  • Vendor
    • Percona - Percona XtraDB Cluster by Percona
    • MariaDB - MariaDB Server (Galera embedded) by MariaDB
  • Version
    • Select the MySQL version for the new deployment. For Percona, 5.6 and 5.7 are supported, while for MariaDB, 10.1, 10.2, 10.3 and 10.4 are supported.
  • Server Data Directory
    • Location of MySQL data directory. Default is /var/lib/mysql.
  • Server Port
    • MySQL server port. Default is 3306.
  • my.cnf Template
    • MySQL configuration template file under /etc/cmon/templates or /usr/share/cmon/templates. See Base Template Files for details.
  • Admin/Root Password
    • Specify MySQL root password. ClusterControl will configure the same MySQL root password for all instances in the cluster.
  • Repository
    • Use Vendor Repositories - Provision software by setting up and using the database vendor’s preferred software repository. ClusterControl will always install the latest version of what is provided by database vendor repository.
    • Do Not Setup Vendor Repositories - Provision software by using repositories already setup on the nodes. The User has to set up the software repository manually on each database node and ClusterControl will use this repository for deployment. This is good if the database nodes are running without internet connections.
    • Use Mirrored Repositories - Create and mirror the current database vendor’s repository and then deploy using the local mirrored repository. This is a preferred option when you have to scale the Galera Cluster in the future, to ensure the newly provisioned node will always have the same version as the rest of the members.
  • Add Node
    • Specify the IP address or hostname of the MySQL nodes. Press ‘Enter’ once specified so ClusterControl can verify the node reachability via passwordless SSH. Minimum of three nodes is recommended.
  • Deploy
    • Starts the Galera Cluster deployment.

5.1.2.3. MySQL Cluster (NDB)

Deploys a new MySQL Cluster (NDB) by Oracle. The cluster consists of management nodes, MySQL API nodes and data nodes. The database cluster will be automatically added into ClusterControl once deployed. A minimum of 4 nodes (2 co-located SQL and management nodes + 2 data nodes) is recommended.

Attention

Every data node must have at least 1.5 GB of RAM for the deployment to succeed.

5.1.2.3.1. 1) General & SSH Settings
  • SSH User
    • Specify root if you have root credentials.
    • If you use ‘sudo’ to execute system commands, specify the name that you wish to use here. The user must exist on all nodes. See Operating System User.
  • SSH Key Path
    • Specify the full path of SSH key (the key must exist in ClusterControl node) that will be used by SSH User to perform passwordless SSH. See Passwordless SSH.
  • Sudo Password
    • If you use sudo with password, specify it here. Ignore this if SSH User is root or sudoer does not need a sudo password.
  • SSH Port
    • Specify the SSH port for target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
  • Cluster Name
    • Specify a name for the cluster.
  • Install Software
    • Check the box if you use clean and minimal VMs. Existing MySQL dependencies will be removed. New packages will be installed and existing packages will be uninstalled when provisioning the node with the required software.
    • If unchecked, existing packages will not be uninstalled and nothing will be installed. This requires that the necessary software has already been provisioned on the instances.

  • Disable Firewall
    • Check the box to disable firewall (recommended).
  • Disable AppArmor/SELinux
    • Check the box to let ClusterControl disable AppArmor (Ubuntu) or SELinux (RedHat/CentOS) if enabled (recommended).
5.1.2.3.2. 2) Define Management Servers
  • Server Port
    • MySQL Cluster management port. Default to 1186.
  • Server Data Directory
    • MySQL Cluster data directory for NDB. Default is /var/lib/mysql-cluster.
  • Management Server 1
    • Specify the IP address or hostname of the first management server.
  • Management Server 2
    • Specify the IP address or hostname of the second management server.
5.1.2.3.3. 3) Define Data Nodes
  • Server Port
    • MySQL Cluster data node port. Default to 2200.
  • Server Data Directory
    • MySQL Cluster data directory for NDB. Default is /var/lib/mysql-cluster.
  • Add Nodes
    • Specify the IP address or hostname of the MySQL Cluster data nodes. It is recommended to add data nodes in pairs. You can add up to 14 data nodes to your cluster. Every data node must have at least 1.5 GB of RAM.
5.1.2.3.4. 4) Define MySQL Servers
  • my.cnf Template
    • MySQL configuration template file under /etc/cmon/templates or /usr/share/cmon/templates. See Base Template Files for details.
  • Server Port
    • MySQL server port. Default to 3306.
  • Server Data Directory
    • MySQL data directory. Default is /var/lib/mysql.
  • Root Password
    • Specify MySQL root password. ClusterControl will configure the same MySQL root password for all nodes in the cluster.
  • Add Nodes
    • Specify the IP address or hostname of the MySQL Cluster API node. You can use the same IP address as a management node to co-locate both roles on the same host.
  • Deploy
    • Starts the MySQL Cluster deployment.

5.1.2.4. TimeScaleDB

Deploys a new TimeScaleDB standalone or streaming replication cluster. Only TimeScaleDB 9.6 and later is supported. A minimum of two nodes is required for streaming replication.

5.1.2.4.1. 1) General & SSH Settings
  • SSH User
    • Specify root if you have root credentials.
    • If you use ‘sudo’ to execute system commands, specify the name that you wish to use here. The user must exist on all nodes. See Operating System User.
  • SSH Key Path
    • Specify the full path of SSH key (the key must exist in ClusterControl node) that will be used by SSH User to perform passwordless SSH. See Passwordless SSH.
  • Sudo Password
    • If you use sudo with password, specify it here. Ignore this if SSH User is root or sudoer does not need a sudo password.
  • SSH Port
    • Specify the SSH port for target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
  • Cluster Name
    • Specify a name for the database.
  • Install Software
    • Check the box if you use clean and minimal VMs. Existing PostgreSQL dependencies will be removed. New packages will be installed and existing packages will be uninstalled when provisioning the node with the required software.
    • If unchecked, existing packages will not be uninstalled and nothing will be installed. This requires that the necessary software has already been provisioned on the instances.

  • Disable Firewall
    • Check the box to disable firewall (recommended).
  • Disable AppArmor/SELinux
    • Check the box to let ClusterControl disable AppArmor (Ubuntu) or SELinux (RedHat/CentOS) if enabled (recommended).
5.1.2.4.2. 2) Define PostgreSQL Servers
  • Server Port
    • PostgreSQL server port. Default is 5432.
  • User
    • Specify the PostgreSQL super user for example, postgres.
  • Password
    • Specify the password for User.
  • Version
    • Supported versions are 9.6, 10, 11 and 12.
  • Repository
    • Use Vendor Repositories - Provision software by setting up and using the database vendor’s preferred software repository. ClusterControl will always install the latest version of what is provided by database vendor repository.
    • Do Not Setup Vendor Repositories - Provision software by using repositories already setup on the nodes. The User has to set up the software repository manually on each database node and ClusterControl will use this repository for deployment. This is good if the database nodes are running without internet connections.
    • Create New Repositories - Create and mirror the current database vendor’s repository and then deploy using the local mirrored repository. This is a preferred option when you have to scale the PostgreSQL in the future, to ensure the newly provisioned node will always have the same version as the rest of the members.
5.1.2.4.3. 3) Define Topology
  • Master A - IP/Hostname
    • Specify the IP address of the TimeScaleDB master node. Press ‘Enter’ once specified so ClusterControl can verify the node reachability via passwordless SSH.
  • Add slaves to master A
    • Add a slave node connected to master A. Press ‘Enter’ to add more slaves.
5.1.2.4.4. 4) Deployment Summary
  • Synchronous Replication
    • Toggle on if you would like to use synchronous streaming replication between the master and the chosen slave. Synchronous replication can be enabled per individual slave node with considerable performance overhead.
  • Deploy
    • Starts the TimeScaleDB standalone or replication deployment.

5.1.2.5. PostgreSQL

Deploys a new PostgreSQL standalone or streaming replication cluster from ClusterControl. Only PostgreSQL 9.6 and later are supported.

5.1.2.5.1. 1) General & SSH Settings
  • SSH User
    • Specify root if you have root credentials.
    • If you use ‘sudo’ to execute system commands, specify the name that you wish to use here. The user must exist on all nodes. See Operating System User.
  • SSH Key Path
    • Specify the full path of SSH key (the key must exist in ClusterControl node) that will be used by SSH User to perform passwordless SSH. See Passwordless SSH.
  • Sudo Password
    • If you use sudo with password, specify it here. Ignore this if SSH User is root or sudoer does not need a sudo password.
  • SSH Port
    • Specify the SSH port for target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
  • Cluster Name
    • Specify a name for the database.
  • Install Software
    • Check the box if you use clean and minimal VMs. Existing PostgreSQL dependencies will be removed. New packages will be installed and existing packages will be uninstalled when provisioning the node with the required software.
    • If unchecked, existing packages will not be uninstalled and nothing will be installed. This requires that the necessary software has already been provisioned on the instances.

  • Disable Firewall
    • Check the box to disable firewall (recommended).
  • Disable AppArmor/SELinux
    • Check the box to let ClusterControl disable AppArmor (Ubuntu) or SELinux (RedHat/CentOS) if enabled (recommended).
5.1.2.5.2. 2) Define PostgreSQL Servers
  • Server Port
    • PostgreSQL server port. Default is 5432.
  • User
    • Specify the PostgreSQL super user for example, postgres.
  • Password
    • Specify the password for User.
  • Version
    • Supported versions are 9.6, 10, 11 and 12.
  • Repository
    • Use Vendor Repositories - Provision software by setting up and using the database vendor’s preferred software repository. ClusterControl will always install the latest version of what is provided by database vendor repository.
    • Do Not Setup Vendor Repositories - Provision software by using repositories already setup on the nodes. The User has to set up the software repository manually on each database node and ClusterControl will use this repository for deployment. This is good if the database nodes are running without internet connections.
    • Create New Repositories - Create and mirror the current database vendor’s repository and then deploy using the local mirrored repository. This is a preferred option when you have to scale the PostgreSQL in the future, to ensure the newly provisioned node will always have the same version as the rest of the members.
5.1.2.5.3. 3) Define Topology
  • Master A - IP/Hostname
    • Specify the IP address of the PostgreSQL master node. Press ‘Enter’ once specified so ClusterControl can verify the node reachability via passwordless SSH.
  • Add slaves to master A
    • Add a slave node connected to master A. Press ‘Enter’ to add more slaves.
5.1.2.5.4. 4) Deployment Summary
  • Synchronous Replication
    • Toggle on if you would like to use synchronous streaming replication between the master and the chosen slave. Synchronous replication can be enabled per individual slave node, at the cost of considerable performance overhead (see the illustrative parameters after this list).
  • Deploy
    • Starts the PostgreSQL standalone or replication deployment.
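
For reference, synchronous streaming replication in PostgreSQL is governed by standard server parameters such as the ones below. ClusterControl manages these for you when the toggle above is enabled, so the excerpt is purely illustrative and the standby name is a placeholder:

# postgresql.conf on the master (illustrative excerpt)
synchronous_commit = on
synchronous_standby_names = 'pgsql_slave_1'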

5.1.2.6. MongoDB ReplicaSet

Deploys a new MongoDB Replica Set. The database cluster will be automatically added into ClusterControl once deployed. Minimum of three nodes (including mongo arbiter) is recommended.

Attention

It is possible to deploy only 2 MongoDB nodes (without an arbiter). The caveat of this approach is that there is no automatic failover. If the primary node goes down, manual failover is required to make the other server the primary. Automatic failover works fine with 3 nodes or more.

5.1.2.6.1. 1) General & SSH Settings
  • SSH User
    • Specify root if you have root credentials.
    • If you use ‘sudo’ to execute system commands, specify the name that you wish to use here. The user must exist on all nodes. See Operating System User.
  • SSH Key Path
    • Specify the full path of SSH key (the key must exist in ClusterControl node) that will be used by SSH User to perform passwordless SSH. See Passwordless SSH.
  • Sudo Password
    • If you use sudo with password, specify it here. Ignore this if SSH User is root or sudoer does not need a sudo password.
  • SSH Port
    • Specify the SSH port for target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
  • Cluster Name
    • Specify a name for the cluster.
  • Install Software
    • Check the box if you use clean and minimal VMs. Existing MySQL dependencies will be removed. New packages will be installed and existing packages will be uninstalled when provisioning the node with the required software.
    • If unchecked, existing packages will not be uninstalled and nothing will be installed. This requires that the necessary software has already been provisioned on the instances.

  • Disable Firewall
    • Check the box to disable firewall (recommended).
  • Disable AppArmor/SELinux
    • Check the box to let ClusterControl disable AppArmor (Ubuntu) or SELinux (RedHat/CentOS) if enabled (recommended).
5.1.2.6.2. 2) Define MongoDB Servers
  • Vendor
    • Percona - Percona Server for MongoDB by Percona.
    • MongoDB - MongoDB Server by MongoDB Inc.
  • Version
    • The supported MongoDB versions are 3.4, 3.6, 4.0 and 4.2.
  • Server Data Directory
    • Location of MongoDB data directory. Default is /var/lib/mongodb.
  • Admin User
    • MongoDB admin user. ClusterControl will create this user and enable authentication.
  • Admin Password
    • Password for MongoDB Admin User.
  • Server Port
    • MongoDB server port. Default is 27017.
  • mongodb.conf Template
    • MongoDB configuration template file under /etc/cmon/templates or /usr/share/cmon/templates. See Base Template Files for details.
  • ReplicaSet Name
    • Specify the name of the replica set, corresponding to the replication.replSetName option in MongoDB (an illustrative configuration excerpt is shown after this list).
  • Repository
    • Use Vendor Repositories - Provision software by setting up and using the database vendor’s preferred software repository. ClusterControl will always install the latest version of what is provided by database vendor repository.
    • Do Not Setup Vendor Repositories - Provision software by using repositories already setup on the nodes. The User has to set up the software repository manually on each database node and ClusterControl will use this repository for deployment. This is good if the database nodes are running without internet connections.
    • Use Mirrored Repositories - Create and mirror the current database vendor’s repository and then deploy using the local mirrored repository. This is a preferred option when you have to scale the MongoDB in the future, to ensure the newly provisioned node will always have the same version as the rest of the members.
  • Add Nodes
    • Specify the IP address or hostname of the MongoDB nodes. Minimum of three nodes is required.
  • Deploy
    • Starts the MongoDB ReplicaSet deployment.
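
For reference, the replica set name ends up in the standard MongoDB configuration option replication.replSetName on every member. The excerpt below is illustrative only, with rs0 as a placeholder name:

# /etc/mongod.conf (illustrative excerpt)
replication:
  replSetName: "rs0"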

5.1.2.7. MongoDB Shards

Deploys a new MongoDB Sharded Cluster. The database cluster will be automatically added into ClusterControl once deployed. Minimum of three nodes (including mongo arbiter) is recommended.

Warning

It is possible to deploy only 2 MongoDB nodes (without an arbiter). The caveat of this approach is that there is no automatic failover. If the primary node goes down, manual failover is required to make the other server the primary. Automatic failover works fine with 3 nodes or more.

5.1.2.7.1. 1) General & SSH Settings
  • SSH User
    • Specify root if you have root credentials.
    • If you use ‘sudo’ to execute system commands, specify the name that you wish to use here. The user must exist on all nodes. See Operating System User.
  • SSH Key Path
    • Specify the full path of SSH key (the key must exist in ClusterControl node) that will be used by SSH User to perform passwordless SSH. See Passwordless SSH.
  • Sudo Password
    • If you use sudo with password, specify it here. Ignore this if SSH User is root or sudoer does not need a sudo password.
  • SSH Port
    • Specify the SSH port for target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
  • Cluster Name
    • Specify a name for the cluster.
  • Install Software
    • Check the box if you use clean and minimal VMs. Existing MySQL dependencies will be removed. New packages will be installed and existing packages will be uninstalled when provisioning the node with the required software.
    • If unchecked, existing packages will not be uninstalled and nothing will be installed. This requires that the necessary software has already been provisioned on the instances.

  • Disable Firewall
    • Check the box to disable firewall (recommended).
  • Disable AppArmor/SELinux
    • Check the box to let ClusterControl disable AppArmor (Ubuntu) or SELinux (RedHat/CentOS) if enabled (recommended).
5.1.2.7.2. 2) Configuration Servers and Routers

Configuration Server

  • Server Port
    • MongoDB config server port. Default is 27019.
  • Add Configuration Servers
    • Specify the IP address or hostname of the MongoDB config servers. A minimum of one node is required; three nodes are recommended.

Routers/Mongos

  • Server Port
    • MongoDB mongos server port. Default is 27017.
  • Add More Routers
    • Specify the IP address or hostname of the MongoDB mongos.
5.1.2.7.3. 3) Define Shards
  • Replica Set Name
    • Specify a name for this replica set shard.
  • Server Port
    • MongoDB shard server port. Default is 27018.
  • Add Node
    • Specify the IP address or hostname of the MongoDB shard servers. A minimum of one node is required; three nodes are recommended.
  • Advanced Options
    • Click on this to open set of advanced options for this particular node in this shard:
      • Add slave delay - Specify the replication delay for the slave in milliseconds.
      • Act as an arbiter - Toggle to ‘Yes’ if the node is an arbiter node. Otherwise, choose ‘No’.
  • Add Another Shard
    • Create another shard. You can then specify the IP address or hostname of the MongoDB server that falls under this shard.

5.1.2.7.4. 4) Database Settings
  • Vendor
    • Percona - Percona Server for MongoDB by Percona
    • MongoDB - MongoDB Server by MongoDB Inc
  • Version
    • The supported MongoDB versions are 3.4, 3.6, 4.0 and 4.2.
  • Server Data Directory
    • Location of MongoDB data directory. Default is /var/lib/mongodb.
  • Admin User
    • MongoDB admin user. ClusterControl will create this user and enable authentication.
  • Admin Password
    • Password for MongoDB Admin User.
  • Server Port
    • MongoDB server port. Default is 27017.
  • mongodb.conf Template
      • MongoDB configuration template file under /etc/cmon/templates or /usr/share/cmon/templates. See Base Template Files for details.
  • Repository
    • Use Vendor Repositories - Provision software by setting up and using the database vendor’s preferred software repository. ClusterControl will always install the latest version of what is provided by database vendor repository.
    • Do Not Setup Vendor Repositories - Provision software by using repositories already setup on the nodes. The User has to set up the software repository manually on each database node and ClusterControl will use this repository for deployment. This is good if the database nodes are running without internet connections.
    • Use Mirrored Repositories - Create and mirror the current database vendor’s repository and then deploy using the local mirrored repository. This is a preferred option when you have to scale the MongoDB in the future, to ensure the newly provisioned node will always have the same version as the rest of the members.
  • Deploy
    • Starts the MongoDB Sharded Cluster deployment.

5.1.3. Import Existing Server/Cluster

Opens a wizard to import the existing database setup into ClusterControl. The following database cluster types are supported:

  • MySQL Replication

  • MySQL Galera
    • Percona XtraDB Cluster
    • MariaDB Galera Cluster
  • MySQL Cluster (NDB)

  • MongoDB ReplicaSet

  • MongoDB Shards

  • PostgreSQL (standalone or streaming replication)

  • TimeScaleDB (standalone or streaming replication)

There are some prerequisites that need to be fulfilled prior to importing the existing setup:

  • Verify that sudo is working properly if you are using a non-root user. See Operating System User.
  • Passwordless SSH from the ClusterControl node to the database nodes must be configured correctly. See Passwordless SSH.
  • The target cluster must not be in a degraded state. For example, if you have a three-node Galera cluster, all nodes must be alive, accessible and in sync (see the quick check below).
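
For the Galera example above, a quick way to confirm that all nodes are alive and synced before importing is to check the standard Galera status variables on each node:

$ mysql -uroot -p -e "SHOW STATUS LIKE 'wsrep_cluster_size'"
$ mysql -uroot -p -e "SHOW STATUS LIKE 'wsrep_local_state_comment'"   # should report 'Synced'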

For more details, refer to the Requirements section. Each time you add an existing cluster or server, ClusterControl will trigger a job under ClusterControl > Settings > Cluster Jobs. You can see the progress and status under this page. A window will also appear with messages showing the progress.

5.1.3.1. Import Existing MySQL Replication

ClusterControl is able to manage and monitor an existing set of MySQL servers (standalone or replication). Individual hosts specified in the same list will be added to the same server group in the UI. ClusterControl assumes that you are using the same MySQL root password for all instances specified in the group and it will attempt to determine the server role as well (master, slave, multi or standalone).

When importing an existing MySQL Replication, ClusterControl will do the following:

  • Verify SSH connectivity to all nodes.
  • Detect the host environment and operating system.
  • Discover the database role of each node (master, slave, multi, standalone).
  • Pull the configuration files.
  • Generate the authentication key and register the node into ClusterControl.
5.1.3.1.1. 1) General & SSH Settings
  • SSH User
    • Specify root if you have root credentials.
    • If you use ‘sudo’ to execute system commands, specify the name that you wish to use here. The user must exist on all nodes. See Operating System User.
  • SSH Key Path
    • Specify the full path of SSH key (the key must exist in ClusterControl node) that will be used by SSH User to perform passwordless SSH. See Passwordless SSH.
  • Sudo Password
    • Specify the password if the SSH user that you specified under SSH User requires a sudo password to run super-privileged commands. Ignore this if the SSH User is root or has no sudo password.
  • SSH Port
    • Specify the SSH port for target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
5.1.3.1.2. 2) Define MySQL Servers
  • Vendor
    • Percona for Percona Server
    • MariaDB for MariaDB Server
    • Oracle for MySQL Server
  • MySQL Version
    • Supported version:
      • Percona Server (5.5, 5.6, 5.7, 8.0)
      • MariaDB Server (10.1, 10.2, 10.3)
      • Oracle MySQL Server (5.7, 8.0)
  • Basedir
    • MySQL base directory. Default is /usr. ClusterControl assumes all MySQL nodes are using the same base directory.
  • Server Port
    • MySQL port on the target server/cluster. Default to 3306. ClusterControl assumes MySQL is running on the same port on all nodes.
  • Admin/Root User
    • MySQL user on the target server/cluster. This user must be able to perform GRANT statements. It is recommended to use the MySQL ‘root’ user.
  • Admin/Root Password
    • Password for MySQL User. ClusterControl assumes that you are using the same MySQL root password for all instances specified in the group.
  • “information_schema” Queries
    • Toggle on to enable information_schema queries to track database and table growth. Queries to the information_schema may not be suitable when there are many database objects (hundreds of databases, hundreds of tables in each database, triggers, users, events, stored procedures, etc.). If disabled, the query that would be executed will be logged, so it can be determined whether the query is suitable in your environment (an example of this kind of query is shown after this list).
    • This is not recommended for clusters with more than 2000 database objects.
  • Import as Standalone Nodes
    • Toggle on if you are only importing a standalone node (by specifying only one node under the ‘Add Nodes’ section).
  • Node AutoRecovery
    • ClusterControl will perform automatic recovery if it detects any of the nodes in the cluster is down.
  • Cluster AutoRecovery
    • ClusterControl will perform automatic recovery if it detects the cluster is down or degraded.
  • Add Nodes
    • Enter the MySQL single instances’ IP address or hostname that you want to group under this cluster.
  • Import
    • Click the button to start the import. ClusterControl will connect to the MySQL instances, import configurations and start managing them.
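
As an illustration of the kind of growth query the toggle above refers to (not necessarily the exact statement ClusterControl executes), the per-schema size can be computed from information_schema like this:

$ mysql -uroot -p -e "SELECT table_schema, ROUND(SUM(data_length+index_length)/1024/1024,2) AS size_mb FROM information_schema.tables GROUP BY table_schema"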

5.1.3.2. Import Existing MySQL Galera

5.1.3.2.1. 1) General & SSH Settings
  • SSH User
    • Specify root if you have root credentials.
    • If you use ‘sudo’ to execute system commands, specify the name that you wish to use here. The user must exist on all nodes. See Operating System User.
  • SSH Key Path
    • Specify the full path of SSH key (the key must exist in ClusterControl node) that will be used by SSH User to perform passwordless SSH. See Passwordless SSH.
  • Sudo Password
    • Specify the password if the SSH user that you specified under SSH User requires a sudo password to run super-privileged commands. Ignore this if the SSH User is root or has no sudo password.
  • SSH Port
    • Specify the SSH port for target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
5.1.3.2.2. 2) Define MySQL Servers
  • Vendor
    • Percona XtraDB - Percona XtraDB Cluster by Percona
    • MariaDB - MariaDB Galera Cluster by MariaDB
  • Version
    • Supported version:
      • Percona Server (5.5, 5.6, 5.7)
      • MariaDB Server (10.1, 10.2, 10.3)
  • Basedir
    • MySQL base directory. Default is /usr. ClusterControl assumes MySQL uses the same base directory on all nodes.
  • Port
    • MySQL port on the target cluster. Default to 3306. ClusterControl assumes MySQL is running on the same port on all nodes.
  • Admin/Root User
    • MySQL user on the target cluster. This user must be able to perform GRANT statements. It is recommended to use the MySQL ‘root’ user.
  • Admin/Root Password
    • Password for MySQL User. The password must be the same on all nodes that you want to add into ClusterControl.
  • “information_schema” Queries
    • Toggle on to enable information_schema queries to track databases and tables growth. Queries to the information_schema may not be suitable when having many database objects (hundreds of databases, hundreds of tables in each database, triggers, users, events, stored procedures, etc). If disabled, the query that would be executed will be logged so it can be determined if the query is suitable in your environment.
    • This is not recommended for clusters with more than 2000 database objects.
  • Node AutoRecovery
    • Toggle on so ClusterControl will perform automatic recovery if it detects any of the nodes in the cluster is down.
  • Cluster AutoRecovery
    • Toggle on so ClusterControl will perform automatic recovery if it detects the cluster is down or degraded.
  • Automatic Node Discovery
    • If toggled on, you only need to specify ONE Galera node and ClusterControl will discover the remaining nodes based on the hostname/IPs used for Galera’s intra-node communication. Replication slaves, load balancers, and other supported services connected to the Galera Cluster can be added after the import has finished.
  • Add Node
    • Specify the target node and press ‘Enter’ for each of them. If you have Automatic Node Discovery enabled, you need to specify only one node.
  • Import
    • Click the button to start the import. ClusterControl will connect to the Galera node, discover the configuration for the rest of the members and start managing/monitoring the cluster.

5.1.3.3. Import Existing MySQL Cluster

ClusterControl is able to manage and monitor an existing production deployment of MySQL Cluster (NDB). A minimum of 2 management nodes and 2 data nodes is required.

5.1.3.3.1. 1) General & SSH Settings
  • SSH User
    • Specify root if you have root credentials.
    • If you use ‘sudo’ to execute system commands, specify the name that you wish to use here. The user must exist on all nodes. See Operating System User.
  • SSH Key Path
    • Specify the full path of SSH key (the key must exist in ClusterControl node) that will be used by SSH User to perform passwordless SSH. See Passwordless SSH.
  • Sudo Password
    • Specify the password if the SSH user that you specified under SSH User requires a sudo password to run super-privileged commands. Ignore this if the SSH User is root or has no sudo password.
  • SSH Port
    • Specify the SSH port for target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
5.1.3.3.2. 2) Define Management Server
  • Management server 1
    • Specify the IP address or hostname of the first MySQL Cluster management node.
  • Management server 2
    • Specify the IP address or hostname of the second MySQL Cluster management node.
  • Server Port
    • MySQL Cluster management port. The default port is 1186.
5.1.3.3.3. 3) Define Data Nodes
  • Port
    • MySQL Cluster data node port. The default port is 2200.
  • Add Nodes
    • Specify the IP address or hostname of the MySQL Cluster data node.
5.1.3.3.4. 4) Define MySQL Servers
  • Root Password
    • MySQL root password.
  • Server Port
    • MySQL port. Default to 3306.
  • MySQL Installation Directory
    • MySQL server installation path where ClusterControl can find the mysql binaries.
  • Enable information_schema Queries
    • Toggle on to enable information_schema queries to track databases and tables growth. Queries to the information_schema may not be suitable when having many database objects (hundreds of databases, hundreds of tables in each database, triggers, users, events, stored procedures, etc). If disabled, the query that would be executed will be logged so it can be determined if the query is suitable in your environment.
    • This is not recommended for clusters with more than 2000 database objects.
  • Enable Node AutoRecovery
    • ClusterControl will perform automatic recovery if it detects any of the nodes in the cluster is down.
  • Enable Cluster AutoRecovery
    • ClusterControl will perform automatic recovery if it detects the cluster is down or degraded.
  • Add Nodes
    • Specify the IP address or hostname of the MySQL Cluster API/SQL node.
  • Import
    • Click the button to start the import. ClusterControl will connect to the MySQL Cluster nodes, discover the configuration for the rest of the nodes and start managing/monitoring the cluster.

5.1.3.4. Import Existing MongoDB ReplicaSet

ClusterControl is able to manage and monitor an existing MongoDB/Percona Server for MongoDB 3.x and 4.0 replica set.

5.1.3.4.1. 1) General & SSH Settings
  • SSH User
    • Specify root if you have root credentials.
    • If you use ‘sudo’ to execute system commands, specify the name that you wish to use here. The user must exist on all nodes. See Operating System User.
  • SSH Key Path
    • Specify the full path of SSH key (the key must exist in ClusterControl node) that will be used by SSH User to perform passwordless SSH. See Passwordless SSH.
  • Sudo Password
    • Specify the password if the SSH user that you specified under SSH User requires a sudo password to run super-privileged commands. Ignore this if the SSH User is root or has no sudo password.
  • SSH Port
    • Specify the SSH port for target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
5.1.3.4.2. 2) Define MongoDB Servers
  • Vendor
    • Percona - Percona Server for MongoDB by Percona.
    • MongoDB - MongoDB Server by MongoDB Inc (formerly 10gen).
  • Version
    • The supported MongoDB versions are 3.2, 3.4 and 3.6.
  • Server Port
    • MongoDB server port. Default is 27017.
  • Admin User
    • MongoDB admin user.
  • Admin Password
    • Password for MongoDB Admin User.
  • MongoDB Auth DB
    • MongoDB database to authenticate against. Default is admin.
  • Hostname
    • Specify one IP address or hostname of the MongoDB replica set member. ClusterControl will automatically discover the rest.
  • Import
    • Click the button to start the import. ClusterControl will connect to the specified MongoDB node, discover the configuration for the rest of the nodes and start managing/monitoring the cluster.

5.1.3.5. Import Existing MongoDB Shards

ClusterControl is able to manage and monitor an existing MongoDB/Percona Server for MongoDB 3.x and 4.0 sharded cluster setup.

5.1.3.5.1. 1) General & SSH Settings
  • SSH User
    • Specify root if you have root credentials.
    • If you use ‘sudo’ to execute system commands, specify the name that you wish to use here. The user must exist on all nodes. See Operating System User.
  • SSH Key Path
    • Specify the full path of SSH key (the key must exist in ClusterControl node) that will be used by SSH User to perform passwordless SSH. See Passwordless SSH.
  • Sudo Password
    • If you use sudo with a password, specify it here. Ignore this if SSH User is root or the sudoer does not need a sudo password.
  • SSH Port Number
    • Specify the SSH port for target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
5.1.3.5.2. 2) Set Routers/Mongos

Configuration Server

  • Server Port
    • MongoDB mongos server port. Default is 27017.
  • Add More Routers
    • Specify the IP address or hostname of the MongoDB mongos.
5.1.3.5.3. 3) Database Settings
  • Vendor
    • Percona - Percona Server for MongoDB by Percona
    • MongoDB - MongoDB Server by MongoDB Inc
  • Version
    • The supported MongoDB versions are 3.4, 3.6, 4.0 and 4.2.
  • Admin User
    • MongoDB admin user.
  • Admin Password
    • Password for MongoDB Admin User.
  • MongoDB Auth DB
    • MongoDB database to authenticate against. Default is admin.
  • Import
    • Click the button to start the import. ClusterControl will connect to the specified MongoDB mongos, discover the configuration for the rest of the members and start managing/monitoring the cluster.

5.1.3.6. Import Existing PostgreSQL/TimeScaleDB

ClusterControl is able to manage and monitor an existing set of PostgreSQL/TimeScaleDB servers (version 9.6 and later). Individual hosts specified in the same list will be added to the same server group in the UI. ClusterControl assumes that you are using the same database admin password for all instances specified in the group.

5.1.3.6.1. 1) General & SSH Settings
  • SSH User
    • Specify root if you have root credentials.
    • If you use ‘sudo’ to execute system commands, specify the name that you wish to use here. The user must exist on all nodes. See Operating System User.
  • SSH Key Path
    • Specify the full path of SSH key (the key must exist in ClusterControl node) that will be used by SSH User to perform passwordless SSH. See Passwordless SSH.
  • Sudo Password
    • Specify the password if the SSH user that you specified under SSH User requires a sudo password to run super-privileged commands. Ignore this if SSH User is root or has no sudo password.
  • SSH Port
    • Specify the SSH port for target nodes. ClusterControl assumes SSH is running on the same port on all nodes.
5.1.3.6.2. 2) Define PostgreSQL Servers
  • Server Port
    • PostgreSQL port on the target server/cluster. Default is 5432. ClusterControl assumes PostgreSQL/TimeScaleDB is running on the same port on all nodes.
  • User
    • PostgreSQL user on the target server/cluster. It is recommended to use the PostgreSQL/TimeScaleDB ‘postgres’ user.
  • Password
    • Password for User. ClusterControl assumes that you are using the same admin password for all instances under this group.
  • Version
    • PostgreSQL/TimeScaleDB server version on the target server/cluster. Supported versions are 9.6, 10.x, 11.x and 12.x.
  • Basedir
    • PostgreSQL/TimeScaleDB base directory. Default is /usr. ClusterControl assumes all PostgreSQL/TimeScaleDB nodes are using the same base directory.
  • Add Node
    • Specify all PostgreSQL/TimeScaleDB instances that you want to group under this cluster.
  • Import
    • Click the button to start the import. ClusterControl will connect to the PostgreSQL/TimeScaleDB instances, import configurations and start managing them.

5.1.4. Deploy in the Cloud

Opens a step-by-step modal dialog to deploy a new database cluster in the cloud. Supported cloud providers are:

  • Amazon Web Services
  • Google Cloud Platform
  • Microsoft Azure

The following database cluster types are supported:

  • MySQL Replication:
    • Percona Server 8.0
    • Oracle MySQL Server 8.0
    • MariaDB Server 10.3
  • MySQL Galera:
    • Percona XtraDB Cluster 5.7
    • MariaDB 10.2
    • MariaDB 10.3
  • MongoDB ReplicaSet:
    • Percona Server for MongoDB 3.6
    • MongoDB 3.6
    • MongoDB 4.0
  • PostgreSQL 11 Streaming Replication

  • TimeScaleDB 11 Streaming Replication

There are prerequisites that need to be fulfilled prior to the deployment:

  • A working cloud credential profile on the supported cloud platform. See Cloud Providers.
  • The date and time of the ClusterControl node must be synced with an NTP server. See Timezone.
  • If the cloud instance is inside a private network, the network must support auto-assigned public IP addresses. ClusterControl only connects to the created cloud instances via the public network.

Under the hood, the deployment process does the following:

  1. Create cloud instances.
  2. Configure security groups and networking.
  3. Verify the SSH connectivity from ClusterControl to all created instances.
  4. Deploy database on every instance.
  5. Configure the clustering or replication links.
  6. Register the deployment into ClusterControl.

ClusterControl will trigger a deployment job and the progress can be monitored under ClusterControl > Activity > Jobs.

Attention

This feature is still in beta. See Known Limitations for details.

5.1.4.1. Cluster Details

  • Select Cluster Type
    • Choose a cluster.
  • Select Vendor and Version
    • MySQL Replication Cluster - Percona Server 8.0, Oracle MySQL Server 8.0, MariaDB Server 10.3 and MariaDB 10.4.
    • MySQL Galera - Percona XtraDB Cluster 5.7, MariaDB 10.2, MariaDB 10.3 and MariaDB 10.4.
    • MongoDB Replica Set - MongoDB 3.6, MongoDB 4.0, MongoDB 4.2 by MongoDB, Inc and Percona Server for MongoDB 4.2 by Percona (replica set only).
    • PostgreSQL Streaming Replication - PostgreSQL 11.0 and PostgreSQL 12.0 (streaming replication only).
    • TimeScaleDB - TimeScaleDB 11.0 (streaming replication only).

5.1.4.2. Configure Cluster

5.1.4.2.1. MySQL Replication
  • Select Number of Nodes
    • The number of nodes in the database cluster. You can start with one, but two is recommended.
  • Cluster Name
    • This value will be used as the instance name or tag. No space is allowed.
  • MySQL Server Port
    • MySQL server port. Default is 3306.
  • MySQL Root Password
    • Specify MySQL root password. ClusterControl will configure the same MySQL root password for all instances in the cluster.
  • my.cnf Template
    • MySQL configuration template file under /etc/cmon/templates or /usr/share/cmon/templates. See Base Template Files for details.
  • MySQL Server Data Directory
    • Location of MySQL data directory. Default is /var/lib/mysql.
5.1.4.2.2. MySQL Galera Cluster
  • Select Number of Nodes
    • The number of nodes in the database cluster. You can start with one, but three (or a bigger odd number) is recommended.
  • Cluster Name
    • This value will be used as the instance name or tag. No space is allowed.
  • MySQL Server Port
    • MySQL server port. Default is 3306.
  • MySQL Root Password
    • Specify MySQL root password. ClusterControl will configure the same MySQL root password for all instances in the cluster.
  • my.cnf Template
    • MySQL configuration template file under /etc/cmon/templates or /usr/share/cmon/templates. See Base Template Files for details.
  • MySQL Server Data Directory
    • Location of MySQL data directory. Default is /var/lib/mysql.
5.1.4.2.3. MongoDB Replica Set
  • Select Number of Nodes
    • The number of nodes in the database cluster. You can start with one, but three (or a bigger odd number) is recommended.
  • Cluster Name
    • This value will be used as the instance name or tag. No space is allowed.
  • Admin User
    • MongoDB admin user. ClusterControl will create this user and enable authentication.
  • Admin Password
    • Password for MongoDB Admin User.
  • Server Data Directory
    • Location of MongoDB data directory. Default is /var/lib/mongodb.
  • Server Port
    • MongoDB server port. Default is 27017.
  • mongodb.conf Template
    • MongoDB configuration template file under /etc/cmon/templates or /usr/share/cmon/templates. See Base Template Files for details.
  • ReplicaSet Name
    • Specify the name of the replica set, similar to replication.replSetName option in MongoDB.
5.1.4.2.4. PostgreSQL Streaming Replication
  • Select Number of Nodes
    • The number of nodes in the database cluster. You can start with one, but two or more are recommended.

Note

The first virtual machine that comes up will be configured as a master.

  • Cluster Name
    • This value will be used as the instance name or tag. No space is allowed.
  • User
    • Specify the PostgreSQL super user, for example postgres.
  • Password
    • Specify the password for User.
  • Server Port
    • PostgreSQL server port. Default is 5432.
5.1.4.2.5. TimeScaleDB Streaming Replication
  • Select Number of Nodes
    • The number of nodes in the database cluster. You can start with one, but two or more are recommended.

Note

The first virtual machine that comes up will be configured as a master.

  • Cluster Name
    • This value will be used as the instance name or tag. No space is allowed.
  • User
    • Specify the TimeScaleDB super user, for example postgres.
  • Password
    • Specify the password for User.
  • Server Port
    • TimeScaleDB server port. Default is 5432.

5.1.4.3. Select Credential

Select one of the existing cloud credentials or you can create a new one by clicking on the Add New Credential button.

  • Add New Credential

5.1.4.4. Select Virtual Machine

Most of the settings in this step are dynamically populated from the cloud provider by the chosen credentials.

  • Operating System
    • Choose a supported operating system from the dropdown.
  • Instance Size
    • Choose an instance size for the cloud instance.
  • Virtual Private Cloud (VPC)
    • Exclusive for AWS. Choose a virtual private cloud network for the cloud instance.
  • Add New
    • Opens the Add VPC wizard. Specify the tag name and IP address block.
  • SSH Key
    • SSH key location on the ClusterControl host. This key must be able to authenticate to the created cloud instances passwordlessly.
  • Storage Type
    • Choose the storage type for the cloud instance.
  • Allocate Storage
    • Specify the storage size for the cloud instance in GB.

5.1.4.5. Load Balancer

  • Select Number of Loadbalancers
    • The number of load balancer nodes. You can start with one, but two or more are recommended.
  • Instance Size
    • Choose an instance size for the cloud instance.
  • Listen Port (Read/Write)
    • Specify the HAProxy listening port for read-write connections.
  • Listen Port (Read Only)
    • Specify the HAProxy listening port for read-only connections.
  • Policy
    • Choose one of these load balancing policies:
      • leastconn - The server with the lowest number of connections receives the connection.
      • roundrobin - Each server is used in turns, according to their weights.
      • source - The same client IP address will always reach the same server as long as no server goes down.

5.1.4.6. Deployment Summary

  • Subnet
    • Choose one existing subnet for the selected network.
  • Add New Subnet
    • Opens the Add Subnet wizard. Specify the subnet name, availability zone and IP CIDR block address. E.g: 10.0.10.0/24

5.1.4.7. Known Limitations

There are known limitations for the cloud deployment feature:

  • There is currently no ‘accounting’ in place for the cloud instances. You will need to manually remove created cloud instances.

We appreciate your feedback, feature requests and bug reports. Contact us via the support channel or create a feature request. See FAQ for details.

5.1.6. User Guide for MySQL

This user guide covers ClusterControl with MySQL-based clusters, namely:

  • Galera Cluster for MySQL
    • Percona XtraDB Cluster
    • MariaDB Galera Cluster
  • MySQL Cluster (NDB)

  • MySQL/MariaDB Replication

  • MySQL/MariaDB single instance

Contents:

5.2. ClusterControl CLI

Also known as s9s CLI, cc CLI or s9s-tools, s9s is a command line tool binary introduced in ClusterControl version 1.4.1 to interact with, control and manage database clusters using the ClusterControl Database Platform. Starting from version 1.4.1, the installer script will automatically install this package on the ClusterControl node. You can also install it on another computer or workstation to manage the database cluster remotely.

ClusterControl CLI opens a new door for cluster automation where you can easily integrate it with existing deployment automation tools like Ansible, Puppet, Chef or Salt. The command line tool is invoked by executing a binary called s9s. The commands are basically JSON messages being sent over to the ClusterControl Controller (CMON) RPC interface. Communication between s9s (the command line tool) and the cmon process (ClusterControl Controller) is encrypted using TLS and requires port 9501 to be open on the controller and the client host.

The command line client installs manual pages, which can be viewed by entering the command:

$ man s9s-{command group}

For example:

$ man s9s
$ man s9s-cluster
$ man s9s-alarm
$ man s9s-backup

The general synopsis to execute commands using s9s is:

s9s {command group} {options}

Supported command list:

Command Description
s9s account Manage accounts on clusters.
s9s alarm Manage alarms.
s9s backup View, create and restore database backups.
s9s cluster List and manipulate clusters.
s9s container Manage virtual machines.
s9s controller Manage Cmon controllers.
s9s job View jobs.
s9s maintenance View and manipulate maintenance periods.
s9s metatype Print metatype information.
s9s node Handle nodes.
s9s process View processes running on nodes.
s9s replication Monitor and control data replication.
s9s report Manage reports.
s9s script Manage and execute scripts.
s9s server Manage hardware resources.
s9s user Manage users.

5.2.1. s9s account

Manage user accounts on clusters. The term “account” in this section refers to a database user account on a managed database server/cluster.

Usage

s9s account {command} {options}

Command

Name, shorthand Description
−−create Creates a new account on the cluster. Note that the account is an account of the cluster and not a user of the Cmon system.
−−delete Removes an account.
−−list, -L Lists the accounts on the cluster.

Options

Name, shorthand Description
−−account=USERNAME[:PASSWORD][@HOSTNAME] The account to be used or created on the cluster. The command line option argument may contain a username, a password for the user and a hostname identifying the host from where the user may log in. The s9s command line tool will handle the command line option argument as a URL-encoded string, so if the password for example contains an @ character, it should be encoded as %40. URL-encoded parts are supported anywhere in the string; usernames, passwords and even hostnames may also have special characters.
−−grant Grant privileges for an account on one or more databases.
−−list Lists the accounts on the cluster.
−−private Create a secure, more restricted account on the cluster. The actual interpretation of this flag depends on the controller; the current version restricts access to the ProxySQL servers. An account created with the --private option will not be imported into ProxySQL, so it will not have access through the ProxySQL server immediately after it is created on the cluster.
−−privileges=EXPRESSION Privileges to be granted to a user account on the server. See Privilege Expression.
−−with-database Creates a database for the new account while creating a new user account on the cluster. The name of the database will be the same as the name of the account and all access rights will be granted for the account to use the database.
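
As an illustration of the URL encoding described above, here is a hedged sketch (the username, password, host and cluster ID are hypothetical) of creating an account whose password contains an @ character:

$ s9s account \
        --create \
        --cluster-id=1 \
        --account="appuser:p%40ssw0rd@10.0.0.%" \
        --privileges="shop_db.*:SELECT"

Here %40 decodes to the @ character, so the password created on the cluster is p@ssw0rd.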

5.2.1.1. Privilege Expression

The privileges are specified using a simple language that is interpreted by the CMON Controller. The language is specified as follows:

expression: specification[;...]
specification: [object[,...]:]privilege[,...]
object: {
        *
        | *.*
        | database_name.*
        | database_name.table_name
        | database_name
}

Please note that an object name on its own is a database name (and not a table name) and multiple objects can be enumerated by using the , as separator. It is also important that multiple specifications can be enumerated using the semicolon (;) as separator.

The expression MyDb:INSERT,UPDATE;Other:SELECT for example defines INSERT and UPDATE privileges on the MyDb database and SELECT privilege on the Other database. The expression INSERT,UPDATE on the other hand would specify INSERT and UPDATE privileges on all databases and all tables.

Examples

Create a new MySQL user account “myuser” with password “secr3tP4ss”, and allow it to have ALL PRIVILEGES on database “shop_db” and SELECT on table “account_db.payments”:

$ s9s account \
        --create \
        --cluster-id=1 \
        --account="myuser:[email protected]" \
        --privileges="shop_db.*:ALL;account_db.payments:SELECT"

Create a new PostgreSQL user account called “mydbuser”, and allow hosts in the network subnet 192.168.0.0/24 to access the database mydbshop:

$ s9s account --create \
        --cluster-id=50 \
        --account='mydbuser:k#s3cr3t@192.168.0.0/24' \
        --privileges="mydbshop.*:ALL"

Delete a database user called “joe”:

$ s9s account \
        --delete \
        --cluster-id=1 \
        --account="joe"

List the accounts on the cluster:

$ s9s account \
        --list \
        --long \
        --cluster-id=1
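
Grant additional privileges to an existing account using the privilege expression language described above (a hedged sketch; the account name, cluster ID and privileges are only illustrative):

$ s9s account \
        --grant \
        --cluster-id=1 \
        --account="myuser" \
        --privileges="MyDb:INSERT,UPDATE;Other:SELECT"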

5.2.2. s9s alarm

Manage alarms.

Usage

s9s alarm {command} {options}

Command

Name, shorthand Description
−−delete Sets the alarm to be ignored. This does not in fact delete the alarm, but it makes the alarm disappear from the active alarms list, hence the name of the option.
−−list Lists the active alarms.

Options

Name, shorthand Description
−−cluster-id=ID, -i The ID of the cluster to manipulate.
−−cluster-name=NAME, -n Sets the cluster name. If the operation creates a new cluster this will be the name of the new cluster.
−−alarm-id=ID The ID of the alarm to manipulate.

Examples

List all alarms generated by ClusterControl for a database cluster named “PostgreSQL Cluster”:

$ s9s alarm --cluster-name="PostgreSQL Cluster" --list

Delete an alarm:

$ s9s alarm --delete --alarm-id=1015

5.2.3. s9s backup

View and create database backups. The following backup methods are supported:
  • mysqldump
  • xtrabackup (full)
  • xtrabackup (incremental)
  • mariabackup (full)
  • mariabackup (incremental)
  • mongodump
  • pg_dump
The s9s client also needs to know:
  • The cluster ID (or cluster name) of the cluster to back up.
  • The node to back up.
  • The databases that should be included (default all databases).

By default, the backups will be stored on the controller node. If you wish to store the backup on the data node, you can set the flag --on-node.

Note

If you are using Percona Xtrabackup, an incremental backup requires that there is already a full backup made of the same databases (all or individually specified). Otherwise, the incremental backup will be upgraded to a full backup.

Usage

s9s backup {command} {options}

Command

Name, shorthand Description
−−create Creates a new backup. This command line option will initiate a new job that will create a new backup.
−−create-schedule Creates a backup schedule, i.e. a backup that is repeated. Please note that there are two ways to create a repeated backup: a job that creates a backup can be scheduled and repeated, or with this option a backup schedule can be created to repeat the creation of a backup.
−−delete Deletes an existing backup.
−−delete-old Initiates a job that checks for expired backups and removes them from the system.
−−list Lists the backups. When listing the backups with the --long option, more detailed columns are listed. See Backup List.
−−list-databases Lists the backups in database view format. This format is designed to show the archived databases in the backups.
−−list-files Lists the backups in file view format. This format is designed to show the archive files of the backups.
−−list-schedules Lists the backup schedules.
−−restore Restores an existing backup.
−−restore-cluster-info Restores the information the controller has about a cluster from a previously created archive file.
−−restore-controller Restores the entire controller from a previously created tarball (created by using the --save-controller option).
−−save-cluster-info Saves the information about one cluster.
−−save-controller Saves the entire controller into a file.
−−verify Creates a job to verify a backup. When this main option is used the --backup-id option has to be used to identify a backup and the --test-server is also necessary to provide a server where the backup will be tested.

Options

Name, shorthand Description
−−backup-directory=DIR The directory where the backup is placed.
−−backup-format[=FORMATSTRING] The string that controls the format of the printed information about the backups. See Backup Format.
−−backup-id=ID The ID of the backup.
−−backup-method=METHOD Controls what backup software is going to be used to create the backup. The controller currently supports the following methods: ndb, mysqldump, xtrabackupfull, xtrabackupincr, mariabackupfull, mariabackupincr, mongodump, pgdump, pg_basebackup, mysqlpump.
−−backup-password=PASSWORD The password for the SQL account that will create the backup. This command line option is not mandatory.
−−backup-retention=DAYS Controls a custom retention period for the backup; otherwise the default global setting will be used. A positive value controls how long (in days) the taken backups will be preserved. The value -1 has a special meaning: the backup will be kept forever. The value 0 (the default) means the global setting (configurable in the UI) is used.
−−backup-user=USERNAME The username for the SQL account that will create the backup.
−−cloud-retention=DAYS Retention used when the backup is on a cloud.
−−cluster-id=ID The ID of the cluster.
−−databases=LIST A comma separated list of database names. This argument controls which databases are going to be archived into the backup file. By default all the databases are going to be archived.
−−encrypt-backup When this option is specified, ClusterControl will attempt to encrypt the backup files using AES-256 encryption (the key will be auto-generated if it does not exist yet and stored in the cluster configuration file).
−−full-path Print the full path of the files.
−−memory=MEGABYTES Controls how much memory the archiver process should use while restoring an archive. Currently only xtrabackup supports this option.
−−no-compression Do not compress the archive file.
−−nodes=NODELIST The list of nodes involved in the backup. See Node List.
−−on-node Do not copy the created archive file to the controller, store it on the node where it was created.
−−on-controller Stream and store the created backup files on the controller.
−−parallelism=N Controls how many threads are used while creating the backup. Please note that not all backup methods support multi-threaded operations.
−−pitr-compatible Creates PITR-compatible backup.
−−recurrence=STRING Schedule time and frequency in cron format.
−−safety-copies=N Controls how many safety backups should be kept while deleting old backups. This command line option can be used together with the --delete-old option.
−−subdirectory=MARKUPSTRING Sets the name of the subdirectory that holds the newly created backup files. The command line option argument is considered to be a subpath that may contain the field specifiers using the usual %X format. See Backup Subdirectory Variables.
−−test-server=HOSTNAME Use the given server to verify the backup. If this option is provided while creating a new backup, a new job to verify the backup will be created after the backup is made. During the verification, the SQL software will be installed on the test server and the backup will be restored on this server. The verification job will be successful if the backup is successfully restored.
−−title=STRING A short human readable string that helps the user to identify the backup later.
−−to-individual-files Archive every database into individual files. Currently only the mysqldump backup method supports this option.
−−use-pigz Use the pigz program to compress archive.

5.2.3.1. Backup Subdirectory Variables

Variable Description
B The date and time when the backup creation was beginning.
H The name of the backup host, the host that created the backup.
i The numerical ID of the cluster.
I The numerical ID of the backup.
J The numerical ID of the job that created the backup.
M The backup method (e.g. “mysqldump”).
O The name of the user who initiated the backup job.
S The name of the storage host, the host that stores the backup files.
% The percent sign itself. Use two percent signs, %% the same way the standard printf() function interprets it as one percent sign.
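
For example, a hedged sketch of a backup job whose files are stored under a subdirectory built from the cluster ID and the backup ID (the node address, cluster ID and directories are hypothetical):

$ s9s backup --create \
        --backup-method=mysqldump \
        --cluster-id=2 \
        --nodes=10.10.10.20:3306 \
        --on-controller \
        --backup-directory=/storage/backups \
        --subdirectory="cluster_%i/backup_%I"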

5.2.3.2. Backup List

Column Description
ID The numerical ID of the backup.
PI The numerical ID of the parent backup if there is a parent backup for the given entry.
CID The numerical ID of the cluster to which the backup belongs.
V The verification status. Here V means the backup is verified, - means the backup is not verified.
I The flag showing if the backup is incremental or not. Here F means the backup is a full backup, I means the backup is incremental, - means the backup contains no incremental or full backup files (because for example the backup failed) and B means the backup contains both full and incremental backups files (which is impossible).
STATE The state of the backup. Here “COMPLETED” means the backup is completed, “FAILED” means the backup has failed and “RUNNING” means the backup is being created.
OWNER The name of the Cmon user that owns the backup.
HOSTNAME The name of the host where the backup was created.
CREATED The date and time when the backup was created.
SIZE The total size of the created backup files.
TITLE The name or title of the backup. This is a human readable string that helps identify the backup.

5.2.3.3. Backup Format

When the option --backup-format is used, the specified information will be printed instead of the default columns. The format string uses the % character to mark variable fields and flag characters as they are specified in the standard printf() C library functions. The % specifiers are ended by field name letters that refer to various properties of the backups.

The %+12I format string for example has the +12 flag characters in it with the standard meaning: the field will be 12 characters wide and the “+” or “-” sign will always be printed with the number. The properties of the backup are encoded by letters. In %16H, for example, the letter H encodes the hostname. Standard \ notation is also available; \n for example encodes a new-line character.

The s9s-tools support the following fields:

Character Description
B The date and time when the backup creation was beginning. The format used to print the dates and times can be set using the --date-format.
C The backup file creation date and time. The format used to print the dates and times can be set using the --date-format.
d The names of the databases in a comma separated string list.
D The description of the backup. If the c modifier is used (e.g. %cD) the configured description is shown.
e The word “ENCRYPTED” or “UNENCRYPTED” depending on the encryption status of the backup.
E The date and time when the backup creation was ended. The format used to print the dates and times can be set using the --date-format.
F The archive file name.
H The backup host (the host that created the backup). If the c modifier is used (e.g. %cH) the configured backup host is shown.
I The numerical ID of the backup.
i The numerical ID of the cluster to which the backup belongs.
J The numerical ID of the job that created the backup.
M The backup method used. If the c modifier is used the configured backup method will be shown.
O The name of the owner of the backup.
P The full path of the archive file.
R The root directory of the backup.
S The name of the storage host, the host where the backup was stored.
s The size of the backup file measured in bytes.
t The title of the backup. The title can be added when the backup is created; it helps to identify the backup later.
v The verification status of the backup. Possible values are “Unverified”, “Verified” and “Failed”.
% The percent sign itself. Use two percent signs, %% the same way the standard printf() function interprets it as one percent sign.
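
For instance, a hedged sketch that lists the backups of a cluster and prints only the backup ID, backup method, size and title of each backup (the cluster ID is hypothetical):

$ s9s backup --list \
        --cluster-id=2 \
        --backup-format="%I %M %s %t\n"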

Examples

Suppose we have a data node on 10.10.10.20 (port 3306) in cluster ID 2, and we want to back up all databases using mysqldump and store the backup on the ClusterControl server:

$ s9s backup --create \
        --backup-method=mysqldump \
        --cluster-id=2 \
        --nodes=10.10.10.20:3306 \
        --on-controller \
        --backup-directory=/storage/backups

Create a mongodump backup on 10.0.0.148 for cluster named ‘MongoDB ReplicaSet 3.2’ and store the backup on the database node:

$ s9s backup --create \
        --backup-method=mongodump \
        --cluster-name='MongoDB ReplicaSet 3.2' \
        --nodes=10.0.0.148 \
        --backup-directory=/storage/backups

Schedule a full backup using MariaDB Backup every day at 1:10 AM:

$ s9s backup --create \
        --backup-method=mariabackupfull \
        --nodes=10.10.10.19:3306 \
        --cluster-name=MDB101 \
        --backup-dir=/home/vagrant/backups \
        --on-controller \
        --recurrence='10 1 * * *'

Schedule an incremental backup using MariaDB Backup every day at 1:30 AM:

$ s9s backup --create \
        --backup-method=mariabackupincr \
        --nodes=10.10.10.19:3306 \
        --cluster-name=MDB101 \
        --backup-dir=/home/vagrant/backups \
        --on-controller \
        --recurrence='30 1 * * *'

Create a pg_dumpall backup on the PostgreSQL master server and store the backup on the ClusterControl server:

$ s9s backup --create \
        --backup-method=pgdump \
        --nodes=192.168.0.81:5432 \
        --cluster-id=43 \
        --backup-dir=/home/vagrant/backups  \
        --on-controller \
        --log

List all backups for cluster ID 2:

$ s9s backup --list \
        --cluster-id=2 \
        --long \
        --human-readable

Note

Omit the --cluster-id=2 to see the backup records for all clusters.

Restore backup ID 3 on cluster ID 2:

$ s9s backup --restore \
        --cluster-id=2 \
        --backup-id=3 \
        --wait

Note

If the backup is encrypted, it will be decrypted automatically when restoring.

Create a job to verify the given backup identified by the backup ID. The job will attempt to install MySQL on the test server using the same settings as for the given cluster, then restore the backup on this test server. The job returns OK only if the backup is successfully restored on the test server:

$ s9s backup --verify \
        --log \
        --backup-id=1 \
        --test-server=192.168.0.55 \
        --cluster-id=1

Delete old backups for cluster ID 1 that are older than 7 days, but keep at least 3 of the latest backups:

$ s9s backup --delete-old \
        --cluster-id=1 \
        --backup-retention=7 \
        --safety-copies=3 \
        --log

5.2.4. s9s cluster

Create, manage and manipulate clusters.

Usage

s9s cluster {command} {options}

Command

Name, shorthand Description
−−add-node Adds a new node (server) to the cluster or to be more precise creates a new job that will eventually add a new node to the cluster. The name (or IP address) of the node should be specified using the --nodes command line option. See Node List.
−−check-hosts Checks the hosts before installing a cluster.
−−collect-logs Creates a job that will collect the log files from the nodes of the cluster.
−−create Creates a new cluster. When this command line option is provided the program will contact the controller and register a new job that will eventually create a new cluster.
−−create-account Creates a new account to be used on the cluster to access the database(s).
−−create-database Creates a database on the cluster.
−−create-report When this command line option is provided a new job will be started that will create a report. After the job is executed the report will be available on the controller. If the --output-dir command line option is provided the report will be created in the given directory on the controller host.
−−delete-account Deletes an existing account from the cluster.
−−disable-recovery Creates a new job that will disable the autorecovery for the cluster (both cluster autorecovery and node autorecovery). The job can optionally be used to also register a maintenance period for the cluster.
−−drop Drops cluster from the controller.
−−enable-recovery Creates a job that will enable the autorecovery for both the cluster and the nodes in the cluster.
−−import-config Creates a job that will import all the configuration files from the nodes of the cluster.
−−list, -L Lists the clusters.
−−list-config This command line option can be used to print the configuration values for the cluster. The cluster configuration in this context is the Cmon Controller’s configuration for the given cluster.
−−list-databases Lists the databases found on the cluster. Please note that if the cluster has a lot of databases, this option might not show some of them. Sampling a huge number of databases would generate high load and so the controller has an upper limit built into it.
−−ping Checks the connection to the controller.
−−promote-slave Promotes a slave node to become a master. This main option will of course work only on clusters where it is meaningful, i.e. where slaves exist and masters are possible.
−−register Registers an existing cluster in the controller. This option is very similar to the --create option, but it will not install a new cluster; it just registers an existing one.
−−remove-node Removes a node from the cluster (creates a new job that will remove the node from the cluster). The name (or IP address) of the node should be specified using the --nodes command line option. See Node List.
−−rolling-restart Restarts the nodes (one node at a time) without stopping the cluster.
−−set-read-only Creates a job that when executed will set the entire cluster into read-only mode. Please note that not every cluster type supports the read-only mode.
−−start Creates a new job to start the cluster.
−−stat Prints the details of one or more clusters.
−−stop Creates and registers a new job that will stop the cluster when executed.

Options

Name, shorthand Description
−−backup-id=NUMBER The id of a backup to be restored on the newly created cluster.
−−batch Print no messages. If the application created a job, print only the job ID number and exit. If the command prints data, do not use syntax highlighting, headers or totals; print only the pure table so it can be processed using filters.
−−cluster-format=FORMATSTRING The string that controls the format of the printed information about clusters. See Cluster Format.
−−cluster-id=ID, -i The ID of the cluster to manipulate.
−−cluster-name=NAME, -n Sets the cluster name. If the operation creates a new cluster this will be the name of the new cluster.
−−cluster-type=TYPE The type of the cluster to install. Currently the following types are supported: galera, mysqlreplication, groupreplication (or group_replication), ndb (or ndbcluster), mongodb (MongoDB ReplicaSet only) and postgresql.
−−config-template=FILENAME Use the specified file as configuration template to create the configuration file for the new cluster.
−−datadir=DIRECTORY The directory on the node(s) that will hold the data. The primary use for this command line option is to set the data directory path when a cluster is created.
−−db-admin=USERNAME The user name of the database administrator (e.g. ‘root’).
−−db-admin-passwd=PASSWD The password for the database admin.
−−donor=ADDRESS Currently this option is used when starting a cluster. It can be used to control which node will be started first and used for the others as donor.
−−job-tags=LIST Tags for the job if a job is created.
−−long, -l Print the detailed list.
−−no-header Do not print headers for tables.
−−no-install Skip the cluster software installation part. Assume all software is installed on the node(s). This command line option is considered when installing a new cluster or adding a new node to an existing cluster.
−−nodes=NODE_LIST List of nodes to work with. See Node List.
−−os-user=USERNAME The name of the remote user that is used to gain SSH access on the remote nodes. If this command line option is omitted the name of the local user will be used on the remote host too.
−−os-key-file=PATH The path of the SSH key to install on a new container to allow the user to log in. This command line option can be passed when a new container is created, the argument of the option should be the path of the private key stored on the controller. Although the path of the private key file is passed only the public key will be uploaded to the new container.
−−output-dir=DIR The directory where the files are created. Use in conjunction with --create-report command.
−−provider-version=VER The version string of the software to be installed.
−−remote-cluster-id=ID The remote cluster ID for the cluster creation when cluster-to-cluster replication is to be installed. Please note that not all the cluster types support cluster to cluster replication.
−−use-internal-repos Use internal repositories when installing software packages. Using this command line option it is possible to deploy clusters and add nodes off-line, without a working internet connection. The internal repositories have to be set up in advance.
−−vendor=VENDOR The name of the software vendor to be installed.
−−wait Waits for the specified job to end. While waiting, a progress bar will be shown unless the silent mode is set.
−−with-timescaledb Install the TimescaleDB option when creating a new cluster. This is currently only supported on PostgreSQL systems.
ACCOUNT, DATABASE & CONFIGURATION MANAGEMENT
−−account=NAME[:PASSWD][@HOST] Account to be created on the cluster.
−−db-name=NAME The name of the database.
−−opt-group=NAME The option group for configuration.
−−opt-name=NAME The name of the configuration item.
−−opt-value=VALUE The value for the configuration item.
−−with-database Create a database for the user too.
CONTAINER & CLOUD
−−cloud=PROVIDER This option can be used when new container(s) created. The name of the cloud provider where the new container will be created. This command line option can also be used to filter the list of the containers when used together with one of the --list or --stat options.
−−containers=LIST A list of containers to be created and used by the created job. This command line option can be used to create container (virtual machines) and then install clusters on them or just add them to an existing cluster as nodes. Please check s9s container for details.
−−credential-id=ID The cloud credential ID that should be used when creating a new container. This is an optional value, if not provided the controller will find the credential to be used by the cloud name and the chosen region.
−−firewalls=LIST List of firewall (security groups) IDs separated by , or ; to be used for newly created containers. Check s9s-container for further details.
−−generate-key Create a new SSH keypair when creating new containers. If this command line option is provided, a new SSH keypair will be created and registered for a new user account to provide SSH access to the new container(s). If the command creates more than one container, the same keypair will be registered for all of them. The username will be the username of the authenticated cmon-user. This can be overruled by the --os-user command line option. When the job creates a new cluster, the generated keypair will be registered for the cluster and the file path will be saved into the cluster’s Cmon configuration file. When adding a node to such a cluster, this --generate-key option should not be passed; the controller will automatically re-use the previously created keypair.
−−image=NAME The name of the image from which the new container will be created. This option is not mandatory; when a new container is created the controller can choose an image if needed. To find out what images are supported by the registered container servers, please issue the s9s server --list-images command.
−−image-os-user=NAME The name of the initial OS user defined in the image for the first login. Use this option to create containers based on custom images.
−−os-password=PASSWORD This command line option can be passed when creating new containers to set the password for the user that will be created on the container. Please note that some virtualization backends might not support passwords, only keys.
−−subnet-id=ID This option can be used when new containers are created to set the subnet ID for the container. To find out what subnets are supported by the registered container servers, please issue the s9s server --list-subnets command.
−−template=NAME The name of the container template. See Container Template.
−−volumes=LIST When a new container is created this command line option can be used to pass a list of volumes that will be created for the container. The list can contain one or more volumes separated by the ; character. Every volume consists of three properties separated by the : character: a volume name, the volume size in gigabytes and a volume type that is either “hdd” or “ssd”. The string vol1:5:hdd;vol2:10:hdd for example defines two hard-disk volumes, one 5GByte and one 10GByte. For convenience the volume name and the type can be omitted, so that automatically generated volume names are used.
−−vpc-id=ID This option can be used when new containers are created to set the VPC ID for the container. To find out what VPCs are supported by the registered container servers, please issue the s9s server --list-subnets --long command.
LOAD BALANCER
−−admin-password=PASSWORD The password for the administrator of load balancers.
−−admin-user=USERNAME The username for the administrator of load balancers.
−−dont-import-accounts If this option is provided the database accounts will not be imported after the loadbalancer is installed and added to the cluster. The accounts can be imported later, but it is not going to be the part of the load balancer installation performed by the controller.
−−haproxy-config-template=FILENAME Configuration template for the HAProxy installation.
−−monitor-password=PASSWORD The password of the monitoring user of the load balancer.
−−monitor-user=USERNAME The username of the monitoring user of the load balancer.

5.2.4.1. Node List

The list of nodes or hosts enumerated in a special string using a semicolon as field separator (e.g. 192.168.1.1;192.168.1.2). The strings in the node list are URLs that can have the following protocols:

URI Description
mysql:// The protocol to install and handle MySQL servers.
ndbd:// The protocol for MySQL Cluster (NDB) data node servers.
ndb_mgmd:// The protocol for MySQL Cluster (NDB) management node servers. The mgmd:// notation is also accepted.
haproxy:// Used to create and manipulate HaProxy servers.
proxysql:// Use this to install and handle ProxySql servers.
maxscale:// The protocol to install and handle MaxScale servers.
mongos:// The protocol to install and handle mongo router servers.
mongocfg:// The protocol to install and handle mongo config servers.
mongodb:// The protocol to install and handle mongo data servers.
postgresql:// The protocol to install and handle PostgreSQL servers.
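
For example, a hedged sketch that uses one of the protocols above to add a ProxySQL node to an existing cluster (the IP address and cluster ID are hypothetical):

$ s9s cluster --add-node \
        --cluster-id=1 \
        --nodes="proxysql://192.168.55.199" \
        --wait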

5.2.4.2. Cluster Format

The string that controls the format of the printed information about clusters. When this command line option is used, the specified information will be printed instead of the default columns. The format string uses the % character to mark variable fields and flag characters as they are specified in the standard printf() C library functions. The % specifiers are ended by field name letters to refer to various properties of the clusters.

The %+12I format string for example has the +12 flag characters in it with the standard meaning: the field will be 12 characters wide and the + or - sign will always be printed with the number. The properties of the cluster are encoded by letters. In %-5I, for example, the letter I encodes the “cluster ID” field, so the numerical ID of the cluster will be substituted. Standard \ notation is also available; \n for example encodes a new-line character.

The s9s-tools support the following fields:

Field Description
a The number of active alarms on the cluster.
C The configuration file for the cluster.
c The total number of CPU cores in the cluster. Please note that this number may be affected by hyper-threading. When a computer has 2 identical CPUs, with four cores each and uses 2x hyper-threading it will count as 2x4x2 = 16.
D The domain name of the controller of the cluster. This is the string one would get by executing the “domainname” command on the controller host.
G The name of the group owner of the cluster.
H The host name of the controller of the cluster. This is the string one would get by executing the “hostname” command on the controller host.
h The number of the hosts in the cluster including the controller itself.
I The numerical ID of the cluster.
i The total number of monitored disk devices (partitions) in the cluster.
k The total number of disk bytes found on the monitored devices in the cluster. This is a double precision floating point number measured in Terabytes. With the f modifier (e.g. %6.2fk) this will report the free disk space in TeraBytes.
L The log file of the cluster.
M A human readable short message that describes the state of the cluster.
m The size of memory of all the hosts in the cluster added together, measured in GBytes. This value is represented by a double precision floating point number, so formatting it with precision (e.g. %6.2m) is possible. When used with the f modifier (e.g. %6.2fm) this reports the free memory, the memory that is available for allocation, used for cache or used for buffers.
N The name of the cluster.
n The total number of monitored network interfaces in the cluster.
O The name of the owner of the cluster.
S The state of the cluster.
T The type of the cluster.
t The total network traffic (both received and transmitted) measured in MBytes/seconds found in the cluster.
V The vendor and the version of the main software (e.g. the MySQL server) on the node.
U The number of physical CPUs on the host.
u The CPU usage percent found on the cluster.
w The total swap space found in the cluster measured in GigaBytes. With the f modifier (e.g. %6.2fw) this reports the free swap space in GigaBytes.
% The % character itself.
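
For instance, a hedged sketch that prints the cluster ID, state, type and name of every cluster, one cluster per line:

$ s9s cluster --list \
        --long \
        --cluster-format="%I %S %T %N\n"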

Examples

Create a three-node Percona XtraDB Cluster 5.7 cluster, with OS user vagrant:

$ s9s cluster --create \
        --cluster-type=galera \
        --nodes="10.10.10.10;10.10.10.11;10.10.10.12" \
        --vendor=percona \
        --provider-version=5.7 \
        --db-admin-passwd='pa$$word' \
        --os-user=vagrant \
        --os-key-file=/home/vagrant/.ssh/id_rsa \
        --cluster-name='Percona XtraDB Cluster 5.7'

Create a three-node MongoDB Replica Set 3.2 by MongoDB Inc (formerly 10gen), use the default /root/.ssh/id_rsa as the SSH key, and let the deployment job run in the foreground:

$ s9s cluster --create \
        --cluster-type=mongodb \
        --nodes="10.0.0.148;10.0.0.189;10.0.0.219" \
        --vendor=10gen \
        --provider-version='3.2' \
        --os-user=root \
        --db-admin='admin' \
        --db-admin-passwd='MyS3cr3tPass' \
        --cluster-name='MongoDB ReplicaSet 3.2' \
        --wait

An example of creating a MongoDB Sharded Cluster with 3 mongos, 3 mongo config servers and one shard consisting of a three-node replica set called ‘replset2’:

$ s9s cluster --create \
        --cluster-type=mongodb \
        --vendor=10gen \
        --provider-version=3.2 \
        --db-admin=adminuser \
        --db-admin-passwd=adminpwd \
        --os-user=root \
        --os-key-file=/root/.ssh/id_rsa \
        --nodes="mongos://192.168.1.11;mongos://192.168.1.12;mongos://192.168.1.12;mongocfg://192.168.1.11;mongocfg://192.168.1.12;mongocfg://192.168.1.13;192.168.1.14?priority=5.0;192.168.1.15?arbiter_only=true;192.168.1.16?priority=2;192.168.1.17?rs=replset2;192.168.1.18?rs=replset2&arbiter_only=yes;192.168.1.19?rs=replset2&slave_delay=3&priority=0"

Import an existing Percona XtraDB Cluster 5.7 and let the import job run in the foreground (provided passwordless SSH from the ClusterControl node to all database nodes has been set up correctly):

$ s9s cluster --register \
        --cluster-type=galera \
        --nodes="192.168.100.34;192.168.100.35;192.168.100.36" \
        --vendor=percona \
        --provider-version=5.7 \
        --db-admin="root" \
        --db-admin-passwd="root123" \
        --os-user=root \
        --os-key-file=/root/.ssh/id_rsa \
        --cluster-name="My DB Cluster" \
        --wait

Create a MySQL 5.7 replication cluster by Oracle with multiple masters and slaves (note the ? sign to identify the node’s role in the --nodes parameter):

$ s9s cluster --create \
        --cluster-type=mysqlreplication \
        --nodes="192.168.1.117?master;192.168.1.113?slave;192.168.1.115?slave;192.168.1.116?master;192.168.1.118?slave;192.168.1.119?slave;" \
        --vendor=oracle \
        --db-admin="root" \
        --db-admin-passwd="root123" \
        --cluster-name=ft_replication_23986 \
        --provider-version=5.7 \
        --log

Create a PostgreSQL 12 streaming replication cluster with one master and two slaves (note the ? sign to identify the node’s role in the --nodes parameter):

$ s9s cluster --create \
        --cluster-type=postgresql \
        --nodes="192.168.1.81?master;192.168.1.82?slave;192.168.1.83?slave;" \
        --db-admin="postgres" \
        --db-admin-passwd="mySuperStongP455w0rd" \
        --cluster-name=ft_replication_23986 \
        --os-user=vagrant \
        --os-key-file=/home/vagrant/.ssh/id_rsa \
        --provider-version=12 \
        --log

List all clusters with more details:

$ s9s cluster --list --long

Delete a cluster with cluster ID 1:

$ s9s cluster --delete --cluster-id=1

Add a new database node on Cluster ID 1:

$ s9s cluster --add-node \
        --nodes=10.10.10.14 \
        --cluster-id=1 \
        --wait

Add a data node to an existing MongoDB Sharded Cluster with cluster ID 12 having replicaset name ‘replset2’:

$ s9s cluster --add-node \
        --cluster-id=12 \
        --nodes="mongodb://192.168.1.20?rs=replset2"

Create an HAProxy load balancer, 192.168.55.198 on cluster ID 1:

$ s9s cluster --add-node \
        --cluster-id=1 \
        --nodes="haproxy://192.168.55.198" \
        --wait

Remove a database node from cluster ID 1 as a background job:

$ s9s cluster --remove-node \
        --nodes=10.10.10.13 \
        --cluster-id=1

Check if the hosts are part of another cluster and accessible from ClusterControl:

$ s9s cluster --check-hosts \
        --nodes="10.0.0.148;10.0.0.189;10.0.0.219"

Schedule a rolling restart of the cluster 20 minutes from now:

$ s9s cluster --rolling-restart \
        --cluster-id=1 \
        --schedule="$(date -d 'now + 20 min')"

Create a database on the cluster with the given name:

$ s9s cluster --create-database \
        --cluster-id=2 \
        --db-name=my_shopping_db

Create a database account on the cluster and also create a new database to be used by the new user. Grant all access on the new database for the new user:

$ s9s cluster --create-account \
        --cluster-id=1 \
        --account="john:[email protected]" \
        --with-database
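
Disable automatic recovery (both cluster and node autorecovery) for cluster ID 1 and follow the job log in the foreground, a minimal sketch of the --disable-recovery command described above (the cluster ID is hypothetical):

$ s9s cluster --disable-recovery \
        --cluster-id=1 \
        --log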

5.2.5. s9s container

Manage cloud and container virtualization. Multiple technologies (multiple virtualization backends) are supported (e.g. Linux LXC and AWS) providing various levels of virtualization. Throughout this documentation (and in fact in the command line options) s9s uses the word “container” to identify virtualized servers. The actual virtualization backend might use the term “virtual machine” or “Linux container” but s9s provides a high level generic interface to interact with them, so the generic “container” term is used. So please note, the term “container” does not necessarily mean “Linux container”, it means “a server that is running in some kind of virtualized environment”.

In order to utilize the s9s command line tool and the CMON Controller to manage virtualization, a virtualization host (container server) has to be installed first. The installation of such a container environment is documented under s9s server.

Usage

s9s container {command} {options}

Command

Name, shorthand Description
−−create Creates and starts a new container or virtual machine. If this option is provided, the controller will create a new job that creates a container. By default the container will also be started, an account will be created, passwordless sudo granted, and the controller will wait for the container to obtain an IP address.
−−delete Stop and delete the container or virtual machine.
−−list, -L Lists the containers. See Container List.
−−start Starts an existing container.
−−stat Prints the details of a container.
−−stop Stops the container. This will not remove the container by default, but it will stop it. If the container is set to be deleted on stop (temporary) it will be deleted.

Options

Name, shorthand Description
−−log Waits until the job is executed. While waiting the job logs will be shown unless the silent mode is set.
−−recurrence=CRONTABSTRING This option can be used to create recurring jobs, jobs that are repeated over and over again until they are manually deleted. Every time the job is repeated a new job will be instantiated by copying the original recurring job and starting the copy. The option argument is a crontab style string defining the recurrence of the job. See Crontab.
−−schedule=DATETIME The job will not be executed now but it is scheduled to execute later. The datetime string is sent to the backend, so all the formats are supported that is supported by the controller.
−−timeout=SECONDS Sets the timeout for the created job. If the execution of the job is not done before the timeout counted from the start time of the job expires the job will fail. Some jobs might not support the timeout feature, the controller might ignore this value.
−−wait Waits until the job is executed. While waiting a progress bar will be shown unless the silent mode is set.
−−cloud=PROVIDER This option can be used when new container(s) created. The name of the cloud provider where the new container will be created. This command line option can also be used to filter the list of the containers when used together with one of the --list or --stat options.
−−containers=LIST A list of containers to be created or managed. The containers can be passed as command line options (suitable for simple commands) or as an option argument for this command line option. The s9s container --stop node01 and the s9s container --stop --containers=node01 commands for example are equivalent. See Create Container List.
−−image=NAME The name of the image from which the new container will be created. This option is not mandatory; when a new container is created the controller can choose an image if needed. To find out what images are supported by the registered container servers, please issue the s9s server --list-images command.
−−os-key-file=PATH The path of the SSH key to install on a new container to allow the user to log in. This command line option can be passed when a new container is created, the argument of the option should be the path of the private key stored on the controller. Although the path of the private key file is passed only the public key will be uploaded to the new container.
−−os-password=PASSWORD This command line option can be passed when creating new containers to set the password for the user that will be created on the container. Please note that some virtualization backends might not support passwords, only keys.
−−os-user=USERNAME This option may be used when creating new containers to pass the name of the user that will be created on the new container. Please note that this option is not mandatory, because the controller will create an account whose name is the same as the name of the cmon user creating the container. The public key of the cmon user will also be registered (if the user has an associated public key) so the user can actually log in.
−−servers=LIST A list of servers to work with.
−−subnet-id=ID This option can be used when new containers are created to set the subnet ID for the container. To find out what subnets are supported by the registered container servers, please issue the s9s server --list-subnets command.
−−template=NAME The name of the container template. See Container Template.
−−volumes=LIST When a new container is created this command line option can be used to pass a list of volumes that will be created for the container. See Volume List.
−−vpc-id=ID This option can be used when new containers are created to set the VPC ID for the container. To find out what VPCs are supported by the registered container servers, please issue the s9s server --list-subnets --long command.
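
For illustration, the cloud-related options above could be combined as follows. This is only a sketch: the provider name, image name and subnet ID are placeholders and must match what the registered container servers actually report (see the s9s server --list-images and --list-subnets commands referenced above):

$ s9s container \
        --create \
        --cloud=aws \
        --image=ubuntu \
        --subnet-id=subnet-0123abcd \
        --wait \
        node01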

5.2.5.1. Container List

Using the --list and --long command line options a detailed list of the containers can be printed. Here is an example of such a list:

$ s9s container --list --long
S TYPE TEMPLATE OWNER GROUP     NAME                IP ADDRESS    SERVER
- lxc  -        pipas testgroup bestw_controller    -             core1
u lxc  -        pipas testgroup dns1                192.168.0.2   core1
u lxc  ubuntu   pipas testgroup ft_containers_35698 192.168.0.228 core1
u lxc  -        pipas testgroup mqtt                192.168.0.5   core1
- lxc  -        pipas testgroup ubuntu              -             core1
u lxc  -        pipas testgroup www                 192.168.0.19  core1
Total: 6 containers, 4 running.

The list contains the following fields:

Field Description
S The abbreviated status information. This is u for a container that is up and running and - otherwise.
TYPE Shows what kind of container or virtual machine is shown in this line, i.e. the type of the software that provides the virtualization.
TEMPLATE The name of the template that is used to create the container.
OWNER The owner of the server object.
GROUP The group owner of the server object.
NAME The name of the container. This is not necessarily the hostname, this is a unique name to identify the container on the host.
IP ADDRESS The IP address of the container or the - character if the container has no IP address.
SERVER The server on which the container can be found.

5.2.5.2. Create Container List

The command line option argument is one or more containers separated by the ; character. Each container is a URL defining the container name (an alias for the container) and zero or more properties. The string container05?parent_server=core1;container06?parent_server=core2 for example defines two containers, one on one server and the other on another server.

To see what properties are supported in the controller for the containers, one may use the following command:

$ s9s metatype --list-properties --type=CmonContainer --long
ST NAME            UNIT DESCRIPTION
r- acl             -    The access control list.
r- alias           -    The name of the container.
r- architecture    -    The processor architecture.

See Property List for details.
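
For example, the container list syntax described above can be passed directly when creating containers. This is an illustrative sketch; the server names core1 and core2 must refer to registered container servers:

$ s9s container \
        --create \
        --containers="container05?parent_server=core1;container06?parent_server=core2" \
        --wait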

5.2.5.3. Container Template

Defining a template is an easy way to set a number of complex properties without enumerating them on the command line one by one. The actual interpretation of the template name is up to the virtualization backend, that is, the protocol of the container server. The lxc backend for example considers the template to be an already created container: it simply creates the new container by copying the template container, so the new container inherits everything.

The template name can also be provided as a property name for the container, so the command s9s container --create --containers="node02?template=ubuntu;node03" --log for example will create two containers, one using a template, the other using the default settings.

Note that the --template command line option is not mandatory; if omitted, suitable default values will be chosen, but if a template is provided and the template is not found the creation of the new container will fail.

5.2.5.4. Volume List

The list can contain one or more volumes separated by the ; character. Every volume consists of three properties separated by the : character: a volume name, the volume size in gigabytes and a volume type that is either “hdd” or “ssd”. The string vol1:5:hdd;vol2:10:hdd for example defines two hard-disk volumes, one of 5GByte and one of 10GByte.

For convenience, the volume name and the type can be omitted, so that automatically generated volume names are used.
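
As an illustrative sketch, a container could be created with the two volumes described above (the container name node01 is a placeholder):

$ s9s container \
        --create \
        --volumes="vol1:5:hdd;vol2:10:hdd" \
        --wait \
        node01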

Examples

Create a container with no special information; every setting will use the default values. For this, of course, at least one container server has to be pre-registered and working properly:

$ s9s container --create --wait

Relying on the default, automatically chosen container names might not be the most convenient, so here is an example that provides a container name:

$ s9s container --create --wait node01

This is equivalent to the following example that provides the container name through a command line option:

$ s9s container --create --wait --containers="node01"

5.2.6. s9s controller

View and handle controller (CMON instance), allows building a highly available cluster of CMON instances to achieve ClusterControl high availability.

Note

The CMON HA feature is still in beta.

This command can help set up CMON with the high availability feature using the following simple steps:

  1. Install a CMON Controller together with the CMON Database serving as permanent storage for the controller. CMON HA will not replicate the CMON Database, so it has to be accessible from all controllers and, if necessary, it has to provide redundancy by itself.
  2. Enable the CMON HA subsystem using the --enable-cmon-ha option on the running controller. This will create one CmonController class object. Check the object using the --list or --stat option. CMON HA is now enabled, but there is no redundancy yet; only one controller is running. The one existing controller at this stage should be a leader although there are no followers.
  3. Install additional CMON Controllers one by one and start them the usual way. The additional controllers should use the same CMON Database and should have the same configuration files. When the additional controllers are started they will find the leader in the CMON Database and ask the leader to let them join. When the join is successful one more CmonController object will be created for every joining controller.

Usage

s9s controller {command} {options}

Command

Name, shorthand Description
−−create-snapshot Creates a job that will create a controller-to-controller snapshot of the CMON HA subsystem. Creating a snapshot manually using this command line option is not necessary for CMON HA to operate; this command line option is provided for testing and repairing.
−−enable-cmon-ha Enables the CMON HA subsystem. By default CMON HA is not enabled for compatibility reasons, so this command line option is implemented to enable the controller-to-controller communication. When CMON HA is enabled, CmonController class objects will be created and used to implement the high availability features (e.g. the leader election). So if the controller has at least one CmonController object, CMON HA is enabled; if not, it is not enabled.
−−list Lists the CmonController type objects known to the controller. If CMON HA is not enabled there will be no such objects; if it is enabled one or more controllers will be listed. With the --long option a more detailed list will be shown where the state of the controllers can be checked.
−−ping Sends a ping request to the controller and prints the information received. Please note that there is another ping request for clusters, but this ping request is quite different from that one. This request does not need a cluster, it is never redirected (follower controllers will also reply to this request) and it is answered with some basic information about the CMON HA subsystem.
−−stat Prints more details about the controller objects.

Examples

Create a controller to controller snapshot of CMON HA subsystem:

$ s9s controller \
        --create-snapshot \
        --log

Enable CMON HA feature for the current CMON instance:

$ s9s controller --enable-cmon-ha

List all controllers participating in the CMON HA cluster:

$ s9s controller \
        --list \
        --long

Send ping request to the controller and print the output in JSON format, with some text filtering on the output:

$ s9s controller \
        --ping \
        --print-json \
        --json-format='status: ${controller_status}\n'

Print more details about the controller objects:

$ s9s controller --stat

5.2.7. s9s job

View jobs.

Usage

s9s job {command} {options}

Command

Name, shorthand Description
−−clone Creates a copy of a job to re-run it. The clone will have all the properties the original job had and will be executed the same way as new jobs are executed. If the --cluster-id command line option is used the given cluster will execute the job; if not, the clone will be executed on the same cluster as the original job.
−−delete Deletes the job referenced by the job ID.
−−fail Creates a job that does nothing and fails.
−−kill This command line option can be used to send a signal to a running Cmon Job in order to abort its execution. The job subsystem in the controller is not preemptive, so the job will only actually be aborted if and when the job supports aborting what it is doing.
−−list, -L Lists the jobs. See Job List.
−−log Prints the job messages of the specified job.
−−success Creates a job that does nothing and succeeds.
−−wait Waits for the specified job to end. While waiting, a progress bar will be shown unless the silent mode is set.

Options

Name, shorthand Description
NEWLY CREATED JOB
−−job-tags=LIST List of one or more strings separated by either , or ; to be added as tags to a newly created job, if a job is indeed created.
−−log Waits for the specified job to end. While waiting, the job logs will be shown unless the silent mode is set.
−−recurrence=CRONTABSTRING Creates recurring jobs, jobs that are repeated over and over again until they are manually deleted. See Crontab.
−−schedule=DATETIME The job will not be executed now but is scheduled to execute later. The datetime string is sent to the backend, so all formats supported by the controller are accepted.
−−timeout=SECONDS Sets the timeout for the created job. If the execution of the job does not finish before the timeout (counted from the start time of the job) expires, the job will fail. Some jobs might not support the timeout feature; the controller might ignore this value.
−−wait Waits for the specified job to end. While waiting, a progress bar will be shown unless the silent mode is set.
JOB RELATED OPTIONS
−−job-id=ID The job ID of the job to handle or view.
−−from=DATE&TIME Controls the start time of the period that will be printed in the job list.
−−limit=NUMBER Limits the number of jobs printed.
−−offset=NUMBER Controls the relative index of the first item printed.
−−show-aborted Turn on the job state filtering and show jobs that are in aborted state. This command line option can be used while printing job lists together with the other --show-* options.
−−show-defined Turn on the job state filtering and show jobs that are in defined state. This command line option can be used while printing job lists together with the other --show-* options.
−−show-failed Turn on the job state filtering and show jobs that are failed. This command line option can be used while printing job lists together with the other --show-* options.
−−show-finished Turn on the job state filtering and show jobs that are finished. This command line option can be used while printing job lists together with the other --show-* options.
−−show-running Turn on the job state filtering and show jobs that are running. This command line option can be used while printing job lists together with the other --show-* options.
−−show-scheduled Turn on the job state filtering and show jobs that are scheduled. This command line option can be used while printing job lists together with the other --show-* options.
−−until=DATE&TIME Controls the end time of the period that will be printed in the job list.
−−log-format=FORMATSTRING The string that controls the format of the printed log and job messages. See Log Format Variables.
−−with-tags=LIST List of one or more strings separated by either , or ; to be used as a filter when printing information about jobs. When this command line option is provided only the jobs that have any of the tags will be printed.
−−without-tags=LIST List of one or more strings separated by either , or ; to be used as a filter when printing information about jobs. When this command line option is provided the jobs that have any of the tags will not be printed.

5.2.7.1. Crontab

Every time the job is repeated a new job will be instantiated by copying the original recurring job and starting the copy. The option argument is a crontab style string defining the recurrence of the job.

The crontab string must have exactly five space separated fields as follows:

Field Value
minute 0 - 59
hour 0 - 23
day of the month 1 - 31
month 1 - 12
day of the week 0 - 7

Each field may be a simple expression or a list of simple expressions separated by a comma (,). So, to clarify: the fields are separated by spaces and each field can contain subfields separated by commas.

A simple expression is either a star (*) representing “all the possible values”, an integer number representing the given minute, hour, day or month (e.g. 5 for the fifth day of the month), or two numbers separated by a dash representing an interval (e.g. 8-16 representing every hour from 8 to 16). A simple expression can also define a “step” value, so for example */2 stands for “every other hour” and 8-16/2 stands for “every other hour between 8 and 16”.

Please check the crontab man page for more details.
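
As a sketch of the syntax described above, the following recurrence string runs a do-nothing job at minute 0 of every other hour between 8 and 16 on weekdays; it only demonstrates the crontab fields and performs no significant operation:

$ s9s job --success --recurrence="0 8-16/2 * * 1-5"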

5.2.7.2. Job List

Using the --list command line option a detailed list of jobs can be printed (the --long option results in even more details). Here is an example of such a list:

$ s9s job --list
ID CID STATE    OWNER  GROUP  CREATED             RDY  TITLE
1   0 FINISHED pipas  users  2017-04-25 14:12:31 100% Create MySQL Cluster
2   1 FINISHED system admins 03:00:15            100% Removing Old Backups
Total: 2

The list contains the following fields:

Field Description
ID The numerical ID of the job. The --job-id command line option can be used to pass such ID numbers.
CID The cluster ID. Most of the jobs are related to one specific cluster so those have a cluster ID in this field. Some of the jobs are not related to any cluster, so they are shown with cluster ID 0.
STATE The state of the job. The possible values are DEFINED, DEQUEUED, RUNNING, SCHEDULED, ABORTED, FINISHED and FAILED.
OWNER The user name of the user who owns the job.
GROUP The name of the group owner.
CREATED The date and time showing when the job was created. The format of this timestamp can be set using the --date-format command line option.
RDY A progress indicator showing how many percent of the job is done. Please note that some jobs have no estimation available, so this value remains 0% for the entire execution time.
TITLE A short, human readable description of the job.

5.2.7.3. Log Format Variables

The format string uses the % character to mark variable fields, flag characters as they are specified in the standard printf() C library functions and its own field name letters to refer to the various properties of the messages.

The %+12I format string for example has the “+12” flag characters in it with the standard meaning: the field will be 12 character wide and the + or - sign will always be printed with the number. Standard \ notation is also available, \n for example encodes a new-line character.

The properties of the message are encoded by letters. In %-5L for example, the letter L encodes the “line-number” field, so the number of the source line that produced the message will be substituted. The program supports the following fields:

Variable Description
B The base name of the source file that produced the message.
C The creation date and time that marks the exact moment when the message was created. The format of the date&time substituted can be set using the --date-format command line option.
F The name of the source file that created the message. This is similar to the B fields, but instead of the base name the entire file name will be substituted.
I The ID of the message, a numerical ID that can be used as a unique identifier for the message.
J The Job ID.
L The line number in the source file where the message was created. This property is implemented mostly for debugging purposes.
M The message text.
S The severity of the message in text format. This field can be “MESSAGE”, “WARNING” or “FAILURE”.
T The creation time of the message. This is similar to the C field, but shows only hours, minutes and seconds instead of the full date and time.
% The % character itself.
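
For example, the fields above can be combined into a custom format string; this is just an illustrative sketch printing the creation time, severity and message text of each job message:

$ s9s job \
        --log \
        --job-id=10235 \
        --log-format="%T %S %M\n"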

Examples

List jobs:

$ s9s job --list
10235 RUNNING  dba     2017-01-09 10:10:17   2% Create Galera Cluster
10233 FAILED   dba     2017-01-09 10:09:41   0% Create Galera Cluster

The s9s client will send a job that will be executed in the background by cmon. It will print out the job ID, for example: “Job with ID 57 registered.”

It is then possible to attach to the job to find out the progress:

$ s9s job --wait --job-id=57

View job log messages of job ID 10235:

$ s9s job --log  --job-id=10235

Delete the job that has the job ID 42:

$ s9s job --delete --job-id=42

Create a job that runs every 5 minutes and does nothing at all. This can be used for testing and demonstrating recurring jobs without doing any significant or dangerous operations.

$ s9s job --success --recurrence="*/5 * * * *"

Clone job ID 14, run it as a new job and see the job messages:

$ s9s job --clone \
        --job-id=14 \
        --log

Kill a running job with job ID 20:

$ s9s job --kill \
        --job-id=20
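
The job state and tag filters described in the options above can also be combined when listing jobs. This is an illustrative sketch; the tag name is a placeholder:

$ s9s job \
        --list \
        --long \
        --show-running \
        --show-scheduled \
        --with-tags="backup"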

5.2.8. s9s maintenance

View and manipulate maintenance periods.

Usage

s9s maintenance {command} {options}

Command

Name, shorthand Description
−−create Creates a new maintenance period.
−−current Prints the active maintenance for a cluster or for a host. Prints nothing if no maintenance period is active.
−−list, -L Lists the registered maintenance periods. See Maintenance List.
−−delete Deletes an existing maintenance period. The maintenance periods are identified by their UUID strings, which are by default shown in an abbreviated format. When the --full-uuid command line option is provided the full length UUID strings will be shown. Deleting a maintenance period is also possible by providing only the first few characters of the UUID, as long as these first characters are unique and sufficient to identify the maintenance period.
−−next Prints information about the very next maintenance period for a cluster or for a host. Prints nothing if no maintenance is registered to be started in the future.

Options

Name, shorthand Description
−−begin=DATETIME A string representation of the date and time when the maintenance period will start.
−−cluster-id=ID The cluster for cluster maintenance.
−−end=DATETIME A string representation of the date and time when the maintenance period will end.
−−full-uuid Print the full UUID.
−−nodes=NODELIST The nodes for the node maintenances. See Node List.
−−reason=STRING The reason for the maintenance.
−−start=DATETIME A string representation of the date and time when the maintenance period will start. This option is deprecated, please use the --begin option instead.
−−uuid=UUID The UUID to identify the maintenance period.

5.2.8.1. Maintenance List

Using the --list and --long command line options a detailed list of the registered maintenance periods can be printed:

$ s9s maintenance --list --long
ST UUID    OWNER  GROUP  START    END      HOST/CLUSTER  REASON
Ah a7e037a system admins 11:21:24 11:41:24 192.168.1.113 Rolling restart.
Total: 1

The list contains the following fields:

Field Description
ST The short status information, where at the first character position A stands for ‘active’ and - stands for ‘inactive’. At the second character position h stands for ‘host related maintenance’ and c stands for ‘cluster related maintenance’.
UUID The unique string that identifies the maintenance period. Normally only the first few characters of the UUID are shown, but if the --full-uuid command line option is provided the full length string will be printed.
OWNER The name of the owner of the given maintenance period.
GROUP The name of the group owner of the maintenance period.
START The date and time when the maintenance period starts.
END The date and time when the maintenance period expires.
HOST/CLUSTER The name of the cluster or host under maintenance.
REASON A short human readable description showing why the maintenance is required.

Examples

Create a maintenance period for PostgreSQL node 10.35.112.21, starting on 05:44:55 AM for one day:

$ s9s maintenance \
        --create \
        --nodes=10.35.112.21:5432 \
        --start=2018-05-19T05:44:55.000Z \
        --end=2018-05-20T05:44:55.000Z \
        --reason='Upgrading RAM' \
        --batch

Create a new maintenance period for 192.168.1.121 that starts tomorrow and finishes an hour later:

$ s9s maintenance --create \
        --nodes=192.168.1.121 \
        --start="$(date -d 'now + 1 day' '+%Y-%m-%d %H:%M:%S')" \
        --end="$(date -d 'now + 1 day + 1 hour' '+%Y-%m-%d %H:%M:%S')" \
        --reason="Upgrading software."

List all nodes that are under a maintenance period:

$ s9s maintenance --list --long
ST UUID    OWNER GROUP START    END      HOST/CLUSTER REASON
-h 70346c3 dba   admin 07:42:18 08:42:18 10.0.0.209   Upgrading RAM
Total: 1

Delete a maintenance period for UUID 70346c3:

$ s9s maintenance --delete --uuid=70346c3

Check if there is any ongoing maintenance period for cluster ID 1:

$ s9s maintenance --current --cluster-id=1

Check the next maintenance period schedule for node 192.168.0.227 for cluster ID 1:

$ s9s maintenance \
        --next \
        --cluster-id=1 \
        --nodes="192.168.0.227"

5.2.9. s9s metatype

Lists meta-types supported by the controller.

Usage

s9s metatype {command} {options}

Command

Name, shorthand Description
−−list, -L Lists the names of the types the controller supports. See Property List.
−−list-cluster-types Lists the cluster types the controller supports. With the --long option also lists the vendors and the versions.
−−list-properties Lists the properties the given metatype has. Use the --type command line option to pass the name of the metatype.

Options

Name, shorthand Description
−−type=TYPENAME The name of the type.

5.2.9.1. Property List

Using the --list-properties and --long command line options a detailed list of the metatype properties can be printed:

$ s9s metatype --list-properties --type=CmonUser --long
ST NAME               UNIT DESCRIPTION
rw email_address      -    The email address of the user.
rw first_name         -    The first name of the user.
rw groups             -    The list of groups for the user.
rw job_title          -    The job title of the user.
r- last_login         -    The date&time of the last successful login.

The list contains the following fields:

Field Description
ST The short status information, where at the first character position r stands for ‘readable’ and - shows that the property is not readable by the client program. At the second character position w stands for ‘writable’ and - shows that the property is not writable by the client.
NAME The name of the property.
UNIT The unit in which the given property is measured (e.g. ‘byte’). This field shows a single - character if the unit is not applicable.
DESCRIPTION The short human readable description of the property.

Examples

List the metatypes the controller supports:

$ s9s metatype --list

List a detailed list of the properties the CmonUser type has:

$ s9s metatype \
        --list-properties \
        --type=CmonUser \
        --long

List all cluster types currently managed by this controller:

$ s9s metatype \
        --list-cluster-types \
        --long

5.2.10. s9s node

View and handle nodes.

Usage

s9s node {command} {options}

Command

Name, shorthand Description
−−change-config Changes configuration values for the given node.
−−enable-binary-logging Creates a job to enable binary logging on a specific node. Not all clusters support this feature (MySQL does). Binary logging needs to be enabled in order to set up cluster-to-cluster replication.
−−list, -L Lists nodes. Defaults to all clusters.
−−list-config Lists the configuration values for the given node.
−−pull-config Copies the configuration file(s) from the node to the local computer. Use the --output-dir option to control where the files will be created.
−−restart Restarts the node. This means the process that provides the main functionality on the node (e.g. the MySQL daemon on a Galera node) will be stopped and then started again.
−−set Sets various properties of the specified node/host.
−−set-config Changes configuration values for the given node.
−−set-read-only Creates a job that sets the node to read-only mode. Please note that not all cluster types support read-only mode.
−−set-read-write Creates a job that sets the node to read-write mode if it was previously set to read-only mode. Please note that not all cluster types support read-only mode.
−−start Starts the node. This means the process that provides the main functionality on the node (e.g. the MySQL daemon on a Galera node) will be started.
−−stat Prints detailed node information. It can be used in conjunction with --graph to produce statistical data. See Graph Options.
−−stop Stops the node. This means the process that provides the main functionality on the node (e.g. the MySQL daemon on a Galera node) will be stopped.
−−unregister Unregisters the node from ClusterControl.

Options

Name, shorthand Description
−−cluster-id=ID, -i The ID of the cluster in which the node is.
−−cluster-name=NAME, -n Name of the cluster to list.
−−nodes=NODE_LIST The nodes to list or manipulate. See Node List.
−−node-format[=FORMATSTRING] The string that controls the format of the printed information about the nodes. See Node Format.
−−opt-group=GROUP The configuration option group.
−−opt-name=NAME The name of the configuration option.
−−opt-value=VALUE The value of the configuration option.
−−output-dir=DIRECTORY The directory where the output files will be created on the local computer.
−−graph=NAME The name of the graph to show. See Graph Options.
−−begin=TIMESTAMP The start time of the graph interval (the X axis).
−−end=TIMESTAMP The end time of the graph interval.
−−force Force the execution of potentially dangerous operations like restarting a master node in a MySQL Replication cluster.
−−density If this option is provided, a probability density function (or histogram) will be printed instead of a timeline. The X axis shows the measured values (e.g. MByte/s) while the Y axis shows what percentage of the measurements contains that value. If, for example, the CPU usage is between 0% and 1% for 90% of the time, the graph will show a 90% bump at the lower end.
−−properties=ASSIGNMENT One or more assignments specifying property names and values. The assignment operator is the = character (e.g. --properties='alias="newname"'); multiple assignments are separated by the semicolon ;.

5.2.10.1. Graph Options

When providing a valid graph name together with the --stat option a graph will be printed with statistical data. Currently the following graphs are available:

Option Description
cpughz The graph will show the CPU clock frequency measured in GHz.
cpuload Shows the average CPU load of the host computer.
cpusys Percent of time the CPU spent in kernel mode.
cpuidle Percent of time the CPU is idle on the host.
cpuiowait Percent of time the CPU is waiting for IO operations.
cputemp The temperature of the CPU measured in degree Celsius. Please note that to measure the CPU temperature some kernel module might be needed (e.g. it might be necessary to run sudo modprobe coretemp). On multiprocessor systems the graph might show only the first processor.
cpuuser Percent of time the CPU is running user space programs.
diskfree The amount of free disk space measured in GBytes.
diskreadspeed Disk read speed measured in MBytes/sec.
diskreadwritespeed Disk read and write speed measured in MBytes/sec.
diskwritespeed Disk write speed measured in MBytes/sec.
diskutilization The bandwidth utilization for the device in percent.
memfree The amount of free memory measured in GBytes.
memutil The memory utilization of the host measured in percent.
neterrors The number of receive and transmit errors on the network interface.
netreceivedspeed Network read speed in MByte/sec.
netreceiveerrors The number of packets received with error on the given network interface.
nettransmiterrors The number of packets failed to transmit.
netsentspeed Network write speed in MByte/sec.
netspeed Network read and write speed in MByte/sec.
sqlcommands Shows the number of SQL commands executed measured in 1/s.
sqlcommits The number of commits measured in 1/s.
sqlconnections Shows the number of SQL connections.
sqlopentables The number of open tables in any given moment.
sqlqueries The number of SQL queries in 1/s.
sqlreplicationlag Replication lag on the SQL server.
sqlslowqueries The number of slow queries in 1/s.
swapfree The size of the free swap space measured in GBytes.
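
As another illustration of the graph names above, the memory utilization of the hosts in a cluster could be graphed as follows; the cluster ID and time interval are placeholders:

$ s9s node \
        --stat \
        --cluster-id=1 \
        --begin="08:00" \
        --end="14:00" \
        --graph=memutil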

5.2.10.2. Node Format

The string that controls the format of the printed information about the nodes. When this command line option is used the specified information will be printed instead of the default columns. The format string uses the % character to mark variable fields and flag characters as they are specified in the standard printf() C library functions. The % specifiers are ended by field name letters to refer to various properties of the nodes.

The %+12i format string for example has the +12 flag characters in it with the standard meaning the field will be 12 character-wide and the + or - sign will always be printed with the number. The properties of the node are encoded by letters. In the %16D for example, the letter D encodes the “data directory” field, so the full path of the data directory on the node will be substituted. Standard \ notation is also available, \n for example encodes a new-line character.

The s9s-tools support the following fields:

Field Description
A The IP address of the node.
a Maintenance mode flag. If the node is in maintenance mode a letter M, otherwise -.
C The configuration file for the most important process on the node (e.g. the configuration file of the MySQL daemon on a Galera node).
c The total number of CPU cores in the host. Please note that this number may be affected by hyper-threading. When a computer has 2 identical CPUs, with four cores each and uses 2x hyper-threading it will count as 2x4x2 = 16.
D The data directory of the node. This is usually the data directory of the SQL server.
d The PID file on the node.
g The log file on the node.
I The numerical ID of the node.
i The total number of monitored disk devices (partitions) in the cluster.
k The total number of disk bytes found on the monitored devices in the node. This is a double precision floating point number measured in Terabytes.
L The replay location. This field currently only has valid value in PostgreSQL clusters.
l The received location. This field currently only has valid value in PostgreSQL clusters.
M A message describing the node’s status in human readable format.
m The total memory size found in the host, measured in GBytes. This value is represented by a double precision floating point number, so formatting it with precision (e.g. %6.2m) is possible. When used with the f modifier (e.g. %6.2fm) this reports the free memory, the memory that is available for allocation, used for cache or used for buffers.
N The name of the node. If the node has an alias that is used, otherwise the name of the node is used. If the node is registered using the IP address, the IP address is the name.
n The total number of monitored network interfaces in the host.
P The port on which the most important service is awaiting requests.
p The PID (process ID) on the node that presents the service (e.g. the PID of the MySQL daemon on a Galera node).
O The user name of the owner of the cluster that holds the node.
o The name and version of the operating system together with the codename.
R The role of the node (e.g. “controller”, “master”, “slave” or “none”).
r The word read-only or read-write, indicating whether the server is in read-only mode or not.
S The status of the host (e.g. CmonHostUnknown, CmonHostOnline, CmonHostOffLine, CmonHostFailed, CmonHostRecovery, CmonHostShutDown).
s The list of slaves of the given host in one string.
T The type of the node, e.g. “controller”, “galera”, “postgres”.
t The total network traffic (both received and transmitted) measured in MBytes/seconds.
U The number of physical CPUs on the host.
u The CPU usage percent found on the host.
V The version string of the most important software (e.g. the version of the PostgreSQL installed on a PostgreSQL node).
v The ID of the container/VM in “CLOUD/ID” format. The - string if no container ID is set for the node.
w The total swap space found in the host measured in GigaBytes. With the f modifier (e.g. %6.2fw) this reports the free swap space in GigaBytes.
Z The name of the CPU model. Should the host have multiple CPUs, this will return the model name of the first CPU.
% The % character itself.

Examples

List all nodes:

$ s9s node --list --long
ST  VERSION                  CID CLUSTER        HOST       PORT COMMENT
go- 10.1.22-MariaDB-1~xenial   1 MariaDB Galera 10.0.0.185 3306 Up and running
co- 1.4.1.1856                 1 MariaDB Galera 10.0.0.205 9500 Up and running
go- 10.1.22-MariaDB-1~xenial   1 MariaDB Galera 10.0.0.209 3306 Up and running
go- 10.1.22-MariaDB-1~xenial   1 MariaDB Galera 10.0.0.82  3306 Up and running
Total: 4

Print the configuration for a node:

$ s9s node --list-config --nodes=10.0.0.3
...
mysqldump   max_allowed_packet                     512M
mysqldump   user                                   backupuser
mysqldump   password                               nWC6NSm7PnnF8zQ9
xtrabackup  user                                   backupuser
xtrabackup  password                               nWC6NSm7PnnF8zQ9
MYSQLD_SAFE pid-file                               /var/lib/mysql/mysql.pid
MYSQLD_SAFE basedir                                /usr/
Total: 71

The following example shows how a node in a given cluster can be restarted. When this command is executed a new job will be created to restart the node. The command line tool will wait and show the job messages until the job is finished:

$ s9s node \
        --restart \
        --cluster-id=1 \
        --nodes=192.168.1.117 \
        --log

Change a configuration value for a PostgreSQL server:

$ s9s node \
        --change-config \
        --nodes=192.168.1.115 \
        --opt-name=log_line_prefix \
        --opt-value='%m '

Push a configuration option inside my.cnf (max_connections=500) on node 10.0.0.3:

$ s9s node \
        --change-config \
        --nodes=10.0.0.3 \
        --opt-group=mysqld \
        --opt-name=max_connections \
        --opt-value=500

Listing the Galera hosts. This can be done by filtering the list of nodes by their properties:

$ s9s node \
        --list \
        --long \
        --properties="class_name=CmonGaleraHost"

Create a set of graphs, one for each node shown in the terminal about the load on the hosts. If the terminal is wide enough the graphs will be shown side by side for a compact view:

$ s9s node \
        --stat \
        --cluster-id=1 \
        --begin="08:00" \
        --end="14:00" \
        --graph=load

A density function can also be printed to show the typical values for the given statistical data. The following example shows the typical values for the user mode CPU usage percentage:

$ s9s node \
        --stat \
        --cluster-id=2 \
        --begin=00:00 \
        --end=16:00 \
        --density \
        --graph=cpuuser

The following example shows how a custom list can be created to show some information about the CPU(s) in some specific hosts:

$ s9s node \
        --list \
        --node-format="%N %U CPU %c Cores %6.2u%% %Z\n" 192.168.1.191 192.168.1.195
192.168.1.191 2 CPU 16 Cores  22.54% Intel(R) Xeon(R) CPU L5520 @ 2.27GHz
192.168.1.195 2 CPU 16 Cores  23.12% Intel(R) Xeon(R) CPU L5520 @ 2.27GHz

The following list shows some information about the memory: the total memory and the memory available for applications to allocate (including cache and buffers together with the free memory):

$ s9s node \
        --list \
        --node-format="%4.2m GBytes %4.2fm GBytes %N\n"
16.00 GBytes 15.53 GBytes 192.168.1.191
47.16 GBytes 38.83 GBytes 192.168.1.127

Set a node to read-write mode if it was previously set to read-only mode:

$ s9s node \
        --set-read-write \
        --cluster-id=1 \
        --nodes=192.168.0.78 \
        --log

Copy configuration file(s) from a PostgreSQL server 192.168.0.232 into the local host:

$ s9s node \
        --pull-config \
        --nodes="192.168.0.232" \
        --output-dir="tmp"

5.2.11. s9s process

View processes running on nodes.

Usage

s9s process {command} {options}

Command

Name, shorthand Description
−−list, -L Lists the processes.
−−list-digests Prints statement digests together with statistical data showing how long it took them to be executed. The printed list will not contain individual SQL statements but patterns that collect multiple statements of similar form, merged into groups by their similarities.
−−list-queries Lists the queries, the internal SQL processes of the cluster.
−−top-queries Continuously shows the internal SQL processes in an interactive UI, similar to the ClusterControl UI Top Queries page.
−−top Continuously shows the processes in an interactive UI like the well-known “top” utility. Please note that if the terminal program supports it, the UI can be controlled with the mouse.

Options

Name, shorthand Description
−−cluster-id=ID The ID of the cluster to show.
−−client=PATTERN Shows only the processes that originate from clients that match the given pattern.
−−limit=N Limits the number of processes shown in the list.
−−server=PATTERN Shows only the processes that are executed by servers that match the given pattern.
−−sort-by-memory Sorts the processes by resident memory size instead of CPU usage.
−−sort-by-time Sorts the SQL queries by their runtime. The longer running queries are going to be on top.
−−update-freq=INTEGER Update frequency for screen refresh in seconds.

Examples

Continuously print aggregated view of processes (similar to top output) of all nodes for cluster ID 1:

$ s9s process --top --cluster-id=1

List aggregated view of processes (similar to ps output) of all nodes for cluster ID 1:

$ s9s process --list --cluster-id=1

Print out aggregated digested SQL statements on all MySQL nodes in cluster ID 1:

$ s9s process \
        --list-digests \
        --cluster-id=1 \
        --human-readable \
        --limit=10 \
        '*:3306'

Print an aggregated list of top database queries containing the string “INSERT” on all nodes in cluster ID 1, with a refresh rate of 1 second:

$ s9s process \
        --top-queries \
        --cluster-id=1 \
        --update-freq=1 \
        'INSERT*'

Print all database queries on cluster ID 1 that come from a client with IP address 192.168.0.127 and contain the string “INSERT”:

$ s9s process \
        --list-queries \
        --cluster-id=1 \
        --client='192.168.0.127:*' \
        'INSERT*'

Print all database queries on cluster ID 1 that are reaching the database server 192.168.0.81:

$ s9s process \
        --list-queries \
        --cluster-id=1 \
        --server='192.168.0.81:*'

5.2.12. s9s replication

Manage database replication related functions.

Note

Only applicable for supported database clusters namely MySQL/MariaDB Replication and PostgreSQL Streaming Replication.

Usage

s9s replication {command} {options}

Command

Name, shorthand Description
−−failover Takes over the role of master from a failed master.
−−list List the replication links.
−−promote Promotes a slave to become a master.
−−stage Rebuilds or stages a replication slave.
−−start Starts a replication slave previously stopped using the --stop option.
−−stop Makes the slave stop replicating. This option will create a job that does not stop the server but stops the replication on it.

Options

Name, shorthand Description
−−link-format=FORMATSTRING The format string that controls the format of the printed information about the replication links. See Link Format.
−−master=NODE The replication master.
−−replication-master=NODE This is the same as the --master option.
−−slave=NODE The replication slave.
−−replication-slave=NODE This is the same as the --slave option.
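
Examples

The following are illustrative sketches built from the options documented above; the cluster ID and node address are placeholders. List the replication links of a cluster, then promote a slave to become the new master:

$ s9s replication --list --long

$ s9s replication \
        --promote \
        --cluster-id=1 \
        --slave="192.168.0.82:3306" \
        --log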

5.2.13. s9s report

Manage operational reports.

Usage

s9s report {command} {options}

Command

Name, shorthand Description
−−cat Prints the report text to the standard output.
−−create Creates and prints a new report. Please provide the --type command line option with the report type (chosen from the template list printed with the --list-templates option) and the --cluster-id option with the cluster ID.
−−delete Deletes a report that already exists. The report ID should be used to specify which report to delete.
−−list Lists the reports that have already been created.
−−list-templates Lists the report templates showing what kind of reports can be created. This command line option is usually used together with the --long option.

Options

Name, shorthand Description
−−cluster-id=ID This command line option passes the numerical ID of the cluster.
−−report-id=ID This command line option passes the numerical ID of the report to manage. When a report is deleted (--delete option) or printed (--cat option) this option is mandatory.
−−type=NAME This command line option controls the type of the report. Use the --list-templates to list what types are available.

Examples

Lists already created reports for cluster ID 1:

$ s9s report \
        --list \
        --long \
        --cluster-id=1

Print the report ID 1 for cluster ID 1:

$ s9s report \
        --cat \
        --report-id=1 \
        --cluster-id=1

Create a database growth report for cluster ID 1:

$ s9s report \
        --create \
        --type=dbgrowth \
        --cluster-id=1

Delete an operation report with report ID 11:

$ s9s report \
        --delete \
        --report-id=11

Print the supported templates that can be used as the --type value when creating a new report:

$ s9s report \
        --list-templates \
        --long

5.2.14. s9s script

Manage and execute Advisor scripts available in the Developer Studio.

Usage

s9s script {command} {options}

Command

Name, shorthand Description
−−tree Prints the names of the scripts stored on the controller in a tree-like format.
−−execute Executes a script from a local file.
−−system Executes a shell command or an entire shell script on the nodes of the cluster, showing the output of the command as job messages. See Shell Command.

Options

Name, shorthand Description
−−cluster-id=ID The target cluster ID.
−−cluster-name=NAME, -n The target cluster name.

5.2.14.1. Shell Command

If the --shell-command command line option is provided, the option argument will be executed. If a file name is provided as a command line argument, the content of the file will be executed as a shell script. If neither is found in the command line, s9s will read its standard input and execute the lines found as a shell script on the nodes (this is the way to implement self-contained, remotely executed shell scripts using a shebang).

Please note that this will be a job and it can be blocked by other jobs running on the same cluster.

$ s9s script \
        --system \
        --log \
        --cluster-id=1 \
        --shell-command="df -h"

The command/script will be executed using the pre-configured OS user of the cluster. If that user has no superuser privileges the sudo utility can be used in the command or the script to gain superuser privileges.

By default the command/script will be executed on all nodes, all members of the cluster except the controller. This can be changed by providing a node list using the --nodes= command line option.

To execute shell commands the authenticated user has to have execute privileges on the nodes. If the execute privilege is not granted, the credentials of another Cmon user can be passed in the command line, or the privileges can be changed (see the s9s tree command for details about owners, privileges and ACLs).

$ s9s script \
        --system \
        --cmon-user=system \
        --password=mysecret \
        --log \
        --cluster-id=1 \
        --timeout=2 \
        --nodes='192.168.0.127;192.168.0.44' \
        --shell-command="df -h"

Please note that the job by default has a 10-second timeout, so if the command/script keeps running, the job will fail and the execution of the command(s) will be aborted. The timeout value can be set using the --timeout command line option.

Examples

Print the scripts available for cluster ID 1:

$ s9s script --tree --cluster-id=1

Execute a script called test_script.sh on all nodes in the cluster with ID 1:

$ s9s script \
        --system \
        --log \
        --log-format="%M\n" \
        --timeout=20 \
        --cluster-id=1 \
        test_script.sh

5.2.15. s9s server

Provisions and manages virtualization hosts to be used to host database containers.

Note

The supported virtualization platforms are LXC/LXD and Amazon Web Service EC2 instances. Docker is not supported at the moment.

5.2.15.1. LXC Containers

Handling lxc containers is a new feature added to the CMON Controller and the s9s command line tool. The basic functionality is available and tested: containers can be created, started, stopped and deleted, and even creating containers on the fly while installing clusters or cluster nodes is possible.

For lxc containers one needs a container server: a computer that has the lxc software installed and configured, and of course a proper account to access the container server from the CMON Controller. One can set up an lxc container server in two easy steps and one not so easy step:

  1. Install a Linux server and set it up so that the root user can ssh in from the Cmon Controller with a key, without a password. Creating such an access for the superuser is of course not the only way, it is just the easiest.
  2. Register the server as a container server on the CMON Controller by issuing the s9s server --register --servers="lxc://IP_ADDRESS" command. This will install the necessary software and register the server as a container server to be used later.
  3. The hard part is the network configuration on the container server. Most distributions by default have a network configuration that provides local (host-only) IP addresses for newly created containers. In order to provide public IP addresses for the containers, the container server must have some sort of bridging or NAT configured.

A possible way to configure the network for public IP is described in this blog, Converting eth0 to br0 and getting all your LXC or LXD onto your LAN.

5.2.15.2. CMON-cloud Virtualization

The cmon-cloud containers are an experimental virtualization backend currently added to the CMON Controller as a brand new feature.

Usage

s9s server {command} {options}

Command

Name, shorthand Description
−−add-acl Adds a new ACL entry to the server or modifies an existing ACL entry.
−−create Creates a new server. If this option is provided the controller will use SSH to discover the server and install the necessary software packages, modify the configuration if needed so that the server can host containers.
−−get-acl List the ACL of a server.
−−list-disks List disks found in one or more servers.
−−list-images List the images available on one or more servers. With the --long command line option a more detailed list is available.
−−list-memory List memory modules from one or more servers.
−−list-nics List network controllers from one or more servers.
−−list-partitions List partitions from multiple servers.
−−list-processors List processors from one or more servers.
−−list-subnets List all the subnets that exist on one or more servers.
−−register Registers an existing container server. If this command line option is provided the controller will register the server to be used as container server later. No software packages are installed or configuration changed.
−−start Boot up a server. This option will try to start up a server that is physically turned off (using e.g. the wake-on-lan feature).
−−stat Prints details about one or more servers.
−−stop Shuts down and power off a server. When this command line option is provided the controller will run the shutdown program on the server.
−−unregister Unregisters a container server, simply removes it from the controller.

Options

Name, shorthand Description
−−log If the s9s application created a job and this command line option is provided it will wait until the job is executed. While waiting the job logs will be shown unless the silent mode is set.
−−recurrence=CRONTABSTRING This option can be used to create recurring jobs, jobs that are repeated over and over again until they are manually deleted. Every time the job is repeated a new job will be instantiated by copying the original recurring job and starting the copy. The option argument is a crontab style string defining the recurrence of the job. See Crontab.
−−schedule=DATETIME The job will not be executed now but is scheduled to execute later. The datetime string is sent to the backend, so all formats supported by the controller are accepted.
−−timeout=SECONDS Sets the timeout for the created job. If the execution of the job does not finish before the timeout (counted from the start time of the job) expires, the job will fail. Some jobs might not support the timeout feature; the controller might ignore this value.
−−wait If the application created a job (e.g. to create a new cluster) and this command line option is provided the s9s program will wait until the job is executed. While waiting a progress bar will be shown unless the silent mode is set.
−−acl=ACLSTRING The ACL entry to set.
−−os-key-file=PATH The SSH key file to authenticate on the server. If none of the operating system authentication options are provided (--os-key-file, --os-password, --os-user) the controller will try to log in with the default settings.
−−os-password=PASSWORD The SSH password to authenticate on the server. If none of the operating system authentication options are provided (--os-key-file, --os-password, --os-user) the controller will try to log in with the default settings.
−−os-user=USERNAME The SSH username to authenticate on the server. If none of the operating system authentication options are provided (--os-key-file, --os-password, --os-user) the controller will try to log in with the default settings.
−−refresh Do not use cached data, collect information.
−−servers=LIST List of servers.

Examples

Register a virtualization host:

$ s9s server --register --servers=lxc://storage01

Check the list of virtualization hosts:

$ s9s server --list --long

Create a virtualization server with operating system username and password to be used to host containers. The controller will try to access the server using the specified credentials:

$ s9s server \
        --create \
        --os-user=testuser \
        --os-password=passw0rd \
        --servers=lxc://192.168.0.250 \
        --log
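
The resources available on a registered server can also be inspected. For example, the images and subnets referred to by the container creation options earlier can be listed like this:

$ s9s server --list-images --long

$ s9s server --list-subnets --long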

5.2.16. s9s user

Manage users.

Usage

s9s user {command} {options}

Command

Name, shorthand Description
−−add-key Registers a new public key for an existing user. After the command, the user will be able to authenticate with the private part of the registered public key, no password will be necessary.
−−add-to-group Adds the user to a group.
−−change-password Modifies the password of an existing user. The password is not a simple property, so it can not be changed using the --set option; this special command line option has to be used.
−−create Registers a new user (creates a new user account) on the controller and grants access to the ClusterControl system. The user name of the new account should be the command line argument.
−−delete Deletes existing user.
−−disable Disables the user (turn on the “disabled” flag of the user). The users that are disabled are not able to log in.
−−enable Enables the user. This will clear the “disabled” flag of the user so that the user will be able to log in again. The “suspended” flag will also be cleared, the failed login counter is set to 0 and the date and time of the last failed login is deleted, so users who are suspended for failed login attempts will also be able to log in.
−−list List the users registered for the ClusterControl controller.
−−list-groups List the user groups maintained by the ClusterControl controller.
−−list-keys Lists the public keys registered in the controller for the specified user. Please note that viewing the public keys requires special privileges; ordinary users can not view the public keys of other users.
−−password-reset Resets the password for the user using the “forgot password” email schema. This option must be used twice to change the password, once without a token to send an email about the password reset and once with the token received in the email.
−−set Changes the specified properties of the user.
−−remove-from-group Removes the user from a group.
−−set-group Sets the primary group for the specified user. The primary group is the first group the user belongs to. This option will remove the user from this primary group and add it to the group specified by the --group command line option.
−−stat Prints detailed information about the specified user(s).
−−whoami Same as --list, but only lists the current user, the user that authenticated on the controller.

Options

Name, shorthand Description
−−cmon-user, -u Username on the CMON system.
−−group=GROUPNAME Set the name of the group. For example when a new user is created this option can be used to control what will be the primary group of the new user. It is also possible to filter the users by the group name while listing them.
−−create-group If this command line option is provided and the group for the new user does not exist the group will be created together with the new user.
−−first-name=NAME Sets the first name of the user.
−−last-name=NAME Sets the last name of the user.
−−public-key-file=FILENAME The name of the file where the public key is stored. Please note that this currently only works with the --add-key option.
−−title=TITLE The title prefix (e.g. Dr.) for the user.
−−email-address=ADDRESS The email address for the user.
−−new-password=PASSWORD The new password when changing the password.
−−old-password=PASSWORD The old password when changing the password.
−−user-format[=FORMATSTRING] The string that controls the format of the printed information about the users. See User Formatting.
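
As an illustrative sketch of the password-related options above, a password could be changed like this; the user name and password values are placeholders:

$ s9s user \
        --change-password \
        --old-password="0ldp4ss" \
        --new-password="n3wp4ss" \
        myuser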

5.2.16.1. Creating the First User

To use the s9s command line tool a Cmon User Account is needed to authenticate on the Cmon Controller. These user accounts can be created using the s9s program itself, either by authenticating with a pre-existing user account or by bootstrapping the user management and creating the very first user. The following section describes the authentication and the creation of the first user in detail.

If there is a username specified either in the command line using the --cmon-user (or -u) option or in the configuration file (either the ~/.s9s/s9s.conf or the /etc/s9s.conf file using the cmon_user variable name) the program will try to authenticate with this username. Creating the very first user account is of course not possible this way. The --cmon-user option and the cmon_user variable are not for specifying which user we want to create; they set which user we want to use for the connection.

If no user name is set for the authentication and a user creation is requested, s9s will try to send the request to the controller through a named pipe. This is the only way a user account can be created without authenticating with an existing user account, and the only way the very first user can be created. Here is an example:

$ s9s user --create \
       --group=admins \
       --generate-key \
       --controller=https://192.168.1.127:9501 \
       --new-password="MyS3cr3tpass" \
       --email-address="[email protected]" \
       admin

Please consider the following:

  1. There is no --cmon-user specified because this is the first user we create and we do not have any pre-existing user account; this command line is for creating the very first user. Please check out the examples section to see how to create additional users.
  2. This is the first run, so we assume no ~/.s9s/s9s.conf configuration file exists and there is no username in it either.
  3. In the example, we create the user with the username “admin”, which is passed as the command line argument of the program.
  4. The command specifies the controller to be at https://192.168.1.127:9501. The HTTPS protocol will be used later, but to create this first user s9s will try to use SSH and sudo to access the controller’s named pipe on the specified host. For this to work, the UNIX user running this command has to have passwordless SSH and sudo set up to the remote host. If the specified host is the localhost, the user does not need SSH access, but still needs to be root or have passwordless sudo access, because the named pipe is of course not accessible to just anyone.
  5. Since the UNIX user has no s9s configuration file, one will be created, and the controller URL and the username will be stored in it under ~/.s9s/s9s.conf. The next time this user runs the program, it will use this “admin” user unless another username is set on the command line, and it will try to connect to this controller unless another controller is set on the command line.
  6. The password will be set for the user on the controller, but the password will never be stored in the configuration file.
  7. The --generate-key option is provided, so a new private/public key pair will be generated, stored in the ~/.s9s/ directory, and the public key will be registered on the controller for the new user. The next time the program runs, it will find the username in the configuration file and the private key in place for the user, and it will authenticate automatically without a password. The command line options always take precedence, so this automatic authentication is simply the default; password authentication remains available. A short example of such a follow-up run is shown after this list.
  8. The group for the new user is set to “admins”, so this user will have special privileges. It is always a good idea to create the very first user with special privileges; other users can then be created by this administrator account.
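
For instance, once the first user has been created as above, a subsequent invocation should authenticate automatically using the stored key. This is a hypothetical follow-up check; the username “admin” matches the example above:

$ s9s user --whoami
$ s9s user --stat admin

Both commands should succeed without prompting for a password, because the username, the controller URL and the private key are picked up from ~/.s9s/.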

5.2.16.2. User List

Using the --list and --long command line options, a detailed list of the users can be printed. Here is an example of such a list:

$ s9s user --list --long worf jadzia
A ID UNAME  GNAME EMAIL           REALNAME
- 11 jadzia ds9   [email protected]     Lt. Jadzia Dax
A 12 worf   ds9   [email protected] Lt. Worf
Total: 12

Please note that there are a total of 12 users defined on the system, but only two of those are printed because we filtered the list with the command line arguments.

The list contains the following fields:

Field Description
A Shows the authentication status. If this field shows the letter ‘A’, the user is authenticated with the current connection.
ID Shows the user ID, a unique numerical ID identifying the user.
UNAME The username.
GNAME The name of the primary group of the user. Every user belongs to at least one group, the primary group.
EMAIL The email address of the user.
REALNAME The real name of the user, which consists of the first name, the last name and some other parts, printed here as a single string composed of all the available components.

5.2.16.3. User Formatting

When the --user-format command line option is used, the specified information will be printed instead of the default columns. The format string uses the % character to mark variable fields, and flag characters as they are specified for the standard printf() C library functions. The ‘%’ specifiers are terminated by field name letters that refer to various properties of the users.

The %+12I format string, for example, has the +12 flag characters in it with their standard meaning: the field will be 12 characters wide and the + or - sign will always be printed with the number. The properties of the user are encoded by letters. In %16N, for example, the letter N encodes the username field, so the username of the user will be substituted. The standard \ notation is also available; \n, for example, encodes a new-line character. A short example is shown after the field table below.

The s9s-tools support the following fields:

Field Description
d The distinguished name of the user. This currently has meaning only for users originating from an LDAP server.
F The full name of the user.
f The first name of the user.
G The names of groups the given user belongs to.
I The unique numerical ID of the user.
j The job title of the user.
l The last name of the user.
M The email address of the user.
m The middle name of the user.
N The username for the user.
o The origin of the user, i.e. the place where the original instance of the user is stored. The possible values are “CmonDb” for users from the Cmon Database or “LDAP” for users from the LDAP server.
P The CDT path of the user.
t The title of the user (e.g. “Dr.”).
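
For instance, the username, email address and full name of every user can be printed with a format string like the one below (an illustrative sketch; the chosen fields and widths are arbitrary):

$ s9s user --list --user-format="%-12N %-24M %F\n"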

Examples

Create a remote s9s user and generate an SSH key for the user:

$ s9s user --create \
        --generate-key \
        --cmon-user=dba \
        --controller="https://10.0.1.12:9501"

List out all users:

$ s9s user --list --long
A ID UNAME      GNAME  EMAIL REALNAME
-  1 system     admins -     System User
-  2 nobody     nobody -     -
A  3 dba        users  -     -
-  4 remote_dba users  -     -

Register a new public key for the active user:

$ s9s user \
        --add-key \
        --public-key-file=/home/pipas/.ssh/id_rsa.pub \
        --public-key-name=The_SSH_key

Add user “myuser” into group “admins”:

$ s9s user \
        --add-to-group \
        --group=admins \
        myuser

Set a new password for user “pipas”:

$ s9s user \
        --change-password \
        --new-password="MyS3cr3tpass" \
        pipas

Create a new user called “john” under group “dba”:

$ s9s user \
        --create \
        --group=dba \
        --create-group \
        --generate-key \
        --new-password=s3cr3tP455 \
        --email-address=[email protected] \
        --first-name=John \
        --last-name=Doe \
        --batch \
        john

Delete an existing user called “mydba”:

$ s9s user \
        --delete \
        mydba

Disable user nobody:

$ s9s user \
        --cmon-user=system \
        --password=secret \
        --disable \
        nobody

Enable user nobody:

$ s9s user \
        --cmon-user=system \
        --password=secret \
        --enable \
        nobody

Reset the password for a user called “dba”. One has to first request a one-time token, which will be sent to the registered email address of the corresponding user, and then issue the actual password modification command with the --token and --new-password parameters:

$ s9s user \
        --password-reset \
        dba
$ ## check the mail inbox of the respective user
$ s9s user \
        --password-reset \
        --token="98197ee4b5584cedba88ef1f583a1258" \
        --new-password="newp455w0rd"
        dba

Set a primary group for user “dba”:

$ s9s user \
        --set-group \
        --group=admins \
        dba

5.2.17. Limitations

Currently, the s9s command line tool has a user management module that is not yet fully integrated with the ClusterControl UI and the ClusterControl Controller. For example, there is no RBAC (Role-Based Access Control) for a user (see Setup and Configuration on how to create a user). This means that any user created to be used with the s9s command line tool has full access to all clusters managed by the ClusterControl server.

Users created by the command line client will be shown in Job Messages, but it is not possible to use such a user to authenticate and log in from the UI. Thus, the users created from the command line client are all super admins.