Severalnines Tools: Docker, Puppet, Chef, Vagrant and More

Ashraf Sharif

ClusterControl integrates with a number of popular third-party tools for deployment and monitoring – from Docker images to Puppet modules, Chef cookbooks, Vagrant images, and Nagios/Zabbix/Syslog/PagerDuty plugins. In this blog, we’ll have a quick look at the latest available resources.

Docker

From v1.2.11, ClusterControl no longer has to run on the same operating system as the database servers it manages. Previously, ClusterControl on a Red Hat system could only monitor database nodes also running Red Hat, and the same applied to Debian/Ubuntu. This is no longer the case.

We have rebuilt our Docker container image on a single operating system, CentOS 6 with the latest minor version. The deployment instructions are now much simpler, as you can see on the Docker Hub page, and the other OS-specific containers have been removed from the list. Deploying ClusterControl on Docker is now as simple as:

$ docker pull severalnines/clustercontrol
$ docker run -d --name clustercontrol -p 5000:80 severalnines/clustercontrol
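
A quick way to confirm the container is up is to check docker ps; with the port mapping above, the web UI should then be reachable at http://localhost:5000/clustercontrol (adjust accordingly if you map a different host port):

$ docker ps --filter name=clustercontrol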

Next, set up SSH keys from ClusterControl to all target containers/nodes and start adding your existing DB infrastructure into ClusterControl. For details, check out the Docker Registry page.
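
As a rough sketch of that key exchange (the container name matches the run command above, while 192.168.1.101 is just a placeholder for one of your database nodes), it could look like this:

$ docker exec -it clustercontrol bash
# inside the container: generate a key pair for root if one does not already exist
$ ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa
# copy the public key to each target container/node; if ssh-copy-id is not available, append it to authorized_keys manually
$ ssh-copy-id root@192.168.1.101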

Puppet Module & Chef Cookbooks

Previously, the Puppet module and Chef Cookbooks for ClusterControl were built with “add existing database cluster” in mind. The idea was that you already had a running database cluster deployed using Puppet/Chef, and wanted to manage it using ClusterControl.

We’ve now updated the cookbooks so that you install ClusterControl first and then use it to deploy your databases and clusters.

Note that, once you have deployed ClusterControl, it is also possible to manage an existing database setup from the UI.

We have also streamlined the installation method of these tools to follow our installer script, install-cc (as shown on the Getting Started page). The module/cookbooks will set up /etc/cmon.cnf with a minimal configuration, and subsequent cluster configuration files will reside under the /etc/cmon.d/ directory. This new version also introduces support for PostgreSQL instances.
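
As an indicative example, once ClusterControl has deployed its first cluster the controller will hold the minimal /etc/cmon.cnf plus one file per cluster under /etc/cmon.d/; the file name below is illustrative, as it follows the internal cluster ID:

$ ls /etc/cmon.d/
cmon_1.cnf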

An example Puppet host definition for the ClusterControl node is now as simple as this:

node "clustercontrol.local" {
   class { 'clustercontrol':
           is_controller => true,
           api_token => 'b7e515255db703c659677a66c4a17952515dbaf5'
   }
}
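
To try it out, install the module from Puppet Forge and let the agent apply the catalog on the ClusterControl node. The module name below assumes it is published under the severalnines namespace, so verify the exact name on Puppet Forge:

$ puppet module install severalnines-clustercontrol
$ puppet agent -t    # run this on clustercontrol.local to apply the new definition immediately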

The same goes for the Chef cookbooks; the simplest data bag for the ClusterControl node is now:

{
   "id" : "config",
   "clustercontrol_api_token" : "92ee6e2368df29ae52fba0e2aad2b28240e6908b"
}
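
To upload this to your Chef server, save it as a JSON file and load it as a data bag item. The commands below assume the cookbook reads a data bag named ‘clustercontrol’ and that the file above is saved as config.json:

$ knife data bag create clustercontrol
$ knife data bag from file clustercontrol config.json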

For more details, please refer to Puppet Forge or Chef Supermarket.

Vagrant

Our Vagrantfile deploys four instances on the VirtualBox platform: three for DB nodes plus one for ClusterControl. It installs the latest released ClusterControl version on CentOS/Red Hat or Debian/Ubuntu based distributions.

Compared to the previous version of our Vagrantfile, described in this blog post, we no longer depend on the custom Vagrant boxes we had built ourselves. The new Vagrantfile performs the following actions when firing up new instances:

  1. Download a base image from HashiCorp.
  2. Fire up 4 virtual machines from the downloaded base image.
  3. Add port forwarding for ClusterControl UI.
  4. Install the latest ClusterControl version on 10.10.10.10.
  5. Set up SSH key on all virtual machines (ClusterControl + DB nodes).

The default memory allocated to each instance is 768MB. The default MySQL root password on the ClusterControl host is ‘root123’. The default IP addresses of the virtual machines are:

  • 10.10.10.10 – ClusterControl
  • 10.10.10.11 – DB #1
  • 10.10.10.12 – DB #2
  • 10.10.10.13 – DB #3

You can then use the available DB nodes to create a database cluster or single DB instances.

CentOS

This launches one VirtualBox instance for ClusterControl and three instances for DB nodes, all running on the CentOS 7.1 base image:

$ git clone https://github.com/severalnines/vagrant
$ vagrant box add bento/centos-7.1
$ cd vagrant/clustercontrol/centos
$ vagrant up

Ubuntu

This launches one VirtualBox instance for ClusterControl and three instances for DB nodes, all running on the Ubuntu 14.04 base image:

$ git clone https://github.com/severalnines/vagrant
$ vagrant box add ubuntu/trusty64
$ cd vagrant/clustercontrol/ubuntu
$ vagrant up
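
When vagrant up finishes, a quick vagrant status from the same directory confirms that all four machines are running. The machine names in the output below are only illustrative; they depend on how the machines are defined in the Vagrantfile:

$ vagrant status
Current machine states:

vm1                       running (virtualbox)
vm2                       running (virtualbox)
vm3                       running (virtualbox)
vm4                       running (virtualbox)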

Once the virtual machines are up, set up passwordless SSH access from the ClusterControl node to all DB nodes. Start by logging into the ClusterControl node (vm1):

$ vagrant ssh vm1

Copy over the root user’s public SSH key to the DB nodes (the default password for the vagrant user is ‘vagrant’):

[vagrant@n1 ~]$ for h in 10.10.10.11 10.10.10.12 10.10.10.13; do sudo cat /root/.ssh/id_rsa.pub | ssh $h "sudo mkdir -p /root/.ssh && sudo tee -a /root/.ssh/authorized_keys && sudo chmod 700 /root/.ssh && sudo chmod 600 /root/.ssh/authorized_keys"; done;
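
To verify that root on the ClusterControl node can now reach every DB node without a password, a loop like the following should print each hostname without prompting (this check is just a suggestion, not part of the original steps):

[vagrant@n1 ~]$ for h in 10.10.10.11 10.10.10.12 10.10.10.13; do sudo ssh -o StrictHostKeyChecking=no root@$h hostname; done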

Open your web browser at http://localhost:8080/clustercontrol and create the default ClusterControl admin user. You can now create your databases and clusters on the available DB nodes by clicking on ‘Create Database Cluster’ or ‘Create Database Node’. Specify root as the SSH user and /root/.ssh/id_rsa as the SSH key in the dialog, and you are good to go.

Database Advisors – s9s-advisor-bundle

Advisors in ClusterControl are powerful constructs; they provide specific advice on how to address issues in areas such as performance, security, log management, configuration and storage space. They can be anything from simple configuration advice and warnings on thresholds to more complex rules for predictions, or cluster-wide automation tasks based on the state of your servers or databases. In general, advisors perform more detailed analysis and produce more comprehensive recommendations than alerts.

With ClusterControl, we ship a set of basic advisors that are open source. These include rules and alerts on security settings, system checks (NUMA, disk, CPU), queries, InnoDB, connections, Performance Schema, Galera configuration, NDB memory usage, and so on. The advisors can be downloaded from GitHub. Through ClusterControl Developer Studio, you can create new ones. It is also easy to import bundles written by our partners or users, or export your own for others to try out.

To build a bundle, navigate to the directory containing the create_bundle.sh script and run it with the name of the directory that holds your advisors:

$ cd /home/user
$ ./create_bundle.sh dirname

This will create a file named dirname.tar.gz. To import it into Developer Studio, go to ClusterControl > Manage > Developer Studio > Import. You will be prompted to confirm whether you want to overwrite the existing files.
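
If you want to inspect the bundle before importing it, listing the archive contents is enough; the file names shown will simply be whatever advisors were in your directory:

$ tar -tzf dirname.tar.gz    # lists the advisor files packed into the bundle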

For more information, check out the GitHub repository here. Full documentation on the ClusterControl DSL is available on the documentation page.

clustercheck-iptables

Are you using a TCP load balancer in front of a Galera cluster, and does it have limited health check capabilities? This health check script allows your favorite TCP load balancer, such as F5, nginx, balance, pen and others, to correctly distribute database connections to the different nodes in a Galera cluster.
The idea is simple: if the Galera node is healthy, it is reachable on the mirror port (a redirection port to the real port) on the DB node. The TCP load balancer then just needs to forward incoming MySQL connections to any available mirror port on the backend.

The simplest way to get it running on each DB node:

$ git clone https://github.com/ashraf-s9s/clustercheck-iptables
$ cp clustercheck-iptables/mysqlchk_iptables /usr/local/sbin/
$ mysqlchk_iptables -d --user=check --password=checkpassword
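
Conceptually, while the node reports healthy the script keeps a NAT redirect from the mirror port to the real MySQL port in place. A rule similar to the one below should then be visible; 3308 and 3306 are assumed defaults here, so check the repository for the actual values:

$ iptables -t nat -L PREROUTING -n | grep 3308
REDIRECT   tcp  --  0.0.0.0/0    0.0.0.0/0    tcp dpt:3308 redir ports 3306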

Details on this are explained on this GitHub repository page and in this blog post. We also recorded an asciinema screencast showing the script in action.

ClusterControl Plugins

We have made a number of improvements to the ClusterControl plugins. They now support the new CMON RPC interface, which became available in ClusterControl v1.2.10. All of the plugins are compatible with 1.2.11 as well.

The following ClusterControl plugins are available in the s9s-admin repository:

  • nagios – pull database cluster status and alarms from ClusterControl
  • pagerduty – push ClusterControl alarms to PagerDuty
  • syslog – push ClusterControl alarms to syslog
  • zabbix – pull database cluster status, backup status and alarms from ClusterControl

Note that the Nagios and Zabbix plugins have only been tested with the latest versions of the respective products.
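
To get the plugins, clone the s9s-admin repository and pick the directory of the plugin you need; the listing step below is only to show the idea, as the exact layout may differ:

$ git clone https://github.com/severalnines/s9s-admin
$ ls s9s-admin/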

You can find a list of active tools on the Severalnines Tools page. We highly recommend keeping up to date with the latest release of ClusterControl to get the most out of these tools and plugins.
