
ClusterControl on Docker

Ashraf Sharif

(This blog was updated on June 20, 2017)

We’re excited to announce our first step towards dockerizing our products. Please welcome the official ClusterControl Docker image, available on Docker Hub. This will allow you to evaluate ClusterControl with a couple of commands:

$ docker pull severalnines/clustercontrol

The Docker image comes with ClusterControl installed and configured with all of its components, so you can immediately use it to manage and monitor your existing databases. Supported database servers/clusters:

  • Galera Cluster for MySQL
  • Percona XtraDB Cluster
  • MariaDB Galera Cluster
  • MySQL/MariaDB Replication
  • MySQL/MariaDB single instance
  • MongoDB Replica Set
  • PostgreSQL single instance

As more and more people know by now, Docker is based on the concept of so-called application containers and is much faster and more lightweight than full-stack virtual machines such as VMware or VirtualBox. It is a very nice way to isolate applications and services so they run in a completely isolated environment, and a user can launch and tear down the whole stack within seconds.

Having a Docker image for ClusterControl is convenient: it is quick to get up and running, and the setup is 100% reproducible. Docker users can now start testing ClusterControl, since we have an image that everyone can pull down and use to launch the tool.

ClusterControl Docker Image

The image is available on Docker Hub and the code is hosted in our GitHub repository. Please refer to the Docker Hub page or our GitHub repository for the latest instructions.

The image consists of ClusterControl and all of its components:

  • ClusterControl controller, CLI, cmonapi, UI and Node.js packages installed via the Severalnines repository.
  • Percona Server 5.6 installed via the Percona repository.
  • Apache 2.4 (mod_ssl and mod_rewrite configured)
  • PHP 5.4 (gd, mysqli, ldap, cli, common, curl)
  • SSH key for root user.
  • Deployment script: deploy-container.sh

The core of the automatic deployment is a shell script called “deploy-container.sh”. This script monitors a custom table inside the CMON database called “cmon.containers”. Database containers created by the run command or by an orchestration tool report and register themselves into this table. The script looks for new entries and performs the necessary actions using s9s, the ClusterControl CLI. You can monitor the progress directly from the ClusterControl UI or with the “docker logs” command.
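
To illustrate the idea, here is a minimal, hypothetical sketch of that polling logic. The actual script lives in our GitHub repository; the cmon.containers column names, credentials and the exact s9s options used below are assumptions for illustration only:

    #!/bin/bash
    # Hypothetical sketch only -- not the actual deploy-container.sh.
    # Assumed columns in cmon.containers: hostname, cluster_name.
    CLUSTER_NAME=mygalera
    INITIAL_CLUSTER_SIZE=3

    while true; do
        # Count the containers that have registered themselves for this cluster
        count=$(mysql -NBe "SELECT COUNT(*) FROM containers WHERE cluster_name='${CLUSTER_NAME}'" cmon)

        if [ "${count}" -ge "${INITIAL_CLUSTER_SIZE}" ]; then
            # Enough nodes registered -- hand the deployment over to the ClusterControl CLI
            nodes=$(mysql -NBe "SELECT GROUP_CONCAT(hostname SEPARATOR ';') FROM containers WHERE cluster_name='${CLUSTER_NAME}'" cmon)
            s9s cluster --create \
                --cluster-type=galera \
                --cluster-name="${CLUSTER_NAME}" \
                --nodes="${nodes}" \
                --vendor=percona \
                --provider-version=5.6 \
                --wait
            break
        fi
        sleep 10
    done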

New Deployment – ClusterControl and Galera Cluster on Docker

There are several ways to deploy a new database cluster stack on containers, depending on the container orchestration platform used. The currently supported methods are:

  • Docker (standalone)
  • Docker Swarm
  • Kubernetes

A new database cluster stack would usually consist of a ClusterControl server with a three-node Galera Cluster. From there, you can scale up or down according to your desired setup. We have built another Docker image called “centos-ssh” to serve as the database base image, which can be used with ClusterControl for automatic deployment. This image comes with simple bootstrapping logic that downloads the public key from the ClusterControl container (for automatic passwordless SSH setup), registers itself with ClusterControl and passes along the environment variables describing the chosen cluster setup.
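
As a rough illustration of that bootstrapping flow, here is a hypothetical sketch; the real entrypoint is in our GitHub repository, and the key URL, credentials and registration details below are assumptions rather than the actual implementation:

    #!/bin/bash
    # Hypothetical sketch of the centos-ssh bootstrap -- not the actual entrypoint.
    CC_HOST=${CC_HOST:-clustercontrol}

    # 1) Fetch the ClusterControl public key so it can SSH in without a password
    #    (the URL path is an assumption)
    mkdir -p /root/.ssh && chmod 700 /root/.ssh
    curl -s "http://${CC_HOST}/cc.pub" >> /root/.ssh/authorized_keys
    chmod 600 /root/.ssh/authorized_keys

    # 2) Register this container in cmon.containers, passing along the -e variables
    #    (credentials and table schema are assumptions)
    mysql -h "${CC_HOST}" -u cmon -p"${CMON_PASSWORD}" cmon -e \
        "INSERT INTO containers (hostname, cluster_type, cluster_name, initial_size)
         VALUES ('$(hostname -i)', '${CLUSTER_TYPE}', '${CLUSTER_NAME}', '${INITIAL_CLUSTER_SIZE}')"

    # 3) Keep sshd in the foreground so the container stays up
    exec /usr/sbin/sshd -D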

Docker (Standalone)

To run on a standalone Docker host, do the following:

  1. Run the ClusterControl container:

    $ docker run -d --name clustercontrol -p 5000:80 severalnines/clustercontrol
  2. Then, run the DB containers. Define the CC_HOST (the ClusterControl container’s IP) environment variable or simply use container linking. Assuming the ClusterControl container name is ‘clustercontrol’:

    $ docker run -d --name galera1 -p 6661:3306 --link clustercontrol:clustercontrol -e CLUSTER_TYPE=galera -e CLUSTER_NAME=mygalera -e INITIAL_CLUSTER_SIZE=3 severalnines/centos-ssh
    $ docker run -d --name galera2 -p 6662:3306 --link clustercontrol:clustercontrol -e CLUSTER_TYPE=galera -e CLUSTER_NAME=mygalera -e INITIAL_CLUSTER_SIZE=3 severalnines/centos-ssh
    $ docker run -d --name galera3 -p 6663:3306 --link clustercontrol:clustercontrol -e CLUSTER_TYPE=galera -e CLUSTER_NAME=mygalera -e INITIAL_CLUSTER_SIZE=3 severalnines/centos-ssh

ClusterControl will automatically pick up the new containers and deploy them. Once it finds that the number of registered containers is equal to or greater than INITIAL_CLUSTER_SIZE, cluster deployment begins. You can verify that with:

$ docker logs -f clustercontrol

Or, open the ClusterControl UI at http://{Docker_host}:5000/clustercontrol and look under Activity -> Jobs.

To scale up, just run new containers and ClusterControl will add them to the cluster automatically:

$ docker run -d --name galera4 -p 6664:3306 --link clustercontrol:clustercontrol -e CLUSTER_TYPE=galera -e CLUSTER_NAME=mygalera -e INITIAL_CLUSTER_SIZE=3 severalnines/centos-ssh
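
Besides the UI and the Docker logs, you can also follow the progress with the s9s CLI that ships inside the ClusterControl container, for example:

$ docker exec -it clustercontrol s9s job --list
$ docker exec -it clustercontrol s9s node --list --long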

Docker Swarm

Docker Swarm allows container deployment on multiple hosts. It manages a cluster of Docker Engines called a swarm.

Take note of some prerequisites for running ClusterControl and Galera Cluster on Docker Swarm:

  • Docker Engine version 1.12 and later.
  • Docker Swarm Mode is initialized.
  • ClusterControl must be connected to the same overlay network as the database containers.

In Docker Swarm mode, centos-ssh defaults to looking for ‘cc_clustercontrol’ as the CC_HOST. If you create the ClusterControl service with ‘cc_clustercontrol’ as the service name, you can skip defining the CC_HOST variable.

We have explained the deployment steps in this blog post.
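
As a quick illustration, the stack can be brought up on an overlay network roughly like this (the network name and the Galera service name are just examples; see the post above for the complete walkthrough):

$ docker network create --driver overlay cc_net
$ docker service create --name cc_clustercontrol --network cc_net -p 5000:80 severalnines/clustercontrol
$ docker service create --name galera --network cc_net --replicas 3 \
    -e CLUSTER_TYPE=galera -e CLUSTER_NAME=mygalera -e INITIAL_CLUSTER_SIZE=3 \
    severalnines/centos-ssh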

Kubernetes

Kubernetes is an open-source platform for automating deployment, scaling, and operations of application containers across clusters of hosts, and provides a container-centric infrastructure. In Kubernetes, centos-ssh defaults to looking for a service named “clustercontrol” as the CC_HOST. If you use another name, please set this variable accordingly.

An example YAML definition of a ClusterControl deployment on Kubernetes is available in our GitHub repository under the Kubernetes directory. To deploy a ClusterControl pod using a ReplicaSet, the recommended way is:

$ kubectl create -f cc-pv-pvc.yml
$ kubectl create -f cc-rs.yml

Kubernetes will then create the necessary PersistentVolume (PV) and PersistentVolumeClaim (PVC) by connecting to the NFS server. The deployment definition (cc-rs.yml) will then run a pod and use the created PV resources to map the data directory and cmon configuration directory for persistence. This allows the ClusterControl pod to be relocated to other Kubernetes nodes if it is rescheduled by the scheduler.
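
Once created, you can check that the volumes are bound and the pod is up (resource names will vary with your YAML definitions):

$ kubectl get pv,pvc
$ kubectl get pods
$ kubectl logs -f <clustercontrol-pod-name>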

Import Existing DB Containers into ClusterControl

If you already have a Galera Cluster running on Docker and you would like ClusterControl to manage it, you can simply run the ClusterControl container in the same Docker network as the database containers. The only requirement is to ensure the target containers have the SSH-related packages installed (openssh-server, openssh-clients), and to allow passwordless SSH from ClusterControl to the database containers. Once done, use the “Add Existing Server/Cluster” feature and the cluster will be imported into ClusterControl.

Example of importing the existing DB containers

We have a physical host, 192.168.50.130, with Docker installed, and we assume that you already have a three-node Galera Cluster running in containers on the standard Docker bridge network. We are going to import the cluster into ClusterControl, which runs in another container on the same host.
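
The database containers’ IP addresses used in the steps below (172.17.0.2 to 172.17.0.4 in this example) can be looked up from the Docker host, for instance:

$ docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' [db-container]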

Adding into ClusterControl

  1. Install the OpenSSH-related packages on the database containers, allow root login, start the SSH service and set the root password:

    $ docker exec -ti [db-container] apt-get update
    $ docker exec -ti [db-container] apt-get install -y openssh-server openssh-client
    $ docker exec -it [db-container] sed -i 's|^PermitRootLogin.*|PermitRootLogin yes|g' /etc/ssh/sshd_config
    $ docker exec -ti [db-container] service ssh start
    $ docker exec -it [db-container] passwd
  2. Start the ClusterControl container as a daemon and forward port 80 on the container to port 5000 on the host:

    $ docker run -d --name clustercontrol -p 5000:80 severalnines/clustercontrol
  3. Verify the ClusterControl container is up:

    $ docker ps | grep clustercontrol
    59134c17fe5a        severalnines/clustercontrol   "/entrypoint.sh"       2 minutes ago       Up 2 minutes        22/tcp, 3306/tcp, 9500/tcp, 9600/tcp, 9999/tcp, 0.0.0.0:5000->80/tcp   clustercontrol
  4. Open a browser, go to http://{docker_host}:5000/clustercontrol and create a default admin user and password. You should now see the ClusterControl landing page.

  5. The last step is setting up passwordless SSH to all database containers. Attach to the ClusterControl container's interactive console:

    $ docker exec -it clustercontrol /bin/bash
  6. Copy the SSH key to all database containers:

    $ ssh-copy-id 172.17.0.2
    $ ssh-copy-id 172.17.0.3
    $ ssh-copy-id 172.17.0.4
  7. Start importing the cluster into ClusterControl. Open a web browser, go to the Docker physical host's IP address with the mapped port, e.g. http://192.168.50.130:5000/clustercontrol, click “Add Existing Server/Cluster” and specify the required information.

    Ensure you get a green tick when entering the hostname or IP address, indicating that ClusterControl is able to communicate with the node. Then, click the Import button and wait until ClusterControl finishes the job. The database cluster will be listed under the ClusterControl dashboard once imported.
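
You can also confirm the import from the command line, using the s9s CLI inside the ClusterControl container:

$ docker exec -it clustercontrol s9s cluster --list --long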

Happy containerizing!
