Moodle is an open-source e-learning platform (aka Learning Management System) that is widely adopted by educational institutions to create and administer online courses. For larger student bodies and higher volumes of instruction, Moodle must be robust enough to serve thousands of learners, administrators, content builders and instructors simultaneously. Availability and scalability become key requirements as Moodle turns into a critical application for course providers. In this blog, we will show you how to deploy and cluster the Moodle/web, database and file-system components on multiple servers to achieve both high availability and scalability.

We are going to deploy Moodle on top of the GlusterFS clustered file system and MariaDB Galera Cluster 10. To eliminate any single point of failure, we will use three nodes to serve the application and database, while the remaining two are used for the load balancers and the ClusterControl management server. This is illustrated in the following figure:


All hosts are running on CentOS 6.5 64bit with SELinux and iptables disabled. The following is the host definition inside /etc/hosts (substitute the <...> placeholders with the actual IP addresses of your hosts):

<virtual-ip>      moodle web db virtual-ip mysql
<moodle1-ip>      moodle1 web1 db1
<moodle2-ip>      moodle2 web2 db2
<moodle3-ip>      moodle3 web3 db3
<lb1-ip>          lb1
<lb2-ip>          lb2 clustercontrol

Creating Gluster replicated volumes on the root partition is not recommended. We will use a separate disk, /dev/sdb1, on each web node as the GlusterFS storage backend, mounted as /storage. This is shown with the following mount command:

$ mount | grep storage
/dev/sdb1 on /storage type ext4 (rw)
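If the second disk has not been prepared yet, a minimal sketch of how to get it into this state (assuming an empty /dev/sdb) would be:

$ fdisk /dev/sdb                 # create a single primary partition, /dev/sdb1
$ mkfs.ext4 /dev/sdb1            # format the partition with ext4
$ mkdir -p /storage
$ echo "/dev/sdb1 /storage ext4 defaults 0 0" >> /etc/fstab
$ mount -a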

Deploying MariaDB Galera Cluster 10

** The deployment of the database cluster will be done from lb2, the ClusterControl host.

MariaDB Galera Cluster 10 will be used as the database cluster backend for Moodle. Note that you can also use any of the other Galera variants: Percona XtraDB Cluster or Galera Cluster for MySQL from Codership.

1. Go to the Galera Configurator to generate a deployment package. In the wizard, we used the following values when configuring our database cluster: 

  • Vendor: MariaDB
  • MySQL Version: 10.x
  • Infrastructure: none/on-premises
  • Operating System: RHEL6 - Redhat 6.4/Fedora/Centos 6.4/OLN 6.4/Amazon AMI
  • Number of Galera Servers: 3+1
  • OS user: root
  • ClusterControl Server:
  • Database Servers:

At the end of the wizard, a deployment package will be generated and emailed to you.

2. Log in to lb2, download the deployment package and run the deployment script:

$ wget

$ tar xvfz s9s-galera-mariadb-3.5.0-rpm.tar.gz
$ cd s9s-galera-mariadb-3.5.0-rpm/mysql/scripts/install
$ bash ./ 2>&1 | tee cc.log

3. The database deployment is automated and takes about 15 minutes. Once it completes, open a web browser and point it to the ClusterControl UI on lb2. You will now see your MariaDB Galera Cluster in the UI:


Load Balancers and Virtual IP

Since HAProxy and ClusterControl are co-located on the same server, we need to change the Apache default port to another port, for example port 8080. ClusterControl will run on port 8080 while HAProxy will take over port 80 to perform web load balancing. 

1. On lb2, open the Apache configuration file at /etc/httpd/conf/httpd.conf and change the Listen directive as follows:

Listen 8080

Restart Apache web server to apply changes:

$ service httpd restart

At this point, ClusterControl is accessible over HTTP on port 8080, or through the default HTTPS port (443).
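You can quickly verify that Apache is now listening on the new port:

$ netstat -tlnp | grep httpd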

2. Before we start to deploy the load balancers, make sure lb1 is accessible using passwordless SSH from ClusterControl/lb2. On lb2, copy the SSH key to lb1:

$ ssh-copy-id -i ~/.ssh/id_rsa root@lb1

3. Log in to ClusterControl, drill down to the database cluster and click the Add Load Balancer button. Deploy HAProxy on lb1 and lb2 similar to the screenshot below:


4. Install Keepalived on lb1 (master) and lb2 (backup) with the virtual IP:
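ClusterControl generates the Keepalived configuration for you; for reference, the resulting /etc/keepalived/keepalived.conf typically looks roughly like the sketch below (the interface name, router ID, priorities and virtual IP shown here are placeholders, not the values ClusterControl will actually write):

vrrp_script chk_haproxy {
    script "killall -0 haproxy"    # consider the node healthy as long as haproxy is running
    interval 2
}

vrrp_instance VI_1 {
    interface eth0                 # network interface carrying the virtual IP
    state MASTER                   # BACKUP on lb2
    virtual_router_id 51
    priority 101                   # use a lower priority (e.g. 100) on lb2
    virtual_ipaddress {
        192.168.1.100              # placeholder virtual IP
    }
    track_script {
        chk_haproxy
    }
}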


5. The load balancer nodes have now been installed, and are integrated with ClusterControl. You can verify this by checking ClusterControl's summary bar:


6. By default, our script will configure the MySQL reverse proxy service to listen on port 33306. We will need to add HTTP load balancing capabilities to the newly installed load balancers. On lb1 and lb2, open /etc/haproxy/haproxy.cfg and add the following lines:

frontend http-in
    bind *:80
    default_backend web_farm

backend web_farm
    # the web nodes resolve via /etc/hosts; you can also use their IP addresses directly
    server moodle1 moodle1:80 maxconn 32 check
    server moodle2 moodle2:80 maxconn 32 check
    server moodle3 moodle3:80 maxconn 32 check
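Before restarting, you can validate the syntax of the edited configuration file:

$ haproxy -c -f /etc/haproxy/haproxy.cfg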

7. Restart HAProxy to apply changes:

$ service haproxy restart

Deploying GlusterFS

* The following steps should be performed on moodle1, moodle2 and moodle3 unless specified otherwise.

1. Get the Gluster YUM repository file and save it into /etc/yum.repos.d:

$ wget -P /etc/yum.repos.d

2. Install GlusterFS:

$ yum install -y glusterfs glusterfs-fuse glusterfs-server

3. Create a directory called brick under the /storage partition:

$ mkdir /storage/brick

4. Start the gluster daemon and enable it on boot:

$ service glusterd start

$ chkconfig glusterd on

5. On moodle1, probe the other nodes:

$ gluster peer probe moodle2

$ gluster peer probe moodle3

You can verify the peer status with the following command:

$ gluster peer status

Number of Peers: 2

Hostname: moodle2
Uuid: d8c23f23-518a-48f7-9124-476c105dbe91
State: Peer in Cluster (Connected)

Hostname: moodle3
Uuid: 32a91fda-7ab8-4956-9f50-9b5ad59e0770
State: Peer in Cluster (Connected)

6. On moodle1, create a replicated volume across the probed nodes:

$ gluster volume create rep-volume replica 3 moodle1:/storage/brick moodle2:/storage/brick moodle3:/storage/brick

volume create: rep-volume: success: please start the volume to access data

7. Start the replicated volume on moodle1:

$ gluster volume start rep-volume

8. Ensure the replicated volume and processes are online:

$ gluster volume status

Status of volume: rep-volume
Gluster process                                         Port    Online  Pid
Brick moodle1:/storage/brick                            49152   Y       25856
Brick moodle2:/storage/brick                            49152   Y       21315
Brick moodle3:/storage/brick                            49152   Y       20521
NFS Server on localhost                                 2049    Y       25875
Self-heal Daemon on localhost                           N/A     Y       25880
NFS Server on moodle3                                   2049    Y       20545
Self-heal Daemon on moodle3                             N/A     Y       20546
NFS Server on moodle2                                   2049    Y       21346
Self-heal Daemon on moodle2                             N/A     Y       21351

Task Status of Volume rep-volume
There are no active volume tasks
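You can also double-check the volume definition itself; the output should report the volume type as Replicate, with all three bricks listed:

$ gluster volume info rep-volume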

9. Moodle requires a data directory (moodledata) outside of the webroot (/var/www/html), so we will mount the replicated volume on /var/www instead. Create the path:

$ mkdir -p /var/www

10. Add the following line to /etc/fstab to allow auto-mounting:

localhost:/rep-volume /var/www glusterfs defaults,_netdev 0 0

11. Mount the GlusterFS volume on /var/www:

$ mount -a
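To confirm that the replicated volume is now mounted on /var/www:

$ mount | grep rep-volume
$ df -h /var/www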

Apache and PHP

* The following steps should be performed on moodle1, moodle2 and moodle3.

1. Moodle 2.7 requires PHP 5.4. To simplify the installation, install the Remi YUM repository:

$ rpm -Uvh

2. Install all required packages with the Remi repository enabled:

$ yum install -y --enablerepo=remi httpd php php-xml php-gd php-mbstring php-soap php-intl php-mysqlnd php-pecl-zendopcache php-xmlrpc

3. Start Apache web server and enable it on boot:

$ service httpd start

$ chkconfig httpd on
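You can quickly confirm that PHP 5.4 and the required extensions are available:

$ php -v
$ php -m | grep -E 'intl|soap|xmlrpc|mysqlnd'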

Installing Moodle

The following steps should be performed on moodle1.

1. Download Moodle 2.7 and extract it under /var/www/html:

$ wget

$ tar -xzf moodle-latest-27.tgz -C /var/www/html
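Since /var/www is backed by the replicated GlusterFS volume, the extracted files should immediately be visible on the other web nodes as well. You can confirm this from moodle2 or moodle3:

$ ls /var/www/html/moodle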

2. Moodle requires a data directory outside /var/www/html. The default location would be /var/www/moodledata. Create the directory and assign the correct permissions:

$ mkdir /var/www/moodledata

$ chown -R apache:apache /var/www/html /var/www/moodledata

3. Before we proceed with the Moodle installation, we need to prepare the Moodle database. From ClusterControl, go to Manage > Schema and Users > Create Database and create a database called 'moodle'.

4. Create the database user under the Privileges tab:

5. Assign the correct privileges for the Moodle database user on the moodle database:
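If you prefer the command line, the equivalent SQL run against any of the Galera nodes would look roughly like this (the user name, password and host wildcard below are examples only; adjust them to your environment):

mysql> CREATE DATABASE moodle;
mysql> CREATE USER 'moodleuser'@'192.168.%' IDENTIFIED BY 'moodlepassword';
mysql> GRANT ALL PRIVILEGES ON moodle.* TO 'moodleuser'@'192.168.%';
mysql> FLUSH PRIVILEGES;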

6. Open a web browser and point it to Moodle on moodle1. You should see Moodle's installation page. Accept all the default values and proceed to the Database settings page. Enter the configuration details as below:

7. Proceed to the last page and let the Moodle installation process complete. After completion, you should see the admin login page:

8. Now configure Moodle to use the virtual IP. On moodle1, open /var/www/html/moodle/config.php and change the following line:

$CFG->wwwroot = '';
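For example, if the virtual IP were 192.168.1.100 (a placeholder; substitute your own), the line would read:

$CFG->wwwroot = 'http://192.168.1.100/moodle';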

From now on, you can access the load-balanced Moodle through the virtual IP.

Verifying the New Architecture

1. Check the HAProxy statistics by logging into the HAProxy admin page on lb1, port 9600. The default username/password is admin/admin. You should see some bytes in and out on the web_farm and s9s_33306_production sections:

2. Check and observe the traffic on your database cluster from the ClusterControl overview page.

Congratulations, you have now deployed a scalable Moodle infrastructure with clustering on the web, database and file system layers.
