In the following posts we'll deploy a multi-master, synchronous MySQL Galera Cluster with Amazon's VPC service. We're going to create a public-facing subnet for the app/web servers and a private subnet for our database cluster.
The deployment will look similar to the diagram below.
Amazon's VPC provides a secure environment where you can choose to isolate parts of your servers, with complete control over how you deploy your virtual networking infrastructure, much like in your own datacenter.
The steps that we'll go through are as follows:
- Create a VPC with Public and Private subnets
- Define Security Groups (add rules later)
- Launch one instance for ClusterControl
- Launch three EBS optimized instances for the Galera/database nodes
- Format and mount an EBS volume (or RAID set) for each Galera node
- Create a Galera Configuration with Severalnines Galera Configurator
- Deploy and bootstrap Galera Cluster
- Add an internal load balancer
- Add a MySQL user for the internal load balancer
- Add web server instances
- Add an external load balancer
- Test the VPC database cluster setup
At the end we'll have the following instances available on the public subnet (note that your IP addresses will be different):
- 1 Elastic External Load Balancer, Elastic IP 126.96.36.199
- 2 Web servers: IP 10.0.0.28, Elastic IP 188.8.131.52 and IP 10.0.0.38, Elastic IP 184.108.40.206
- 1 ClusterControl server: IP 10.0.0.53, Elastic IP 220.127.116.11
and on the private subnet:
- 1 Elastic Internal Load Balancer, IP 10.0.1.17
- Galera Node 1, IP 10.0.1.13
- Galera Node 2, IP 10.0.1.16
- Galera Node 3, IP 10.0.1.26
In this example we'll deploy only one private subnet. If you require a more fault-tolerant setup, you can for example create two private subnets for the database cluster, one in each Availability Zone (AZ), which can protect you from single-location failures within an Amazon region.
There are a number of issues that need to be handled properly when a Galera cluster spans two regions and/or AZs (which in practice means two data centers). This will be addressed in a future post.
1. Create a VPC with Public and Private subnets
We'll use Amazon's VPC wizard to create our VPC with a public and a private subnet. Go into the Amazon VPC console dashboard, verify the region that you want your VPC to be created in and click on the 'Get started creating a VPC' button.
You will be prompted to select from a list of VPC templates; for this exercise we'll choose 'VPC with Public and Private Subnets'.
The final confirmation dialog shows you the VPC configuration that will be deployed. Using defaults your VPC will allow up to 65,531 IPs with the public and private subnet allocating up to 251 slots each. You can create the VPC with defaults or configure ranges and the subnets to your preference.
One NAT instance will also be created, which allows your EC2 instances on the private subnet to access the internet by routing traffic through it. Instances created on the public subnet route traffic through the internet gateway.
Click on 'Create VPC' and shortly after you should have 1 VPC, 2 subnets, 1 network ACL, 1 internet gateway and route tables all set up.
2. Create Security Groups
Before continuing, let's define some security groups to be used for our different EC2 instances.
This group is for the Galera database cluster, which resides on the private subnet.
TCP: 4567 (Group comm), 4568 (IST), 4444 (rsync), 3306 (MySQL)
TCP: 9200 (HTTP health check ping port) xinetd invoked shell script
TCP: 22 (ssh) ClusterControl passwordless ssh
ICMP: Echo Request/Reply. Being able to ping the host is a requirement for the deployment/bootstrap scripts
The ClusterControl instance is our monitoring and administration access point for the database cluster and resides on the public subnet. It also serves as our staging server from which we deploy and bootstrap the MySQL Galera Cluster.
TCP port 22 (ssh) SSH access to our VPC
TCP port 80 (HTTP) ClusterControl web application
TCP port 3306 (MySQL) ClusterControl's MySQL database. Below we've also restricted the source to instances in the GaleraCluster security group. This is important since the ClusterControl agent installed on the Galera nodes needs access to this port.
These are public-facing instances; in our example we'll create a couple of web servers.
TCP port 22 (ssh) Source only allows ssh connections from the ClusterControl instance.
3. Create EC2 instances
We are later going to use Severalnines' Galera Configurator to quickly deploy our MySQL Galera Cluster. The number of instances needed is 1+3: one instance dedicated to the ClusterControl package and three for the Galera database nodes.
Use Amazon's Quick Launch wizard and select one of the supported OSs for Galera (http://support.severalnines.com/entries/21589522-verified-and-supported-operating-systems).
I'm going to use an Ubuntu Server 12.04 LTS image and create 1 small instance for ClusterControl and 3 large EBS optimized instances for the Galera nodes.
In order to launch the instance in a VPC subnet you need to edit the details and make sure to enable 'Launch into a VPC'. Select the private subnet for the Galera instances and the public subnet for the ClusterControl instance.
Select the public subnet.
Select the 'ClusterControl' security group.
Save the changed details.
Naming your instances makes it easier to identify them later on.
Next allocate an elastic IP. Instances in the public subnet that do not have an elastic IP are not able to access the internet.
Allocate an elastic IP and associate it to the ClusterControl instance
Go to the 'Elastic IPs' in the EC2 dashboard and click on the 'Allocate New Address'. Make sure that the selected 'EIP used in' says VPC and not EC2.
Associate the IP with the ClusterControl instance.
You should now be able to log on to your ClusterControl instance using ssh.
Next, launch 3 large instances for the Galera nodes. Make sure to launch them as "EBS optimized instances" (500 Mbps bandwidth for large instance types) and create an EBS volume to store the database files so that they survive instance reboots. A separate EBS volume is also great for taking backups/snapshots etc.
Using an EBS optimized instance should give you an increase in throughput and a more consistent level of IOPS and latency between the EC2 instance and the EBS volume.
AWS quote: "Provisioned IOPS volumes are designed to deliver within 10% of your provisioned IOPS 99.9% of the time. So for a volume provisioned with 500 IOPS, the volume should deliver at least 450 IOPS 99.9% of the time."
Select the GaleraCluster security group.
Then add a provisioned IOPS EBS volume with the number of IOPS that you want.
Only a maximum ratio of 10:1 between IOPS and volume size (in GB) is allowed, so for example a 10GB volume supports up to 100 IOPS and a 20GB volume up to 200 IOPS.
If your database workload is very write-intensive and/or your "hot" data does not fit entirely into your InnoDB buffer pool, you can opt to create a RAID array out of a number of EBS volumes to increase the throughput for disk-bound workloads.
Since AWS charges per GB used and per provisioned IOPS, not per volume, you could easily create for example 6 EBS volumes and set up a RAID 1+0 stripe.
Save and launch the instance. Repeat for the next two Galera instances.
4. "Pre-flight check" ClusterControl and Galera instances
Before deploying and bootstrapping the Galera Cluster there are a few prerequisites that we need to take care of.
- Copy your AWS key to the ClusterControl instance
The location of the AWS key will be needed later in the Galera Configurator section. Verify that you can use ssh to connect to your Galera instances.
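A sketch of how this might look; the key file name and the ClusterControl address are placeholders, and the 10.0.1.13 node IP is from our setup:

```shell
# Copy the AWS key to the ClusterControl instance (key name/EIP are placeholders)
scp -i mykey.pem mykey.pem ubuntu@<ClusterControl EIP>:~/.ssh/
# Then, from the ClusterControl instance, verify ssh access to a Galera node
ssh -i ~/.ssh/mykey.pem ubuntu@10.0.1.13 hostname
```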
If you don't feel comfortable using the AWS key, you can easily generate your own passwordless ssh key to be used for the database cluster instead.
- Ping Galera hosts from the ClusterControl instance
This should work if you have setup the GaleraCluster security group properly allowing ICMP echo request/reply. The deployment script requires that you can ping the Galera nodes.
From your ClusterControl instance ping your Galera instances.
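For our IP addresses, that boils down to:

```shell
# Ping each Galera node from the ClusterControl instance
for ip in 10.0.1.13 10.0.1.16 10.0.1.26; do
    ping -c 3 $ip
done
```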
- hostname -i needs to resolve properly on all instances/hosts
If you get "hostname: Name or service not known" then the most painless way to fix this issue is to add the hostname to the /etc/hosts file (another way is to edit the Galera deployment scripts).
Currently the deployment script does not use 'hostname --all-ip-addresses' by default.
- Doing 'ssh 10.0.1.26 sudo ls' should not give you "sudo: unable to resolve host ip-10-0-1-26"
Once again the most painless way to resolve this is to add the hostname to the /etc/hosts file on each instance.
$ echo "10.0.1.26 ip-10-0-1-26" | sudo tee -a /etc/hosts
Make sure that these checks pass on all Galera instances.
5. Format and mount EBS volume(s) on the Galera instances
On each Galera instance, format a new ext4 (or xfs) volume that we are going to use for our database files. If you created volumes to be used for a RAID setup, then follow the instructions further down.
Look for your EBS volume
Format the volume with an ext4 filesystem
Create the mount point, this is where we'll store the MySQL data files
Add the volume to the /etc/fstab so that it survives instance reboots.
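The steps above might look like this; the device name /dev/xvdf is an assumption, so check the actual device name on your instance first:

```shell
sudo fdisk -l                      # look for the new EBS volume, e.g. /dev/xvdf
sudo mkfs.ext4 /dev/xvdf           # format it with an ext4 filesystem
sudo mkdir -p /data                # mount point for the MySQL data files
# Add the volume to /etc/fstab so that it survives instance reboots
echo "/dev/xvdf /data ext4 defaults,noatime 0 0" | sudo tee -a /etc/fstab
```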
Mount the new volume
$ sudo mount -a
$ df -h
Recommended optimized mount options for ext4 include noatime and nodiratime, which cut unnecessary metadata writes.
If, for example, you created a few EBS volumes, you can create a RAID 1+0 setup in a few steps.
6 volumes striped as RAID 1+0
Verify the new array
Add the array
(ARRAY /dev/md/ip-10-0-1-27:md0 metadata=1.2 name=ip-10-0-1-27:md0 UUID=6e339907:9d9ff219:044ae233:47b4a362)
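A sketch of the RAID steps above; the device names are assumptions, so adjust them to the volumes you actually attached:

```shell
sudo apt-get -y install mdadm
# Stripe 6 EBS volumes as RAID 1+0
sudo mdadm --create /dev/md0 --level=10 --raid-devices=6 \
    /dev/xvdf /dev/xvdg /dev/xvdh /dev/xvdi /dev/xvdj /dev/xvdk
# Verify the new array
cat /proc/mdstat
# Add the array to mdadm.conf so it is assembled on boot
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
# Format the array and add it to /etc/fstab
sudo mkfs.ext4 /dev/md0
echo "/dev/md0 /data ext4 defaults,noatime 0 0" | sudo tee -a /etc/fstab
```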
Mount the new volume
$ sudo mount -a
$ df -hT
6. Quick Disk IO Performance Test
Let's do a simple test with dd: write 8GB to /data and perform a sync once before exit.
Standard EBS volume
RAID 1+0 on 6 provisioned IOPS volumes, 10GB x 6
Read 8GB test file
Standard EBS volume
RAID 1+0 on 6 provisioned IOPS volumes, 10GB x 6
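The write and read tests above can be run like this (drop the page cache before reading so that the file comes from disk and not from memory):

```shell
# Write 8GB to /data, syncing once before dd exits
sudo dd if=/dev/zero of=/data/testfile bs=1M count=8192 conv=fdatasync
# Drop the page cache, then read the 8GB test file back
echo 3 | sudo tee /proc/sys/vm/drop_caches
sudo dd if=/data/testfile of=/dev/null bs=1M
```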
Now that we have the EC2 instances prepared, it's time to run Severalnines' Galera Configurator and generate the deployment scripts to be run from the ClusterControl instance.
Go to http://www.severalnines.com/galera-configurator/ and create a Galera deployment package. The wizard should be pretty self explanatory.
Select Amazon EC2 as the cloud provider and the OS used for your instances.
Enter the OS user as 'ubuntu' if you use Debian/Ubuntu-based distributions or 'root' for RHEL/CentOS variants.
Here you should enable 'Skip DNS Resolve' and select a system memory size close to your Galera instance type, in our case m1.large.
The InnoDB buffer pool and other specific settings can be changed to better suit your workload and requirements. Just remember to change the MySQL configuration file before running the deployment script.
Some settings, like the InnoDB log file size, are easier to change before deploying; you can change them later as well, but that currently requires some manual steps.
Enter the IP address for the ClusterControl instance and the location of your passwordless ssh key that you copied over to or generated for the ClusterControl instance.
Enter the IP addresses for the Galera instances and make sure to change the default data directory path from '/mnt/data/mysql' to '/data/mysql' or to whatever path that you have mounted as the data volume for the database files.
Finally enter an email address and click on the 'Generate Deployment Scripts'.
7. Deploy and bootstrap a Galera Cluster
When you get the deployment package unpack it on the ClusterControl instance.
The my.cnf file that is initially used by all Galera nodes is located under ~/s9s-galera-2.1.0/config. Before running the deployment script you might want to take a few minutes to change some settings in the MySQL configuration file to better suit your workload.
Deploy and bootstrap
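A sketch of the unpack-and-deploy steps; the install path inside the package is an assumption based on the s9s-galera-2.1.0 naming above, so adjust it to your generated package:

```shell
# On the ClusterControl instance: unpack the deployment package and run it
tar xvfz s9s-galera-2.1.0.tar.gz
cd s9s-galera-2.1.0/mysql/scripts/install
bash ./deploy.sh 2>&1 | tee deploy.log
```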
This could take about 10-15 minutes depending on your network.
After the deployment script completes you should now be able to view your Galera cluster by going to the web application on the ClusterControl instance, http://18.104.22.168/cmon with your web browser.
Secure the Web application
If you want to secure the ClusterControl web interface you can try below script or follow our directions here: http://support.severalnines.com/entries/20637072-password-protecting-the-web-interface
Note: This script needs more testing...
Setup Internal and External Load Balancers
Next we are going to add two AWS load balancers: one internal for our database nodes and one external for load balancing client requests across our http servers. You can opt to install HAProxy (https://github.com/severalnines/haproxy) instead, however you would then need to handle failover for it yourself.
Using Amazon's Elastic Load Balancers (ELB) we don't need to worry about that; however, ELBs are less configurable than HAProxy and lack some other features as well.
Install the Galera http health check scripts
First, on each Galera instance we are going to install an HTTP health check script that uses the Galera node's state to determine whether the node should be classified as up or down.
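A minimal sketch of such a health check, served on port 9200 through xinetd. The script name and paths here are assumptions, and the actual Severalnines script differs in detail:

```shell
sudo apt-get -y install xinetd

# A check script returning HTTP 200 when the node is Synced (wsrep_local_state=4)
cat <<'EOF' | sudo tee /usr/local/bin/mysqlchk
#!/bin/bash
STATE=$(mysql -u root -N -e "SHOW STATUS LIKE 'wsrep_local_state'" | awk '{print $2}')
if [ "$STATE" = "4" ]; then
    echo -en "HTTP/1.1 200 OK\r\n\r\nGalera node is synced.\r\n"
else
    echo -en "HTTP/1.1 503 Service Unavailable\r\n\r\nGalera node is not synced.\r\n"
fi
EOF
sudo chmod +x /usr/local/bin/mysqlchk

# Expose the script on TCP port 9200 via xinetd
cat <<'EOF' | sudo tee /etc/xinetd.d/mysqlchk
service mysqlchk
{
    disable     = no
    flags       = REUSE
    socket_type = stream
    port        = 9200
    wait        = no
    user        = nobody
    server      = /usr/local/bin/mysqlchk
}
EOF

# xinetd resolves the service name through /etc/services
echo "mysqlchk 9200/tcp # Galera health check" | sudo tee -a /etc/services
sudo service xinetd restart
```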
The next step is to set up the internal load balancer, making sure it has 3 instances in its pool and that the health check is working fine.
8. Add an Internal Load Balancer
The internal load balancer will load balance MySQL connections across our Galera nodes.
Make sure to enable 'Create an Internal load balancer'.
The HTTP health check ping port for the Galera instances is port 9200. The ping path is not relevant in our case, so just leave it at the default.
Select the private subnet for the internal load balancer. Next create a security group.
Since we're load balancing MySQL connections, you need to allow port 3306 to be open.
Select your three Galera instances for the load balancer pool.
Review and finally create your internal load balancer.
In a few minutes you should see that all of your Galera instances have been registered and are now in the load balancer's pool of healthy instances.
Note: Because the set of IP addresses associated with a LoadBalancer can change over time, you should never create an "A" record with any specific IP address. If you want to use a friendly DNS name for your LoadBalancer instead of the name generated by the Elastic Load Balancing service, you should create a CNAME record for the LoadBalancer DNS name, or use Amazon Route 53 to create a hosted zone.
This applies to the MySQL user and the hostname that is granted for the load balancer. In our example we are going to use the IP address; in a production environment a CNAME is much easier to use.
Note 2: The hostname in a MySQL grant can be at most 60 characters. The AWS autogenerated part of the internal LB name, which sits on top of your specified LB name, is often longer than 50 characters. So in our case we only had about 7 characters left for the LB name if we wanted to use the internal DNS name for grants.
9. Add a MySQL User for the Internal Load Balancer
Next we create a user for the load balancer which the web/http servers should use.
On one of the Galera nodes, install the mysql client (if not available).
Lookup the IP address for the load balancer host
Add a MySQL user
NOTE: The IP address is used instead of the internal DNS name, since the DNS name is longer than 60 characters...
Verify from the ClusterControl node that the load balancer works.
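Put together, these steps might look like this; the LB DNS name is a placeholder, and the internal LB IP (10.0.1.17) and password are examples from our setup:

```shell
# On one of the Galera nodes: install the client and look up the LB's IP
sudo apt-get -y install mysql-client
host <internal LB DNS name>
# Grant the sbtest user access from the internal load balancer's IP
mysql -u root -e "GRANT ALL ON sbtest.* TO 'sbtest'@'10.0.1.17' IDENTIFIED BY 'password';"
# From the ClusterControl node: verify that connections work through the LB
mysql -h 10.0.1.17 -u sbtest -ppassword -e "SELECT @@hostname;"
```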
10. Add Web Server Instances
Before creating the external load balancer, let's create one or two web servers to be used in the external load balancer's pool.
Allocate elastic IPs and associate them with the instances so that they can reach the internet.
Install Apache with a MySQL PHP driver to test the DB connection.
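On Ubuntu 12.04 this could be done with (package names are the 12.04-era ones):

```shell
sudo apt-get update
sudo apt-get -y install apache2 php5 php5-mysql
sudo service apache2 restart
```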
In the next and final post we'll create a simple php test file on each web instance to verify that the external load balancer, web servers and database cluster are working properly.
11. Add an External Load Balancer
Create an external load balancer that will be the "single point" of access for your users.
Make sure the load balancer is created on the correct public subnet.
Leave health check as is.
Select the public subnet.
Select the previously created 'Web' security group.
Select your web servers/instances for the load balancer's pool.
Review the configuration and create the external load balancer.
In a few minutes you should see that your web instances are registered and marked as healthy in the load balancer's pool.
sysbench is a popular system performance benchmark tool; its database OLTP benchmark module is widely used among MySQL users. We are going to use it to create the default OLTP schema and run a light test.
The purpose is not really to benchmark our newly deployed Galera cluster, but to get some test data that we can pull out from the web servers we set up previously.
Prepare sysbench schema and table
Logon to your ClusterControl instance
Install the MySQL client (if you don't have it)
Create the 'sbtest' database schema, and test that the internal MySQL load balancer user works by going via the internal load balancer.
Prepare the sysbench test table by inserting 1M records.
Do a quick test run
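The steps above might look like this; the syntax follows the older sysbench 0.4 CLI, and the credentials and internal LB IP (10.0.1.17) are the ones from our setup:

```shell
sudo apt-get -y install sysbench mysql-client
# Create the schema, going through the internal load balancer
mysql -h 10.0.1.17 -u sbtest -ppassword -e "CREATE DATABASE IF NOT EXISTS sbtest;"
# Prepare the sysbench test table with 1M records
sysbench --test=oltp --mysql-host=10.0.1.17 --mysql-user=sbtest \
    --mysql-password=password --oltp-table-size=1000000 prepare
# Quick test run
sysbench --test=oltp --mysql-host=10.0.1.17 --mysql-user=sbtest \
    --mysql-password=password --max-time=60 --num-threads=8 run
```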
Monitor the Galera cluster performance by going to http://<ClusterControl EIP>/cmon
12. Test the VPC database cluster setup
Check that the MySQL php driver is properly installed on your web instances.
or browse to the EIP of the web instance
Simple php select
On each web server instance add a sbtest.php file
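A hedged sketch of sbtest.php; the credentials and internal LB IP (10.0.1.17) match the earlier grant, so adjust them to your own setup, and copy the resulting file into Apache's document root (/var/www on Ubuntu 12.04):

```shell
cat <<'EOF' > sbtest.php
<?php
// Connect through the internal load balancer
$con = mysqli_connect("10.0.1.17", "sbtest", "password", "sbtest");
if (!$con) {
    die("Could not connect: " . mysqli_connect_error());
}
// Pull a few rows out of the sysbench test table
$result = mysqli_query($con, "SELECT id, k FROM sbtest LIMIT 10");
while ($row = mysqli_fetch_assoc($result)) {
    echo $row["id"] . " " . $row["k"] . "<br/>";
}
mysqli_close($con);
?>
EOF
```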
Verify that the page loads, either with curl/wget or by browsing to the EIP of the web instances, and finally test loading the page through the external load balancer.
and/or point your web browser to http://ext-lb-153561883.ap-northeast-1.elb.amazonaws.com/sbtest.php