
Benchmark of Load Balancers for MySQL/MariaDB Galera Cluster

Ashraf Sharif

When running a MariaDB Cluster or Percona XtraDB Cluster, it is common to use a load balancer to distribute client requests across multiple database nodes. Load balancing SQL requests aims to optimize the usage of the database nodes, maximize throughput, minimize response times and avoid overload of the Galera nodes. 

In this blog post, we’ll take a look at four different open source load balancers, and do a quick benchmark to compare performance:

  • HAProxy by HAProxy Technologies
  • IPVS by the Linux Virtual Server Project
  • Galera Load Balancer by Codership
  • MySQL Proxy by Oracle (alpha)

Note that there are other options out there, e.g. MaxScale from the MariaDB team, that we plan to cover in a future post.

When to Load Balance Galera Requests

Although Galera Cluster supports multi-master synchronous replication, you can safely read from and write to all database nodes provided that you comply with the following (a quick check for the primary key and storage engine requirements is sketched after the list):

  • The table you are writing to is not a hotspot table
  • All tables have an explicit primary key defined
  • All tables use the InnoDB storage engine
  • Large writesets are run in batches, for example it is better to run 100 inserts of 1,000 rows each than a single insert of 100,000 rows
  • Your application can tolerate non-sequential auto-increment values.
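
Tables that violate the primary key or storage engine requirements can be spotted up front. Below is a rough check against information_schema, assuming shell access to one of the nodes with a privileged MySQL account; the list of excluded system schemas is illustrative:

$ mysql -N -e "
  SELECT t.table_schema, t.table_name, t.engine
  FROM information_schema.tables t
  LEFT JOIN information_schema.table_constraints c
    ON  c.table_schema = t.table_schema
    AND c.table_name   = t.table_name
    AND c.constraint_type = 'PRIMARY KEY'
  WHERE t.table_type = 'BASE TABLE'
    AND t.table_schema NOT IN ('mysql', 'information_schema', 'performance_schema')
    AND (c.constraint_name IS NULL OR t.engine <> 'InnoDB')"

Any row returned points at a table that either has no explicit primary key or is not running on InnoDB.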

If the above requirements are met, you can have a fairly safe multi-node write cluster without having to split writes across multiple masters (sharding), as you would need to do in a MySQL Replication setup because of slave lag problems. Furthermore, having load balancers between the application and database layers is very convenient: the load balancers can assume that all nodes are equal, and no extra configuration such as read/write splitting or promoting a slave node to a master is required.

Note that if you run into deadlocks with Galera Cluster, you can send all writes to a single node and avoid concurrency issues across nodes. Read requests can still be load balanced across all nodes. 
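
One common way to do this with a TCP proxy such as HAProxy (covered below) is a dedicated write listener in which only one node is active and the others are marked as backups. A rough sketch, with an illustrative listener port:

# haproxy.cfg excerpt: all writes land on galera1 unless it fails its health check
listen galera_writes 0.0.0.0:3308
    mode tcp
    balance leastconn
    server galera1 192.168.50.101:3306 check
    server galera2 192.168.50.102:3306 check backup
    server galera3 192.168.50.103:3306 check backup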

Load Balancers

HAProxy

HAProxy stands for High Availability Proxy; it is an open source TCP/HTTP-based load balancer and proxying solution. It is commonly used to improve the performance and availability of a service by distributing the workload across multiple servers. Over the years it has become the de-facto open source load balancer, and is now shipped with most mainstream Linux distributions.

For this test, we used HAProxy version 1.4, deployed via ClusterControl.
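
ClusterControl generates the HAProxy configuration automatically; a minimal hand-written haproxy.cfg listen section along the lines of what we benchmarked might look like the following, assuming a TCP listener on port 3307 in front of the three Galera nodes (the port and options are illustrative):

listen mysql_galera 0.0.0.0:3307
    mode tcp
    balance roundrobin
    option tcpka
    server galera1 192.168.50.101:3306 check
    server galera2 192.168.50.102:3306 check
    server galera3 192.168.50.103:3306 check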


IP Virtual Server (IPVS)

IPVS implements transport-layer load balancing, usually called Layer 4 LAN switching, as part of the Linux kernel. IPVS is incorporated into the Linux Virtual Server (LVS), where it runs on a host and acts as a load balancer in front of a cluster of servers.

We chose Keepalived, a load balancing framework that relies on IPVS to load balance Linux-based infrastructures. Keepalived implements a set of checkers to dynamically and adaptively maintain and manage a load-balanced server pool according to its health. High availability and router failover are achieved via the VRRP protocol.

In this test, we configured Keepalived with direct routing, which provides increased performance compared to other LVS networking topologies. Direct routing allows the real servers to process and route packets directly to the requesting client rather than passing all outgoing packets through the LVS router. The following is what we defined in keepalived.conf:


global_defs {
  router_id lb1
}
vrrp_instance mysql_galera {
  interface eth0
  state MASTER
  virtual_router_id 1
  priority 101
  track_interface {
    eth0
  }
  virtual_ipaddress {
    192.168.50.120 dev eth0
  }
}
virtual_server 192.168.50.120 3306 {
  delay_loop 2
  lb_algo rr
  lb_kind DR
  protocol TCP
  real_server 192.168.50.101 3306 {
    weight 10
    TCP_CHECK {
      connect_port    3306
      connect_timeout 1
    }
  }
  real_server 192.168.50.102 3306 {
    weight 10
    TCP_CHECK {
      connect_port    3306
      connect_timeout 1
    }
  }
  real_server 192.168.50.103 3306 {
    weight 10
    TCP_CHECK {
      connect_port    3306
      connect_timeout 1
    }
  }
}
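
Since this is direct routing, each Galera node must also accept traffic for the virtual IP while staying silent on ARP for it. A minimal sketch of the per-node setup (the VIP matches the keepalived.conf above; run as root and persist the settings in whatever way your distribution prefers), plus a command to verify the IPVS table that Keepalived programs on the load balancer:

# On each Galera node (real server): bind the VIP to loopback and suppress ARP for it
ip addr add 192.168.50.120/32 dev lo
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2

# On the load balancer: list the virtual service and its real servers
ipvsadm -Ln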


Galera Load Balancer (glb)

glb is a simple TCP connection balancer built by Codership, the creators of Galera Cluster. It was inspired by Pen, but unlike Pen its functionality is limited to balancing generic TCP connections. glb is multithreaded, so it can utilize multiple CPU cores. According to Seppo from Codership, the goal with glb was to have a high-throughput load balancer that would not be a bottleneck when benchmarking Galera Cluster.

The project is active (although at the time of writing, it is not listed on Codership’s download page). Here is our /etc/sysconfig/glbd content:


LISTEN_ADDR="8010"
CONTROL_ADDR="127.0.0.1:8011"
CONTROL_FIFO="/var/run/glbd.fifo"
THREADS="4"
MAX_CONN=256
DEFAULT_TARGETS="192.168.50.101:3306:10 192.168.50.102:3306:10 192.168.50.103:3306:10"
OTHER_OPTIONS="--round-robin"
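
glbd can also be managed at runtime through the control address defined above. The exact command set depends on the glbd version, but roughly along these lines:

# Show the current routing table and per-destination statistics
echo "getinfo" | nc 127.0.0.1 8011

# Adjust a destination's weight (a weight of -1 removes it from the pool)
echo "192.168.50.103:3306:20" | nc 127.0.0.1 8011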

MySQL Proxy

When MySQL Proxy was born, it was a promising technology and attracted quite a few users. It is extensible through the Lua scripting language, which makes it a very flexible technology. MySQL Proxy does not embed an SQL parser; basic tokenization is done in Lua. Although MySQL Proxy has been in alpha status for a long time, we do find it in use in production environments.

Here is how we configured mysql-proxy.cnf:

[mysql-proxy]

daemon = true
pid-file = /var/run/mysql-proxy.pid
log-file = /var/log/mysql-proxy.log
log-level = debug
max-open-files = 1024
plugins = admin,proxy
user = mysql-proxy
event-threads = 4
proxy-address = 0.0.0.0:3307
proxy-backend-addresses = 192.168.50.101:3306,192.168.50.102:3306,192.168.50.103:3306
admin-lua-script = /usr/lib64/mysql-proxy/lua/admin.lua
admin-username = admin
admin-password = admin
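
To sanity-check that connections really are rotated across the backends, one can repeatedly connect through the proxy port and ask which node answered. A quick check, assuming the proxy host is reachable at 192.168.50.110 (a hypothetical address) and using the sbtest account from the benchmark below:

$ for i in 1 2 3 4 5 6; do
    mysql -h 192.168.50.110 -P 3307 -u sbtest -ppassword -N -e "SELECT @@hostname, @@wsrep_node_name"
  done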

Benchmarks

We used 5 virtual machines on one physical host (each with 4 vCPU/2 GB RAM/10 GB SSD) with the following roles:

  • 1 host for ClusterControl to collect monitoring data from Galera Cluster
  • 1 host for load balancers. Sysbench 0.5 is installed on this node to minimize network overhead
  • 3 hosts for MySQL Galera Cluster (5.6.16 wsrep_25.5.r4064)

The setup described above is illustrated in the following figure:

Since the load balancers are quite different and have no common configuration baseline, we used the fairest settings shared among them:

  • Number of threads = 4
  • Load balancing algorithm = Round robin
  • Maximum connections = 256

We prepared approximately one million rows of data in 12 separate tables, taking 400MB of disk space: 

$ sysbench \
--db-driver=mysql \
--mysql-table-engine=innodb \
--oltp-table-size=100000 \
--oltp-tables-count=12 \
--num-threads=4 \
--mysql-host=192.168.50.101 \
--mysql-port=3306 \
--mysql-user=sbtest \
--mysql-password=password \
--test=/usr/share/sysbench/tests/db/parallel_prepare.lua \
run

The InnoDB data should fit into the buffer pool to minimize IO overhead, so this test is expected to be CPU-bound. The following is the command line we used for the OLTP benchmarking tests:

$ sysbench \
--db-driver=mysql \
--num-threads=4 \
--max-requests=5000 \
--oltp-tables-count=12 \
--oltp-table-size=100000 \
--oltp-test-mode=complex \
--oltp-dist-type=uniform \
--test=/usr/share/sysbench/tests/db/oltp.lua \
--mysql-host= \
--mysql-port= \
--mysql-user=sbtest \
--mysql-password=password \
run

The command above was repeated 100 times against each load balancer, plus one control test as a baseline where we pointed sysbench at a single MySQL host. Sysbench is also able to connect to several MySQL hosts itself and distribute connections among them on a round-robin basis; we included this test as well.
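
A simple wrapper along these lines (the load balancer address and port are illustrative) can repeat the run and pull the throughput line out of the Sysbench output; for the Sysbench round-robin case, the three node addresses are passed directly as a comma-separated --mysql-host list on port 3306 instead:

for i in $(seq 1 100); do
  sysbench \
    --db-driver=mysql \
    --num-threads=4 \
    --max-requests=5000 \
    --oltp-tables-count=12 \
    --oltp-table-size=100000 \
    --oltp-test-mode=complex \
    --oltp-dist-type=uniform \
    --test=/usr/share/sysbench/tests/db/oltp.lua \
    --mysql-host=192.168.50.110 \
    --mysql-port=3307 \
    --mysql-user=sbtest \
    --mysql-password=password \
    run | grep "transactions:"
done

# Sysbench round robin, no load balancer in between:
#   --mysql-host=192.168.50.101,192.168.50.102,192.168.50.103 --mysql-port=3306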

Observations and Results

Observations

The physical host’s CPU constantly hit 100% throughout the test. This is not a good sign for proxy-based load balancers, since they need to compete for CPU time. A better test would be to run on bare metal servers, or to isolate the load balancer on a separate physical host.

The results obtained from this test are relevant only if you run in a virtual machine environment.


Results

We measured the total throughput (transactions per second) taken from the Sysbench result. The following chart shows the number of transactions that the database cluster can serve in one second (higher is better):

From the graph, we can see that IPVS (Keepalived) is the clear winner with the least overhead, since it runs at kernel level and mainly just routes packets; Keepalived is a userspace program that performs the health checks and manages the kernel LVS interface. HAProxy, glb and MySQL Proxy, which operate at a higher layer, perform roughly 40% slower than IPVS and 25% slower than Sysbench's own round robin, as shown in the chart below:

Unlike IPVS, which is essentially a router, proxy-based load balancers (glb, HAProxy, MySQL Proxy) operate at layer 7. They can understand the protocols of the backend servers, perform packet-level inspection and protocol routing, and are far more customizable. Proxy-based load balancers operate at the application level and are easier to configure alongside firewalls. However, they are significantly slower.

IPVS, on the other hand, is fairly hard to configure and can be confusing to deal with, especially if you are running NAT, where your topology has to be set up so that the LVS load balancer is also the default gateway. Since it is part of the kernel, upgrading LVS might mean a kernel change and a reboot. Take note that the director and the real servers cannot access the virtual service themselves; you need an outside client (another host) to access it. So LVS has an architectural impact: if you run NAT, clients and servers cannot sit in the same VLAN and subnet. It is only a packet forwarder, and therefore very fast.

If you only need to balance based on the number of connections, or your environment is CPU-bound, a layer 4 load balancer should suffice. On the other hand, if you want more robust load balancing functionality with a simpler setup, you can use a proxy-based load balancer like HAProxy. Faster doesn't mean more robust, and slower doesn't necessarily mean it is not worth it. In a future post, we plan on looking at MaxScale. Let us know if you are using any other load balancers.
