
Webinar Replay: High Availability in ProxySQL for HA MySQL infrastructures

Krzysztof Ksiazek


Thanks to everyone who participated in our recent webinar on High Availability in ProxySQL and on how to build a solid, scalable and manageable proxy layer for highly available MySQL infrastructures.

This second joint webinar with ProxySQL creator René Cannaò attracted a lot of interest and some good questions from our audience, which we’re sharing below in this blog post along with our answers.

Building a highly available proxy layer creates additional challenges, such as how to manage multiple proxy instances, how to keep their configuration in sync, and how to handle Virtual IPs and failover. We covered these and more in this webinar with René, and demonstrated how you can make ProxySQL highly available when deploying it from ClusterControl (download & try it free).

If you missed the webinar, would like to watch it again or browse through the slides, it is available for viewing online.

Watch the webinar replay

Webinar Questions & Answers

Q.: In a MySQL master/slave pair, I am inclined to deploy ProxySQL instances directly on both master and slave hosts. In an environment of 100s of master/slave pairs, with new hosts being built all the time, I can see this as a good way to combine host / MySQL / ProxySQL master/slave pair deploys via a single Ansible playbook. Do you guys have any thoughts on this?

A.: Our only concern here is that co-locating ProxySQL with database servers can make the debugging of database performance issues harder – the proxy will add overhead for CPU and memory and MySQL may have to compete for those resources.

Additionally, we’re not really sure what you’d like to achieve by deploying ProxySQL on all database servers – where would the application connect? To one instance or to both? In the first case, you’d have to come up with a way to handle potentially hundreds of failovers – when a master goes down, you’d have to re-route traffic to the ProxySQL instance on a slave, which adds more complexity than it’s worth. The second case also creates complexity: instead of connecting to one proxy, the application would have to connect to both.

Co-locating ProxySQL on the application hosts is not much more complex in terms of configuration management than deploying it on the database hosts, yet it makes it much easier for the application to route traffic – just connect to the local ProxySQL instance over the UNIX socket and that’s all.
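
A quick way to confirm that a local application can use the socket is to check the mysql-interfaces variable on the ProxySQL admin interface (127.0.0.1:6032 by default). The socket path in the comment below is a common default used as an example and may differ in your installation.

```sql
-- Run against the ProxySQL admin interface (e.g. mysql -u admin -p -h 127.0.0.1 -P 6032).
-- If the returned value contains a socket path such as /tmp/proxysql.sock,
-- a co-located application can point its MySQL driver at that socket
-- instead of a TCP address.
SELECT variable_value
FROM global_variables
WHERE variable_name = 'mysql-interfaces';
```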

Q.: Do you recommend for multiple ProxySQL instances to talk to each other or is it preferable for config changes to rely on each ProxySQL instance detecting the same issue at the same time? For example, would you make ProxySQL01 apply config changes in proxysql_master_switchover.sh to both itself and ProxySQL02 to ensure they stay the same? (I hope this isn’t a stupid question… I’ve not yet succeeded in making this work so I thought maybe I’m missing something!)

A.: This is a very good question indeed. As long as you have scripts which ensure that the configuration stays the same on all of the ProxySQL instances – for example, having one instance apply changes to both itself and its peer – you will end up with a more consistent configuration across the whole infrastructure.
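
As a minimal sketch of what such a script could replay on every ProxySQL instance’s admin interface (for example from proxysql_master_switchover.sh), the statements below promote a new master by moving servers between hostgroups; the hostgroup numbers (10 for writes, 20 for reads) and hostnames are illustrative assumptions, not values from the webinar.

```sql
-- Apply the same topology change on each ProxySQL instance so that all
-- proxies converge to an identical configuration.
UPDATE mysql_servers SET hostgroup_id = 10 WHERE hostname = 'db2';  -- newly promoted master
UPDATE mysql_servers SET hostgroup_id = 20 WHERE hostname = 'db1';  -- demoted old master

-- Activate the change and persist it across ProxySQL restarts.
LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
```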

Q.: Sometimes I get the following warning: 2017-04-04T02:11:43.996225+02:00 Keepalived_vrrp: Process [113479] didn’t respond to SIGTERM. and the VIP was moved to another server … I can send you the complete keepalived configuration … I didn’t find a solution as to why I am getting this error/warning.

A.: This may happen from time to time. A timeout results in a failed check, which triggers a VIP failover. As to why the monitored process didn’t respond to the signal in time, that’s really hard to tell without more data. It is possible to increase the number of failed health checks required to trigger a VIP move, which minimizes the impact of such occasional timeouts.
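
For reference, those knobs live in the vrrp_script section of keepalived.conf; the snippet below is only a sketch with example values, not the questioner’s actual configuration.

```conf
vrrp_script chk_proxysql {
    script "killall -0 proxysql"   # health check: is the proxysql process alive?
    interval 2                     # run the check every 2 seconds
    fall 3                         # require 3 consecutive failures before marking the node down
    rise 2                         # require 2 consecutive successes before marking it up again
}
```

Raising fall means that a single slow or timed-out check no longer moves the VIP on its own.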

Q.: What load balancer can we use in front of ProxySQL?

A.: You can use virtually any load balancer out there, including ProxySQL itself – this is actually a topology we’d suggest. It’s better to rely on a single piece of software than to use ProxySQL plus another, redundant tool – that only means a steeper learning curve and more issues to debug.
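
If you do put ProxySQL in front of ProxySQL, the front tier simply treats the back-tier proxies as its backend servers. A minimal sketch on the front-tier instance’s admin interface could look like this – the hostnames and hostgroup number are made-up examples, and 6033 is ProxySQL’s default MySQL-facing port.

```sql
-- On the front-tier ProxySQL: register the back-tier ProxySQL instances
-- as backends in hostgroup 0.
INSERT INTO mysql_servers (hostgroup_id, hostname, port)
VALUES (0, 'proxysql-node1', 6033),
       (0, 'proxysql-node2', 6033);

LOAD MYSQL SERVERS TO RUNTIME;
SAVE MYSQL SERVERS TO DISK;
```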

Q.: When I started using ProxySQL I ran into an “access denied for MySQL user” error; it was random – what could be the cause of it?

A.: If it is random and not systematic, it may be worth investigating whether it is a bug. We strongly recommend opening an issue on GitHub.

Q.: I have tried ProxySQL, and the issue we faced was that after using ProxySQL to split reads and writes, all reads were switched to the master. How can we prevent this?

A.: This is most likely a configuration issue, and there are multiple possible reasons for it. For example, transaction_persistent may have been set to 1 while all reads were executed inside a transaction, or the query rules in mysql_query_rules may not have been configured correctly, so that all traffic was sent to the default hostgroup (the master).
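
Both settings can be inspected and corrected from the ProxySQL admin interface. The statements below are a common, minimal read/write-split sketch – hostgroup 10 for the master, hostgroup 20 for the slaves – and those numbers are example values rather than anything specific to the questioner’s setup.

```sql
-- Check whether the application user keeps all traffic (including reads)
-- on the hostgroup of the transaction it has open.
SELECT username, default_hostgroup, transaction_persistent FROM mysql_users;

-- Typical read/write split: keep SELECT ... FOR UPDATE on the writer
-- hostgroup (10), send all other SELECTs to the reader hostgroup (20);
-- anything that matches no rule goes to the user's default_hostgroup.
INSERT INTO mysql_query_rules (rule_id, active, match_digest, destination_hostgroup, apply)
VALUES (100, 1, '^SELECT.*FOR UPDATE', 10, 1),
       (200, 1, '^SELECT', 20, 1);

LOAD MYSQL QUERY RULES TO RUNTIME;
SAVE MYSQL QUERY RULES TO DISK;
```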

Q.: How can Service Discovery help me?

A.: If your infrastructure is constantly changing, tools like etcd, Zookeeper or Consul can help you track those changes and push the corresponding configuration changes to your proxies. When database clusters keep going up and down, this can significantly simplify configuration management.

Q.: In the discussion on structure, the load balancer scenario was quickly moved on from because of its single point of failure. How about having an HA load balancer that uses CNAMEs (not IPs), for example an AWS Elastic Load Balancer on TCP ports? Would that be a structure that could work well in production?

A.: As long as the load balancer is highly available, this is not a problem, because it’s not a single point of failure. ELB itself is deployed in HA mode, so having a single ELB in front of anything (database servers, a pool of ProxySQL instances) will not introduce a single point of failure.

Q.: Don’t any of the silo approaches have a single point of failure in the proxy that is fronting the silo?

A.: Indeed, although it is not a single point of failure – it’s more a set of multiple points of failure spread across the infrastructure. If we are talking about a huge infrastructure of hundreds or thousands of proxies, the loss of a very small subset of application hosts should be acceptable. If we are talking about smaller setups, a ProxySQL-per-application-host deployment should be quite manageable.

Watch the webinar replay
