
Tips for Reducing Production Database Infrastructure Costs

Ashraf Sharif

The database tier is one of the most important layers in a system architecture. Because it is stateful, it is harder to scale than the other tiers and must be set up correctly from the beginning. If growth is exponential, an early decision can come back to haunt you in the form of an outrageous total cost of ownership (TCO), which could inhibit database scaling and eventually affect business growth.

In this blog post, we are going to look into some tips on how to reduce the overall TCO of our production database infrastructure.

Use Open-Source Software & Tools

Using open-source software is the very first step to lowering your database infrastructure cost. Almost every commercial software product on the market has an equivalent in the open-source world. The most flexible and cost-effective way to optimize your database strategy is to use the right tool for the right job.

It is possible to build the whole database tier with open-source software and tools, for example:

Infrastructure: OpenStack, CloudStack

Hypervisor: VirtualBox, KVM, Xen, QEMU

Firewall: pfSense, OPNsense, Untangle, Simplewall

Containerization: Docker, rkt, LXC, OpenVZ

Operating system: Ubuntu Server, CentOS, Debian, CoreOS

Relational DBMS: MySQL, MariaDB, PostgreSQL, Hive, SQLite

Document-based DBMS: MongoDB, Couchbase, CouchDB

Column-based DBMS: Cassandra, ClickHouse, HBase

Key-value DBMS: Redis, Memcached

Time-series DBMS: InfluxDB, OpenTSDB, Prometheus, TimescaleDB

Database backup tool: Percona XtraBackup, MariaDB Backup, mydumper, pgBackRest

Database monitoring tool: PMM, Monyog, Zabbix, Nagios, Cacti, Zenoss, Munin

Database management tool: phpMyAdmin, HeidiSQL, pgAdmin, DBeaver

Database load balancer: ProxySQL, HAProxy, MySQL Router, PgBouncer, Pgpool-II, MaxScale

Topology manager: Orchestrator, MaxScale, MHA, mysqlrpladmin

Configuration management tool: Ansible, Puppet, Chef, Salt

Keyring server: Vault, CyberArk Conjur, Keywhiz

Service discovery: etcd, Consul, ZooKeeper

ETL tools: Talend, Kettle, Jaspersoft

As listed above, there is a plethora of open-source software and tools in various categories that you can choose from. Although the software is available ‘for free’, many offerings come with a dual licensing model – community or commercial – where the latter includes extended features and technical support.

There are also free companion and helper tools, created and maintained as open-source projects, which can improve the usability, efficiency, availability and productivity of a product. For example, for MySQL you have phpMyAdmin, Percona XtraBackup, Orchestrator, ProxySQL and gh-ost, amongst many others. For PostgreSQL there are, for example, Slony-I, PgBouncer, pgAdmin and pgBackRest. All of these tools are free to use and community-driven.

Using open-source software also frees us from vendor lock-in, making us independent of any single vendor for products and services. We are free to switch to other vendors without substantial switching costs.

Run on Virtual Machines or Containers

Hardware virtualization allows us to make use of all the resources available in a server. Despite the performance overhead caused by guests sharing physical resources, it is a cheaper alternative for running multiple instances simultaneously without the cost of multiple physical servers. It is also easier to manage and reusable for different purposes, such as testing how well our application and database communicate and scale across multiple hosts.

Running your production database on bare-metal servers is still the best option if performance matters most. That said, the overhead of hardware virtualization can usually be minimized by properly isolating the guests, distributing load fairly, and allocating sufficient resources to avoid starvation.

Containers are better placed, at least theoretically, to achieve a lower total cost of ownership (TCO) than traditional hardware virtualization. Containers are operating system-level virtualization, so multiple containers can share one OS kernel, whereas hardware virtualization uses a hypervisor to create virtual machines, each with its own operating system. If you are running virtual machines that all use the same guest operating system, that is a good justification to use containers instead: you can pack more onto a server running containers on one version of an OS than onto a server running a hypervisor with multiple copies of an OS.

For databases, almost all popular DBMS container images are available for free on Docker Hub.

There are also tons of articles and guidelines on how to run your open source database on Docker containers, for example this one which I like (because I wrote it! 🙂 ), MySQL Docker Containers: Understanding the Basics.
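As a quick illustration, the sketch below composes the `docker run` command for the official MySQL image on Docker Hub. The container name, root password and version tag are illustrative assumptions; the function only prints the command, so nothing is executed and it can be reviewed before running.

```shell
#!/bin/sh
# Hypothetical helper that composes (but does not run) the docker run
# command for the official MySQL image. The name, password and version
# tag passed in are illustrative only.
start_mysql_cmd() {
    # $1 = container name, $2 = root password
    echo "docker run -d --name=$1 -e MYSQL_ROOT_PASSWORD=$2 -p 3306:3306 mysql:8.0"
}
```

Piping the printed command to a shell (or dropping the `echo`) would start a detached MySQL 8.0 container listening on port 3306.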

Embrace Automation

Automation can greatly reduce costs by shrinking the size of the DBA/DevOps team needed. Managing the database infrastructure lifecycle involves many risky and repetitive tasks which require expertise and experience. Hiring talented candidates, or building up a team to support the infrastructure, can take a significant amount of time, and it comes with a substantial cost in salary, benefits and employee welfare.

Human beings have feelings. They have bad days, personal problems, pressure for results, many types of distractions, and so on. It’s common to forget a step, or misfire a destructive command especially on a daily repetitive task. A well-defined configuration creates a stable process. The machine will never miss a single step.

Repetitive tasks like database deployment, configuration management, backup, restore and software upgrades can be automated with infrastructure provisioning tools like Terraform, Heat (OpenStack) or CloudFormation (AWS), together with configuration management tools like Ansible, Chef, Salt or Puppet. However, there are always missing pieces that need to be covered by a collection of custom scripts or commands – failover, resyncing, recovery, scaling and many more. Rundeck, an open-source runbook automation tool, can be used to manage all the custom scripts, which brings us closer to achieving full automation.
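As a tiny sketch of the kind of repetitive task worth automating, the script below wraps a Percona XtraBackup run, the sort of job a scheduler like cron or Rundeck would trigger nightly. The backup root and the DRY_RUN switch are assumptions for illustration, not part of any tool mentioned above.

```shell
#!/bin/sh
# Hypothetical nightly backup wrapper. BACKUP_ROOT and the DRY_RUN
# switch are assumptions for illustration.
BACKUP_ROOT=${BACKUP_ROOT:-/backups/mysql}

backup_dir() {
    # One timestamped directory per run keeps restores unambiguous.
    echo "$BACKUP_ROOT/$(date +%Y-%m-%d_%H%M%S)"
}

run_backup() {
    # $1 = target directory. With DRY_RUN=1 the command is printed
    # instead of executed, so the script is safe to review and test.
    cmd="xtrabackup --backup --target-dir=$1"
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "$cmd"
    else
        mkdir -p "$1" && $cmd
    fi
}
```

A machine running this on schedule, as the previous paragraph argues, will never miss a step or mistype the target directory.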

A fully automated database infrastructure requires all the important components – monitoring, alerting, notification, management, scaling, security and deployment – to work in sync. ClusterControl is a pretty advanced automation tool to deploy, manage, monitor and scale your MySQL, MariaDB, PostgreSQL and MongoDB servers. It supports complex topologies with all kinds of clustering and replication technologies offered by the supported DBMSs. ClusterControl has all the necessary tools to stand in for specialized DBAs in maintaining your database infrastructure; we believe that an existing sysadmin or DevOps team alongside ClusterControl is enough to handle most of the operational burden.

Utilize Automatic Scaling

Automatic scaling can help you reduce costs if you are running multiple database nodes in a database cluster or replication chain. If you are on cloud infrastructure with an on-demand or pay-per-use subscription, you probably want to turn off underutilized instances to avoid accumulating unnecessary usage charges. On AWS, you can use Amazon CloudWatch to detect and shut down unused EC2 instances, as shown in this guide. On GCP, there is a way to auto-schedule nodes using Google Cloud Scheduler.
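As a minimal sketch of the idea, the helper below prints the AWS CLI command that would stop an underutilized instance rather than executing it. The 20% CPU threshold is an assumption for illustration; `aws ec2 stop-instances` is the real AWS CLI call you would hand the output to.

```shell
#!/bin/sh
# Hypothetical idle-node check: print the AWS CLI command that would
# stop an underutilized EC2 instance instead of executing it.
# The 20% threshold is an assumption.
CPU_THRESHOLD=${CPU_THRESHOLD:-20}

maybe_stop_instance() {
    # $1 = EC2 instance id, $2 = average CPU utilization in percent
    # (e.g. fetched from CloudWatch over the last hour).
    if [ "$2" -lt "$CPU_THRESHOLD" ]; then
        echo "aws ec2 stop-instances --instance-ids $1"
    fi
}
```

In practice, a CloudWatch alarm can do this end to end without any script, but a wrapper like this is handy when the shutdown decision needs custom logic.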

There are a number of ways to make automatic database scaling possible. We could use Docker containers with the help of orchestration tools like Kubernetes, Apache Mesos or Docker Swarm. For Kubernetes in particular, there are a number of database operators available that we can use to deploy or scale a cluster.
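Whichever orchestrator you pick, the core of automatic scaling is deciding how many nodes the workload needs. Here is a minimal sketch of that decision, assuming a fixed per-node connection capacity; the capacity figure and metric are purely illustrative.

```shell
#!/bin/sh
# Hypothetical sizing helper: ceiling-divide the total load by the
# per-node capacity to get a replica count, with a floor of one node.
desired_replicas() {
    # $1 = total client connections, $2 = connections one node can serve.
    n=$(( ($1 + $2 - 1) / $2 ))
    [ "$n" -lt 1 ] && n=1
    echo "$n"
}
```

An orchestrator hook could then apply the result with, for example, `kubectl scale statefulset mysql --replicas=$(desired_replicas 2500 1000)`.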

Automatic database scaling is fairly trivial with the ClusterControl CLI. It's a command-line client that you can use to control, manage and monitor your database cluster, and it can perform basically anything the ClusterControl UI is capable of. For example, adding a new MySQL slave node is just a command away:

$ s9s cluster --add-node --cluster-id=42 --nodes='192.168.0.93?slave' --log

Removing a database node is also trivial:

$ s9s cluster --remove-node --cluster-id=42 --nodes='192.168.0.93' --log

The above commands can be automated with a simple bash script, which you can combine with infrastructure automation tools like Terraform or CloudFormation to decommission unused instances. If you are running on a supported cloud (AWS, GCP or Azure), the ClusterControl CLI can also be used to create a new EC2 instance in the default AWS region with a single command:

$ s9s container --create aws-apsoutheast1-mysql-db1 --log

Or you could also remove the instance created in AWS directly:

$ s9s container --delete aws-apsoutheast1-mysql-db1 --log

The above CLI makes use of the ClusterControl Cloud module, where one has to configure the cloud credentials first under ClusterControl -> Integrations -> Cloud Providers -> Add Cloud Credentials. Note that the "container" command in ClusterControl means a virtual machine or a host that sits on top of a virtualization platform, not a container on top of OS-level virtualization like Docker or LXC.
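Putting the pieces together, here is a minimal sketch of the "simple bash script" idea: it prints the s9s cluster command appropriate for the current load rather than executing it. The cluster ID, node IP and connection thresholds are illustrative assumptions.

```shell
#!/bin/sh
# Hypothetical scale-out/scale-in wrapper around the s9s cluster
# commands shown earlier. CLUSTER_ID and the connection thresholds are
# assumptions; the command is printed so it can be reviewed first.
CLUSTER_ID=${CLUSTER_ID:-42}

scale() {
    # $1 = current client connections, $2 = candidate node IP address.
    if [ "$1" -gt 1000 ]; then
        echo "s9s cluster --add-node --cluster-id=$CLUSTER_ID --nodes='$2?slave' --log"
    elif [ "$1" -lt 100 ]; then
        echo "s9s cluster --remove-node --cluster-id=$CLUSTER_ID --nodes='$2' --log"
    fi
}
```

Run on a schedule with the `echo` removed, a script like this adds a slave under heavy load and removes it again once the cluster is quiet.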
