Database Sharding: How Does it Work?

Krzysztof Ksiazek

Database systems with large data sets or high throughput applications can challenge the capacity of a single database server. High query rates can exhaust CPU capacity, I/O resources, RAM or even network bandwidth.

Vertical scaling has a hard limit. You can upgrade to more powerful hardware, but there is a ceiling on how much load a single host can handle. You may buy the fastest CPU or storage on the market, and it still may not be enough for your workload. The only feasible way to scale beyond the constraints of a single host is to use multiple hosts working together, either as part of a cluster or connected through replication.

Horizontal scaling has its limits too, though. Scaling reads is efficient – just add a node and you can utilize the additional processing power. Writes are a different story. Consider a MySQL replication setup. Historically, MySQL replication used a single thread to apply writes – a serious limitation in a multi-user, highly concurrent environment. This has improved over time: MySQL 5.6 could replicate multiple schemas in parallel, and MySQL 5.7, with the addition of the ‘logical clock’ scheduler, allowed even a single-schema workload to benefit from multi-threaded replication. Galera Cluster for MySQL also supports parallel replication by using multiple workers to apply writesets. Still, even with those enhancements, you only get an incremental improvement in write throughput – they do not solve the underlying problem.

One solution is to split the data across multiple servers according to some pattern and, in that way, distribute writes across multiple MySQL hosts. This is sharding.

The idea is really simple – if one database server cannot handle the volume of writes, let’s split the data so that one part, generating a portion of the write traffic, is stored on one database host and the other part on another. Each host then has to handle only half of the writes, which should be well within its hardware limits. If the write workload keeps growing, we can split the data further and distribute it across more servers.
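To make this concrete, here is a minimal sketch of the idea in Python, assuming an integer user id as the split key; the host names and the two-way split are purely illustrative, not a recommendation.

```python
# A minimal sketch of splitting writes across two hosts by primary key.
# The host names below are hypothetical.

SHARD_HOSTS = ["mysql-shard-0.example.com", "mysql-shard-1.example.com"]

def host_for_user(user_id: int) -> str:
    """Route a row to one of two hosts based on its primary key."""
    return SHARD_HOSTS[user_id % len(SHARD_HOSTS)]

print(host_for_user(42))   # mysql-shard-0.example.com
print(host_for_user(17))   # mysql-shard-1.example.com
```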

The actual implementation is more complex, as there are numerous issues to solve before you can shard your data. The first, very important question you need to answer is: how are you going to split your data?

Functional Sharding

Let’s imagine your application is built out of multiple modules – or microservices, if we want to be fashionable. Assume it’s a large online store backed by several warehouses. Such a site may contain a module to handle warehouse logistics: check the availability of an item, track shipments from a warehouse to a customer. Another module may be the online store itself – a website presenting the available goods. Yet another module would handle transactions – collect and store credit cards, process payments and so on. Maybe the store also has a large, buzzing forum where customers share opinions on goods, discuss support issues etc. You may start your voyage into the world of shards by using a separate database per module. This gives you some breathing space to plan the next steps – and the next step may not be necessary at all if each shard can comfortably handle its workload. Of course, there are downsides to such a setup: you cannot easily query data across modules (shards) – you have to execute separate queries against separate databases and then combine the result sets yourself.
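In code, functional sharding boils down to a mapping from module to database. A minimal sketch, with hypothetical module names and connection strings:

```python
# A minimal sketch of functional sharding: each application module talks
# to its own database. Module names and DSNs below are hypothetical.

MODULE_DATABASES = {
    "warehouse":    "mysql://app@warehouse-db.example.com/warehouse",
    "storefront":   "mysql://app@store-db.example.com/storefront",
    "transactions": "mysql://app@payments-db.example.com/transactions",
    "forum":        "mysql://app@forum-db.example.com/forum",
}

def dsn_for_module(module: str) -> str:
    """Return the connection string for the database backing a module."""
    return MODULE_DATABASES[module]

print(dsn_for_module("forum"))  # mysql://app@forum-db.example.com/forum

# Cross-module reports need separate queries against separate databases,
# with the result sets combined in application code.
```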

Expression-Based Sharding

Another method of splitting the data across shards is to use some kind of expression, function or algorithm to decide where each row should be located. Let’s imagine you have a database with one large table that is heavily read and written. For example, assume a social media site whose largest table contains data about users and their activities. This table uses some kind of id column as the primary key, and we can apply an expression to that id to split the table. A very popular choice is the modulo function: if we want 128 shards, the expression ‘id % 128’ calculates the shard number where a given row should be stored. Another method is to use a date range, e.g., all user activity from 2015 is stored in one database and activity from 2016 in a separate one. Yet another is to distribute data based on a list of attributes, e.g., all users from a specific country end up on the same shard. The main drawback of such hard-coded expressions is inflexibility: with modulo sharding, for instance, changing the number of shards changes the result of the expression for almost every row, which forces a large-scale rebalancing of data.
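Here is a minimal sketch of all three expression-based strategies; the shard count, year-based database naming and country mapping are assumptions made up for illustration:

```python
import datetime

# Sketches of the three expression-based strategies described above.

def shard_by_modulo(row_id: int, num_shards: int = 128) -> int:
    """Hash-style sharding: 'id % 128' picks one of 128 shards."""
    return row_id % num_shards

def shard_by_year(activity_date: datetime.date) -> str:
    """Range sharding: one database per year of user activity."""
    return f"activity_{activity_date.year}"

COUNTRY_SHARDS = {"US": 0, "CA": 0, "DE": 1, "FR": 1}  # hypothetical mapping

def shard_by_country(country_code: str) -> int:
    """List sharding: users from a given country land on the same shard."""
    return COUNTRY_SHARDS[country_code]

print(shard_by_modulo(1000003))                     # 67
print(shard_by_year(datetime.date(2015, 6, 1)))     # activity_2015
print(shard_by_country("DE"))                       # 1
```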

Metadata-Based Sharding

As we discussed above, both functional sharding and expression-based sharding have limitations when it comes to scaling out the number of shards. There is one more method which gives you more flexibility in managing shards: metadata-based sharding. The idea is very simple – instead of relying on some hard-coded algorithm, let’s just write down where each row is located: the row with id=1 lives on shard 1, the row with id=2 on shard 5, and so on. Then let’s build a database to keep this metadata.
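A minimal sketch of the idea, with the metadata held in an in-memory dictionary standing in for what would, in practice, be a small and highly available database:

```python
# A minimal sketch of a metadata (lookup) table, here as a plain dict.
# In practice this would live in its own small, highly available database.

shard_map = {1: "shard-1", 2: "shard-5"}  # row id -> shard, as in the text

def locate(row_id: int) -> str:
    """Look up which shard holds a row; any row can live on any shard."""
    return shard_map[row_id]

def place(row_id: int, shard: str) -> None:
    """Record (or change) a row's location, e.g. when adding a new shard."""
    shard_map[row_id] = shard
```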

This approach has a huge benefit: you can store any row on any shard. Adding new shards to the mix is easy – just set them up and start storing data on them. Migrating data between shards is also straightforward in principle – nothing stops you from copying rows from one shard to another and then adjusting the metadata. In reality it is more involved, because you have to make sure no writes are lost while the data moves, so some form of locking is required. To move data between shards, you would do an initial copy of the data, lock access to the part being migrated, perform a final sync, then update the entry in the metadata database and unlock the data.
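The flow described above could look roughly like this runnable toy, where shards are plain dictionaries and “locking” is simulated with a set of frozen row ids; every name in it is illustrative:

```python
# A toy of the migration flow: copy, lock, final sync, metadata update,
# unlock. Shards are plain dicts; all names here are illustrative.

shards = {"shard-1": {1: "alice"}, "shard-5": {2: "bob"}}
shard_map = {1: "shard-1", 2: "shard-5"}   # the metadata database
locked = set()                             # rows currently frozen for writes

def migrate(row_id: int, target: str) -> None:
    source = shard_map[row_id]
    shards[target][row_id] = shards[source][row_id]  # 1. initial copy
    locked.add(row_id)                               # 2. lock the row
    shards[target][row_id] = shards[source][row_id]  # 3. final sync
    shard_map[row_id] = target                       # 4. update metadata
    del shards[source][row_id]
    locked.discard(row_id)                           # 5. unlock

migrate(1, "shard-5")
print(shard_map)   # {1: 'shard-5', 2: 'shard-5'}
```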

This is it for now. If you would like to learn more about sharding, you may want to check out this ebook:
