
Determining the Best Architecture for a MongoDB Cluster Deployment

Onyancha Brian Henry


Cluster deployments are of great significance in ensuring the high availability of data as well as protecting it. MongoDB achieves this through replication and sharding: replication provides redundancy and keeps data highly available, whereas sharding scales the database horizontally.

In general, both approaches distribute the workload among the members of the cluster, reducing the load any single node is subjected to. The database can then serve users with higher throughput. However, without a well-designed cluster architecture, you may not see the same level of results, even if you use sharding and replication.

For example, if a replica set has an even number of voting members, elections may end in a tie and the members may struggle to elect a new primary when the existing one fails. In this blog we are going to discuss the standard deployment architectures one can use, though these may vary depending on application requirements.

MongoDB Deployment Strategies

The architecture of a replica set largely determines the capacity and capability of a MongoDB deployment.

A three-node replica set is the standard cluster deployment for MongoDB in any production environment, as it provides data redundancy and fault tolerance. Redundancy is especially important for database recovery after a disaster strikes. A three-node replica set may be the basic deployment architecture, but it can vary according to application specifications and requirements. However, do not make it too complex, as greater complexity can lead to bigger configuration problems.
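
As an illustration, here is a minimal sketch of initiating a three-member replica set with PyMongo, assuming three mongod instances already started with --replSet rs0 on the hypothetical hosts mongo0/mongo1/mongo2 (the same can be done with rs.initiate() in mongosh):

```python
# Minimal sketch: initiate a three-member replica set.
# Assumes three mongod instances started with --replSet rs0 on the
# hypothetical hosts mongo0/mongo1/mongo2 (port 27017).
from pymongo import MongoClient

# Connect directly to one member before the set is initiated.
client = MongoClient("mongo0.example.net", 27017, directConnection=True)

config = {
    "_id": "rs0",
    "members": [
        {"_id": 0, "host": "mongo0.example.net:27017"},
        {"_id": 1, "host": "mongo1.example.net:27017"},
        {"_id": 2, "host": "mongo2.example.net:27017"},
    ],
}

# Equivalent to rs.initiate(config) in mongosh.
client.admin.command("replSetInitiate", config)
```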

MongoDB Sharding Strategies

Sharding reduces the amount of work the database has to do for a given query by reducing the number of documents that have to be acted upon. It therefore enables horizontal scaling, allowing the database to grow beyond the hardware limits of a single server. Depending on the workload demand, nodes can be added to or removed from the cluster, and MongoDB will rebalance the data in an optimal way without operator intervention.

Some of the best deployment strategies for a sharded cluster include:

Ensuring Shard Keys are Distributed Uniformly

The reason behind sharding is to scale the database horizontally and reduce the number of operations a single instance is subjected to. If the shard key values are not distributed uniformly, data and traffic may end up concentrated on a small number of shards. Operations are then limited by the capacity of those shards, making read and write operations slow.
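
As a sketch of what this looks like in practice, the example below shards a hypothetical appdb.events collection on a hashed userId so that inserts spread evenly rather than piling onto one shard (all names are illustrative):

```python
# Minimal sketch: shard a collection on a hashed key so that writes spread
# evenly across shards. Database, collection, and field names are
# hypothetical; run against a mongos router.
from pymongo import MongoClient

client = MongoClient("mongodb://mongos.example.net:27017")

client.admin.command("enableSharding", "appdb")

# A monotonically increasing key (e.g. a timestamp) would funnel all inserts
# to one shard; hashing a high-cardinality field distributes them uniformly.
client.admin.command(
    "shardCollection", "appdb.events", key={"userId": "hashed"}
)
```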

Chunks Should be Pre-Split and Distributed First

Shards hold data in chunks that are grouped according to shard key ranges. When creating a new sharded collection, before loading it with data, you should create empty chunks and distribute them evenly across all shards. As you then populate MongoDB with data, the load is balanced across the involved shards from the start. The numInitialChunks option can be used to do this automatically if you are using hash-based sharding; its integer value, however, should be less than 8192 per shard.
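
A minimal sketch of using numInitialChunks on a hypothetical collection follows; it assumes a hashed shard key and a connection to a mongos router:

```python
# Minimal sketch: pre-split a hashed-sharded collection at creation time
# with numInitialChunks. Names are hypothetical.
from pymongo import MongoClient

client = MongoClient("mongodb://mongos.example.net:27017")

# Creates the requested number of empty chunks up front and distributes them
# across the shards, so the balancer has little to move once data arrives.
client.admin.command(
    "shardCollection",
    "appdb.orders",
    key={"orderId": "hashed"},
    numInitialChunks=120,   # must stay below 8192 chunks per shard
)
```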

Number of Shards

Two shards are the minimum required for sharding to have any significance. A single-shard cluster is only useful if you want to lay the foundation for enabling sharding in the future but have no need for it at deployment time.

Prefer Range-Based Sharding Over Hash-Based Sharding

Range-based sharding is beneficial because operations can be routed to the fewest shards necessary, and more often than not to a single shard. In practice this may be difficult unless you have a good understanding of the data and query patterns involved. Hashed sharding improves the uniform distribution of throughput operations at the expense of providing inferior range-based operations.
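
For illustration, here is a sketch of a range-based shard key on a hypothetical appdb.invoices collection; the compound {customerId, invoiceDate} key is only an example and assumes most queries filter on customerId:

```python
# Minimal sketch: range-based (non-hashed) shard key. Collection and field
# names are hypothetical; run against a mongos router.
from pymongo import MongoClient

client = MongoClient("mongodb://mongos.example.net:27017")

# Documents with adjacent key values land in the same chunk, so queries
# constrained on customerId (and optionally a date range) can be routed to
# the few shards that own those ranges.
client.admin.command(
    "shardCollection",
    "appdb.invoices",
    key={"customerId": 1, "invoiceDate": 1},
)
```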

Use Scatter-Gather Queries Only for Large Aggregation Queries

Queries that cannot be routed based on a shard key must be broadcast to all shards for evaluation. Since they involve every shard for each request, they do not scale linearly as more shards are added, and they incur an overhead that degrades the performance of the database. This kind of operation is known as scatter-gather, and it can only be avoided by including the shard key in your query.

The approach is only useful for large aggregation queries, where each query can be allowed to run in parallel on all shards.
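
A hedged sketch of the difference, reusing the hypothetical appdb.events collection sharded on userId from the earlier example:

```python
# Minimal sketch: targeted query vs. scatter-gather on a collection sharded
# on "userId" (hypothetical names). Run against a mongos router.
from pymongo import MongoClient

client = MongoClient("mongodb://mongos.example.net:27017")
events = client.appdb.events

# Targeted: the filter contains the shard key, so mongos routes the query
# to the single shard that owns this key's chunk.
targeted = list(events.find({"userId": 42, "type": "login"}))

# Scatter-gather: no shard key in the filter, so mongos must broadcast the
# query to every shard and merge the results.
scattered = list(events.find({"type": "login"}))

# A large aggregation can justify the broadcast, since each shard processes
# its own portion of the data in parallel.
pipeline = [{"$group": {"_id": "$type", "count": {"$sum": 1}}}]
totals = list(events.aggregate(pipeline))
```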

MongoDB Replication Strategies

Replication provides redundancy in MongoDB and allows the read workload to be distributed among the involved members. In a production environment, these are some of the considerations to make for an optimal cluster architecture.

Number of Nodes

A replica set can have a maximum of 50 members, of which at most 7 can be voting members. Any member beyond the 7th voting member must be non-voting. A good cluster should therefore have at most 7 voting members to keep the election process convenient.

Deploy an odd number of voting members. If you have fewer than 7 members but an even number of them, you will need to deploy an arbiter as an additional voting member. Arbiters do not store a copy of the data and therefore require fewer resources to manage. They can also be placed in environments to which you could not subject the data-bearing members.
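
A sketch of adding an arbiter via a reconfiguration, assuming an existing set named rs0 and hypothetical hostnames (in mongosh you would typically just call rs.addArb()):

```python
# Minimal sketch: add an arbiter as an extra voting member to break ties in
# an even-sized set, expressed as a replSetReconfig command via PyMongo.
from pymongo import MongoClient

client = MongoClient("mongodb://mongo0.example.net:27017/?replicaSet=rs0")

# Fetch the current configuration and bump its version for the reconfig.
config = client.admin.command("replSetGetConfig")["config"]
config["version"] += 1

# The arbiter votes in elections but stores no data.
config["members"].append({
    "_id": max(m["_id"] for m in config["members"]) + 1,
    "host": "arbiter0.example.net:27017",
    "arbiterOnly": True,
})

client.admin.command("replSetReconfig", config)
```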

Fault Tolerance Considerations

Sometimes members become unavailable as a result of factors such as power outages or network transients and disconnections. Fault tolerance is the number of members that can become unavailable while the remaining members are still able to elect a primary. It is calculated as the difference between the total number of replica set members and the majority of voting members needed to elect a primary. Without a primary, write operations cannot be executed.

The table below shows a sample relationship between the three.

Total replica set members | Majority required to elect a new primary | Fault tolerance
3                         | 2                                         | 1
4                         | 3                                         | 1
5                         | 3                                         | 2
6                         | 4                                         | 2
7                         | 4                                         | 3

As the table shows, the relationship is not direct: adding more members to the set does not necessarily increase the fault tolerance. Additional members can instead support dedicated functions such as backups and reporting.
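
The figures above follow directly from the majority rule; a small sketch of the arithmetic:

```python
# Minimal sketch: how the figures in the table above are derived. For n
# voting members, the majority needed to elect a primary is floor(n/2) + 1,
# and the fault tolerance is how many members can be lost while a majority
# still remains.
def majority(n: int) -> int:
    return n // 2 + 1

def fault_tolerance(n: int) -> int:
    return n - majority(n)

for n in range(3, 8):
    print(n, majority(n), fault_tolerance(n))
# 3 2 1
# 4 3 1
# 5 3 2
# 6 4 2
# 7 4 3
```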

Capacity Planning and Load Balancing for Heavy Reads

You need to keep spare capacity in your deployment by adding new members before the current demand saturates the capacity of the existing set.

For very high read traffic, distribute the reads to the secondary members, and as the cluster grows, add or move members to alternate data centers in order to achieve redundancy and increase data availability.

You can also use tag sets to target read operations to specific members, or modify the write concern to request acknowledgement from specific members.
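
A hedged sketch with PyMongo, assuming the replica set members have already been tagged (the "dc" tag, hostnames, and collection names are hypothetical):

```python
# Minimal sketch: route reads to tagged secondaries and require
# acknowledgement from a majority of members. The "dc" tag must match tags
# set in the replica set configuration.
from pymongo import MongoClient
from pymongo.read_preferences import Secondary
from pymongo.write_concern import WriteConcern

client = MongoClient(
    "mongodb://mongo0.example.net,mongo1.example.net/?replicaSet=rs0"
)

# Reads go to secondaries tagged as belonging to the "east" data center.
reporting = client.appdb.get_collection(
    "events",
    read_preference=Secondary(tag_sets=[{"dc": "east"}]),
)
recent = list(reporting.find({"type": "login"}).limit(10))

# Writes wait for acknowledgement from a majority of data-bearing members.
critical = client.appdb.get_collection(
    "events",
    write_concern=WriteConcern(w="majority"),
)
critical.insert_one({"type": "login", "userId": 42})
```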

Nodes Should be Distributed Geographically

Data centers may also fail because of a catastrophe. You are therefore advised to keep at least one or two members in a separate data center for data protection purposes. If possible, use an odd number of data centers and select a distribution that maximizes the likelihood that, even with the loss of a data center, the remaining replica set members can form a majority or at minimum provide a copy of the data.

Employ Journaling for Unexpected Failures

Journaling is enabled by default in MongoDB. You should ensure this option stays enabled so as to protect against data loss in the event of service interruptions such as sudden reboots and power failures.
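
Journaling itself is a server-side setting, but clients can additionally request journal acknowledgement through the write concern; a minimal sketch with hypothetical names:

```python
# Minimal sketch: ask for writes to be acknowledged only after they are
# written to the on-disk journal, using the "j" write concern option.
from pymongo import MongoClient
from pymongo.write_concern import WriteConcern

client = MongoClient("mongodb://mongo0.example.net/?replicaSet=rs0")

# w="majority" plus j=True: acknowledged once a majority of members have
# journaled the operation, so it survives sudden reboots and power failures.
orders = client.appdb.get_collection(
    "orders",
    write_concern=WriteConcern(w="majority", j=True),
)
orders.insert_one({"orderId": 1, "status": "created"})
```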

Deployment Patterns

There are mainly two deployment approaches:

  • Three-member replica sets, which provide the minimum recommended architecture for a replica set.
  • Replica sets distributed across two or more data centers to protect against facility-specific failures like power outages.

These patterns, however, depend on application requirements; if possible, you can combine features of the two in your deployment architecture.

Hostnames and Replica Set Naming

Use logical DNS hostnames rather than IP addresses when configuring replica set members or sharded cluster members. This avoids the pain of the configuration changes you would otherwise need to make whenever an IP address changes.

For replica set naming, use distinct names for the sets, since some drivers group replica set connections by replica set name.
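
A short sketch of a client connection that relies on DNS hostnames and an explicit replica set name (all names hypothetical):

```python
# Minimal sketch: connect using logical DNS hostnames and a distinct,
# explicit replica set name rather than IP addresses.
from pymongo import MongoClient

# The driver groups connections by the replicaSet name, so it must match
# the name used when the set was initiated.
client = MongoClient(
    "mongodb://mongo0.example.net:27017,"
    "mongo1.example.net:27017,"
    "mongo2.example.net:27017/?replicaSet=rs0"
)

print(client.admin.command("ping"))
```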

Conclusion

The architecture layout of a replica set determines the capacity and capability of your deployment. Data protection and system performance are the core considerations when setting up the architecture. One should consider vital factors such as fault tolerance, the number of replica set members, the optimal sharding key, and deployment patterns for high availability and data protection. Geographical distribution of the replica set nodes can address many of these factors by ensuring redundancy and providing fault tolerance if one of the data centers becomes unavailable.
