Partitioning: Approaches to partitioning

Before diving into approaches to partitioning a database, let us clear up one thing about partitioning and its relationship with replication. Partitioning and replication are not an either/or choice; they are mostly used in conjunction with each other to build a robust system. After we partition a database, each partition is replicated across nodes so that it can be recovered in case of a failure. You can use replication alone if your dataset fits on a single node, but you cannot safely operate a partitioned system without replication if your system is built on commodity hardware. Always remember that in distributed computing, server failure is a question of when, not if.

At a high level, partitioning can look like the best solution ever. If you have an existing store holding 1 million records, you have started seeing issues under high traffic load, and you suspect that your datastore will soon run out of memory, then partitioning can look like a great option. Just split the 1 million records into 10 partitions of 100k records each, and you gain both in terms of performance and solve the out-of-memory issue. But on what basis do you split these records?

  • You can do it on the basis of a record attribute, for example zip code. But if this attribute is not diverse enough across your dataset (say, 90% of users belong to one zip code), then you will end up with a single partition handling 90% of the load while the remaining 9 partitions sit under-utilized, handling the remaining 10%.
  • You can spin up a random number generator and assign each record to a partition based on the random number. Assuming your random number generator is uniform, you will, in expectation, divide your data evenly. But now you no longer know which record is stored in which partition. You can store this mapping while assigning partitions, but that is additional data you need to manage in a distributed system.

One approach is to assign a key to each record. This key can be as simple as the primary_id of the record or as complex as a multi-level hash of a string combining the primary id with a bunch of other record-specific attributes. This approach is better than the ones discussed above, because you now control the criteria by which the dataset is distributed. Choosing something as simple as the modulus of primary_id by the number of nodes works, because each record has a unique primary_id and so records end up evenly spread across all partitions. You can also locate the partition on which a particular record lives, because you are aware of the partitioning criteria. We will discuss both partitioning based on the key and based on a hash of the key.
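The modulus idea above can be sketched in a few lines. This is a minimal illustration, not from the original post; the partition count and record ids are assumptions.

```python
# A minimal sketch of key-based partition assignment using a modulus.
# NUM_PARTITIONS and the example ids are assumptions for illustration.

NUM_PARTITIONS = 10

def partition_for(primary_id: int) -> int:
    """Map a record's primary_id to a partition number (0-9)."""
    return primary_id % NUM_PARTITIONS

# The same primary_id always lands on the same partition, so lookups
# know exactly where to go without a separate mapping table.
assert partition_for(42) == 2
assert partition_for(1_000_007) == 7
```

Note that this scheme bakes the number of partitions into the routing logic, a limitation we return to below.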

Based on Key

Partitioning based on a key is pretty straightforward to implement and understand. You initially make a logical partition of your data based on a key and then allocate these partitions to the respective nodes. For example, suppose we are maintaining a database of students that contains 1000 records. To split it into 10 partitions, we can choose student_id as the key and divide the database by student_id: records with student_id 1 to 100 go to partition 1, student_id 101 to 200 to partition 2, and so on.
This is easy to understand at the implementation layer, but it can lead to hot spots if the key is not chosen correctly. In the above example, if we had instead chosen the class in which a student studies as the key, we might end up with an uneven distribution. It might very well be the case that more than 50% of the students are in the primary standard and the remaining 50% are spread across the other standards. Hence choosing a partitioning key requires understanding the dataset and picking a criterion that evens out the partitions.
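The student example can be sketched as a range lookup. This is a hypothetical illustration using the boundaries from the text (1-100 to partition 1, 101-200 to partition 2, and so on); the helper names are my own.

```python
import bisect

# Upper bound (inclusive) of student_id for each of the 10 partitions,
# following the example in the text.
BOUNDS = [100, 200, 300, 400, 500, 600, 700, 800, 900, 1000]

def partition_for(student_id: int) -> int:
    """Return the 1-based partition whose range contains this student_id."""
    return bisect.bisect_left(BOUNDS, student_id) + 1

assert partition_for(1) == 1
assert partition_for(100) == 1
assert partition_for(101) == 2
assert partition_for(1000) == 10
```

Because the ranges are contiguous, a range scan such as "students 150 to 180" touches exactly one partition, the locality property that hash partitioning gives up.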

Based on Hash of Key

To avoid ending up with hot spots, a preferable approach is to use a hash of the chosen key. A good hash function ensures that a range of keys is evenly distributed among all partitions. The hash function does not need to be overly complex, since it only decides a partition and does not store any user-specific information. So instead of assigning a range of keys to a partition, a range of hashes is assigned. With this approach, however, we lose the spatial locality of keys. If we find a key with value 5 on a partition, we cannot be sure that keys with value 1 or value 10 live on the same partition (assuming each partition holds 100 keys), which would have been true with key-based partitioning. This might not be a big problem in general, but certain applications rely on this kind of query pattern. For example, a weather application might query data for days n-2 and n+2 while fetching the data for date n. This is based on the fact that users usually compare the weather with the last or next few days, and keeping adjacent days together allows the application to merge multiple queries into one, in turn improving performance.
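A hash-of-key router might look like the sketch below. MD5 is just one stable choice (Python's built-in `hash()` varies between runs, so it would not work across nodes); the date keys echo the weather example and are assumptions.

```python
import hashlib

NUM_PARTITIONS = 10

def partition_for(key: str) -> int:
    """Map a key to a partition via a stable hash of the key."""
    # A stable digest guarantees every node computes the same partition
    # for the same key; cryptographic strength is not the point here.
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % NUM_PARTITIONS

# Adjacent keys no longer land on adjacent partitions, so a range query
# like the weather example may fan out to several partitions.
days = [f"2024-06-{d:02d}" for d in range(10, 15)]
partitions = {day: partition_for(day) for day in days}
print(partitions)
```

Running this shows consecutive dates scattered across partitions, which is exactly the loss of locality described above.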


Both of the above approaches solve the partitioning problem to a certain extent. But we have not yet discussed the most common scenario we will face in a distributed system: failure. What happens when, after partitioning our database, we start seeing node failures? Or when we need to add more nodes to our system? The 1 million records that we initially partitioned onto 10 partitions won't scale when we reach 10 million records. Adding more partitions requires updating our hashing logic, which was dependent on the number of nodes in our system. The same goes for a node failure: the number of nodes in our system decreases by one, which again affects our hashing logic.
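A rough illustration (my own, not from the text) of why naive modulo routing breaks down when the node count changes: count how many keys map to a different partition when we grow from 10 to 11 nodes.

```python
# Count keys whose partition changes when the cluster grows from
# 10 to 11 nodes under simple modulo routing. The key range is an
# arbitrary sample for illustration.
NUM_KEYS = 100_000
moved = sum(1 for k in range(1, NUM_KEYS + 1) if k % 10 != k % 11)
print(f"{moved / NUM_KEYS:.0%} of keys change partition")
```

The vast majority of keys end up relocated, meaning almost the entire dataset would have to be shuffled between nodes for a single membership change. This is the problem consistent hashing addresses.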

Most of these problems are solved by Consistent Hashing, which I am going to cover in the next post. As with any distributed computing concept, you will have to weigh the pros and cons of the approach and decide how it fits your use case.
