Karger et al. introduced the concept of consistent hashing and gave an algorithm to implement it. Consistent hashing specifies a distribution of data among servers in such a way that servers can be added or removed without having to totally reorganize the data. It was originally proposed for web caching on the Internet, in order to address the problem that clients may not be aware of the entire set of cache servers.
- https://arxiv.org/abs/1406.2294
- https://www.eecs.umich.edu/techreports/cse/96/CSE-TR-316-96.pdf
- https://ai.google/research/pubs/pub44824 (section 3.4)
- Start with a circle (hash ring), in line with Karger et al.; see the ring sketch after this list.
- Each of the N nodes can be replicated R times to improve shard distribution. The replicas are termed virtual nodes.
- Hash the virtual nodes to angles on the circle.
- Add the virtual nodes' hashes to a sorted map: key (angle) : value (node id).
- The circle is now primed.
- Operations provided: a) data ops: add(), get(), remove(), with key (angle) : value (kv pair); b) node ops: addNode(), removeNode()
- Don't deal with get() misses inside the library; it is the clients' responsibility to rehydrate missing keys from permanent storage.
- Don't apply the hashing operations server-side on the storage stratum; apply them only client-side.
- Tunables available: a) replication factor (by cluster size, by hardware homogeneity) b) choice of hashing algorithm
- Open questions: a) no perfect hashing algorithms - how to cheaply deal with collisions? b) how to handle hot replicas that show a major K/N skew? c) can we do better than Karger? d) what is a good value for the upper-bound of N times R?
- Consider improvements afforded by HRW / Rendezvous Hashing; a brief HRW sketch follows below.
- Is support for CAS operations needed?
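
Below is a minimal sketch of the ring construction and lookup described above, assuming a `TreeMap` as the sorted map and an MD5-derived 64-bit angle; the class name, the `nodeId + "#" + i` virtual-node naming, and the hash choice are illustrative assumptions, not this library's actual implementation.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.SortedMap;
import java.util.TreeMap;

// Sketch of the primed circle: R virtual nodes per physical node,
// each hashed to an angle and stored in a sorted map of angle -> node id.
public class HashRingSketch {
    private final TreeMap<Long, String> ring = new TreeMap<>(); // angle -> node id
    private final int replicationFactor; // R, the number of virtual nodes per node

    public HashRingSketch(int replicationFactor) {
        this.replicationFactor = replicationFactor;
    }

    public void addNode(String nodeId) {
        for (int i = 0; i < replicationFactor; i++) {
            // Virtual node i of nodeId lands at its own angle on the circle.
            ring.put(angle(nodeId + "#" + i), nodeId);
        }
    }

    public void removeNode(String nodeId) {
        for (int i = 0; i < replicationFactor; i++) {
            ring.remove(angle(nodeId + "#" + i));
        }
    }

    // Lookup: walk clockwise from the key's angle to the first virtual node,
    // wrapping around to the start of the circle if necessary.
    public String nodeFor(String key) {
        if (ring.isEmpty()) {
            throw new IllegalStateException("no nodes on the ring");
        }
        SortedMap<Long, String> tail = ring.tailMap(angle(key));
        return tail.isEmpty() ? ring.firstEntry().getValue() : tail.get(tail.firstKey());
    }

    // MD5-derived 64-bit "angle" (an assumption; any well-distributed hash works).
    private static long angle(String s) {
        try {
            byte[] digest = MessageDigest.getInstance("MD5")
                    .digest(s.getBytes(StandardCharsets.UTF_8));
            long h = 0;
            for (int i = 0; i < 8; i++) {
                h = (h << 8) | (digest[i] & 0xFF); // fold the first 8 bytes into a long
            }
            return h;
        } catch (NoSuchAlgorithmException e) {
            throw new AssertionError("MD5 is a mandatory JDK algorithm", e);
        }
    }
}
```

With this shape, addNode() and removeNode() only touch that node's R angles, so only the keys falling in those arcs are remapped.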
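For the HRW / Rendezvous Hashing bullet, here is a small comparison sketch: each key is routed to the node whose combined hash(node, key) score is highest, so no ring or sorted map has to be maintained. The class name and the hash mixing are assumptions for illustration.

```java
import java.util.List;

// Rendezvous (HRW) hashing sketch: route each key to the node with the
// highest hash(node, key) score; removing a node only moves that node's keys.
public class RendezvousSketch {
    public static String nodeFor(String key, List<String> nodes) {
        String best = null;
        long bestScore = Long.MIN_VALUE;
        for (String node : nodes) {
            long score = score(node, key);
            if (best == null || score > bestScore) {
                bestScore = score;
                best = node;
            }
        }
        return best; // null only if the node list is empty
    }

    // Combined node+key score; the mixing below is an illustrative stand-in
    // for whatever hash the implementation actually settles on.
    private static long score(String node, String key) {
        long h = 1125899906842597L; // arbitrary prime seed
        String s = node + "|" + key;
        for (int i = 0; i < s.length(); i++) {
            h = 31 * h + s.charAt(i);
        }
        h ^= (h >>> 33);             // final avalanche to spread the bits
        h *= 0xff51afd7ed558ccdL;
        h ^= (h >>> 33);
        return h;
    }
}
```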
Add the Maven dependency:

```xml
<dependency>
  <groupId>com.github.consistenthash</groupId>
  <artifactId>consistenthash</artifactId>
  <version>1.0-SNAPSHOT</version>
</dependency>
```
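
A hypothetical usage sketch based on the operations listed in the design notes; the package, class, and constructor shape below are assumptions, not the published API.

```java
// Hypothetical usage; package, class, and constructor below are assumptions
// drawn from the design notes, not the published API.
import com.github.consistenthash.ConsistentHash;

public class Example {
    public static void main(String[] args) {
        ConsistentHash ring = new ConsistentHash(16 /* replication factor R */);
        ring.addNode("node-a");
        ring.addNode("node-b");

        ring.add("user:42", "{\"name\":\"Ada\"}"); // kv pair routed by the key's angle
        String value = ring.get("user:42");        // a miss => client rehydrates from permanent storage
        System.out.println(value);

        ring.removeNode("node-a");                 // only keys on node-a's virtual nodes are remapped
    }
}
```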