Bulk Optimization with Auto-generated Doc IDs #14260
Labels: enhancement (Enhancement or improvement to existing feature or request), Indexing:Performance, RFC (Issues requesting major changes)

khushbr added the enhancement and untriaged labels on Jun 13, 2024
Benchmark Results

Summary:
1. DocID Generation Latency
2. Uber-level Benchmarks

Comments:
- "@khushbr please link the POC branch code for reference, to correlate with the numbers."
- "@shwetathareja Adding the links for Pre-Generated DocID Store Cache:"
Is your feature request related to a problem? Please describe
The OpenSearch Bulk API executes multiple indexing/update/delete operations in a single call. Each operation requires the name of the index/stream or an alias, and the user can optionally provide a custom doc ID. If no doc ID is provided, OpenSearch auto-generates one: a 128-bit UUID that is, for practical purposes, unique. In the absence of custom routing, the doc ID determines the shard routing for the document, via a Murmur3 hash of the doc ID taken modulo the number of routing shards. After this, TransportBulkAction on the coordinator node generates a per-shard TransportShardBulkAction for each target shard and sends them to the corresponding primaries. The coordinator node waits for responses from all shards before sending the response back to the client.
In a slow shard/node scenario (say, a shard in INITIALIZING state or a node undergoing garbage collection), the slow shard becomes a bottleneck in the bulk flow: it increases tail latencies and ties up resources and queue slots on the coordinator, potentially causing rejections.
The goal of this project is to tweak the document routing logic for auto-generated doc IDs to better handle slow shards/nodes, improve latency by saving a network round trip, and reduce chatter between the coordinator and the data nodes.
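The default routing path described above can be sketched as follows. This is a simplified illustration: the `hash` method here is a stand-in (plain `String.hashCode()`) for OpenSearch's actual `Murmur3HashFunction`, and the class name is hypothetical.

```java
import java.util.UUID;

public class RoutingSketch {
    // Assumption: stand-in hash. OpenSearch actually hashes the doc ID's
    // bytes with Murmur3 (a non-cryptographic hash), not hashCode().
    static int hash(String routing) {
        return routing.hashCode();
    }

    // shard = hash(docId) mod numRoutingShards.
    // Math.floorMod keeps the result non-negative for negative hash values.
    static int shardId(String docId, int numRoutingShards) {
        return Math.floorMod(hash(docId), numRoutingShards);
    }

    public static void main(String[] args) {
        // When the client supplies no doc ID, one is auto-generated.
        String docId = UUID.randomUUID().toString();
        System.out.println(docId + " -> shard " + shardId(docId, 5));
    }
}
```

The key property is that the shard is a pure function of the doc ID, which is why routing cannot avoid a slow shard without changing either the hash step or the ID itself.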
The above optimizations should work within the following constraints:
Describe the solution you'd like
In this section, we discuss the approach to solving [Part-1].
Pre-generated DocIDs Cache/Store: Maintain a store of doc IDs tagged per shard. In the TransportShardBulkAction.doRun() execution, instead of computing doc IDs on the fly, the pre-computed values are assigned. A background thread periodically refills the cache store (async futures could also be explored for the refill); the refill count per key (ShardID) is a function of shard throughput. If the store has no IDs for a shard, the algorithm falls back to the brute-force approach: for a bulk request with ‘m’ DocWriteRequests and ‘n’ routing shards, generate up to (n * m) doc IDs and discard those that do not map to the randomly selected target shard. Minimal locking is used to get/refill/evict entries in a thread-safe manner, using ConcurrentLinkedQueue and semaphores.
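A minimal sketch of such a store, assuming the class and method names (they are illustrative, not OpenSearch APIs). The UUID generation here is a placeholder: the real store would only enqueue IDs that actually hash to the given shard, which is the expensive work the background refill amortizes.

```java
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.Semaphore;

public class DocIdStore {
    private final ConcurrentHashMap<Integer, ConcurrentLinkedQueue<String>> idsByShard =
            new ConcurrentHashMap<>();
    // Single permit: at most one refill runs at a time; callers that lose
    // the race simply skip, keeping locking minimal on the hot path.
    private final Semaphore refillPermit = new Semaphore(1);

    // Called by a background thread. In the real design the count would be
    // a function of shard throughput; it is a plain parameter here.
    public void refill(int shardId, int count) {
        if (!refillPermit.tryAcquire()) {
            return; // another refill is in flight
        }
        try {
            ConcurrentLinkedQueue<String> q =
                    idsByShard.computeIfAbsent(shardId, s -> new ConcurrentLinkedQueue<>());
            for (int i = 0; i < count; i++) {
                // Placeholder: real IDs must be pre-filtered to route to shardId.
                q.add(UUID.randomUUID().toString());
            }
        } finally {
            refillPermit.release();
        }
    }

    // Assign a pre-computed ID; null signals the brute-force fallback.
    public String nextId(int shardId) {
        ConcurrentLinkedQueue<String> q = idsByShard.get(shardId);
        return q == null ? null : q.poll();
    }
}
```

ConcurrentLinkedQueue gives lock-free poll/add on the hot path, while the semaphore only gates the comparatively rare refill.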
The above approaches need to be benchmarked for both the one-slow-shard and many-slow-shards cases. Measure indexing speed, CPU and memory usage (on the coordinator node), storage efficiency, and lookup speed. As another dimension, measure cluster throughput and rejections.
Related component
Indexing:Performance
Describe alternatives you've considered
Biased Hash Function: Currently in OpenSearch, the routing shard is tightly coupled to the doc ID. The generated doc ID is a UUID, required to be collision-free for practical purposes. The doc ID is run through a Murmur3 hash (non-cryptographic) and then a mod function to produce the integer shard ID, distributing documents uniformly across the shards. We want to explore whether it is possible to maintain these two properties (uniqueness and uniform distribution) while biasing routing.
One of the simplest approaches is to encode the (randomly selected) shard ID in the doc ID, along with the UUID, and forgo the Murmur3 hash-and-mod calculation for the routing shard:
<encode_version>:<base36_shard_id>:<document_id>
Update-by-doc-_id and get-by-ids queries can be handled at the client communication layer by returning the doc ID as the concatenated string value. On the coordinator node's transport layer, a decoder extracts the version and shard ID and routes the request to that specific shard. Since there is no hash calculation, document routing is fast, with no footprint on CPU cycles or the JVM. However, there are drawbacks to this approach:
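The encoding scheme above can be sketched as follows; the class and method names are hypothetical, not taken from the POC branch.

```java
import java.util.UUID;

public class EncodedDocId {
    static final int VERSION = 1;

    // <encode_version>:<base36_shard_id>:<document_id>
    static String encode(int shardId, String rawId) {
        return VERSION + ":" + Integer.toString(shardId, 36) + ":" + rawId;
    }

    // The coordinator-side decoder: extract the shard ID directly,
    // with no hash computation. Limit the split to 3 so the raw ID
    // could itself contain ':' without breaking parsing.
    static int decodeShardId(String docId) {
        String[] parts = docId.split(":", 3);
        return Integer.parseInt(parts[1], 36);
    }

    public static void main(String[] args) {
        String id = encode(42, UUID.randomUUID().toString());
        System.out.println(id + " routes to shard " + decodeShardId(id));
    }
}
```

Base-36 keeps the shard component short (shard 42 encodes as "16") while remaining trivially parseable with Integer.parseInt.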
The table below summarizes the performance trade-offs of the approaches discussed:
Additional context
No response