[Tiered Caching] [META] Performance benchmark plan #11464
Comments
@reta @anasalkouz @andrross, I would like to get your feedback!
Thanks @kiranprakash154 for the performance benchmark plan. Here are some of the things you need to consider for a disk-based cache.
Thanks for your comments, @anasalkouz.
Yes, we limit the size of the disk cache, beyond which it starts evicting. So in terms of space we should not have any issues unless the customer sets the limit very high.
We don't have to handle this separately. The way it works now is that when the in-memory cache (which is already at the node level) fills up, instead of simply evicting from the heap, we evict entries and add them to the disk tier; a rough sketch of that spillover pattern is below.
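To make the mechanism concrete, here is a minimal, hedged sketch of that spillover pattern: a bounded on-heap LRU map whose evicted entries are written into a disk tier instead of being dropped. This is an illustration only, not the actual OpenSearch implementation; the class and the `diskTier` stand-in are hypothetical.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class SpilloverCache<K, V> {
    private final Map<K, V> diskTier; // stand-in for the Ehcache-backed disk cache
    private final Map<K, V> heapTier;

    public SpilloverCache(int maxHeapEntries, Map<K, V> diskTier) {
        this.diskTier = diskTier;
        // Access-ordered LRU map; on overflow, spill the eldest entry to disk
        // instead of discarding it.
        this.heapTier = new LinkedHashMap<K, V>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                if (size() > maxHeapEntries) {
                    diskTier.put(eldest.getKey(), eldest.getValue()); // evict -> disk tier
                    return true;
                }
                return false;
            }
        };
    }

    public void put(K key, V value) {
        heapTier.put(key, value);
    }

    public V get(K key) {
        V v = heapTier.get(key);
        return v != null ? v : diskTier.get(key); // fall through to the disk tier
    }
}
```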
I think scenario 5 should cover the behavior during high disk I/O, regardless of what is causing it. But I get your point; I will also make sure this does not regress the segrep feature.
This issue captures the different performance benchmarks we plan to run as part of evaluating Tiered Caching, which is tracked here.
We will gather feedback from the community to see if we are missing anything that needs to be considered.
Tiered Caching is described here
Goals of Benchmarking
Scenarios
1. Readonly vs Changing data
Mimic log analytics behavior by applying an index rollover policy and performing searches on both the rolled-over (read-only) indices and the actively written index.
2. Concurrent Queries trying to cache into disk based caching
Stress test the tiered cache with continuous search traffic by forcing the requests to be cached. This will help us understand whether there is a limit we should be aware of, or a sweet spot beyond which tiered caching might not make much sense.
Concurrency here is limited by the number of threads in the search threadpool, so we should test different instance sizes (xl/2xl/8xl/16xl), which raise the number of concurrent queries running on a node, and observe how the latency profile changes as concurrency increases. A sketch of such a stress driver is below.
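As a hedged sketch of the stress side of this scenario, the driver below fires concurrent searches with `?request_cache=true`, which forces OpenSearch to cache the search response. The endpoint, index pattern, thread count, and request count are illustrative assumptions, not prescribed benchmark values.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class CacheStressDriver {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        ExecutorService pool = Executors.newFixedThreadPool(32); // assumed concurrency level
        String body = "{\"query\":{\"match_all\":{}},\"size\":0}"; // size:0 keeps the response cacheable
        for (int i = 0; i < 10_000; i++) {
            pool.submit(() -> {
                HttpRequest req = HttpRequest.newBuilder()
                        .uri(URI.create("http://localhost:9200/logs-*/_search?request_cache=true"))
                        .header("Content-Type", "application/json")
                        .POST(HttpRequest.BodyPublishers.ofString(body))
                        .build();
                try {
                    client.send(req, HttpResponse.BodyHandlers.discarding());
                } catch (Exception e) {
                    // individual request failures are acceptable in a stress sketch
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);
    }
}
```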
3. Long Running Performance Test
Run a workload for a long duration to catch issues that only show up over time.
4. Tune parameters of Ehcache
We use Ehcache as the disk cache, and it exposes many parameters for tuning it to specific use cases; we can experiment with these to see how they impact overall performance, such as the parameters shown in the sketch below.
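Here is a minimal, hedged Ehcache 3 configuration illustrating the kinds of knobs this scenario could sweep. The cache name, persistence path, pool sizes, and key/value types are illustrative assumptions, not the defaults used by the tiered caching feature; further tunables such as disk segments and writer concurrency also exist in Ehcache but are omitted here.

```java
import org.ehcache.CacheManager;
import org.ehcache.config.builders.CacheConfigurationBuilder;
import org.ehcache.config.builders.CacheManagerBuilder;
import org.ehcache.config.builders.ResourcePoolsBuilder;
import org.ehcache.config.units.EntryUnit;
import org.ehcache.config.units.MemoryUnit;

public class EhcacheTuningSketch {
    public static void main(String[] args) {
        CacheManager cacheManager = CacheManagerBuilder.newCacheManagerBuilder()
                .with(CacheManagerBuilder.persistence("/tmp/disk-tier")) // where the disk tier lives
                .withCache("request-cache",
                        CacheConfigurationBuilder.newCacheConfigurationBuilder(
                                String.class, byte[].class,
                                ResourcePoolsBuilder.newResourcePoolsBuilder()
                                        .heap(10_000, EntryUnit.ENTRIES) // small on-heap buffer
                                        .disk(1, MemoryUnit.GB, true)))  // disk tier size, persistent
                .build(true);
        // ... run the benchmark workload against the cache, then:
        cacheManager.close();
    }
}
```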
5. Invalidate while adding to disk based cache (High Disk I/O)
Drive a high throughput of search queries that are forced to be cached and, in parallel, trigger frequent invalidations via refreshes, with bulk writes in the mix; a sketch of the invalidation side of this load is below.
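The hedged sketch below covers the invalidation half of this scenario: it interleaves bulk writes with explicit refreshes while the search stress from scenario 2 runs against the same index. The index name, document shape, batch size, and cadence are assumptions for illustration.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class InvalidationLoad {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String bulkLine = "{\"index\":{}}\n{\"msg\":\"log line\"}\n";
        for (int i = 0; i < 600; i++) {
            // Bulk write: changes the index, so cached results become stale.
            send(client, "http://localhost:9200/logs-current/_bulk", bulkLine.repeat(500));
            // Explicit refresh: makes the changes visible and invalidates request-cache entries.
            send(client, "http://localhost:9200/logs-current/_refresh", "");
            Thread.sleep(1_000); // assumed invalidation cadence
        }
    }

    static void send(HttpClient client, String url, String body) throws Exception {
        HttpRequest req = HttpRequest.newBuilder()
                .uri(URI.create(url))
                .header("Content-Type", "application/x-ndjson")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();
        client.send(req, HttpResponse.BodyHandlers.discarding());
    }
}
```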
6. Varying clean up intervals
Mimic a use case with varying refresh intervals; currently the on-heap caches are cleaned up every minute.
What is the behavior when cleanup runs too often versus too rarely?
7. Varying Disk Threshold
Mimic a use case with varying disk thresholds; we have considered keeping 50% as the default. With that threshold, the cache cleaner only cleans the disk cache when the keys to clean up account for at least 50% of the keys on disk (by count, not by space). A sketch of the settings for this and the previous scenario is below.
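The sketch below shows the two cleanup knobs that scenarios 6 and 7 would sweep, expressed as node settings. The exact setting keys are assumptions based on this proposal and should be verified against the shipped feature.

```java
import org.opensearch.common.settings.Settings;

public class CleanupSettingsSketch {
    public static void main(String[] args) {
        Settings settings = Settings.builder()
                // How often the cache cleaner wakes up (scenario 6) -- assumed key.
                .put("indices.requests.cache.cleanup.interval", "1m")
                // Fraction of stale keys (by count) required before the disk tier
                // is actually cleaned (scenario 7) -- assumed key.
                .put("indices.requests.cache.cleanup.staleness_threshold", "0.5")
                .build();
        System.out.println(settings);
    }
}
```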
Test Dimensions
Workload
Concurrent segment search was benchmarked with http_logs and nyc_taxis. We need to generate unique queries to create many cache entries; we will take the existing search queries and introduce randomness into them, as in the sketch below.
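As a hedged sketch of that randomization, the helper below perturbs the bounds of a range query so that each request produces a distinct request-cache entry. The field name and time bounds are assumptions modeled loosely on the http_logs/nyc_taxis workloads.

```java
import java.util.concurrent.ThreadLocalRandom;

public class RandomQueryFactory {
    // Returns a range query over a random window so cache keys rarely repeat.
    static String randomRangeQuery() {
        long start = ThreadLocalRandom.current().nextLong(0, 1_000_000);
        long end = start + ThreadLocalRandom.current().nextLong(1, 10_000);
        return "{\"query\":{\"range\":{\"timestamp\":{\"gte\":" + start
                + ",\"lt\":" + end + "}}},\"size\":0}";
    }

    public static void main(String[] args) {
        System.out.println(randomRangeQuery()); // each call yields a distinct query body
    }
}
```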
Search Clients
OSB (OpenSearch Benchmark) lets you control the number of search clients through a parameter, so client count can be swept as a test dimension.
Shard size
Following our recommendation of shard sizes between 20 and 50 GB
Various Instance types
EBS vs SSD
EBS-based instance types: r5.large, r5.2xlarge, r5.8xlarge
SSD-based instance types: i3.large, i3.2xlarge, i3.8xlarge
Number of cores: few to many
Graviton vs Intel