io_queue: Oversubscribe to drain imbalance faster
The natural lack of cross-shard fairness may lead to a nasty imbalance problem.
When a shard gets lots of requests queued (a spike) it will try to drain its
queue by dispatching requests on every tick. However, if all other shards have
something to do, so that the disk capacity is close to being exhausted, the
overloaded shard has little chance to drain itself, because on every tick it
only gets its "fair" share of capacity tokens, which is capacity/smp::count,
and that's it.

To drain the overloaded queue, a shard needs to get more capacity tokens than
the other shards. This increases the pressure on the other shards, of course,
"spreading" one shard's queue among the others and thus reducing the average
latency of requests.

When increasing the amount of grabbed tokens there are two pitfalls to avoid.
Both come from the fact that under the described circumstances the shared
capacity is likely all exhausted and shards are "fighting" for tokens in the
"pending" state -- i.e. they line up in the shared token bucket for _future_
tokens that will appear there eventually as requests complete. So:

1. If the capacity is all claimed by shards and shards continue to claim more,
   they end up in the "pending" state: they grab extra tokens from the shared
   capacity and "remember" their position in the shared queue where they will
   get them. Thus, if an urgent request arrives at a random shard, in the
   worst case it has to wait for this whole over-claimed line before it can be
   dispatched. Currently the maximum length of the over-claimed queue is
   limited to one request per shard, which roughly corresponds to the
   io-latency-goal. Claiming _more_ than that would violate this goal by the
   amount of over-claimed tokens, so the over-claim shouldn't be too large.

2. When increasing the pressure on the shared capacity, a shard has no idea
   whether any other shard is doing the same. This means a shard should avoid
   increasing the pressure "just because"; there should be a clear yes/no
   reason for doing it, so that only "overloaded" shards try to grab more. If
   all shards suddenly entered this aggressive state they would compensate
   each other, but according to p.1 the worst-case preemption latency would
   grow too high.

With the above two points at hand, the proposed solution is to

a. over-claim at most one (1) request from the local queue;
b. start over-claiming once the local queue length goes above some threshold,
   and apply hysteresis on exiting this state to avoid resonance.

The thresholds in this patch are pretty much arbitrary -- 12 and 8 -- and that
is its biggest problem.
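For illustration, here is a minimal C++ sketch of the per-shard decision
described above: a hysteresis on the local queue length (entering the
aggressive state above 12, leaving it below 8) that permits claiming tokens
for at most one extra request per tick. The class and member names
(`oversubscribe_policy`, `extra_requests`) are hypothetical and do not match
the actual Seastar fair_queue code touched by this patch.

```
#include <cstddef>

// Hypothetical per-shard helper showing the enter/exit hysteresis and the
// "at most one extra request" rule. The thresholds mirror the commit text
// (12 / 8); this is not the seastar::fair_queue API.
class oversubscribe_policy {
    static constexpr size_t enter_threshold = 12; // start over-claiming above this queue length
    static constexpr size_t exit_threshold  = 8;  // stop over-claiming once back below this
    bool _oversubscribed = false;

public:
    // Called once per dispatch tick with the current local queue length.
    // Returns how many *extra* requests' worth of tokens the shard may
    // claim on top of its fair capacity/smp::count share.
    size_t extra_requests(size_t queued) {
        if (!_oversubscribed && queued > enter_threshold) {
            _oversubscribed = true;   // queue has built up -- get aggressive
        } else if (_oversubscribed && queued < exit_threshold) {
            _oversubscribed = false;  // hysteresis: don't flap around a single value
        }
        // Pitfall 1: over-claiming lengthens the shared "pending" line,
        // so the over-claim is capped at a single request.
        return _oversubscribed ? 1 : 0;
    }
};
```

The gap between the two thresholds keeps a shard from oscillating in and out
of the aggressive state when its queue length hovers around one value, which
is the "resonance" the hysteresis is meant to avoid.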
The issue can be reproduced with the help of recent io-tester over a
/dev/null storage :)

The io-properties.yaml:

```
disks:
  - mountpoint: /dev/null
    read_iops: 1200
    read_bandwidth: 1GB
    write_iops: 1200
    write_bandwidth: 1GB
```

The jobs conf.yaml:

```
- name: latency_reads_1
  shards: all
  type: randread
  data_size: 1GB
  shard_info:
    parallelism: 80
    rps: 1
    reqsize: 512
    shares: 1000

- name: latency_reads_1a
  shards: [0]
  type: randread
  data_size: 1GB
  shard_info:
    parallelism: 10
    limit: 100
    reqsize: 512
    class: latency_reads_1
```

Running it with 1 io group and 12 shards results in shard 0 suffering from a
never-draining queue and huge final latencies:

    shard  p99 latency (usec)
        0:            1208561
        1:              14520
        2:              17456
        3:              15777
        4:              15488
        5:              14576
        6:              19251
        7:              20222
        8:              18338
        9:              21267
       10:              17083
       11:              16188

With this patch applied, shard 0 scatters its queue among the other shards
within several ticks, lowering its latency at the cost of the other shards'
latencies:

    shard  p99 latency (usec)
        0:             108345
        1:             102907
        2:             106900
        3:             105244
        4:             109214
        5:             107881
        6:             114278
        7:             114289
        8:             113560
        9:             105411
       10:             113898
       11:             112615

However, the longer the test runs, the lower the latencies become in the
second run (and in the first one too, except on shard 0).

refs: #1083

Signed-off-by: Pavel Emelyanov <[email protected]>