Fix --block-io #310
Conversation
include/lo2s/monitor/bio_monitor.hpp
Outdated
Empty file?
BlockDevice dev = block_device_for<RecordBlock>(event);

if (sector_cache_.count(dev) == 0 && sector_cache_[dev].count(event->sector) == 0)
Suggested change:
- if (sector_cache_.count(dev) == 0 && sector_cache_[dev].count(event->sector) == 0)
+ if (sector_cache_.count(dev) == 0 || sector_cache_.at(dev).count(event->sector) == 0 || sector_cache_.at(dev).at(event->sector) == 0)
it can't be both, right?
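To spell out the point with a standalone toy example (hypothetical code, not the lo2s sources): with &&, the nested sector lookup only matters when the device key is already missing, so a missing sector on an already-known device is never detected; the suggested || form catches either case.

```cpp
#include <cstdint>
#include <iostream>
#include <map>

int main()
{
    // Stand-in for sector_cache_: device id -> (sector -> size).
    // int replaces BlockDevice purely to keep the example self-contained.
    std::map<int, std::map<std::uint64_t, std::uint64_t>> cache;

    int dev = 8;
    cache[dev][2048] = 4096;           // one sector cached for this device
    std::uint64_t other_sector = 4096; // a sector that was never cached

    // With &&: false as soon as the device exists, so the missing sector goes unnoticed.
    bool with_and = cache.count(dev) == 0 && cache[dev].count(other_sector) == 0;

    // With || (as suggested): true, because the sector lookup is still consulted.
    bool with_or = cache.count(dev) == 0 || cache.at(dev).count(other_sector) == 0;

    std::cout << std::boolalpha << with_and << ' ' << with_or << '\n'; // false true
}
```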
include/lo2s/perf/bio/writer.hpp
Outdated
BlockDevice dev = block_device_for<RecordBlock>(event);

if (sector_cache_.count(dev) == 0 && sector_cache_[dev].count(event->sector) == 0)
Suggested change:
- if (sector_cache_.count(dev) == 0 && sector_cache_[dev].count(event->sector) == 0)
+ if (sector_cache_.count(dev) == 0 || sector_cache_.at(dev).count(event->sector) == 0)
This only doesn't crash because you use operator[].
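As background for this remark, a minimal standalone illustration (not the lo2s code) of the difference: std::map::operator[] silently default-constructs a missing entry, while .at() throws std::out_of_range, so the original check only avoids a crash because operator[] quietly inserts an empty inner map.

```cpp
#include <cstdint>
#include <iostream>
#include <map>
#include <stdexcept>

int main()
{
    std::map<int, std::map<std::uint64_t, std::uint64_t>> cache; // completely empty

    // operator[] inserts an empty inner map for key 8 as a side effect,
    // so the nested count() call "works" even though nothing was ever cached.
    std::cout << cache[8].count(2048) << '\n'; // 0, and cache now contains key 8

    try
    {
        // .at() never inserts; for a key that is not present it throws instead.
        cache.at(9).count(2048);
    }
    catch (const std::out_of_range&)
    {
        std::cout << "cache.at(9) throws: key 9 was never inserted\n";
    }
}
```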
include/lo2s/perf/bio/writer.hpp
Outdated
if (sector_cache_.count(dev) == 0)
{
    sector_cache_.emplace(std::piecewise_construct, std::forward_as_tuple(dev),
                          std::forward_as_tuple(std::map<uint64_t, uint64_t>()));
}
I don't think this is necessary, as you use operator[] and only default arguments.
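A short sketch of what is presumably meant here (hypothetical standalone code): operator[] on the outer map already value-initializes the inner std::map when the device key is missing, so the explicit piecewise_construct emplace adds nothing.

```cpp
#include <cstdint>
#include <iostream>
#include <map>

int main()
{
    std::map<int, std::map<std::uint64_t, std::uint64_t>> sector_cache;

    // No emplace needed: the outer operator[] creates an empty inner map for
    // device 8, and the inner operator[] creates the sector entry.
    sector_cache[8][2048] = 4096;

    std::cout << sector_cache.size() << ' ' << sector_cache[8].size() << '\n'; // 1 1
}
```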
include/lo2s/perf/bio/writer.hpp
Outdated
    sector_cache_.emplace(std::piecewise_construct, std::forward_as_tuple(dev),
                          std::forward_as_tuple(std::map<uint64_t, uint64_t>()));
}
sector_cache_[dev][event->sector] = size;
Suggested change:
- sector_cache_[dev][event->sector] = size;
+ sector_cache_[dev][event->sector] += size;
why not accumulate instead of overwriting?
Why do we want to accumulate here? We want to cache the size of that specific queueing operation.
what happens if there's another request for the same sector?
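To make the question concrete, a toy sequence (illustrative only, not actual tracepoint data): if a second queue event for the same sector arrives before the first one has been issued, overwriting keeps only the latest size, while the suggested += keeps the total.

```cpp
#include <cstdint>
#include <iostream>
#include <map>

int main()
{
    std::map<std::uint64_t, std::uint64_t> sectors; // sector -> cached size

    // Two hypothetical queue events for sector 2048 before it is issued:
    sectors[2048] = 4096;
    sectors[2048] = 8192;               // overwrite: the first 4096 bytes are lost
    std::cout << sectors[2048] << '\n'; // 8192

    sectors.clear();
    sectors[2048] += 4096;
    sectors[2048] += 8192;              // accumulate: both requests are accounted for
    std::cout << sectors[2048] << '\n'; // 12288
}
```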
include/lo2s/perf/bio/writer.hpp
Outdated
handle, size, event->sector);
handle, sector_cache_[dev][event->sector],
        event->sector);
sector_cache_[dev].erase(event->sector);
Suggested change:
- sector_cache_[dev].erase(event->sector);
+ sector_cache_[dev][event->sector] = 0;
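My reading of this pair of suggestions (an assumption, not stated outright in the thread): zeroing the cached size keeps the sector key around as an "already written" marker, so the earlier suggested check sector_cache_.at(dev).at(event->sector) == 0 can skip follow-up events for that sector, whereas erase() would make them indistinguishable from a sector that was never queued. A toy sketch:

```cpp
#include <cstdint>
#include <iostream>
#include <map>

int main()
{
    std::map<std::uint64_t, std::uint64_t> sectors; // sector -> cached size, one device

    sectors[2048] = 4096; // block_bio_queue cached the request size

    // First matching rq event: write it out (not shown), then mark it handled.
    sectors[2048] = 0;    // instead of sectors.erase(2048)

    bool already_written = sectors.count(2048) != 0 && sectors.at(2048) == 0;
    bool never_queued    = sectors.count(4096) == 0;

    std::cout << std::boolalpha << already_written << ' ' << never_queued << '\n'; // true true
}
```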
39cbc61 to 5c9696f
include/lo2s/perf/bio/writer.hpp
Outdated
uint64_t sector; // the accessed sector on the device

uint32_t nr_sector; // the number of sector_cache_ written
// 512) for complete: the error code of the operation
The comment has no verb?
include/lo2s/perf/bio/writer.hpp
Outdated
    sector_cache_.emplace(std::piecewise_construct, std::forward_as_tuple(dev),
                          std::forward_as_tuple(std::map<uint64_t, uint64_t>()));
}
sector_cache_[dev][event->sector] = size;
what happens if there's another request for the same sector?
931a8f8 to c7dcbcb
Due to a bug, when the multi-reader switched from a reader with later events to one with earlier events, a block I/O event from the reader with the later events was discarded. This predictably led to event loss.
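For illustration, a generic sketch of the kind of timestamp-ordered merge involved (hypothetical code, not lo2s's actual multi-reader): a correct merge re-queues the next event of a source after emitting its current one, so nothing is dropped when the minimum switches from one reader to another.

```cpp
#include <cstddef>
#include <cstdint>
#include <functional>
#include <iostream>
#include <queue>
#include <tuple>
#include <vector>

struct Event
{
    std::uint64_t time;
};

int main()
{
    // Two hypothetical per-reader event streams, each sorted by timestamp.
    std::vector<std::vector<Event>> readers = {
        { { 10 }, { 40 } }, // reader 0: later events
        { { 5 }, { 20 } },  // reader 1: earlier events
    };

    // Min-heap of (timestamp, reader index, position within that reader).
    using Entry = std::tuple<std::uint64_t, std::size_t, std::size_t>;
    std::priority_queue<Entry, std::vector<Entry>, std::greater<>> heap;

    for (std::size_t r = 0; r < readers.size(); ++r)
        if (!readers[r].empty())
            heap.emplace(readers[r][0].time, r, std::size_t{ 0 });

    while (!heap.empty())
    {
        auto [time, r, i] = heap.top();
        heap.pop();
        std::cout << "event t=" << time << " from reader " << r << '\n';

        // Re-queue this reader's next event; forgetting the current head when
        // the merge switches readers is exactly the kind of bug described above.
        if (i + 1 < readers[r].size())
            heap.emplace(readers[r][i + 1].time, r, i + 1);
    }
}
```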
Further, the insert event was switched from block_rq_insert to block_bio_queue. The reason for this is that block_rq_insert never matched the amount of data reported by block_rq_issue and block_rq_complete. I could not find another block_rq_* tracepoint that accounts for the block I/O requests that do not enter through block_rq_insert. However, block_bio_queue matches the amount of data that block_rq_issue and block_rq_complete report.
This results in somewhat of a mismatch, as block_bio_* is one level higher up than block_rq_*. This is most notable in that one block_bio_queue "struct bio" is split into multiple (usually less than 10) "struct rq" on the block_rq_* level.
I remedy this situation by only writing the first block_rq_insert/issue event that I encounter (which is the one whose sector matches the block_bio_queue event) and discarding the others.
We might want to discuss whether using the timestamp of the last block_rq_* event would be the more correct variant.
This Fixes #290
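As a summary of the caching scheme described in this comment, a minimal sketch under my own assumptions (hypothetical names; it glosses over the distinction between issue/begin and complete/end events as well as error codes): block_bio_queue caches the size per (device, sector); only the first rq event whose sector matches that cache is written, and the entry is then zeroed so duplicates for the same sector are dropped.

```cpp
#include <cstdint>
#include <iostream>
#include <map>
#include <string>

// Hypothetical stand-ins; only the sector-cache bookkeeping is sketched here.
using Device = int;
std::map<Device, std::map<std::uint64_t, std::uint64_t>> sector_cache;

// block_bio_queue: remember the request size for this (device, sector).
void on_bio_queue(Device dev, std::uint64_t sector, std::uint64_t size)
{
    sector_cache[dev][sector] = size;
}

// block_rq_*: write only events whose sector matches the cached bio; the
// remaining request splits of the same bio have other sectors and are dropped.
void on_rq_event(const std::string& name, Device dev, std::uint64_t sector)
{
    auto dev_it = sector_cache.find(dev);
    if (dev_it == sector_cache.end())
        return;

    auto sec_it = dev_it->second.find(sector);
    if (sec_it == dev_it->second.end() || sec_it->second == 0)
        return; // sector never queued, or already written once

    std::cout << "write " << name << " dev=" << dev << " sector=" << sector
              << " size=" << sec_it->second << '\n';
    sec_it->second = 0; // mark as handled rather than erasing
}

int main()
{
    on_bio_queue(8, 2048, 16384);           // one bio queued, starting at sector 2048
    on_rq_event("block_rq_issue", 8, 2048); // first matching event: written
    on_rq_event("block_rq_issue", 8, 2056); // later split of the same bio: no cache entry, dropped
    on_rq_event("block_rq_issue", 8, 2048); // duplicate for the cached sector: entry is zeroed, dropped
}
```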