[Enhancement] split chunk of HashTable (backport #51175) #52117
Closed
## Why I'm doing:

`JoinHashTable::build_chunk` is a `Chunk` that contains all the data from the build side, so it can be very large in particular cases. As a result, it can easily hit a memory allocation failure when jemalloc/the OS cannot allocate a large contiguous block of memory, as in the exception above. The particular cases can be:
## What I'm doing:

Split that chunk into multiple smaller segments (usually 131072 rows each) to get rid of this issue:
- Introduce `SegmentedChunk` and `SegmentedColumn` to replace the original `Chunk` and `Column`.
- Locate a record inside a segment via `offset % segment_size`, rather than maintaining an index for it. This is effective enough with a static segment_size.
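A minimal sketch of this segment-then-record addressing, assuming a fixed segment size of 131072 rows (all names here are illustrative, not the real `SegmentedColumn` API):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical sketch: a segmented column stores fixed-size segments and
// locates a row with offset / segment_size (segment id) and
// offset % segment_size (row within the segment), so no per-row index
// needs to be maintained.
constexpr std::size_t kSegmentSize = 131072; // rows per segment, as in the PR

class SegmentedInt64Column {
public:
    void append(std::int64_t v) {
        if (segments_.empty() || segments_.back().size() == kSegmentSize) {
            // Start a new segment; each allocation stays bounded in size,
            // avoiding one huge contiguous block.
            segments_.emplace_back();
            segments_.back().reserve(kSegmentSize);
        }
        segments_.back().push_back(v);
        ++size_;
    }

    std::int64_t get(std::size_t offset) const {
        // Segment-then-record lookup: two indexings instead of one.
        return segments_[offset / kSegmentSize][offset % kSegmentSize];
    }

    std::size_t size() const { return size_; }

private:
    std::vector<std::vector<std::int64_t>> segments_;
    std::size_t size_ = 0;
};
```

Because the segment size is a static constant, the division and modulo can be reduced to shifts/masks by the compiler, which is why a static segment_size keeps this cheap.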
## Potential downsides and considerations of this approach:

- In `JoinHashMap`, we need to randomly copy data from the `build_chunk` according to `build_index`. With `SegmentedChunk`, since the memory is no longer contiguous, we have to look up the segment first and then the record within it. To mitigate this, we use `SegmentedChunkVisitor` to reduce the overhead by eliminating virtual function calls.
- The `key_column` of `JoinHashMap` cannot use the columns of `build_chunk` anymore, since their memory layouts differ: `key_column` uses a contiguous column, while `build_chunk` is segmented. This introduces some memory overhead and memory-copy overhead.
- Why not make `key_column` segmented as well? The overhead is relatively larger for the probe procedure, and it would require changing a lot of code, which is beyond the scope of this PR, so we choose the easy path.
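The random-copy path described above can be sketched as follows. This is a hypothetical illustration (names are not the actual StarRocks API): the visitor idea is to resolve the concrete column type once, so the per-row gather loop runs without virtual dispatch and the segment/record arithmetic can be inlined.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustrative stand-in for a segmented column's typed storage.
struct SegmentedData {
    std::vector<std::vector<std::int64_t>> segments; // fixed-size segments
    std::size_t segment_size;                        // e.g. 131072
};

// Gather the rows listed in build_index into a flat output buffer.
// Each row costs two indexings (segment, then record in segment); this is
// the extra random-access overhead compared to a contiguous column.
inline std::vector<std::int64_t> gather(
        const SegmentedData& col,
        const std::vector<std::uint32_t>& build_index) {
    std::vector<std::int64_t> out;
    out.reserve(build_index.size());
    for (std::uint32_t offset : build_index) {
        out.push_back(col.segments[offset / col.segment_size]
                                  [offset % col.segment_size]);
    }
    return out;
}
```

One possible direction for the sequential-access optimization mentioned below is to process `build_index` in segment order so consecutive reads touch the same segment; that is speculation on my part, not something this PR implements.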
## Performance

The `bench_segmented_chunk_clone` is still slower than the regular `chunk_clone`, mostly because of the unpredictable random memory access during the copy. Considering that it helps memory allocation, I think it's worth doing. We can further optimize performance by making the memory access more sequential.
Fixes #issue
What type of PR is this:
Does this PR entail a change in behavior?
If yes, please specify the type of change:
Checklist:
Bugfix cherry-pick branch check:
This is an automatic backport of pull request #51175 done by [Mergify](https://mergify.com).