[Feature Request] Improve BKD Tree DocIds Encoding for 24 and 32 bit variations #13686
Comments
@peternied I will run some OSB benchmarks with my changes to Lucene core and update here with the findings.
Performed a POC on OS 2.13 (Lucene 9.10) by prefix-encoding the docIds for the 24 and 32 bit cases.
Baseline OS 2.13 => 5.603 GB
Prefix Encoded OS 2.13 => 4.663 GB
I am seeing poor performance across different queries, and the reason looks like CPU branch mis-prediction. Async-profiler command used: …
Flamegraph of CPU branch misses for baseline OS 2.13 (overall branch mis-prediction: …). Flamegraph of CPU branch misses for prefix-encoded OS 2.13 (overall branch mis-prediction: …). I will try to explore changing the decoding logic for unpacking the prefix-encoded docIds so that it does not use any conditional (if-else) statements (and will try changing the encoding later if this is successful).
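The branch-free direction could look something like the following. This is a hypothetical sketch, not Lucene's actual code: the method name `decodePrefixed`, the LSB-first packed layout, and the extra padding long at the end of `packed` are all my assumptions. The point it illustrates is that a read crossing a 64-bit word boundary can be handled with a double shift instead of an if-else, so the loop body contains no data-dependent branches.

```java
// Hypothetical sketch (assumed layout, not Lucene's implementation):
// suffixes of `suffixBits` bits are packed LSB-first into `packed`, which is
// allocated with one extra padding long so packed[word + 1] is always valid;
// `prefix` already holds the shared high bits.
static void decodePrefixed(long[] packed, int suffixBits, int prefix, int[] docIds) {
    final long mask = (1L << suffixBits) - 1;
    for (int i = 0; i < docIds.length; i++) {
        final long bitPos = (long) i * suffixBits;
        final int word = (int) (bitPos >>> 6);
        final int shift = (int) (bitPos & 63);
        // Branch-free cross-word read: Java masks shift distances to 6 bits,
        // so when shift == 0 the double shift (<< 1 then << 63) pushes the
        // high word out entirely instead of needing an if-else.
        final long lo = packed[word] >>> shift;
        final long hi = (packed[word + 1] << 1) << (63 - shift);
        docIds[i] = prefix | (int) ((lo | hi) & mask);
    }
}
```

Each iteration uses only shifts, masks, and ORs, so the branch predictor only has to track the loop condition itself.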
TL;DR
I tried 2 more approaches of decoding the docIds.

DETAILS
All the approaches tried so far take longer than the gains achieved by reading less data from disk. The configuration of … For the range query benchmark in … Please note the percentages below are only for a sub-section of the flamegraph (the biggest part) but can be seen throughout.

EXISTING
It takes around 15% for the …

STORING COMMON PREFIX
It takes around 11% for the …
It takes around 9% exclusively for decoding the packed docIds after the leaf blocks are read into memory (worse).

Considering these, the 4% gain from loading less data from disk in the common-prefix approach is getting offset by the roughly 5% taken to decode the docIds. On closer inspection of the existing implementation, the range-query docId traversal is performed using … This exclusive operation of … I am purposely ignoring the for-loop operators, as they seem to have negligible overhead (verified by breaking down …).
So this leads to the conclusion that unless we change the decoding to use only 1-2 bitwise/fast arithmetic operators (like +/-) per docId (not sure if that is possible with the common-prefix approach), the decoding overhead will cancel out the performance gains achieved by reading less data from disk. All the new changes since the last update can be found in this commit.
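For contrast, the cheapest existing path sets the cost budget the comment above describes: delta-16 decoding costs roughly one arithmetic operation per docId. A simplified conceptual sketch (assumed layout; not Lucene's exact `readDelta16`):

```java
// Conceptual sketch of delta-16 style decoding (assumed, simplified layout):
// docIds in a leaf are stored as 16-bit offsets from the leaf's minimum docId,
// so reconstruction is a single add per docId with no branches.
static void readDelta16(char[] deltas, int minDocId, int[] docIds) {
    for (int i = 0; i < docIds.length; i++) {
        docIds[i] = minDocId + deltas[i]; // one add per docId
    }
}
```

Any common-prefix decoder that wants to keep its disk-read savings has to get close to this per-docId operation count.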
Opened apache/lucene#13521 in Lucene to introduce BPV_21 for BKD Tree DocId encodings.
Is your feature request related to a problem? Please describe
Lucene represents the docIds using 24 bits if the maximum docId within a leaf is <= 16,777,215; otherwise it represents each docId as a regular 32-bit integer without any encoding. A Lucene segment can have around 2 billion docs, and a significant number of docIds may end up in the 24-bit and 32-bit representations.
Most of the time in range queries is spent collecting docIds in readDelta16, readInts24, and readInts32.
If we can reduce the number of bytes used to represent docIds, we can improve read (search) time, especially for range queries over larger ranges, and reduce the index size.
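For reference, the 24-bit path conceptually stores each docId in 3 bytes instead of 4. A simplified sketch of that idea (assumed big-endian byte layout; not Lucene's exact implementation, which operates on larger words):

```java
// Simplified sketch of 24-bit docId packing (assumed layout, not Lucene's
// exact code): each docId <= 0xFFFFFF is stored big-endian in 3 bytes.
static void writeInts24(int[] docIds, byte[] out) {
    for (int i = 0; i < docIds.length; i++) {
        int v = docIds[i];
        out[i * 3]     = (byte) (v >>> 16);
        out[i * 3 + 1] = (byte) (v >>> 8);
        out[i * 3 + 2] = (byte) v;
    }
}

static void readInts24(byte[] in, int[] docIds) {
    for (int i = 0; i < docIds.length; i++) {
        docIds[i] = ((in[i * 3] & 0xFF) << 16)
                  | ((in[i * 3 + 1] & 0xFF) << 8)
                  |  (in[i * 3 + 2] & 0xFF);
    }
}
```

This is where a common prefix across a leaf would let the suffix shrink below 24 bits per docId.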
Some existing tickets that have already discussed this area:
Describe the solution you'd like
I was trying to find the common prefix amongst the docIds when they are represented using the 24 and 32 bit variations, based on the NYC Taxi data for the field fare_amount with around 20 million docs. I found that a lot of leaf blocks represented using 24 bits had common prefixes (from the MSB/leftmost bit) ranging from 8 bits to 15 bits. Having 8 bits in common doesn't really help the encoding, as the docIds would still be represented using 24 bits. But having 15 common bits can save 7 bits per docId; in a leaf of 512 docIds this reduces the bytes used per leaf block by 445: ((512 * 7) / 8) = 448 bytes, minus 3 bytes used to represent the common prefix as a VInt.
Similarly, when performing a force merge on the segment, Lucene used the 32-bit representation for around 1000 leaf blocks, and I saw common prefixes ranging from 7 to 15 bits amongst them. The best case of a 15-bit common prefix would save 956 bytes per leaf block.
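The savings arithmetic from the two paragraphs above can be written out directly. The 3-byte VInt overhead for the 24-bit case is stated above; the 4-byte overhead I use for the 32-bit case is an assumption on my part, chosen so the result matches the 956-byte figure quoted:

```java
// Bytes saved per leaf block: drop `savedBitsPerDoc` bits from each of
// `docsPerLeaf` docIds, minus the bytes spent storing the common prefix.
static int savedBytesPerLeaf(int docsPerLeaf, int savedBitsPerDoc, int prefixOverheadBytes) {
    return (docsPerLeaf * savedBitsPerDoc) / 8 - prefixOverheadBytes;
}
```

With the numbers above: `savedBytesPerLeaf(512, 7, 3)` gives 445 bytes for the 24-bit case, and `savedBytesPerLeaf(512, 15, 4)` gives 956 bytes for the 32-bit case.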
I am planning to introduce a new encoding for the 24 and 32 bit representations that also stores the common prefix, and to benchmark the changes in indexing and search performance.
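A minimal sketch of the prefix-detection step such an encoder would need (a hypothetical helper of my own, not code from the proposed patch): XOR every docId in the leaf against the first one, since any bit that differs anywhere cannot be part of the common prefix.

```java
// Hypothetical helper (not from the Lucene patch): number of shared leading
// bits within the bpv-bit (24 or 32) representation of a leaf's docIds.
static int commonPrefixBits(int[] docIds, int bpv) {
    int diff = 0;
    for (int d : docIds) {
        diff |= d ^ docIds[0]; // accumulate every bit position that differs
    }
    int suffixBits = 32 - Integer.numberOfLeadingZeros(diff); // bits the suffix needs
    return bpv - suffixBits;
}
```

For example, it returns 15 for a leaf whose 24-bit docIds share their top 15 bits, leaving 9-bit suffixes plus a VInt-encoded prefix.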
Related component
Search:Performance
Describe alternatives you've considered
No response
Additional context
No response