Is your feature request related to a problem?
With the introduction of the Lucene-compatible loading layer within NativeMemoryLoadStrategy, IndexLoadStrategy.load() takes care of loading the graph file into the native memory cache using IndexInput.
The synchronized block that handles cache sizing creates a premature bottleneck during the memory load, especially in the concurrent segment search case, where multiple threads are forced to serialize on graph load operations.
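To make the bottleneck concrete, here is a minimal sketch of the current pattern (class and method names are simplified stand-ins, not the plugin's actual code): because the load runs under one lock, threads loading entirely different graph files still execute one at a time.

```java
import java.util.HashMap;
import java.util.Map;

class NativeMemoryCacheSketch {
    private final Map<String, Long> cache = new HashMap<>();

    // All callers serialize here, even when they are loading different graphs.
    synchronized long get(String key) {
        return cache.computeIfAbsent(key, k -> loadGraph(k));
    }

    // Stand-in for IndexLoadStrategy.load(): reads the graph file via
    // IndexInput and maps it into native memory -- potentially slow I/O
    // that is held inside the synchronized section.
    private long loadGraph(String key) {
        return key.hashCode(); // fake native address, for the sketch only
    }
}
```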
What solution would you like?
An ideal solution would ensure that the preload into memory and any preliminary operations (e.g. downloading segments in the case of remote store or searchable snapshots, or checksumming) are performed outside the synchronized block, allowing for better parallelism.
A suggested approach would be to add a new API, ensureLoadable, to NativeMemoryEntryContext, which would make sure that the graph file is accessible and ready to be loaded into memory once space is available.
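A rough sketch of what such a hook could look like (the interface name, method bodies, and the FileEntryContext example are assumptions for illustration; only ensureLoadable and NativeMemoryEntryContext come from the proposal):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical shape of the proposed hook.
interface EntryContextSketch {
    void ensureLoadable() throws IOException; // runs outside the synchronized block
    long load() throws IOException;           // runs inside it, once space is available
}

class FileEntryContext implements EntryContextSketch {
    private final Path graphFile;
    private boolean ready;

    FileEntryContext(Path graphFile) {
        this.graphFile = graphFile;
    }

    @Override
    public void ensureLoadable() throws IOException {
        // Stand-in for the preliminary work: e.g. downloading the segment
        // from a remote store / searchable snapshot and checksumming it.
        if (!Files.exists(graphFile)) {
            throw new IOException("graph file not accessible: " + graphFile);
        }
        ready = true;
    }

    @Override
    public long load() throws IOException {
        if (!ready) {
            throw new IllegalStateException("ensureLoadable() was not called");
        }
        return Files.size(graphFile); // stand-in for the native mapped address
    }
}
```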
What alternatives have you considered?
N/A
Do you have any additional context?
N/A
My proposal is to refactor the load functionality into two steps:
1. preload, which will happen outside the synchronized block.
2. load, which will use the JNI service to get the mapped address of the graph file and proceed to createIndexAllocation. (This will still happen inside the synchronized block.)
What this achieves is that the index is loadable in all scenarios:
- For a regular index that is readily available locally, there will be no change in behavior.
- For the remote-store and searchable-snapshots cases, preload will ensure that the data is downloaded before the load phase runs.
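The two-step flow above could be sketched as follows (a simplified illustration with hypothetical names; the real cache sizing and JNI plumbing are elided): only the cheap map-and-allocate step holds the lock, while preload runs concurrently across threads.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.LongSupplier;

class TwoPhaseLoader {
    private final Map<String, Long> cache = new ConcurrentHashMap<>();

    long getOrLoad(String key, Runnable preload, LongSupplier load) {
        Long cached = cache.get(key);
        if (cached != null) {
            return cached; // regular index already cached: behavior unchanged
        }
        // Step 1: preload (download / checksum) with no lock held, so
        // concurrent segment searches prepare their graphs in parallel.
        preload.run();
        // Step 2: only the map-and-allocate step (get the JNI mapped
        // address, create the index allocation) runs under synchronization.
        synchronized (this) {
            return cache.computeIfAbsent(key, k -> load.getAsLong());
        }
    }
}
```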