Optimize the Database Search API #2668
Comments
The DatabaseImpl.searchForwardReferencedResources and DatabaseImpl.searchReverseReferencedResources functions can also benefit from the same optimization and should be included.
Very excited to have this merged in the future :D
This seems like an interesting approach. What are the memory implications of creating a new Parser per iteration?
I have a few questions about the use case for this issue:
@joiskash I've added the results from the benchmarking to the PR - see link here. The resources do not contain binary data. We can investigate the chunking approach; however, since the objective is to reduce the performance hit incurred when mapping from serialized JSON to the corresponding FHIR Resource object, we might not see an improvement that way.
I suppose, though, if the idea is to use chunking to change the overall approach in terms of improving the UX, then that works. We have implemented batching on our record fetches (10 records) using pagination for registers and infinite scrolling, e.g. for searches.
Looks like creating a new JsonParser for each iteration should not be a concern after all (provided the parent FhirContext has already been created), as mentioned in the comment here.
I will go ahead and update the PR with this variant of the optimization.
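For reference, the HAPI FHIR pattern the linked comment alludes to, sketched here assuming an R4 context (the `parse` helper is illustrative, not SDK API):

```kotlin
import ca.uhn.fhir.context.FhirContext
import org.hl7.fhir.r4.model.Resource

// FhirContext is the expensive object: build it once and share it.
val fhirContext: FhirContext = FhirContext.forR4()

// newJsonParser() is cheap, but parsers are not thread-safe,
// so creating one per call (or per coroutine) is the safe pattern.
fun parse(serializedJson: String): Resource =
    fhirContext.newJsonParser().parseResource(serializedJson) as Resource
```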
Is your feature request related to a problem? Please describe.
Did some profiling on this DatabaseImpl method here, which contains the code below:
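(The snippet itself did not survive the page export; a minimal sketch of the sequential mapping pattern being described — illustrative names, not the actual DatabaseImpl source — would be:)

```kotlin
import ca.uhn.fhir.context.FhirContext
import org.hl7.fhir.r4.model.Resource

// Illustrative sketch only. Each row's serialized JSON is deserialized
// one at a time, so total cost grows linearly with the rows returned.
fun mapResults(serializedResources: List<String>, context: FhirContext): List<Resource> {
    val parser = context.newJsonParser()
    return serializedResources.map { json -> parser.parseResource(json) as Resource }
}
```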
The mapping block takes a lot of time since each serialized resource has to be deserialized sequentially for every element in the list returned from the database. The time taken is linear in the number of results: the more rows the database returns, the longer the API takes to return.
Describe the solution you'd like
It should be possible to optimize this block with a parallelized implementation, e.g. using coroutines to launch an async task within each iteration and then awaiting the collected results, for a significant improvement.
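A minimal sketch of that idea, assuming HAPI FHIR and kotlinx.coroutines (`mapResultsParallel` is an illustrative name, not SDK API):

```kotlin
import ca.uhn.fhir.context.FhirContext
import kotlinx.coroutines.async
import kotlinx.coroutines.awaitAll
import kotlinx.coroutines.coroutineScope
import org.hl7.fhir.r4.model.Resource

// Launch one async deserialization per row and await them all.
// Each coroutine gets its own parser because IParser is not thread-safe.
suspend fun mapResultsParallel(
    serializedResources: List<String>,
    context: FhirContext,
): List<Resource> = coroutineScope {
    serializedResources
        .map { json ->
            async { context.newJsonParser().parseResource(json) as Resource }
        }
        .awaitAll()
}
```

Note that `awaitAll` preserves the original list order, so callers see the same ordering as the sequential version.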
Describe alternatives you've considered
Instead of letting the async tasks run on the current dispatcher (probably IO at that point), we could switch to the Default dispatcher, since deserialization is a computationally expensive task. This would, however, require creating a new JsonParser object per iteration since it is not thread-safe.
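That alternative, again as an illustrative sketch rather than the SDK's actual code:

```kotlin
import ca.uhn.fhir.context.FhirContext
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.async
import kotlinx.coroutines.awaitAll
import kotlinx.coroutines.coroutineScope
import org.hl7.fhir.r4.model.Resource

// Deserialization is CPU-bound, so run it on Dispatchers.Default rather
// than the caller's (likely IO) dispatcher. A fresh parser is created
// inside each coroutine because parsers are not thread-safe.
suspend fun mapResultsOnDefault(
    serializedResources: List<String>,
    context: FhirContext,
): List<Resource> = coroutineScope {
    serializedResources
        .map { json ->
            async(Dispatchers.Default) {
                context.newJsonParser().parseResource(json) as Resource
            }
        }
        .awaitAll()
}
```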
Would you like to work on the issue?
Yeah