Debugger: Reduces RAM usage for searches (Act 1) #10893
Conversation
Peak RAM usage is still high, but it's a lot lower than on the prior version (about half the prior peak during most of the search process). This was meant as a quick improvement to reduce the impact in the meantime while I work on a solution that addresses it fully.
The fix wasn't as great as I thought it'd be. I spent a good hour with Visual Studio's heap profiler (which breaks under heavy memory usage) and didn't find a silver bullet.
There is still a copy somewhere: if the results take 3GB, total memory usage for a search peaks at 6GB.
The QMap clear appears to take O(n) time; added on top of the time to search and copy, that is a very long time.
Thankfully, unlike before, clearing the search results now destroys the previous map data instead of leaking it.
I definitely think a vector type will be better here. Discarding the results will be much, much faster.
This will also address the fact that the UI freezes when you change the search type (due to it clearing the m_searchResultsMap).
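To illustrate why discarding a vector is so much cheaper than clearing the map, here is a minimal sketch (the SearchResult alias and discardResults helper are hypothetical stand-ins, not the PR's actual identifiers):

```cpp
#include <cstdint>
#include <vector>

// Hypothetical stand-in for the real result type in the debugger code,
// e.g. a matched address.
using SearchResult = std::uint32_t;

// Discarding a vector of trivially destructible results is essentially a
// single deallocation; a node-based container like QMap must visit and free
// every entry, which is why its clear() shows up as O(n).
void discardResults(std::vector<SearchResult>& results)
{
    // clear() alone keeps the capacity; swapping with an empty temporary
    // releases the allocation immediately.
    std::vector<SearchResult>().swap(results);
}
```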
When you swap over to using something other than a QMap, I might open up the profiler again and take a look.
Otherwise, LGTM.
This will be fixed by switching to a vector; the extra copy happens when loading the results into the UI.
I will do that. I originally didn't swap to it because I had some rougher changes that didn't make as much impact; however, when I worked on this branch, I added some approaches I hadn't tried with the vector version. I'll swap it to using a vector and try to get it cleaned up. It should really just go into this PR anyhow. 👍
@F0bes I have updated it to use a vector. This should have a positive impact on the loading of results as well as the mentioned clearing times. Any feedback would certainly be appreciated.
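The "transferred to the worker and then transferred back" pattern the PR describes can be sketched roughly as follows; filterWorker and the even-value predicate are hypothetical illustrations, not the PR's real code:

```cpp
#include <algorithm>
#include <cstdint>
#include <utility>
#include <vector>

// Hypothetical stand-in for the real result type.
using SearchResult = std::uint32_t;

// The worker takes ownership of the stored results by value (moved in),
// filters them in place, and moves them back out on return, so the
// potentially huge buffer is never duplicated.
std::vector<SearchResult> filterWorker(std::vector<SearchResult> results)
{
    // Example predicate: keep even values only (stands in for whatever
    // search comparison the user selected).
    results.erase(std::remove_if(results.begin(), results.end(),
                                 [](SearchResult r) { return r % 2 != 0; }),
                  results.end());
    return results; // moved out, not copied
}
```

A caller would hand its stored results over with `stored = filterWorker(std::move(stored));`, leaving only one live copy of the buffer at any point.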
Force-pushed from 088405b to 1f99473
I tested and didn't even need to bring out the memory profiler; it works great.
~1GB of memory usage for ~26 million results. As soon as the results are cleared the memory is freed. Good job.
I haven't tested it thoroughly, but so far it looks good to me (just fix the warning please :) ).
Reduces memory usage during the memory search process:
- Minimizes pass-by-copy; uses references to avoid unnecessary copies.
- Deletes the FutureWatcher pointer when no longer needed to avoid leaking memory (thanks Fobes).
- Updates the search results vector in place so that no extra copies are needed.
- Uses std::move to transfer resources instead of keeping two copies.
- No longer keeps a copy of prior search results in addition to the active search; the stored results are transferred to the search worker and then transferred back.
More improvements are to be made, but this is a first pass that should make a decent impact.
Also: prior results are now stored as a vector. Previously, since the results were stored in a hashmap, the .keys() function had to be used to index at an arbitrary point when loading results into the UI, which caused a big spike in memory usage when the result count was particularly large. A vector avoids this, since indexing it allocates no extra memory. Unlike the earlier vector-based version, elements are also removed in place during filter searches, so two vectors are no longer needed.
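The indexing point above is the core of the win: a sketch, with a hypothetical resultForRow helper, of why vector storage avoids the allocation spike when the UI loads rows:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical stand-in for the real result type.
using SearchResult = std::uint32_t;

// With a vector, fetching the result for UI row `row` is constant-time
// indexing and allocates nothing. With the old QMap storage, map.keys()
// had to materialize a full list of every key just to reach an arbitrary
// position, which spiked memory for large result sets.
SearchResult resultForRow(const std::vector<SearchResult>& results, std::size_t row)
{
    return results[row];
}
```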
Fixes crash issues when trying to use a search comparison that requires prior results with the New Search button instead of Filter Search.
Force-pushed from 1f99473 to 64a3dce
Description of Changes
Reduces memory usage during the memory search process.
More improvements are to be made, but this is a first pass that should make a decent impact.
Also fixes a crash that occurred when using search comparisons that require prior results with a New Search: the search is now refused with a message letting the user know that the comparison only works with Filter Search. Thanks @kamfretoz for catching this!
Rationale behind Changes
Storing prior results for debugger searches, in order to evaluate prior-state comparisons (such as increased/decreased/changed), caused a quite noticeable increase in RAM usage that needed to be resolved.
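A minimal sketch of why prior results must be kept at all: each of these comparisons needs the value an address held during the previous search. The Comparison enum and matchesPriorState function are hypothetical names for illustration, not the PR's actual identifiers:

```cpp
#include <cstdint>

// Prior-state comparisons evaluate the current memory value against the
// value recorded by the previous search pass.
enum class Comparison { Increased, Decreased, Changed };

bool matchesPriorState(Comparison cmp, std::uint32_t current, std::uint32_t prior)
{
    switch (cmp)
    {
        case Comparison::Increased: return current > prior;
        case Comparison::Decreased: return current < prior;
        case Comparison::Changed:   return current != prior;
    }
    return false;
}
```

This is also why such comparisons only make sense with Filter Search: a New Search has no prior value to compare against, which is the crash case the PR now guards.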
Suggested Testing Steps
Validate that searches still work as expected, and compare peak RAM usage between the main branch and this branch.