Disable Python Garbage Collection #142
Conversation
cc'ing @gjoseph92 for visibility
@quasiben mentioned Gabe had turned off GC and this had a notable effect. Interested in seeing if we can reproduce this in our run as well. Also curious what the new bottlenecks are with this change.
Though more generally, if GC's effect is this pronounced, we may want to consider adding this as a config parameter in Distributed, or perhaps take a more active role in controlling when GC happens in the Scheduler. Maybe you or Gabe have thoughts on this :)
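For illustration, a minimal sketch of what such a knob could look like; note the config key `distributed.scheduler.disable-gc` is hypothetical and does not exist in Distributed today:

```python
# Hypothetical sketch only: "distributed.scheduler.disable-gc" is NOT a real
# Distributed config option; this just shows how such a switch could be wired
# into scheduler startup via dask.config.
import gc

import dask.config

if dask.config.get("distributed.scheduler.disable-gc", default=False):
    gc.disable()         # stop automatic collections entirely
    gc.set_threshold(0)  # also zero the gen0 allocation threshold
```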
@jakirkham I'll post results and my code soon, but this is what I was doing:

```python
def disable_gc():
    # https://github.com/benfred/py-spy/issues/389#issuecomment-833903190
    import gc
    gc.disable()
    gc.set_threshold(0)

print("Disabling GC on scheduler")
client.run_on_scheduler(disable_gc)
```

I think the …
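(A quick way to confirm the change took effect is to query GC state on the scheduler afterwards; a minimal sketch, assuming the same connected `client`:)

```python
# Sanity check: report the scheduler's GC state after disabling it.
def gc_status():
    import gc
    return {"enabled": gc.isenabled(), "thresholds": gc.get_threshold()}

print(client.run_on_scheduler(gc_status))
# expected something like: {'enabled': False, 'thresholds': (0, 10, 10)}
```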
That'll probably save us a couple days of confusion!
I doubt that disabling GC is a solution per se. (The Instagram post isn't really a good comparison, because their case is very different from ours: they have lots of forked child processes and are trying to retain shared memory pages between the children and the parent, and those pages would otherwise be dirtied by the GC mutating its bookkeeping pointers on objects.)

The main reason I disabled GC initially is that it can lead to skewed profiling results: whatever function happens to be running when GC pauses the world will show up as having a hugely exaggerated runtime, and you might end up going off and trying to optimize some code that profiling accuses of being slow, when in fact the code itself wasn't the issue at all.
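One way to quantify those stop-the-world pauses instead of guessing is CPython's `gc.callbacks` hook, which fires at the start and stop of every collection. A small sketch (the 10 ms reporting threshold is arbitrary):

```python
# Sketch: time each GC pass via gc.callbacks to see how long the world is
# actually stopped, and on which generation.
import gc
import time

_start = None

def _gc_timer(phase, info):
    global _start
    if phase == "start":
        _start = time.perf_counter()
    elif phase == "stop" and _start is not None:
        pause = time.perf_counter() - _start
        if pause > 0.01:  # only report pauses longer than 10 ms
            print(f"GC gen{info['generation']} pause: {pause * 1000:.1f} ms")

gc.callbacks.append(_gc_timer)
```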
Thanks for the update Gabe 😄
Eager to see them! 😁
Ah good point. Added 👍
Got it. Yep have read this post before. Wouldn't have guessed it would be relevant to our use case.
Interesting. Does this only happen at …?
Likewise, which is what I was hinting at near the end. Anyways it's good to know GC is part of the problem.
Agreed this sounds important. Though I guess I'm not too surprised, with all the different references to other objects held in the various …
Am interested to see where the time is spent once it is disabled 🙂
It seems MsgPack used to do this, but no longer does (msgpack/msgpack-python@235b928). I'm not seeing any more references to GC, but feel free to point them out if I'm missing them.
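For reference, the idiom in question looks roughly like this. This is an illustrative sketch of disabling GC around a hot path, not msgpack's actual code; `unpack` here is a stand-in for whatever deserializer is being wrapped:

```python
# Sketch: suspend GC around a hot deserialization path so a collection can't
# fire mid-unpack, restoring the previous GC state afterwards.
import gc

def unpack_without_gc(unpack, buf):
    was_enabled = gc.isenabled()
    gc.disable()
    try:
        return unpack(buf)
    finally:
        if was_enabled:
            gc.enable()
```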
Nice find! I didn't even bother to look in msgpack or try without the …
Somewhat tangential, but something that has been on the back of my mind is using … Until now there also hasn't been much value in doing something like this (allocation/deallocation didn't appear to take that much time).

It might be worth profiling memory usage by type to see if these extension objects we are creating for the graph are eating up a lot of space. If we find that to be true, we can pick some …
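A rough way to do that kind of by-type accounting with only the standard library (a sketch; `sys.getsizeof` reports shallow sizes, so treat the totals as a lower bound):

```python
# Sketch: tally GC-tracked objects by type to spot which ones dominate memory.
import gc
import sys
from collections import Counter

counts = Counter()
sizes = Counter()
for obj in gc.get_objects():
    name = type(obj).__name__
    counts[name] += 1
    sizes[name] += sys.getsizeof(obj)

for name, size in sizes.most_common(10):
    print(f"{name}: {counts[name]} objects, {size / 1e6:.1f} MB (shallow)")
```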
Results can be seen in issue #144.
There doesn't appear to be an environment variable to disable garbage collection, so this disables garbage collection in `sitecustomize.py` to ensure it is off at process startup. Though we will need to reinstall this file on the machine running the benchmarks to get it to work.
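For concreteness, the file can be as small as this (a sketch; any `sitecustomize.py` on `sys.path` is imported automatically at interpreter startup, so GC is off before any user code runs):

```python
# sitecustomize.py -- imported automatically at interpreter startup.
import gc

gc.disable()         # turn off automatic garbage collection
gc.set_threshold(0)  # also zero the gen0 threshold, matching the snippet above
```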