traces include time spent in garbage collector #389
Comments
If it's GC time you might be able to see it by enabling native stack traces - we include certain python functions like GC in the stack traces. Do you have a script that reproduces?

Python objects are ref-counted and the GC will only kick in if you have cyclic references to objects. I'm wondering if this is caused by a different problem like GIL contention instead of GC - unless you're allocating a ton of objects with cyclic references there shouldn't be a large GC pause.

Also worth checking out https://twitter.com/tmr232/status/1387049432319873035 =)
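As a minimal illustration of the cyclic-reference point (a sketch, not code from this thread): ref-counting frees most objects immediately, and only unreachable reference cycles give the cyclic collector any work to do.

```python
import gc

class Node:
    def __init__(self):
        self.other = None

# Two objects that reference each other: their refcounts never reach zero
# on their own, so only the cyclic garbage collector can reclaim them.
a, b = Node(), Node()
a.other, b.other = b, a
del a, b

# Objects without cycles are freed immediately by ref-counting; only
# unreachable cycles like the one above show up in a collection.
print("objects collected:", gc.collect())
```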
yeah, I'm certainly not ruling out user error here :) I've got a long-running application, and one reason to suspect GC is that, each time I profile it, I see it getting "stuck" at a completely different point in the code.
Another datapoint: I'm running with […]
hmm - that does sound suspicious. Can you try disabling GC altogether and see if the problem still happens? Instagram does this https://instagram-engineering.com/dismissing-python-garbage-collection-at-instagram-4dca40b29172 and found some benefits. I'm wondering: if you call something like

    import gc
    gc.disable()
    gc.set_threshold(0)

and then you don't get stuck at any point, you can say with some confidence that it's GC related.
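One way to cross-check before (or instead of) disabling GC entirely is to time collections from inside the process using the standard `gc.callbacks` hook; a minimal sketch:

```python
import gc
import time

_gc_start = {}

def _log_gc(phase, info):
    # Each callback in gc.callbacks is invoked with phase ("start" or "stop")
    # and an info dict containing keys like "generation" and "collected".
    if phase == "start":
        _gc_start["t"] = time.perf_counter()
    elif phase == "stop" and "t" in _gc_start:
        elapsed_ms = (time.perf_counter() - _gc_start.pop("t")) * 1000
        print(f"gc gen{info['generation']}: {elapsed_ms:.1f} ms, "
              f"collected={info['collected']}")

gc.callbacks.append(_log_gc)
```

If those logged pauses line up with the suspicious samples, that points at GC; if they stay tiny, GIL contention or something else is more likely.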
yep. Of course, doing so requires restarting the process, which changes things somewhat. Still, I've tried it, and it certainly looks like the results are more consistent with what I would expect.
yup - that definitely looks like it's a GC issue. I don't quite know how to figure out whether the target process is in a GC cycle right now - this might be pretty tricky to implement =(
This might be naive, but you already inspect the […]
There is a prototype of this at #515 - it will display the percentage of samples spent in the GC. Note that this currently only works on x86_64 linux/mac, with python 3.9+, and doesn't work for profiling subprocesses. If you want to try it out, you can download the wheels from the github actions artifacts here https://github.com/benfred/py-spy/actions/runs/2794391704
Often, when I record a trace, it will contain one or two lines with very large (and not entirely believable) sample counts. I suspect this is because the garbage collector is kicking in.
Whilst it's in theory useful to know that the garbage collector is using CPU time, it's not terribly helpful from the point of view of looking for parts of my app I can optimise.
Would it be possible to detect that the GC is active, and omit such samples from the report?