Fix exception swallow in fixture due to parent cancel scope #77
Conversation
This test showcases a bug where an exception occurring in a yield fixture is silently swallowed when a parent cancel_scope gets cancelled during teardown.
Codecov Report
@@ Coverage Diff @@
## master #77 +/- ##
==========================================
+ Coverage 99.78% 99.79% +<.01%
==========================================
Files 19 19
Lines 476 483 +7
Branches 41 42 +1
==========================================
+ Hits 475 482 +7
Misses 1 1
@njsmith Could you take a look at this fix?
I think this is the same issue as python-trio/trio#455. An exception is propagating in a cancelled context; some `__aexit__` or `finally` executes a checkpoint; you wind up with a Cancelled propagating with the original exception only attached as its `__context__`.

I'm not super comfortable with the proposed fix -- it feels special-cased against the particular example you've used to demonstrate the problem, rather than fixing the underlying problem.
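As an illustration of that mechanism (this snippet is not from the thread and uses the current `trio.CancelScope` API): a checkpoint inside an already-cancelled scope raises `Cancelled`, which displaces the in-flight exception and is then absorbed by the scope on exit.

```python
import trio

async def main():
    with trio.CancelScope() as scope:
        scope.cancel()
        try:
            raise RuntimeError("original error")
        finally:
            # Checkpoint while the surrounding scope is already cancelled:
            # trio.Cancelled is raised here and takes over, with the
            # RuntimeError demoted to its __context__.
            await trio.sleep(0)
    # The cancel scope catches its own Cancelled on exit, so main() returns
    # normally and the RuntimeError is never reported.

trio.run(main)
```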
I have some doubts about that. Note that the @pytest.fixture protocol is different from the @asynccontextmanager protocol, because the former doesn't throw in exceptions. That means that for an apples-to-apples comparison, the @asynccontextmanager version would have to say
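(The code block from this comment did not survive in this transcript. The following is only a plausible sketch, reusing the hypothetical `die_soon` task from the examples below; the try/except around the yield mimics the pytest fixture protocol of not throwing the test's exception into the fixture.)

```python
from contextlib import asynccontextmanager
import trio

# Hypothetical background task, mirroring the die_soon used elsewhere in
# this thread.
async def die_soon(task_status=trio.TASK_STATUS_IGNORED):
    task_status.started()
    raise RuntimeError("background task crashed")

@asynccontextmanager
async def myfixture():
    async with trio.open_nursery() as nursery:
        await nursery.start(die_soon)
        try:
            yield
        except BaseException:
            # @pytest.fixture never throws exceptions into the fixture at the
            # yield point; discarding them here mimics that protocol (which is
            # normally a very bad idea in trio code).
            pass
```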
at which point I believe the same error will recur, demonstrating that this isn't really a pytest-trio problem.
Thanks for the feedback @oremanj

I didn't know about this issue, but it seems pretty close indeed. My guess is the weird yield in pytest-trio makes the issue more likely to occur there, and that's why I've only encountered this kind of issue in pytest-trio so far.

I totally agree; still, the point is that currently we have tests that swallow errors, which is the worst kind of behavior (it's a pain to detect the bug, then an even bigger pain to track down the cause). So I would say a hacky fix is still better than the current situation...
This is a tough issue. @touilleMan can you share the real example where you ran into this? The problem with the minimal example is that it's artificial, so it isn't obvious what it should do. Arguably it's already working correctly – you explicitly requested that the code inside the cancel scope be cancelled, so in a sense it did what it was told; it's roughly as if the fixture were written:

```python
async def myfixture():
    try:
        async with open_nursery() as nursery:
            await nursery.start(die_soon)
            yield
    except RuntimeError:
        pass
```

So we probably need to do something, but the minimal example doesn't make it obvious what change to make.
@njsmith We encountered this issue multiple times in the unittests of our project Parsec.

One important thing about the client is that it runs its background coroutines (the monitors) inside a nursery opened by a context manager. Now going back to our test: if a monitor is buggy and its coroutine blows up, the exception ends up silently swallowed. I've created a branch in the project that demonstrates this nicely.

The more I think about this issue, the more I wonder about this "fixtures never raise exceptions" policy. It is useful to make trio fixtures work like regular pytest fixtures, but I don't see a good reason for pytest to do this in the first place (I guess the reason is that teardown should happen no matter the outcome of the test, but that's what try/finally is for, and Python is all about consenting adults...).
Hmm, OK, I think I get part of it though. What's anomalous about […]

Ick. So first, yeah, if […]

The simplest solution would be that if a fixture calls […]

If we want to do better, it would be by giving more information to help debug the problem. For something like the […]

There are also cases where you probably can't do any better. For example:

```python
async def silly_fixture():
    async def cancel_me(cancel_scope):
        cancel_scope.cancel()

    async with trio.open_nursery() as nursery:
        nursery.start_soon(cancel_me, nursery.cancel_scope)
        yield
```

In this case, the fixture crashes the test, but there is no exception to report. I guess this is an example of a case where it would be useful if […]

But […]

For fixtures using the […]

In any case where this problem happens, the first step is for the fixture […]

And, it's not just any […]

So I guess the general advice is: […]

Does that sound right?
Well, score one for logic I guess :-). This context manager indeed does an explicit cancel, but not at the end of the nursery: […] So I predict that if you remove 4 levels of indentation from that last line, then it will fix your problem. (I guess you might also need to add a […])
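To make that suggestion concrete, a purely hypothetical before/after sketch (the real Parsec context manager is not shown in this thread; `core_factory` and `monitor` are invented names). The difference is whether the explicit cancel is buried in nested blocks or is the last statement of the nursery block itself:

```python
from contextlib import asynccontextmanager
import trio

async def monitor():
    await trio.sleep_forever()  # stand-in for a background coroutine

# Shape described above: the explicit cancel is nested inside other blocks,
# so it is not the last thing the nursery block does.
@asynccontextmanager
async def core_factory_before():
    async with trio.open_nursery() as root_nursery:
        root_nursery.start_soon(monitor)
        try:
            yield
        finally:
            root_nursery.cancel_scope.cancel()

# The predicted fix: dedent the cancel so it sits directly at the end of the
# nursery block. If the body raises, the nursery cancels its children anyway,
# so the explicit cancel is only needed on the normal-exit path.
@asynccontextmanager
async def core_factory_after():
    async with trio.open_nursery() as root_nursery:
        root_nursery.start_soon(monitor)
        yield
        root_nursery.cancel_scope.cancel()
```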
@njsmith I haven't read your answer yet, but seeing such a big answer 3 minutes after my post feels like you are Batman 🦇 😄
I've just tested it, and indeed your prediction was right ;-) On top of that, your explanation really helped me understand the logic of this swallowed-by-Cancelled exception.
@touilleMan I wish! But it took me like an hour to write, I just started sooner :-)
OK, so we definitely understand the problem better than we did before, that's good. And it seems like we can at least say it would be better if pytest-trio detected when a fixture called […]

But you're also right that this is a super subtle footgun. I've been trying to figure out if this can happen when your code is run outside pytest-trio. I think the answer is that there is one rare situation where if you get unlucky enough, you can lose background exceptions. Consider what happens if a background task crashes just as the body of the async context manager finishes and your […]

```python
async with logged_core_factory(...):
    await checkpoint()
```

And now your […]

Now... in your application, I don't know if this is a problem or not. Once the body of the […]

What makes the pytest-trio case so weird is that the background task exception causes the body of the […]

.....I really don't know what to do about this! You're totally right about this part: […]
If pytest fixtures had followed the usual […]

```python
@fixture
async def my_fixture():
    async with my_existing_context_manager:
        yield
```

and now we're basically taking the weird pytest fixture semantics and forcing them onto […]

Maybe it wouldn't be so bad if literally the only exception that can be raised from a fixture […]

Can you look at your fixtures and see if there are any that rely on running code after […]

Other than that... I think the only option is to document it. You do seem to need a pretty specific combination of factors to trigger this, so I guess if the error message pointed people to a detailed writeup of what to look for, then most people would be able to find the cause pretty quickly?
I think this can be closed now that #83 is merged... please re-open if I'm wrong |
Following #75, I've finally isolated the original issue:

1. `die_soon` causes the fixture to crash
2. the fact that `die_soon` has crashed doesn't make `yield` return with an exception
3. `nursery1` is cancelled
4. `async_finalizer` ends up with a Cancelled exception
5. the Cancelled is absorbed by `nursery1`'s cancel scope, which concludes no other exceptions have occurred

The sad part is that if we convert the fixture into a regular async context manager, everything works fine.
I'm not really sure how we could fix that (well, I'm not even sure about the steps I've listed 😄). @njsmith, any ideas?
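For reference, a minimal sketch of the kind of fixture described in the steps above (`die_soon` and `nursery1` follow the names in the list; the actual test added by this PR may differ, and the comments restate the behaviour reported in this issue):

```python
import pytest
import trio

@pytest.fixture
async def fixture_with_background_task():
    async def die_soon(task_status=trio.TASK_STATUS_IGNORED):
        task_status.started()
        raise RuntimeError("background task crashed")

    async with trio.open_nursery() as nursery1:
        await nursery1.start(die_soon)
        # die_soon crashes and nursery1 gets cancelled, but the pytest
        # fixture protocol means nothing is raised at this yield.
        yield
        # Teardown resumes here inside the now-cancelled scope; the pending
        # Cancelled fires at the next checkpoint and nursery1 swallows it,
        # so the RuntimeError is never reported.

@pytest.mark.trio
async def test_nothing(fixture_with_background_task):
    await trio.sleep(0)
```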