Cancelled __aexit__ handlers eat exceptions #455
Sorry this has been biting you! I'd like to understand better what exactly's happening. I guess it must be something like: …

Does that sound right? It would be helpful to hear more concrete details about how you managed to trigger this combination of circumstances -- maybe it would give clues about how it could be handled better.
I should also say I'm very wary about making a blanket recommendation that people use shielding in any …
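For readers skimming: here is a minimal sketch of the pattern being discussed (my own illustration, not an example from the thread; the names ``leaky`` and ``main`` are made up). An async context manager whose ``__aexit__`` awaits will, if a cancellation is pending, have that await raise ``Cancelled``, and the ``Cancelled`` then propagates in place of whatever exception the body raised:

```python
import trio

class leaky:
    async def __aenter__(self):
        return self

    async def __aexit__(self, *exc_info):
        # A checkpoint inside the exit handler: if a cancellation is pending,
        # Cancelled is raised right here and propagates out of __aexit__
        # in place of the exception the body was raising.
        await trio.sleep(0)

async def main():
    with trio.CancelScope() as scope:
        async with leaky():
            scope.cancel()                    # cancellation is now pending
            raise RuntimeError("never seen")  # handed to __aexit__...
    # ...but __aexit__'s await raises Cancelled instead; the cancel scope
    # absorbs its own Cancelled, and the RuntimeError is dropped on the floor.
    print("main() returned normally; the RuntimeError was eaten")

trio.run(main)
```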
Ok, I just got bitten badly by something similar, but incorporating async gens. Check it:

```python
import trio

at_exit_entered = False


class bar:
    async def __aenter__(self):
        return self

    async def __aexit__(self, *tb):
        print(tb)
        global at_exit_entered
        at_exit_entered = True
        await trio.sleep(0)


async def nums(seed):
    async with bar():
        for i in range(seed):
            await trio.sleep(0)
            yield i


async def iterate_nums():
    with trio.open_cancel_scope() as cs:
        async for i in nums(10):
            print(i)
            # cs.shield = True
            await trio.sleep(0.1)
            # cs.shield = False


async def foo():
    async with trio.open_nursery() as n:
        n.start_soon(iterate_nums)
        await trio.sleep(0.8)
        n.cancel_scope.cancel()


trio.run(foo)

# fails since ``bar.__aexit__()`` is never triggered
assert at_exit_entered
```

Un-commenting the ``cs.shield`` lines makes the assertion pass.
@tgoodlet ah, yeah, what you're hitting is a different frustrating problem. The problem is that neither you nor Python is taking responsibility for making sure your async generator object gets cleaned up properly. There's more discussion of this in PEP 533 (which is stalled, unfortunately). For now the only real workaround is to replace your bare ``async for`` loop with something like:

```python
from async_generator import aclosing

async def iterate_nums():
    ...
    async with aclosing(nums(10)) as ait:
        async for i in ait:
            ...
```

See also: …
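For concreteness, here is a sketch of the earlier ``iterate_nums`` with ``aclosing`` applied (it reuses ``nums`` and ``bar`` from the snippet above; this is my illustration, not code from the original comment, and on current Python versions ``contextlib.aclosing`` provides the same helper):

```python
from async_generator import aclosing

import trio

async def iterate_nums():
    with trio.open_cancel_scope():
        # aclosing() guarantees that nums(10).aclose() runs here, in this task
        # and under this cancel scope, instead of whenever the garbage collector
        # eventually gets around to the abandoned async generator object.
        async with aclosing(nums(10)) as ait:
            async for i in ait:
                print(i)
                await trio.sleep(0.1)
```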
BTW @smurfix, I'd still be interested in seeing how you managed to trigger the original issue in real code, in case you still remember. You say it's a common mistake, and I don't doubt you, but I also can't quite visualize how you're triggering it :-).
@njsmith ahah! Ok, I'll try this. Also, fwiw, I have the same issue as @smurfix and have to specifically shield in order to get reliable cancellation working for actor nurseries.
#746 was a duplicate of this.

@oremanj, can you give an example of what you were doing when you ran into this? I'm still having trouble getting an intuition for when it happens in practice :-)

There is a plausible justification for the current behavior: the task was going along, executing some arbitrary Python code, which got cancelled. And in this case the arbitrary Python code happened to be an exception handler. But of course what makes this counterintuitive is that we often think of …

Over there @smurfix suggested that when an exception propagates into an …

Though tbh we have this problem for nurseries too, and I'm not sure yet how that will all play out. Probably we should give MultiError v2 some time to shake out before we make decisions about …
I'm not too fond of the "always-MultiError-or-never" idea. It's trivial to teach … instead of a plain …, but that can't really be avoided in any case.
Yeah, it's trivial to teach …
Well, that gets more trivial if everything raises a MultiError.
While investigating python-trio/pytest-trio#75, I ran into the same issue. I managed to reproduce the exception swallowing with a simpler (and probably more common) example:

```python
import trio

async def main():
    with trio.CancelScope() as scope:
        try:
            raise RuntimeError("Oooops")
        finally:
            scope.cancel()
            await trio.sleep(0)

trio.run(main)
```

Note that the exception doesn't get swallowed if …
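A sketch of the shielding workaround applied to that repro (illustrative only; the idea is just to keep the ``finally`` block's checkpoint from being interrupted by the pending cancellation):

```python
import trio

async def main():
    with trio.CancelScope() as scope:
        try:
            raise RuntimeError("Oooops")
        finally:
            scope.cancel()
            # Shield the cleanup checkpoint so the pending cancellation can't
            # turn it into a Cancelled that replaces the RuntimeError.
            with trio.CancelScope(shield=True):
                await trio.sleep(0)

trio.run(main)  # now raises RuntimeError("Oooops") as expected
```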
The discussion in python-trio/pytest-trio#77 managed to isolate a case where this could happen accidentally in real-world code. That example seems to rely on a combination of multiple factors: two nested nurseries inside a context manager, a crashing task in the inner nursery, a self-cancellation, and, crucially, the weird semantics in pytest-trio where fixture …
Another case of an exception getting dropped on the floor, here due to a plain old timeout:

```python
with trio.move_on_after(1):
    try:
        raise ZeroDivisionError()
    except ZeroDivisionError:
        await trio.sleep(10)
        if some_condition():
            raise
```

I'm genuinely unsure what should happen in this case.
This can also happen the other way 'round, with even more interesting consequences. Bottom line: you need to shield async code inside exception blocks (in …). Ideally the Python code checkers should warn about this, but I don't know if any of them do.
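A sketch of what that shielding looks like inside an ``__aexit__`` handler (an illustration of the advice above, not code from the thread; the class name is made up):

```python
import trio

class careful_resource:
    async def __aenter__(self):
        return self

    async def __aexit__(self, *exc_info):
        # Shield the async cleanup so that a pending cancellation can't
        # interrupt it and discard the exception this handler was called with.
        with trio.CancelScope(shield=True):
            await trio.sleep(0)  # stand-in for real async cleanup work
```

As njsmith cautions above, a bare shield means the cleanup can delay cancellation indefinitely, so real code often pairs the shield with its own deadline (e.g. ``trio.move_on_after``).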
flake8-async's …
It looks similar to the opening example in #1559. How much are these issues related? Because for #1559, we work around it with a context manager (originally prototyped by oremanj):

```python
class _preserve_current_exception:
    """A context manager which should surround an ``__exit__`` or
    ``__aexit__`` handler or the contents of a ``finally:``
    block. It ensures that any exception that was being handled
    upon entry is not masked by a `trio.Cancelled` raised within
    the body of the context manager.
    """
```
This is now implemented and released as …
This is what happens when you don't wrap your async context's exit handler in a shielded cancel scope:
… trick question. Nothing happens – the error gets dropped on the floor. Since this is a fairly common mistake (at least IME) I wonder whether we can do something about it. If not (which is what I suspect) we need to document this pitfall more loudly.