Fine-tuning channels #719
Now immortalized in python-triogh-719
Would it be helpful to have a way to explicitly mark a channel as broken, e.g. …

While I'm here, also check out this mailing list post from 2002 with subject "Concurrency, Exceptions, and Poison", by the same author. It's basically about how exceptions and nurseries should interact.
Yes, …
Hello,

Would it be possible to document how we should replace …? I'm currently using a Queue as a pool of objects. These objects are basically wrappers around … The pseudo code looks like:

```python
class MonitorPool:
    pool_size = 5
    pool = trio.Queue(pool_size)

    async def handle(self, server_stream: trio.StapledStream):
        # do something
        ident = next(CONNECTION_COUNTER)
        data = await server_stream.receive_some(8)
        async with self.get_mon(ident) as mon:
            response = mon.status(data)
        # do other things

    async def launch_monitor(self, id):
        mon = Monitor(ident=id)
        await self.pool.put(mon)

    async def cleanup_monitor(self):
        while not self.pool.empty():
            mon = await self.pool.get()  # noqa
            del mon

    @asynccontextmanager
    async def get_mon(self, ident) -> Monitor:
        mon = await self.pool.get()  # type: Monitor
        yield mon
        await self.pool.put(mon)

    async def run(self):
        async with trio.open_nursery() as nursery:
            for i in range(self.pool_size):
                nursery.start_soon(self.launch_monitor, i + 1)
            try:
                await trio.serve_tcp(self.handle, self.port, host=self.bind)
            except KeyboardInterrupt:
                pass
        async with trio.open_nursery() as nursery:
            nursery.start_soon(self.cleanup_monitor)
```
@ziirish Oo, that's clever! If I were implementing a pool I probably would have reached for a … Generally, to replace a …:

```python
class MonitorPool:
    def __init__(self, pool_size=5):
        self.pool_size = pool_size
        self.send_channel, self.receive_channel = trio.open_memory_channel(pool_size)

    async def launch_monitor(self, id):
        mon = Monitor(ident=id)
        await self.send_channel.send(mon)

    @asynccontextmanager
    async def get_mon(self, ident) -> Monitor:
        mon = await self.receive_channel.receive()  # type: Monitor
        yield mon
        await self.send_channel.send(mon)
```

The one thing that doesn't transfer over directly is …

Also, this code:

```python
async with trio.open_nursery() as nursery:
    nursery.start_soon(self.cleanup_monitor)
```

is basically a more complicated way of writing:

```python
await self.cleanup_monitor()
```

And it's generally nice to run cleanup code even if there's an exception...

```python
async def run(self):
    async with self.receive_channel:
        async with trio.open_nursery() as nursery:
            for i in range(self.pool_size):
                nursery.start_soon(self.launch_monitor, i + 1)
            try:
                await trio.serve_tcp(self.handle, self.port, host=self.bind)
            except KeyboardInterrupt:
                pass
```

BTW, the type annotation for …
Speaking of type annotations, I wonder if we should make the channel ABCs into parametrized types, like:

```python
T_cov = TypeVar("T_cov", covariant=True)
T_contra = TypeVar("T_contra", contravariant=True)

class ReceiveChannel(Generic[T_cov]):
    async def receive(self) -> T_cov:
        ...

class SendChannel(Generic[T_contra]):
    async def send(self, obj: T_contra) -> None:
        ...

def open_memory_channel(max_buffer_size) -> Tuple[SendChannel[Any], ReceiveChannel[Any]]:
    ...
```

(For the variance stuff, see: https://mypy.readthedocs.io/en/latest/generics.html#variance-of-generic-types. I always get confused by this, so I might have it wrong...)

It might even be nice to be able to request a type-restricted memory channel. E.g. @ziirish might want to do something like:

```python
s, r = open_memory_channel[Monitor](pool_size)
# or maybe:
s, r = open_memory_channel(pool_size, restrict_type=Monitor)
```

Ideally this would both be enforced at runtime (the channels would check for …) and … Complications:

Runtime and static types are kind of different things. They overlap a lot, but I don't actually know if there's any way to take an arbitrary static type and check it at runtime? And PEP 563 probably affects this somehow too... It's possible we might want both a way to pass in a runtime type (and have the static type system automatically pick this up when possible), and also a purely static way to tell the static type system what type we want it to enforce for these streams without any runtime overhead?

Even if we ignore the runtime-checking part, I don't actually know whether there's any standard syntax for a function like this in python's static type system. Maybe it would need a mypy plugin regardless? (Of course, we're already talking about maintaining a mypy plugin for Trio, see python/mypy#5650, so this may not be a big deal.) Of course for a pure static check you can write something ugly like:

```python
s, r = cast(
    Tuple[SendChannel[Monitor], ReceiveChannel[Monitor]],
    open_memory_channel(pool_size),
)
```

but that's awful.
Actually, it is possible to do something like …:

```python
@overload
def open_memory_channel() -> Tuple[SendChannel[Any], ReceiveChannel[Any]]:
    pass

@overload
def open_memory_channel(*, restrict_type: Type[T]) -> Tuple[SendChannel[T], ReceiveChannel[T]]:
    pass

def open_memory_channel(*, restrict_type=object):
    ...

reveal_type(open_memory_channel())  # Tuple[SendChannel[Any], ReceiveChannel[Any]]
reveal_type(open_memory_channel(restrict_type=int))  # Tuple[SendChannel[int], ReceiveChannel[int]]
```

You would think you could write:

```python
def open_memory_channel(*, restrict_type: Type[T] = object):
    ...
```

but if you try then mypy gets confused, because it checks the default value against the type annotation before doing generic inferencing (see python/mypy#4236).

One annoying issue with this is that it doesn't support … Also, you can't express types like …

```python
open_memory_channel(...)
open_memory_channel[int](...)
```

BUT AFAICT you can't make them work at the same time. The way you make …:

```python
class open_memory_channel(Tuple[SendChannel[T], ReceiveChannel[T]]):
    def __new__(self, max_buffer_size):
        return (SendChannel[T](), ReceiveChannel[T]())

    # Never called, but must be provided, with the same signature as __new__
    def __init__(self, max_buffer_size):
        assert False

# This is basically a Tuple[SendChannel[int], ReceiveChannel[int]]
reveal_type(open_memory_channel[int](0))

# This is Tuple[SendChannel[<nothing>], ReceiveChannel[<nothing>]], and you have to start throwing
# around manual type annotations to get anything sensible
reveal_type(open_memory_channel(0))
```

Did I mention it was gross? It's pretty gross. [Edit: I filed an issue on mypy to ask if there's any less gross way to do this: https://github.com/python/mypy/issues/6073]

Now... in this version, a bare …
Thanks @njsmith for the explanations. But the question then will be about this:
The purpose of the pool is to always be filled with objects, and to block/pause the execution of the job when it is empty. So at the end of your job, the pool should be "full", resulting in "lost" data.
I see that type annotations are used in a few spots already. @njsmith would you accept a PR to add type annotation to open_memory_channel() return value?
@belm0 What annotation are you planning to use? :-)

This is a perfect use-case for …

+1, but suggest unqualified names ("send", "receive")

Not super eager to rename things again, but I'm still mixing them up on a regular basis :-(. The least-obviously-terrible idea I've had so far: …

In other words: … versus …
On 11.01.19 02:31, Nathaniel J. Smith wrote:
> Not super eager to rename things again, but I'm still mixing them up on a regular basis :-(.

We could just bite the bullet and rename the thing back to "queue". :-P

-- Matthias Urlichs
It's true that if we have …
Some thoughts from more experience working with not-in-memory channels:

Poison: I've been working locally with a …

clone() being tedious to implement makes me think more generally: both streams and channels have a bunch of documented requirements on the implementation (new and currently-in-progress operations raise ClosedResourceError when it's closed, various parameter checks, aclose() and send_eof() are idempotent, send_all() and receive_some() do conflict detection, ...) which provide a low-friction experience for users of the interface, but require a decent amount of boilerplate to be added by implementors of the interface. One could imagine a converse situation where the requirements are expressed as preconditions rather than guarantees: "you must not call aclose() twice, you must not make simultaneous calls to send_all() in different tasks, you must not call receive() on a closed channel", etc. -- which would be low-friction for implementors and annoying for users. If we have to pick, of course it's better to make things low-friction for users, but I wonder if there's a way we can get the best of both worlds? Like, someone writes a channel or stream following the easy-to-implement version of the contract, and then we provide some generic wrapper/decorator/metaclass/something that turns it into one that follows the easy-to-use version of the contract. This could be a good solution for clone() as well.
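To make the "generic wrapper" idea concrete, here is a minimal synchronous sketch. All names here are invented for illustration (`BareChannel`, `GuardedChannel`, and a stand-in `ClosedResourceError`); a real version would be async and wrap the actual trio ABCs:

```python
# Hypothetical sketch: take a channel written to the easy-to-implement
# contract (no state checks at all) and wrap it so that it follows the
# easy-to-use contract (operations on a closed channel raise, close is
# idempotent).

class ClosedResourceError(Exception):
    """Stand-in for trio.ClosedResourceError."""

class BareChannel:
    # Easy-to-implement version: assumes callers never misuse it.
    def __init__(self):
        self.items = []

    def send_nowait(self, obj):
        self.items.append(obj)

    def close(self):
        pass

class GuardedChannel:
    # Easy-to-use version, derived generically from the bare one.
    def __init__(self, inner):
        self._inner = inner
        self._closed = False

    def send_nowait(self, obj):
        if self._closed:
            raise ClosedResourceError
        self._inner.send_nowait(obj)

    def close(self):
        # Idempotent close, as the user-facing contract requires.
        if not self._closed:
            self._closed = True
            self._inner.close()

ch = GuardedChannel(BareChannel())
ch.send_nowait("x")
ch.close()
ch.close()  # second close is a no-op, not an error
```

The wrapper only has to be written once, while every `BareChannel`-style implementation stays short.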
On further thought... I think I don't like this, because I think we're moving in the direction of having …

Speaking of bidirectional channels... here's a slightly wacky idea. What if …

The … I guess in many cases it won't be too hard to support some kind of … One possibility would be to move the …
Hmm, I see what you mean. It would be pretty trivial if you allowed …

I can certainly imagine cases though where you have a single cross-process channel, and you want to share it among multiple tasks. Like, it's easy to imagine a program with multiple tasks all sending stuff occasionally on the same websocket. So it seems like …

Though... then again, maybe not! In this case, you almost certainly need a background task to manage the underlying transport. And that background task needs to live in a nursery. And probably all your tasks that use it live inside that nursery. So probably, when those tasks exit, your channel will automatically be shut down too, because the nursery exits. Basically this is doing the same thing as … I guess the advantage of …

...this also makes me wonder if we should get rid of …

Let's check the multi-producer/multi-consumer example in the docs. The version with …:

```python
import trio
import random

async def main():
    async with trio.open_nursery() as nursery:
        send_channel, receive_channel = trio.open_memory_channel(0)
        async with send_channel, receive_channel:
            # Start two producers, giving each its own private clone
            nursery.start_soon(producer, "A", send_channel.clone())
            nursery.start_soon(producer, "B", send_channel.clone())
            # And two consumers, giving each its own private clone
            nursery.start_soon(consumer, "X", receive_channel.clone())
            nursery.start_soon(consumer, "Y", receive_channel.clone())

async def producer(name, send_channel):
    async with send_channel:
        for i in range(3):
            await send_channel.send("{} from producer {}".format(i, name))
            # Random sleeps help trigger the problem more reliably
            await trio.sleep(random.random())

async def consumer(name, receive_channel):
    async with receive_channel:
        async for value in receive_channel:
            print("consumer {} got value {!r}".format(name, value))
            # Random sleeps help trigger the problem more reliably
            await trio.sleep(random.random())

trio.run(main)
```

And then, here's the version where instead of …:

```python
import trio
import random

async def main():
    async with trio.open_nursery() as nursery:
        send_channel, receive_channel = trio.open_memory_channel(0)
        nursery.start_soon(producers, send_channel)
        nursery.start_soon(consumers, receive_channel)

async def producers(send_channel):
    async with send_channel:
        async with trio.open_nursery() as nursery:
            nursery.start_soon(producer, "A", send_channel)
            nursery.start_soon(producer, "B", send_channel)

async def producer(name, send_channel):
    for i in range(3):
        await send_channel.send("{} from producer {}".format(i, name))
        # Random sleeps help trigger the problem more reliably
        await trio.sleep(random.random())

async def consumers(receive_channel):
    async with receive_channel:
        async with trio.open_nursery() as nursery:
            nursery.start_soon(consumer, "X", receive_channel)
            nursery.start_soon(consumer, "Y", receive_channel)

async def consumer(name, receive_channel):
    async with receive_channel:
        async for value in receive_channel:
            print("consumer {} got value {!r}".format(name, value))
            # Random sleeps help trigger the problem more reliably
            await trio.sleep(random.random())

trio.run(main)
```

(They both do work the same way; for once I actually tested before posting a comment.)

I dunno. The second version is longer and a bit more tiresome to type. I'm not 100% convinced …
Interesting! I don't think I have anything more useful to say than that right now, and there's probably enough in this post already :-). But interesting!

Intriguing! If we expect most applications to still be unidirectional, we run into the problem that different users might pick different conventions for which return value is send and which is receive, which feels like it's not great for mutual intelligibility of the ecosystem? Also, the main selling point of Trio channels over asyncio/threading/etc queues is the closure tracking, which is pretty easy to reason about for unidirectional channels but (IMO) much subtler for bidirectional. As you pointed out on the other thread about …
I definitely see the value of these for memory channels, and I see your point that they could have a reasonable implementation for cross-process channels that use a background task. But -- if a channel is implemented using a background task, I'm pretty sure its user-visible component is just going to be a memory channel. The background task holds onto one end of a memory channel and shovels objects between it and the transport, and the user sends or receives using the other end of the memory channel. There's no need for a different type of Channel subclass at all. I feel like the main reason to write a new Channel subclass is if you don't want to use background tasks for some reason. Like, in an HTTP/2 implementation it's probably true that you don't want two background tasks per stream, which calls for a "push" rather than a "pull" sort of interface on the sending side, which you can't really do using memory channels. Maybe there are only ever two families of channels -- the existing memory channels, plus one that supports the opposite-polarity interface using an ABC of some kind. (In memory channels, calling send() makes receive() on the other side return; in the opposite-polarity interface, calling send() on the channel would call send() on some interface that was passed when the channel was created.) Or people who need the opposite polarity could just write their own full Channel implementations, but then we're back to clone() being annoying.
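The "opposite-polarity" interface mentioned above can be sketched roughly like this. This is a synchronous toy under assumed names (`PushSendChannel` is invented; a real version would be async and hook into an actual transport):

```python
# Hypothetical sketch of an opposite-polarity send channel: instead of
# buffering objects for a receiver to pull (like a memory channel does),
# send() pushes each object straight into a transport interface that was
# supplied when the channel was created.
class PushSendChannel:
    def __init__(self, transport_send):
        self._transport_send = transport_send  # called once per sent object
        self._closed = False

    def send_nowait(self, obj):
        if self._closed:
            raise RuntimeError("channel is closed")
        self._transport_send(obj)

    def close(self):
        self._closed = True

sent = []
chan = PushSendChannel(sent.append)  # "transport" is just a list here
chan.send_nowait("frame-1")
chan.send_nowait("frame-2")
chan.close()
print(sent)  # ['frame-1', 'frame-2']
```

The point is that no background task is needed: the sender's call drives the transport directly.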
IMO, clone() is basically reference counting for an async resource. Which is useful! The concept isn't really specific to channels, but channels are the type of async resource that's friendliest to concurrent utilization, so I don't think there's necessarily anything wrong with only supporting it for channels. The main downside of reference counting here is that it becomes unclear when the resource actually goes away. Clarity about scoping is kind of Trio's thing, but with memory channels being such a foundational synchronization primitive, this might be a case where practicality beats purity? |
I dunno, I feel like if I saw one library do:

```python
producer, consumer = open_memory_channel(...)
```

and another one do:

```python
consumer, producer = open_memory_channel(...)
```

then... well, I'd think the second library author was a bit odd, I guess, because no-one ever talks about "consumer/producer architectures" :-). But I wouldn't be confused or anything.
If producers only call …

Eh, not necessarily. A websocket is conceptually a …
Sorry, I didn't follow this part :-)
It is useful, but I'm not sure whether it's so useful that it belongs on every channel. Especially since, as you note, it's a hassle to implement :-). You can certainly implement it generically if you want:

```python
class shared_ptr(AsyncResource, Generic[T]):  # name stolen from c++
    def __init__(self, resource: T):
        self.resource: Optional[T] = resource
        self._counter = [1]

    def clone(self) -> "shared_ptr[T]":
        if self.resource is None:
            raise trio.ClosedResourceError
        else:
            new = shared_ptr(self.resource)
            new._counter = self._counter
            self._counter[0] += 1
            return new

    async def aclose(self) -> None:
        if self.resource is None:
            return
        self._counter[0] -= 1
        if not self._counter[0]:
            resource = self.resource
            self.resource = None
            await resource.aclose()

async def producer_consumer_example():
    p, c = open_memory_channel(...)
    async with trio.open_nursery() as nursery:
        async with shared_ptr(p) as p_shared, shared_ptr(c) as c_shared:
            nursery.start_soon(producer, p_shared.clone())
            nursery.start_soon(producer, p_shared.clone())
            nursery.start_soon(consumer, c_shared.clone())
            nursery.start_soon(consumer, c_shared.clone())
```

Either way, though, this seems way harder to explain than the version with separate nurseries for the producers and consumers.
If you have a bidirectional memory channel type, it's pretty reasonable to want to send and receive on it using the same object. This is how Go's channels work, for example. If we don't expose methods to close just the send side or just the receive side, then we'll always have the same number of senders as receivers, so it won't ever be possible to get BrokenResourceError from a send or EndOfChannel from a receive -- because someone could come along later and receive/send on the same object you're sending/receiving on. Here's a concrete proposal to elicit opinions: …
I think we had a miscommunication :-). When I say "bidirectional" I mean "full duplex" – handle A can send/receive to handle B, and handle B can send/receive to handle A, but with separate buffers for each direction. Think … I'm not suggesting having a "loopback" channel where …
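For illustration, "full duplex" in this sense can be sketched with two independent buffers, one per direction. This is a synchronous toy with invented names (`Endpoint`, `open_duplex_pair`), not a proposed trio API:

```python
from collections import deque

# Minimal sketch of a full-duplex in-memory channel pair: each endpoint
# sends into its own outgoing buffer and receives from the peer's, so the
# two directions never interfere with each other.
class Endpoint:
    def __init__(self, out_buf, in_buf):
        self._out, self._in = out_buf, in_buf

    def send_nowait(self, obj):
        self._out.append(obj)

    def receive_nowait(self):
        return self._in.popleft()

def open_duplex_pair():
    a_to_b, b_to_a = deque(), deque()
    return Endpoint(a_to_b, b_to_a), Endpoint(b_to_a, a_to_b)

a, b = open_duplex_pair()
a.send_nowait("ping")
b.send_nowait("pong")
print(b.receive_nowait())  # ping
print(a.receive_nowait())  # pong
```

Crucially, what A sends only ever comes out at B, never back at A — the opposite of a "loopback" design.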
As a follow-up to the discussion around cloned channels versus separate nurseries, I had the use case (from a test) where the consumer task is the lone task inside the nursery block. Before adding an … It seems to me that …

I was wondering: might it be possible to only implicitly create a clone if …? It seems to me the main purpose of clones is to make channels sort of task-safe in terms of closure tracking. So maybe clones can just be something implicit, only activated if at least one task actually indicates that it expects closure tracking?
I'm probably missing something, but couldn't it be as simple as:

```python
async def __aenter__(self):
    self._state.open_receive_channels += 1
    return self
```

And then removing the increment from …

As far as I'm grok-ing, the first closing example in the docs doesn't really need 2 tasks spawned, since you can do what I did above (one task is the nursery body):

```python
import trio

async def main():
    async with trio.open_nursery() as nursery:
        send_channel, receive_channel = trio.open_memory_channel(0)
        nursery.start_soon(producer, send_channel)
        async with receive_channel:
            async for value in receive_channel:
                print("got value {!r}".format(value))

async def producer(send_channel):
    async with send_channel:
        for i in range(3):
            await send_channel.send("message {}".format(i))

trio.run(main)
```

To make this more like my test, you need more than one producer task (see below). So what I was thinking is, would it be crazy to suggest not expecting …:

```python
async def main():
    async with trio.open_nursery() as nursery:
        send_channel, receive_channel = trio.open_memory_channel(0)
        for _ in range(2):
            nursery.start_soon(producer, send_channel)
        async with receive_channel:
            async for value in receive_channel:
                print("got value {!r}".format(value))

async def producer(send_channel):
    async with send_channel:
        for i in range(3):
            await send_channel.send("message {}".format(i))

trio.run(main)
```

If that's the case then can't we just forget about …
Aha, ok. I agree that that would work fine with closure tracking, but I worry that it introduces complexity that most people won't use, and that adds additional chances to hide errors (if you call send/receive on (a clone of?) the wrong object, you get a hang instead of a TypeError; if you're using static type checking, the type checker can't tell you you got mixed up). It also makes it much harder for us to later support …

We could call it a … If we added such a "loopback channel" or "queue" that let you fork off new send and/or receive channels that share ownership of its internal buffer, and that could be closed without affecting those forked-off channels, then I would be a lot happier with making …

@goodboy - interesting ideas! It sounds like you're not proposing anything that introspects "what task am I running from?" per se, but rather suggesting a different way of tracking whether the channel is still in use: instead of using clones, close the channel when the last …
Other thoughts on the questions in the initial post:

I think we should. It's not the correct default for all applications, but it's the correct default for more applications than any other single number we could choose, and it makes it immediately obvious that sending on a channel can block. If we have a small but nonzero default buffer size, people who use it might write code that misbehaves when send() blocks (or blocks for too long), which will seem to work fine when testing using small amounts of data and then fall apart under load when the backpressure starts to build up. Also, using zero as a default means the default case doesn't have the possibility of "lost" values. Speaking of which:

Exceptions thrown from … I don't think we should necessarily enforce senders-must-close-before-receivers in all cases. Maybe something like: …

I've run into a few cases where a sync close() would be useful. I think it should be supported for memory channels.
Explicit is better than implicit: if you want to use a channel in more than one task independently, clone it.

Yes, "async with:" should close its resource. Files also get closed when their context ends, so this is not going to surprise anybody. No, you should not be able to re-open it (you can't do that with files either); if you want to use the channel in another context manager, clone() it. Or use a single encapsulating context. Or just, you know, use it without a context and call …

The only problem I see with channels, in fact, is that you can call …
Another possible name for channels: … It has the nice property that it obviously refers to an endpoint, not the connection itself. Referring to "channel endpoints" all the time is a bit tiresome. One issue is that Erlang purists might complain, because mailboxes have features that our channels don't; in particular, they're unbounded and support a "selective receive" operation (you don't say "give me the next message", you say "give me the next message that matches the following predicate: ..."). Another issue is that it feels a bit weird to send and receive from the same mailbox. (In Erlang, mailboxes are receive-only.)
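For readers unfamiliar with it, Erlang-style "selective receive" can be sketched synchronously like this (`Mailbox` and its methods are invented for illustration, not part of trio):

```python
from collections import deque

# Sketch of Erlang-style "selective receive": pull the first buffered
# message matching a predicate, scanning in arrival order and leaving
# non-matching messages queued for later.
class Mailbox:
    def __init__(self):
        self._messages = deque()

    def deliver(self, msg):
        self._messages.append(msg)

    def selective_receive_nowait(self, predicate):
        for i, msg in enumerate(self._messages):
            if predicate(msg):
                del self._messages[i]
                return msg
        raise LookupError("no matching message")

box = Mailbox()
box.deliver(("log", "hello"))
box.deliver(("rpc-reply", 42))
# The rpc-reply is retrieved first even though it arrived second:
print(box.selective_receive_nowait(lambda m: m[0] == "rpc-reply"))  # ('rpc-reply', 42)
print(box.selective_receive_nowait(lambda m: m[0] == "log"))  # ('log', 'hello')
```

Trio's channels deliberately omit this: receive() always hands you the next message in order.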
Yeah, OTOH, if someone does have a bug where a channel endpoint gets lost, then ideally they should get some better feedback than just a deadlock. I guess the two cases where …

Another strike against …
Make MemorySendChannel and MemoryReceiveChannel into public classes, and move a number of the methods that used to be on the abstract SendChannel/ReceiveChannel interfaces into the Memory*Channel concrete classes. Also add a Channel type, analogous to Stream. See: python-triogh-719

Still to do:
- Merge MemorySendChannel and MemoryReceiveChannel into a single MemoryChannel
- decide what to do about clone
- decide whether to add some kind of half-close on Channel
- refactor this PR's one-off solution to python-triogh-1092 into something more general.
I'll second @oremanj's yes for this. It seems like a minor wart that you can use a memory channel from sync-coloured code (with the …). There are already extra public methods on memory channels that aren't part of the channel ABC, so code written for memory channels won't just work with arbitrary other channels.
@lordmauve made another vote for sync-close privately, discussing an IRC-like system where new messages get broadcast to all consumers through a bunch of finite-buffer memory channels, and if a memory channel buffer fills up then that consumer gets booted off:
partially addresses python-trio#719
I've made PR #1797 with a sync …

Now that #1797 was merged, what is left to decide here?

Whew. Did I miss anything?
Channels (#586, #497) are a pretty complicated design problem, and fairly central to user experience, so while I think our first cut is pretty decent, we'll likely want to fine-tune it as we get experience using them.
Here are my initial notes (basically questions that I decided to defer while implementing version 1):
- In `open_memory_channel`, should we make `max_buffer_size=0` the default? Is this really a good choice across a broad range of settings? The docs recommend it right now, but that's not really based on lots of practical experience or anything. And it's way easier to add defaults later than it is to remove or change them.
- For channels that allow buffering, it's theoretically possible for values to get "lost" without there ever being an exception (`send` puts it in the buffer, then the receiver never takes it out). This even happens if both sides are being careful, and closing their endpoint objects when they're done with them. We could potentially make it so that `ReceiveChannel.aclose` raises an exception (I guess `BrokenResourceError`) if the channel wasn't already cleanly closed by the sender. This would be inconsistent with the precedent set by `ReceiveStream` (which inherited it from BSD sockets), but OTOH, `ReceiveStream` is a much lower-level API – to use a `ReceiveStream` you need some non-trivial engineering work to implement a protocol on top, while `ReceiveChannel` is supposed to be a simple high-level protocol that's usable out-of-the-box. Practically speaking, though, it would be pretty annoying if some code inside a consumer crashes, and then `ReceiveChannel.__aexit__` throws away the actual exception and replaces it with `BrokenChannelError`.
- Should memory channels have a sync `close()` method? They could easily; the only reason not to is that it makes life simpler for hypothetical IPC channels. I have no idea yet how common those will be, or whether it's actually important for in-process channels and cross-process channels to have compatible APIs.
- I have mixed feelings about `open_memory_channel` returning a bare tuple `(send_channel, receive_channel)`. It's simple, and allows for unpacking. But, it requires everyone to memorize which order they go in, which is annoying, and it forces you to juggle two objects, as compared to the traditional `Queue` design that only has one object. I'm convinced that having two objects is important for closure-tracking and IPC, but there is an option that's a kind of hybrid: it could return a `ChannelPair` object, that's a simple container for a `.send_channel` and a `.receive_channel` object, and, if we want, the `ChannelPair` could have `send` and `receive` methods that simply delegate to its constituent objects. Then people who hate having two objects could treat `ChannelPair` like a `Queue`, while still breaking it into pieces if desired, or doing closure tracking like `async with channel_pair.send_channel: ...`. But... I played with this a bit, and found it annoying to type out `channel_pair.send_channel` all the time. In particular, if you do want to split the objects up, then you lose the option of destructuring assignment, so you have to do cumbersome attribute access instead. (Or we could also support destructuring assignment by implementing `__iter__`, but that's yet another confusing way then...) For now my intuition is that closure tracking is so broadly useful that everyone will want to use it, and that's easiest with destructuring assignment. But experience might easily prove that intuition wrong.
- Is `clone` confusing? Is there anything we can do to make it less confusing? Maybe a better name? So far people have flagged this multiple times, but we didn't have docs yet, so it could just be that...
- And btw, the `Channel` vs `Stream` distinction is much less obvious than I'd like. No-one would look at those and immediately guess that, ah, a `Channel` carries objects while a `Stream` carries bytes. It is learnable, and has some weak justification, and I haven't thought of anything better yet, but if someone thinks of something brilliant please speak up.
- Should the `*_nowait` methods be in the ABC, or just in the memory implementation? They don't make a lot of sense for things like inter-process channels, or, like, say... websockets. (A websocket can implement the `SendChannel` and `ReceiveChannel` interfaces, just with the proviso that objects being sent have to be type `str` or `bytes`.) The risk of harm is not large – you can always implement these as an unconditional `raise WouldBlock` – but maybe it will turn out to be more ergonomic to move them out of the ABC.
- We should probably have a `trio.testing.check_channel`, like we do for streams.