Very subject to change; I'm just playing around right now, but this feels very possible in ways it hasn't before.
A Rust-based binding following the abstract binding API that can be used instead of the bindings package today. I'm considering the new bindings API because I think it will be easier to build, but the usage of serialport/stream is so high that this has to work with it eventually.
Must haves
as performant as the current implementation for reads and writes
prebuilt binaries
Nice to haves
similar if not improved behavior from serialport/bindings
rpi support
Design exploration
A napi.rs-based Rust binding that provides a binding class and utility functions. This gives us prebuilt binaries for many platforms and N-API support for Electron and worker threads.
The async patterns we need to use here are not very clear to me. I think a mix of serialport-rs and custom async code will get us to the performance metrics. To start, we can stick all the serialport-rs code behind tokio's task::spawn_blocking(), but this spawns temporary threads for every call, which isn't performant for any operation.
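The offload shape could look roughly like this std-only sketch, where `blocking_read` is a hypothetical stand-in for a blocking serialport-rs read. Spawning a temporary thread per call is the naive version of what `task::spawn_blocking` does; tokio services the closure from a reusable blocking pool instead, but the calling pattern (hand off a closure, wait for the result) is the same:

```rust
use std::thread;
use std::time::Duration;

// Hypothetical stand-in for a blocking serialport-rs read; real code would
// call SerialPort::read on an open port handle.
fn blocking_read() -> Vec<u8> {
    thread::sleep(Duration::from_millis(10));
    vec![0x01, 0x02, 0x03]
}

fn main() {
    // Naive offload: a temporary thread per call. tokio's task::spawn_blocking
    // has the same shape but reuses threads from a dedicated blocking pool.
    let handle = thread::spawn(blocking_read);
    let bytes = handle.join().expect("read thread panicked");
    println!("read {} bytes", bytes.len());
}
```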
The current bindings put all C++ operations on an event loop provided by V8, and put the Unix fs read/write operations onto an I/O thread pool. On Windows the operations are "async" and don't need the thread pool, as I understand it. In any case this isn't consistent.
The bindings design has a read queue and a write queue. The current in-flight read, in-flight write, and any administrative function (i.e., not a read or write) can all be in flight at the same time, being worked by the event loop or by the thread pool. I wonder if drain blocks the event loop in our current setup?
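A std-only sketch of that queue model (the names here are mine, not the bindings spec's): each queue gets its own worker, so one read and one write can be in flight at the same time without blocking each other.

```rust
use std::sync::mpsc;
use std::thread;

// One worker per queue. Jobs go in as byte buffers; completions come back
// as strings. Real code would perform the serial read or write in the loop.
fn start_worker(label: &'static str) -> (mpsc::Sender<Vec<u8>>, mpsc::Receiver<String>) {
    let (job_tx, job_rx) = mpsc::channel::<Vec<u8>>();
    let (done_tx, done_rx) = mpsc::channel::<String>();
    thread::spawn(move || {
        for job in job_rx {
            // Stand-in for the actual serial I/O.
            let _ = done_tx.send(format!("{}:{} bytes", label, job.len()));
        }
    });
    (job_tx, done_rx)
}

fn main() {
    let (read_q, read_done) = start_worker("read");
    let (write_q, write_done) = start_worker("write");
    read_q.send(vec![0; 4]).unwrap();  // enqueue a read
    write_q.send(vec![1, 2]).unwrap(); // enqueue a write, concurrently
    println!("{}", read_done.recv().unwrap());
    println!("{}", write_done.recv().unwrap());
}
```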
What is blocking?
Reads
Writes
drain
open (maybe)
Opening, setting baud rates, and other com settings don't appear to be blocking. This isn't a comprehensive list. Tokio claims to be able to do async I/O on Unix platforms; it's unclear about Windows, and unclear about serial ports.
Possible designs
Dedicate threads for reads, writes, and drain, and have a single tokio thread for administrative functions. Pros: everything has dedicated resources. Cons: we'll need to provide our own communication channels for reads and writes and won't be able to use tokio for that; possibly lower performance without a poller telling us when we can resume reading/writing.
Assuming we can do Unix async I/O for serial ports with tokio, we use spawn_blocking for Windows serial port reading and writing through serialport-rs. Everything else goes on the default tokio thread pool, except drain, which uses spawn_blocking as well. Pros: easy setup, easy reasoning about execution. Cons: possibly poor performance on Windows.
Assuming we can do Unix async I/O for serial ports with tokio, we write an async Windows serial port struct for reading and writing. Everything goes on the default tokio thread pool, except drain, which uses spawn_blocking on the blocking thread pool. Pros: easy setup, easy reasoning about execution. Cons: possibly a lot of work for Windows serial port reading/writing.
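Designs 2 and 3 both hinge on a per-platform split between async I/O and a blocking offload. A trivial sketch of that dispatch (the strategy names are made up for illustration, not real APIs):

```rust
// Illustrative platform split for designs 2 and 3: Unix paths would use
// tokio-driven async I/O, Windows paths would fall back to offloading the
// blocking serialport-rs calls. Strategy names here are invented.
fn read_strategy() -> &'static str {
    if cfg!(unix) {
        "async-io" // non-blocking reads on the default tokio pool
    } else {
        "spawn_blocking" // blocking serialport-rs read on the blocking pool
    }
}

fn main() {
    println!("reads use: {}", read_strategy());
}
```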
todo
research thread communication methods; maybe I don't need tokio to make this easy. rayon is interesting in this space
confirm the read and write queue behavior in the bindings spec
confirm that spawn_blocking spawns a new thread each time. I see references to that, to a blocking pool, and to how you can turn a normal thread into a blocking thread. It's just very confusing. (Confirmed: it does use a thread pool for blocking operations.)