2018, [warner]: event-driven API

I came up with a new API approach today: the "wormhole core" will have (almost) exactly one method, called "execute" or something. It takes a single "event", which describes something that the IO/glue layer wants to deliver to the wormhole core: either an API request from the application, or some IO event like "connection made" or "data received".

This method returns a vector of "actions", which are either API notifications (e.g. "got_versions" or "got_message") or IO actions that the glue layer needs to take (like "make connection" or "send websocket message").
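
Concretely, the shape might be something like the following. This is a minimal sketch with made-up variant names; the real types in api.rs carry more data than this.

```rust
// A sketch of the sans-io core surface, assuming simplified types.
pub enum APIEvent { AllocateCode, Send(Vec<u8>) }
pub enum IOEvent { WebSocketConnectionMade, WebSocketMessageReceived(String) }
pub enum Event { API(APIEvent), IO(IOEvent) }

pub enum APIAction { GotVersions(String), GotMessage(Vec<u8>) }
pub enum IOAction { WebSocketOpen(String), WebSocketSendMessage(String) }
pub enum Action { API(APIAction), IO(IOAction) }

pub struct WormholeCore { /* the state machines live in here */ }

impl WormholeCore {
    /// Deliver one event from the app or the IO glue; get back every
    /// action that needs to happen as a consequence.
    pub fn execute(&mut self, event: Event) -> Vec<Action> {
        match event {
            Event::API(e) => self.process_api_event(e), // handled by the Boss (see below)
            Event::IO(e) => self.process_io_event(e),   // handled by Rendezvous (see below)
        }
    }

    fn process_api_event(&mut self, _event: APIEvent) -> Vec<Action> { Vec::new() }
    fn process_io_event(&mut self, _event: IOEvent) -> Vec<Action> { Vec::new() }
}
```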

In execute(), all the API events are delivered to the "Boss" state machine, which is also the only one that emits API actions (the joke is: only the boss is allowed to talk to the customers, because all the rest of the workers are engineers and they'd scare the customers away :-).

All the IO events are delivered to the Rendezvous state machine (which manages the WebSocket connect/wait/reconnect cycle), since that's currently the only one that does any IO. That might change, specifically if we added retry timers to some other layer.

Internally, there's a third event type called MachineEvent, which is generated by one state machine for delivery to another. For example, when Rendezvous receives IOEvent::WebSocketConnectionMade, it sends NameplateEvent::Connected to the "Nameplate" machine (which is responsible for claiming the server-side "nameplate" object, the "4" in "4-purple-sausages"; claiming it yields a pointer to some "Mailbox" object where we can send/fetch messages). The Nameplate machine then reacts to that event by changing some internal state and maybe emitting more messages.
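
Continuing the sketch above, that internal hand-off might look roughly like this (the NameplateEvent variants and the `process_io` signature are invented for illustration):

```rust
// Hypothetical internal event types: each machine gets its own enum,
// wrapped in one MachineEvent so the dispatcher can route it.
enum NameplateEvent { Connected, Lost }
enum MachineEvent {
    Nameplate(NameplateEvent),
    // ... one variant per state machine (Mailbox, Order, Key, ...)
}

struct RendezvousMachine { connected: bool }

impl RendezvousMachine {
    /// React to an IO event, possibly generating events for other machines.
    fn process_io(&mut self, event: IOEvent) -> Vec<MachineEvent> {
        match event {
            IOEvent::WebSocketConnectionMade => {
                self.connected = true;
                // Nameplate can now go claim "4" and learn the mailbox id.
                vec![MachineEvent::Nameplate(NameplateEvent::Connected)]
            }
            _ => Vec::new(),
        }
    }
}
```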

(BTW you should probably read the python docs/api.md for any of this to make sense, and also study the state machine diagrams in docs/state-machines, which I'm afraid aren't documented very well).

The execute() method collects all generated events in a queue, and runs a loop which dispatches them to whichever machine they're for. It also collects the APIAction and IOAction events that need to go back out to the glue layer. When execute() runs out of events to process, it returns the vector of actions, giving the glue layer some work to do.
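
In that spirit, here is a fuller version of the execute() body from the first sketch. Again an assumption, not the actual code in lib.rs: the `rendezvous` field and the `process_machine_event` signature are both placeholders.

```rust
use std::collections::VecDeque;

impl WormholeCore {
    pub fn execute(&mut self, event: Event) -> Vec<Action> {
        let mut actions: Vec<Action> = Vec::new();
        let mut queue: VecDeque<MachineEvent> = VecDeque::new();

        // Initial dispatch: API events go to the Boss, IO events to
        // Rendezvous; both may generate machine events for the queue.
        match event {
            Event::API(e) => actions.extend(self.process_api_event(e)),
            Event::IO(e) => queue.extend(self.rendezvous.process_io(e)),
        }

        // Drain until quiescent: each machine event may fan out into
        // more machine events and/or outbound actions.
        while let Some(ev) = queue.pop_front() {
            let (more_events, more_actions) = self.process_machine_event(ev);
            queue.extend(more_events);
            actions.extend(more_actions);
        }

        actions // hand the accumulated work back to the glue layer
    }

    fn process_machine_event(&mut self, _ev: MachineEvent) -> (Vec<MachineEvent>, Vec<Action>) {
        (Vec::new(), Vec::new()) // route to whichever machine it's for
    }
}
```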

There's a hell of a lot of boilerplate in the enum names: I'm sure I'm missing some important tricks in Rust that would make this easier. And the unit tests are really unsatisfying: I only exercised a bare minimum of the events being generated, and the amount of work is enormous. There's gotta be a better way.

But I kind of like the overall approach, if we can find a way to make it less wordy.

There are probably several subprojects that we could parallelize on:

  • Writing the stubs for each state machine. There are about 13 of them (basically everything in src/wormhole/_*.py; grep for "MethodicalMachine"). Each one goes into a separate .rs file (the files are already there). If you look at core/src/nameplate.rs, you'll see what a stub looks like: just enough to be instantiated and then to ignore all the messages you send it (there's a sketch of that shape after this list). The dispatch function in lib.rs (WormholeCore.process_machine_event) should also get updated to route messages to that machine.

  • Once most of the stubs are in place, we can start on porting over each state machine. We'll have to study the python version, figure out the states and transitions and other processing, and then build the execute() method that does the same thing.

(We don't really have to build a whole stub for e.g. Mailbox before we can start porting other machines that send events to Mailbox. We have to update the process_machine_event() dispatch function to accept messages for the new machine, but we can throw them away instead of delivering them to something that doesn't exist yet. The merge conflicts might be smaller if we write the stubs first, though).

  • Figure out some way to test these machines better than what I have in rendezvous.rs right now. I'm hoping for some concise way to say "send in events A,B,C, assert that I get back events X,Y,Z" (one possible helper shape is sketched after this list). But the events are currently enums of enums of enums, and e.g. checking that the WebSocketHandle inside event Z matches the one that was previously provided in event X takes a huge amount of code.

  • Write at least one proper glue layer. I think this can mostly be done without any of the state machines working (although we probably can't compile it yet), just based on the stuff in api.rs: something with a couple of channels, some threads, and a sensible async frontend API. The current core/examples/ws.rs hard-codes the expectation that the first IOAction will be a request to open a new WebSocket; it opens that websocket and only then drops into the general "do whatever Actions the Core wants" loop. Instead, it should just call start() and then go straight into the loop, only making a connection when it gets the WebSocketOpen action. (Nominally the glue should be prepared to make multiple simultaneous websocket connections, although the Core is only ever going to make one at a time.)

  • Figure out how Input is going to work. This is the wormhole half of the tab-completion code-input routines: there's an API that lets the frontend ask for the list of nameplates that start with a given prefix, then claim the nameplate (i.e. once someone types the first "-" of the code), then ask for completions from the wordlist, etc. The python client has a Readline completion helper that uses this API to do tab-completion. We need to figure out what the API should be (more Events and Actions), port _input.py and _rlcompleter.py, and then eventually write some kind of rlcompleter for some rust frontend CLI libraries.

  • Cleanups:

    • Enum names are too big. Maybe use an "Event" trait instead of nested enums, with a method that says which machine it should be dispatched to?
    • API names don't make much sense. "Events" could be "Commands" or "Requests"? Something should make the direction clear: is the app telling the wormhole what to do, or is the wormhole asking the IO glue layer to do something for it?
    • Make a Trait for WormholeCore as API documentation? Or just leave it as a struct with some public methods?
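
For the stub bullet above, a do-nothing machine might be as small as this (modelled loosely on core/src/nameplate.rs; the MailboxEvent variants are placeholders, not the real event set):

```rust
// A stub state machine: it can be instantiated, and it silently
// ignores every message sent to it.
pub enum MailboxEvent { Connected, Lost, RxMessage(String) }

pub struct MailboxMachine {}

impl MailboxMachine {
    pub fn new() -> MailboxMachine {
        MailboxMachine {}
    }

    pub fn process(&mut self, _event: MailboxEvent) -> Vec<MachineEvent> {
        Vec::new() // accept and drop every event, for now
    }
}
```

For the testing bullet, one possible shape is a helper that feeds a scripted sequence of events through the core and compares the emitted actions in one shot. This assumes the Event/Action enums derive PartialEq and Debug, and that WormholeCore::new() and the expected relay URL look like this; both are assumptions for the sake of the example:

```rust
#[cfg(test)]
mod tests {
    use super::*;

    // Feed events in order, collect everything that comes out, so a
    // test reads as "A,B,C in; X,Y,Z out".
    fn run_script(core: &mut WormholeCore, events: Vec<Event>) -> Vec<Action> {
        events.into_iter().flat_map(|e| core.execute(e)).collect()
    }

    #[test]
    fn allocate_opens_websocket() {
        let mut core = WormholeCore::new();
        let actions = run_script(&mut core, vec![Event::API(APIEvent::AllocateCode)]);
        // Hypothetical expected output; requires PartialEq + Debug on Action.
        assert_eq!(
            actions,
            vec![Action::IO(IOAction::WebSocketOpen("ws://127.0.0.1:4000/v1".into()))]
        );
    }
}
```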

Oh, and all of this only gets us as far as the "wormhole send --text" level of functionality. To get actual files, we need to either port wormhole's "Transit" code, or the new (not yet working) "Dilation" code, or both (to provide interoperability with existing python clients). I haven't figured out too much of that level yet. I think we can achieve it with the same basic sans-io API approach (by adding Actions like "open a TCP connection to HOST:PORT" and "pause reading on TCP connection 4", and Events like connectionMade, dataReceived, and pauseProducing), but it'll take a bunch of new work beyond the framework we've built so far.
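
Purely as illustration of "same approach, more vocabulary": every name below is invented, since none of this is designed yet. The handle type would let the core juggle several connections without owning any sockets itself.

```rust
// Speculative Transit/Dilation-level additions to the sans-io vocabulary.
pub struct TcpHandle(pub u32);

pub enum TransitIOAction {
    TcpConnect { handle: TcpHandle, host: String, port: u16 },
    TcpSend(TcpHandle, Vec<u8>),
    TcpPauseReading(TcpHandle),
}

pub enum TransitIOEvent {
    TcpConnectionMade(TcpHandle),
    TcpDataReceived(TcpHandle, Vec<u8>),
    TcpPauseProducing(TcpHandle),
}
```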
