
feat: single-tick non-tracking event loop runner #56

Closed

Conversation

guybedford (Contributor):

ComponentizeJS runs the event loop without any registered interest as a way to ensure that async work still makes progress.

We do this in the post_call for functions so that the event loop is able to progress without affecting the behaviour of the primary exported function.

Certainly with async support this picture will change, but even with async support I think this concept of "running a single tick" is useful.

The problem is that running a single tick does not do any task selection, due to the risk of blocking. This PR therefore implements a non-blocking ready call as an alternative to select, for the specific situation where the event loop is being run without async interest explicitly registered. This allows timeouts and other events that aren't tracked to still complete when they are ready, without causing unnecessary blocking.

I think this is a worthwhile use case for seeing WebAssembly/wasi-io#75 implemented as well.
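
To illustrate the shape of this, here is a minimal, self-contained sketch of the single-tick behaviour; the Task type and the ready() helper are hypothetical stand-ins for the engine's api::AsyncTask machinery, not its actual API.

```cpp
// Minimal sketch of a single, non-blocking event loop "tick".
// `Task` and `ready()` are hypothetical stand-ins for the engine's
// api::AsyncTask machinery, used only to keep this example self-contained.
#include <cstddef>
#include <cstdio>
#include <optional>
#include <vector>

struct Task {
  bool is_ready = false;                 // would be backed by a host pollable
  void run() { std::puts("task ran"); }
};

// Non-blocking readiness check: return the index of a ready task, if any.
static std::optional<std::size_t> ready(const std::vector<Task> &tasks) {
  for (std::size_t i = 0; i < tasks.size(); i++) {
    if (tasks[i].is_ready)
      return i;
  }
  return std::nullopt;                   // nothing ready; never block
}

// A single tick: run at most one ready task, then return to the caller,
// in contrast to the blocking select used when interest is registered.
static void run_single_tick(std::vector<Task> &tasks) {
  std::optional<std::size_t> maybe_task_idx = ready(tasks);
  if (!maybe_task_idx.has_value())
    return;                              // no ready work: yield, don't block
  std::size_t task_idx = maybe_task_idx.value();
  tasks[task_idx].run();
  tasks.erase(tasks.begin() + static_cast<std::ptrdiff_t>(task_idx));
  // The real loop would also drain microtask jobs after the task completes.
}

int main() {
  std::vector<Task> tasks = {{true}, {false}};
  run_single_tick(tasks);                // runs the one ready task, returns
}
```

The key property is that a tick takes at most one ready task and otherwise returns immediately, so it can never block the caller.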

guybedford (Contributor Author):

I've also added some fixes here to event loop interest tracking, such that event loop interest now corresponds exactly to the is_active boolean check on the fetch event.

tschneidereit (Member) left a comment:

This makes sense to me. I was initially concerned about spec compliance for FetchEvent, since this will cause us to potentially process async events without waitUntil being called. But I just checked again, and the service workers spec doesn't restrict this: waitUntil gives a guarantee that events are processed, rather than being a condition for it.

Comment on lines 88 to 96
// Perform a non-blocking select in the case of there being no event loop interest
// (we are thus only performing a "single tick", but must still progress work that is ready)
std::optional<size_t> maybe_task_idx = api::AsyncTask::ready(tasks);
if (!maybe_task_idx.has_value()) {
break;
}
task_idx = maybe_task_idx.value();
Member:

IIUC this will always run either a single ready task if there happens to be one, or none at all. Would it perhaps be better to continue running until no tasks are ready and we'd be blocking?

I think that'd be achieved by reverting the event loop to while (true), since then the break statements here and in line 76 above would ensure that we still end the loop.

guybedford (Contributor Author):

The problem is that if you have something like setInterval(() => console.log('busy')), it will just keep looping if we do that. So I think we do need to do just the one loop, to avoid potential long-running cycles between newly created handles and job processing.

Perhaps an alternative would be to process all ready handles on each tick, effectively adding an inner loop for ready tasks. This does get back to our previous discussion about interleaving job processing with task processing, though, where we determined that jobs should progress after each task. Perhaps another call to RunJobs in the inner loop can keep that correct while still processing all ready tasks here.
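
As a rough sketch of that shape, with Task, ready() and run_jobs() as hypothetical stand-ins for the engine's api::AsyncTask and RunJobs interfaces:

```cpp
// Sketch of an inner loop that drains every currently-ready task in one
// tick, running jobs after each task. `Task`, `ready()` and `run_jobs()`
// are hypothetical stand-ins, not the engine's real types.
#include <cstddef>
#include <optional>
#include <vector>

struct Task {
  bool is_ready = false;
  void run() { is_ready = false; }       // real tasks would do actual work
};

static std::optional<std::size_t> ready(const std::vector<Task> &tasks) {
  for (std::size_t i = 0; i < tasks.size(); i++) {
    if (tasks[i].is_ready)
      return i;
  }
  return std::nullopt;
}

static void run_jobs() {
  // placeholder for draining the JS job (microtask) queue
}

// One "tick": keep taking ready tasks until nothing is ready, without ever
// blocking. Note that a task which immediately re-arms itself (such as a
// zero-delay interval) could keep this loop busy, which is the saturation
// concern discussed above.
static void run_ready_tasks_once(std::vector<Task> &tasks) {
  while (true) {
    std::optional<std::size_t> maybe_task_idx = ready(tasks);
    if (!maybe_task_idx.has_value())
      break;                             // nothing ready: end the tick
    tasks[maybe_task_idx.value()].run();
    run_jobs();                          // interleave jobs after each task
  }
}

int main() {
  std::vector<Task> tasks = {{true}, {true}, {false}};
  run_ready_tasks_once(tasks);           // both ready tasks run in one tick
}
```

Whether re-checking readiness inside that loop reintroduces the saturation problem for self re-arming tasks is exactly the open question here.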

Contributor:

> The problem is that if you have something like setInterval(() => console.log('busy')), it will just keep looping.

I am curious to understand the motivation for wanting to prevent that. Is it not just similar to the user writing while(true) { console.log('busy') }? Running a script that contains just setInterval(() => console.log('busy')) in Node.js or the browser just loops forever. Wouldn't this be deviating from the standard behaviour?

guybedford (Contributor Author):

The concept here is that when there is no interest in the event loop, we only run a single "tick". This way we can represent the idea of progressing for "some time", even when there is no real interest in its progression, while interleaving work with other component executions.

Member:

This makes me wonder if we shouldn't instead expose a way to drive the event loop explicitly, or rely on embedders registering interest explicitly.

I fear that the concept of "a single tick" is very hard to define. Perhaps the best intuition I can come up with is to say "without interest, don't ever become blocked on I/O". That, however, would not address your concerns about setInterval(() => console.log('busy')). But maybe that actually is okay? If we rule that out, there are still the normal while (true) and the slightly more elaborate while (await Promise.resolve(true)) to implement infinite loops with, and we don't really have a way to address them.

I guess the real issue might to some degree be that we don't have a holistic, cross-component view of the event loop in any of this, because we're checking for readiness instead of doing a poll. Maybe that's another argument for exposing a way to drive the event loop? The loop itself could then again use poll for each iteration, properly integrating with the overall event queue.

guybedford (Contributor Author):

Having the primitives for event loop construction could well be more amenable to these custom use cases, as well as to preview 3 support.

I do think we should avoid the saturation case entirely: cooperative event loop construction should really be non-task-saturating, adhering to the event loop spec without one loop overwhelming the other, so that loops can cooperatively form a wider event loop. For example, two ComponentizeJS components, each with a setInterval(() => console.log("tick")), should interleave their ticks in this model.

Again, using poll also risks task saturation, as one event loop can uncooperatively stall the other's progression, which is why I'm using ready here.

Perhaps we can introduce a new function for this cooperative case, nonblocking_check_next_tasks or similar?
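
As a purely hypothetical sketch to make that suggestion concrete (neither the signature nor the Pollable type exists today; this is only an illustration of the idea):

```cpp
// Hypothetical sketch of a cooperative, non-blocking task check: report
// every task that is ready right now and return immediately, leaving any
// actual waiting to callers with explicitly registered interest.
// `Pollable` and the function itself are illustrative, not an existing API.
#include <cstddef>
#include <cstdio>
#include <vector>

struct Pollable {
  bool is_ready = false;
};

std::vector<std::size_t>
nonblocking_check_next_tasks(const std::vector<Pollable> &tasks) {
  std::vector<std::size_t> ready_now;
  for (std::size_t i = 0; i < tasks.size(); i++) {
    if (tasks[i].is_ready)
      ready_now.push_back(i);
  }
  return ready_now;                      // possibly empty; never blocks
}

int main() {
  std::vector<Pollable> tasks = {{false}, {true}, {true}};
  for (std::size_t idx : nonblocking_check_next_tasks(tasks))
    std::printf("task %zu is ready\n", idx);
}
```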

Member:

> For example, two ComponentizeJS components, each with a setInterval(() => console.log("tick")), should interleave their ticks in this model.

I very much agree! But isn't this something the runtime should ensure? And that should apply to interest-full event loops, too?

guybedford (Contributor Author):

While the model of async we are designing may be based entirely on interest at each call site, there is still the background task nature of JS that surely should be maintained when multiple JS components interact. Think more of a long-running command component situation, where we would want interleaved zero-interest calls between all the components in a loop to represent background task processing, perhaps via some special background_tasks() export, which by its nature is zero interest. At least for now, I'm using the post returns in ComponentizeJS to somewhat enable this with this PR.

runtime/event_loop.cpp (resolved comment)
runtime/event_loop.cpp (resolved comment)

3 participants