Experimenting with abortable algorithms #1443

Closed

Conversation

@jakearchibald (Contributor) commented Jun 25, 2019

As part of #1440, I've tried to create a system that allows us to define how algorithms are aborted, and what cleanup steps, if any, need to run.

How do we feel about it? Is it useful, or just confusing? The alternative is "abort when", but I think that approach is harder to follow. I've tried to outline my reasons in whatwg/infra#257.
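For anyone who finds implementation code easier to read than spec prose, here's a rough TypeScript analogy of the two styles; every name in it (updateAbortWhen, installWorker, and so on) is invented for illustration and isn't a spec term:

```ts
// Trivial stand-ins for real algorithm steps; purely illustrative.
const fetchScript = async (_signal: AbortSignal) => "script-body";
const installWorker = async (_script: string) => ({ id: 1 });
const activateWorker = async (_worker: { id: number }) => {};
const discardWorker = (_worker: { id: number }) => {};
const clearPartialState = async () => {};

// Style A, roughly "abort when": one cleanup block covers a whole run of
// steps, and which parts actually ran before the abort is implicit.
async function updateAbortWhen(signal: AbortSignal): Promise<void> {
  try {
    const script = await fetchScript(signal);
    signal.throwIfAborted();
    const worker = await installWorker(script);
    signal.throwIfAborted();
    await activateWorker(worker);
  } catch (err) {
    // Cleanup for *all* of the above, whatever actually happened.
    await clearPartialState();
    throw err;
  }
}

// Style B, roughly this proposal: each step that creates state sits next to
// its own abort step, so it's easy to audit that nothing lacks cleanup.
async function updateAbortSteps(signal: AbortSignal): Promise<void> {
  const script = await fetchScript(signal);
  const worker = await installWorker(script);
  signal.addEventListener("abort", () => discardWorker(worker), { once: true });
  await activateWorker(worker);
  signal.throwIfAborted();
}
```

The point of style B is that each piece of cleanup sits next to the step that made it necessary.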

cc @mattto @jungkees



@mfalken (Member) commented Jun 25, 2019

This is an interesting idea. It may help to see a real example in practice. However, I'm kind of leaning toward the second example in whatwg/infra#257 (the 7-step one, with step 6 being "If aborted") as being OK. It would seem to match typical implementation code more closely. E.g., we have ServiceWorkerRegisterJob::Abort (https://cs.chromium.org/chromium/src/content/browser/service_worker/service_worker_register_job.cc?l=134&rcl=42482d4f5324ceefe37d7a84777c53febfcf4efa), which has to go through and clean up any half-baked state in one sequential set of steps. Though a better implementation might have done this differently.
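Very roughly, that pattern looks like a single sequential cleanup that inspects whatever half-baked state exists; a TypeScript sketch with invented names (not the actual Chromium code):

```ts
type JobPhase = "initial" | "registration-stored" | "worker-started";

class RegisterJob {
  private phase: JobPhase = "initial";

  run(signal: AbortSignal): void {
    this.storeRegistration();
    this.phase = "registration-stored";
    if (signal.aborted) return this.abortJob();

    this.startWorker();
    this.phase = "worker-started";
    if (signal.aborted) return this.abortJob();
  }

  // One sequential set of cleanup steps that inspects whatever half-baked
  // state exists, instead of a cleanup step attached to each main step.
  private abortJob(): void {
    if (this.phase === "worker-started") this.stopWorker();
    if (this.phase !== "initial") this.removeRegistration();
  }

  private storeRegistration(): void {}
  private startWorker(): void {}
  private stopWorker(): void {}
  private removeRegistration(): void {}
}
```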

@wanderview (Member) commented

Yeah, I agree "if abort" with explicit checking in algorithms would make aborting most predictable. The reality is that the C++ will have to do this at certain times, and we should agree on when, in order to maintain interop.

This is also going to be a pain to test, but I don't know if there is a good solution for that.

@jakearchibald (Contributor, Author) commented

I've used it in https://pr-preview.s3.amazonaws.com/w3c/ServiceWorker/pull/1443.html#register-algorithm (and Update) if you're looking for a real-world example.

"Abort when" is ok, but you often end up with really big blocks of steps, and it becomes difficult to associate the abort steps with the steps. Eg https://fetch.spec.whatwg.org/#http-network-or-cache-fetch.

The things I like about this proposal: it's easy to check whether a step has an appropriate abort step, and when editing, it's easier to remember to change the abort step if the main step is altered.

Also, the 'throwing' behaviour seems to work better if you have algorithms calling algorithms.
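As a rough TypeScript analogy of the 'throwing' point (invented names, not spec text): the innermost step throws on abort, and every intermediate algorithm unwinds without any abort bookkeeping of its own.

```ts
async function innerStep(signal: AbortSignal): Promise<string> {
  signal.throwIfAborted(); // the only abort check that's needed
  return "value";
}

async function middleAlgorithm(signal: AbortSignal): Promise<string> {
  // No abort handling here at all; the throw from innerStep unwinds this frame.
  return innerStep(signal);
}

async function outerAlgorithm(signal: AbortSignal): Promise<"done" | "aborted"> {
  try {
    await middleAlgorithm(signal);
    return "done";
  } catch (err) {
    // Only the outermost algorithm decides how an abort surfaces.
    if (err instanceof DOMException && err.name === "AbortError") return "aborted";
    throw err;
  }
}
```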

However, it may be possible to get these benefits in a less funky way.

@asakusuma commented

@jakearchibald would it be fair to say that the differentiating scenario for abort-step usage is when you have really big blocks of steps with lots of shared state mutation? The service worker spec has several big algorithms, but the aborts don't feel too unwieldy, simply because there's not a lot of shared state to roll back. But I can totally see this being really nice if there were a lot of shared state that had to be reset.

Assuming the main use of abort steps is shared-state reset, I'm not sure that abort steps are actually directly useful for #1440, since I would prefer the last resort killswitch algorithm itself to nuke a top level container for all service worker global state, rather than relying on each running algorithm. Pulling the ripcord on a state container seems easier to reason about, and it makes the guarantees of the last resort killswitch easier to verify, though I'm not sure if that's feasible.

I can also see abort steps being nice for making the abort return values clear, but similarly, I'd rather keep things simple for #1440 and just return null or some other directive indicating that the algorithm has been 'hard killed', and let the user agent deal with it.
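A minimal sketch of the "pull the ripcord on a state container" idea, with a hypothetical shape for the container (none of these names are spec terms):

```ts
interface ServiceWorkerState {
  registrations: Map<string, unknown>;
  jobQueue: unknown[];
}

let state: ServiceWorkerState = { registrations: new Map(), jobQueue: [] };

function lastResortKillswitch(): void {
  // Running algorithms may still hold references to the old container, but
  // everything the user agent consults from now on starts empty.
  state = { registrations: new Map(), jobQueue: [] };
}
```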

@jakearchibald (Contributor, Author) commented

> I would prefer the last resort killswitch algorithm itself to nuke a top level container for all service worker global state

Yeah, although I'm hoping to reuse this stuff for reg.unregister({ immediate: true }) or whatever. We still need to define what happens to in-flight fetches etc.
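To be clear, immediate is hypothetical and not something unregister() accepts today; a usage sketch of the proposed shape, with a cast only because the option doesn't exist in the DOM type definitions:

```ts
async function hardUnregister(): Promise<void> {
  const reg = await navigator.serviceWorker.getRegistration();
  if (!reg) return;
  // Cast because the DOM types (correctly) don't know about the proposed option.
  const unregister = reg.unregister.bind(reg) as (
    opts?: { immediate?: boolean }
  ) => Promise<boolean>;
  await unregister({ immediate: true }); // tear down now, aborting in-flight work
}
```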

@asakusuma commented

> We still need to define what happens to in-flight fetches etc.

@jakearchibald for both the killswitch and for unregister({ immediate: true })? Even for the latter, if we are immediately getting rid of the entire registration, is there anything special we would need to do within the algorithm to abort? For fetch, can we simply drop the task and fall back to network?

When aborting individual algorithms and keeping the registration, I think it makes sense to ensure the algorithms are clear on how to safely abort without corrupting the rest of the service worker state. If we are already getting rid of the entire registration, I don't see how that's necessary.

@jakearchibald (Contributor, Author) commented

> For fetch, can we simply drop the task and fall back to network?

Queued fetches aren't really the issue. It's more the fetches that are already in flight, as in the request has been made or is in progress.

> If we are already getting rid of the entire registration, I don't see how that's necessary.

I'll take a look and see if we can shortcut this.

@asakusuma commented

> Queued fetches aren't really the issue. It's more the fetches that are already in flight, as in the request has been made or is in progress.

Even if an HTTP request is already in progress, can we not just drop the task? Like if the user agent gets the response, and the requesting task no longer exists, is that not gracefully handled? Not sure if it's possible to have an underlying assumption that, by default, any running service worker task can simply be dropped and that the user agent will gracefully fail any dependent tasks. cc @mattto @wanderview @asutherland

What does step 5 of Terminate service worker do if there are running tasks, like fetch tasks?

> 1. Abort the script currently running in serviceWorker.

@jakearchibald (Contributor, Author) commented

> Even if an HTTP request is already in progress, can we not just drop the task? Like if the user agent gets the response, and the requesting task no longer exists, is that not gracefully handled?

Fetch never happens as part of a task (except in the case of synchronous XHR). It happens "in parallel", which can begin within a task, but there isn't a formal link between "in parallel" and the potential parent task.
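A small TypeScript sketch of that, with a placeholder URL: the only thing tying an in-flight fetch back to anything is an explicit handle like an AbortController, and dropping the task that started it changes nothing.

```ts
const controller = new AbortController();

function someTask(): void {
  // The task finishes immediately, but the request carries on "in parallel";
  // there is no implicit parent/child link back to this task.
  fetch("https://example.com/sw.js", { signal: controller.signal }).catch(() => {});
}

someTask();
// Discarding whatever queued the task does nothing to the request; only an
// explicit abort does:
controller.abort();
```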

> What does step 5 of Terminate service worker do if there are running tasks, like fetch tasks?
>
> 1. Abort the script currently running in serviceWorker.

Fetches that have the service worker as their client will terminate (since the fetch group is terminated; the fetch spec defines how this happens). But we don't use a client for things like service worker updates.

Some algorithms already handle termination and task discarding, like https://w3c.github.io/ServiceWorker/#installation-algorithm step 11.3.1 (If task is discarded). But it's still kinda hand-wavy.

@asakusuma commented

> Fetches that have the service worker as their client will terminate (since the fetch group is terminated; the fetch spec defines how this happens). But we don't use a client for things like service worker updates.

Re: step 5 of Terminate service worker, is the fetch group termination implicit or explicit? (I can't seem to find any explicit language around terminating fetch groups.) Can we create a similar group construct for service workers, either per origin or per registration, so that we can associate algorithms (like update) with the service worker group and then transitively terminate everything in one swoop via the group?
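A hypothetical sketch of what such a group could look like (invented names, not spec text):

```ts
class ServiceWorkerGroup {
  private readonly members = new Set<AbortController>();

  // Called when an algorithm (update, install, a fetch it spawns, ...) starts;
  // the returned signal is passed down into everything that algorithm does.
  enroll(): AbortSignal {
    const controller = new AbortController();
    this.members.add(controller);
    controller.signal.addEventListener("abort", () => this.members.delete(controller));
    return controller.signal;
  }

  // Terminating the group transitively aborts every enrolled algorithm.
  terminate(): void {
    for (const controller of this.members) controller.abort();
    this.members.clear();
  }
}

// One group per registration scope (could equally be one per origin).
const groups = new Map<string, ServiceWorkerGroup>();
```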

@jakearchibald (Contributor, Author) commented

Closing this. Fun experiment, but I don't think we need additional infrastructure for this.

@jakearchibald deleted the aborting-algos branch on September 15, 2019.