Spin Timer Trigger Loop Exit Feature #1381

Open
wants to merge 1 commit into base: main

Conversation

suneetnangia (Contributor)

This change allows components triggered by the timer trigger plugin to exit when they want to, effectively inverting control so that a component can exit the timer loop as per its business logic. This is useful if you are composing a service that consists of multiple components (some based on pub-sub, others not) and not all components need to run in an infinite loop.
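To make the intent concrete, here is a minimal host-side sketch of the idea. This is not the actual PR diff: the trait and names are hypothetical stand-ins for the generated WIT bindings through which the trigger calls the component; the only point illustrated is that the tick's return value decides whether the loop continues.

```rust
// Illustrative sketch only. In the real spin-timer example the call goes
// through the Wasm component's exported function; a trait stands in for it
// here so the control-flow change is easy to see.
use std::time::Duration;

/// Hypothetical stand-in for the guest component invoked on each tick.
trait TimerComponent {
    /// Returns `true` to keep being triggered, `false` to exit the timer loop.
    fn handle_timer_request(&mut self) -> bool;
}

fn run_timer_loop(component: &mut dyn TimerComponent, interval: Duration) {
    loop {
        if !component.handle_timer_request() {
            // The component has signalled it is done, so the trigger stops
            // polling it instead of looping forever.
            break;
        }
        std::thread::sleep(interval);
    }
}
```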

@suneetnangia suneetnangia force-pushed the spin-timer-tigger-update branch 2 times, most recently from b6006ad to 609168f Compare April 14, 2023 14:43
@suneetnangia suneetnangia marked this pull request as ready for review April 14, 2023 14:59
@itowlson itowlson (Contributor) left a comment


This is a really interesting idea, and I see the value in it, but I have deep reservations about whether this is the right way to implement it. The fundamental issue is with maintaining the 'completed' state. Exiting the loop is fine within a single local Spin instance, but what happens when that instance restarts - for example, because spin watch restarts it, or because you need to update the app, or because the computer needs to be rebooted, or because the VM is being upgraded and the workload needs to be moved to a different VM? Because all we've done is exit the timer loop in the original instance, the new instance has no memory that the component's work is done, and starts a new timer loop for it. We are relying on transient program flow in the host rather than structured, persistent application state.

And right now I don't have a better solution. Better solutions may be hard: Spin has an unarticulated design philosophy that not only are component instances stateless, but even the runtime itself is sorta-kinda stateless (stateful within a session for sure, but between sessions, not so much) - everything should be in the manifest. This shows up particularly in the cloud where applications can be evicted and rehydrated willy-nilly. A feature like this requires a concept of application state that we just don't have at the moment.

So I am not sure how to proceed. Part of me doesn't want to turn a simple PR into a vast and foundational architectural investigation. But at the same time, I don't feel comfortable introducing this behaviour ad hoc without thinking through the architectural implications (particularly for a sample that is meant to guide others).

I hope this makes sense, and I'd be interested to hear how you and other Spin folks feel about this and how to manage it. Thanks for the PR and for all the thoughts it has sparked!

examples/spin-timer/src/main.rs (review thread, outdated, resolved)
examples/spin-timer/spin-timer.wit (review thread, outdated, resolved)
@suneetnangia (Contributor, Author) commented Apr 17, 2023


Thanks @itowlson, I don't disagree with what you alluded to above; it's a valid concern. One way of thinking about this is that we can decouple application-level state management from the Spin framework (to your point, keeping it stateless) and let the components handle state between restarts according to their business logic, using the state store options provided by Spin, e.g. the KV store on a single machine or Redis if it's running across multiple machines. This is how I am addressing some of these concerns in my local branch, where app restarts do not lose this state because it is maintained in the KV store (single-machine use case). A sketch of the pattern follows below.
Another example of this pattern, where state is maintained externally, would be checkpointing events from Azure Event Hubs as we read them; that process can run for a long time, and we may need app/VM/host restarts in between for the very reason you mentioned above.
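Here is a minimal sketch of keeping the 'completed' marker outside the process so a restarted instance does not redo finished work. The store trait is a hypothetical stand-in, not the Spin SDK's key-value API; any Spin-provided store (KV locally, Redis across machines) could back it.

```rust
// Hypothetical stand-in for whatever durable store the component uses.
trait StateStore {
    fn get(&self, key: &str) -> Option<Vec<u8>>;
    fn set(&mut self, key: &str, value: &[u8]);
}

const COMPLETED_KEY: &str = "timer-component-completed";

/// Returns `true` to keep being triggered, `false` to exit the timer loop.
fn handle_timer_request(store: &mut dyn StateStore) -> bool {
    // If a previous instance already finished the work, exit immediately.
    if store.get(COMPLETED_KEY).is_some() {
        return false;
    }

    do_business_logic();

    // Record completion durably so the decision survives restarts.
    store.set(COMPLETED_KEY, b"done");
    false
}

fn do_business_logic() {
    // Placeholder for the component's actual work.
}
```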

I'd welcome your and others' thoughts on it.
Ideally, I'd prefer to have an HTTP 202/Accepted pattern in Spin alongside this polling/timer trigger to allow end-to-end service composition, but that's something for another PR/issue (we discussed some of it on Discord).

Signed-off-by: Suneet Nangia <[email protected]>
Labels: none yet
Project status: 👀 In review
2 participants