Pubsub delayed scheduling proposal #12
This will not work with every broker. Not acking a message can prevent any further messages from being consumed in some configurations. Kafka, for example, guarantees ordering per partition, so a message that is never acked clogs its partition for every message behind it, including messages that are already due.
Instead, this should implement an outbox pattern. In the outbox pattern, messages are only published after a business transaction takes place; here, the business transaction is driven by the clock. The component must then scan the state store to identify which messages are due, publish them, and delete the corresponding records.
The state store keys would essentially be segmented into 1h windows (or any other granularity) and contain the IDs of every message that must be delivered in that hour. daprd would then look at the current hour and the previous hours (configurable by the user as -n hours) and publish every message that is due in those time windows, as in the sketch below.
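A minimal sketch of what the bucket layout and the scheduling write path could look like, in Go. The key format, the `Entry`/`KVStore` types, and the `Schedule` helper are assumptions made for illustration, not an existing Dapr API:

```go
package delayed

import "time"

// Entry records a scheduled message ID together with its exact due time.
type Entry struct {
	MsgID string
	DueAt time.Time
}

// KVStore is a hypothetical, minimal view of the underlying state store.
type KVStore interface {
	GetEntries(key string) ([]Entry, error)
	SetEntries(key string, entries []Entry) error
	Delete(key string) error
}

// bucketKey maps a due time to an hourly bucket key, e.g. "delayed||2024-05-01T13".
// The 1h granularity is assumed here; any other window size works the same way.
func bucketKey(dueAt time.Time) string {
	return "delayed||" + dueAt.UTC().Format("2006-01-02T15")
}

// Schedule records that msgID must be delivered at dueAt by appending it
// to the bucket that covers that hour.
func Schedule(store KVStore, msgID string, dueAt time.Time) error {
	key := bucketKey(dueAt)
	entries, err := store.GetEntries(key)
	if err != nil {
		return err
	}
	return store.SetEntries(key, append(entries, Entry{MsgID: msgID, DueAt: dueAt}))
}
```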
This makes the solution behave predictably without depending on broker-specific handling of unacked messages, relying only on the state store pattern underneath.
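A sketch of the clock-driven side, continuing the same illustrative package as above: on every tick it walks the buckets for the current hour and the previous n hours, publishes entries whose due time has passed, and rewrites or deletes each bucket so nothing is delivered twice. `ScanAndPublish` and the `publish` callback are hypothetical names:

```go
// ScanAndPublish is meant to be called periodically by daprd's clock.
// lookBackHours corresponds to the user-configurable "-n hours" window.
func ScanAndPublish(store KVStore, publish func(msgID string) error, now time.Time, lookBackHours int) error {
	for h := 0; h <= lookBackHours; h++ {
		key := bucketKey(now.Add(-time.Duration(h) * time.Hour))
		entries, err := store.GetEntries(key)
		if err != nil {
			return err
		}
		var pending []Entry
		for _, e := range entries {
			if e.DueAt.After(now) {
				pending = append(pending, e) // not due yet, keep for a later tick
				continue
			}
			// Note: if publish fails part-way, entries already published stay in the
			// bucket and may be re-published on the next tick (at-least-once delivery).
			if err := publish(e.MsgID); err != nil {
				return err
			}
		}
		if len(pending) == 0 {
			if err := store.Delete(key); err != nil {
				return err
			}
		} else if err := store.SetEntries(key, pending); err != nil {
			return err
		}
	}
	return nil
}
```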
@yaron2 has this comment been integrated yet? If so, can you resolve the conversation? :)
It hasn't been resolved yet