Receive flow for application messages #326
Conversation
Compare: 207852d to 82d1eb7
```rust
let result = group.receive().await;
if let GroupError::ReceiveError(errors) = result.err().unwrap() {
    assert_eq!(errors.len(), 1);
```
The OpenMLS manual states: "To guarantee the best possible Forward Secrecy, the key material used to encrypt messages is immediately discarded after encryption. This means that the message author cannot decrypt application messages. If access to the messages content is required after creating the message, a copy of the plaintext message should be kept by the application."
So I think we'll need to do a few things in the commit flow:
- Save the plaintext message when sending
- Advance the last message timestamp in the topics table to indicate the payload has already been consumed. I think we need to correlate this based on the hash of the encrypted bytes; I didn't see any way to detect the sender without decrypting the message, which we can't do here
Saving these for later PRs.
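The two bookkeeping steps above could be sketched roughly like this. Everything here is a hypothetical illustration, not the actual XMTP/OpenMLS API: `SentMessageStore`, `record_send`, and `take_own_message` are invented names, and the `DefaultHasher` digest is a stand-in for whatever hash is actually used over the encrypted bytes.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

/// Hypothetical store keeping the plaintext of messages we sent,
/// keyed by a hash of the encrypted bytes that went over the wire.
#[derive(Default)]
struct SentMessageStore {
    plaintexts: HashMap<u64, Vec<u8>>,
}

fn hash_ciphertext(ciphertext: &[u8]) -> u64 {
    // Stand-in digest; a real implementation would use a cryptographic hash.
    let mut h = DefaultHasher::new();
    ciphertext.hash(&mut h);
    h.finish()
}

impl SentMessageStore {
    /// Called in the send flow right after encryption, since the key
    /// material is discarded and we can never decrypt our own message.
    fn record_send(&mut self, ciphertext: &[u8], plaintext: Vec<u8>) {
        self.plaintexts.insert(hash_ciphertext(ciphertext), plaintext);
    }

    /// Called in the receive flow: if the envelope matches something we
    /// sent, return the saved plaintext instead of trying to decrypt.
    fn take_own_message(&mut self, ciphertext: &[u8]) -> Option<Vec<u8>> {
        self.plaintexts.remove(&hash_ciphertext(ciphertext))
    }
}

fn main() {
    let mut store = SentMessageStore::default();
    store.record_send(b"ciphertext-bytes", b"hello".to_vec());
    // Matching by ciphertext hash avoids needing to identify the sender.
    assert_eq!(
        store.take_own_message(b"ciphertext-bytes"),
        Some(b"hello".to_vec())
    );
    assert_eq!(store.take_own_message(b"ciphertext-bytes"), None);
}
```

Matching on the ciphertext hash sidesteps the problem noted above: the receiver can recognize its own payload without decrypting it.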
The plaintext and the message hash are already saved in the intent to tee this flow up
For application messages, we can just use the plaintext saved in the intent.
For commits, we just need to merge_pending_commit. Technically, there is some ambiguity around which pending commit is actually being merged, since the function doesn't take any arguments. We might want some more metadata on the intent to ensure the pending commit in the MLS store matches the intent. In practice it should be fine, since we can only have one pending commit at a time.
As long as we properly clear out failed intents it'll always be committing the right one, but it does feel a bit brittle to not know exactly which staged commit you are merging.
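The metadata idea could look something like the sketch below: record a hash of the staged commit on the intent at publish time, and check it against the store's pending commit before merging. All names (`CommitIntent`, `PendingCommit`, `merge_if_matching`) are hypothetical; the real merge_pending_commit call would replace the `Ok(())` arm.

```rust
/// Hypothetical metadata saved on the intent when the commit is staged,
/// so the receive flow can check it is merging the commit it thinks it is.
struct CommitIntent {
    /// Hash of the staged commit's serialized bytes at publish time.
    staged_commit_hash: u64,
}

struct PendingCommit {
    hash: u64,
}

#[derive(Debug, PartialEq)]
enum MergeError {
    /// The pending commit in the MLS store is not the one this intent staged.
    CommitMismatch,
    NoPendingCommit,
}

/// Sketch: only merge when the store's single pending commit matches the
/// intent's recorded hash; otherwise surface the ambiguity as an error.
fn merge_if_matching(
    intent: &CommitIntent,
    pending: Option<&PendingCommit>,
) -> Result<(), MergeError> {
    match pending {
        None => Err(MergeError::NoPendingCommit),
        Some(p) if p.hash != intent.staged_commit_hash => Err(MergeError::CommitMismatch),
        Some(_) => {
            // In the real flow, this is where merge_pending_commit() would run.
            Ok(())
        }
    }
}

fn main() {
    let intent = CommitIntent { staged_commit_hash: 42 };
    assert_eq!(merge_if_matching(&intent, Some(&PendingCommit { hash: 42 })), Ok(()));
    assert_eq!(
        merge_if_matching(&intent, Some(&PendingCommit { hash: 7 })),
        Err(MergeError::CommitMismatch)
    );
}
```

This keeps the "only one pending commit" invariant as the happy path while making a mismatch detectable instead of silently merging the wrong staged commit.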
@neekolas it might make sense to save the plaintext during intent publishing directly onto the messages table, with a 'status' field like 'uncommitted'; that would let us support optimistic UI on send if apps want it.
Agreed. We are going to need optimistic sending. Especially if we don't intend to fully sync after every message publish, which I think makes the most sense.
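The 'status' field proposed above might be modeled roughly as follows. This is a sketch under assumed names (`DeliveryStatus`, `StoredMessage`, `mark_committed` are all invented), not the actual messages-table schema.

```rust
/// Hypothetical status column for the messages table, enabling optimistic UI:
/// the row is written at publish time and flipped once the payload is seen
/// back on the network through the receive flow.
#[derive(Debug, Clone, Copy, PartialEq)]
enum DeliveryStatus {
    /// Saved locally at send time; not yet observed on the network.
    Uncommitted,
    /// Seen back in the topic; safe to treat as delivered.
    Committed,
    /// Publish failed permanently; the UI can offer a retry.
    Failed,
}

struct StoredMessage {
    plaintext: Vec<u8>,
    status: DeliveryStatus,
}

impl StoredMessage {
    /// Inserted during intent publishing, so the UI can render immediately.
    fn new_optimistic(plaintext: Vec<u8>) -> Self {
        Self { plaintext, status: DeliveryStatus::Uncommitted }
    }

    /// Called when the receive flow matches this message (e.g. by the hash
    /// of its encrypted bytes) instead of inserting a duplicate row.
    fn mark_committed(&mut self) {
        self.status = DeliveryStatus::Committed;
    }
}

fn main() {
    let mut msg = StoredMessage::new_optimistic(b"hi".to_vec());
    assert_eq!(msg.status, DeliveryStatus::Uncommitted);
    assert_eq!(msg.plaintext, b"hi".to_vec());
    msg.mark_committed();
    assert_eq!(msg.status, DeliveryStatus::Committed);
}
```

A nice side effect of this shape is that the receive flow's "is this my own message?" check becomes an update of an existing row rather than an insert.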
```rust
let receive_errors: Vec<MessageProcessingError> = envelopes
    .into_iter()
    .map(|envelope| -> Result<(), MessageProcessingError> {
        let mls_message_in = MlsMessageIn::tls_deserialize_exact(&envelope.message)?;
```
The thing that gets messy here is error handling. There are some errors we want to just log and move on from (garbage message, conflicting epochs/race conditions).
But there are some errors that we need to retry. For example, any sort of IOError saving to the DB or getting a connection from the pool. If Client A processes a commit successfully and Client B fails on the same commit, their group state will fall out of sync and Client B will be unable to read future messages.
One way to solve this would be to have some sort of IsRetryable method available on GroupErrors. Retryable errors would abort the process_messages flow entirely, since we don't want to apply any future commits or increment the last_message_timestamp until we've fully processed the preceding message. The caller could then retry later. We'd just have to be sure that retryable errors can eventually either succeed or turn into non-retryable errors (maybe by keeping track of the number of attempts); otherwise it could be a never-ending retry loop.
Open to other solutions.
It also might be valid to completely ignore this edge case and just have really good mechanisms for re-adding yourself to the group once the client gets into this bad state, assuming the bad state is reliably detectable.
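The retryable-vs-skippable split described above could be sketched like this. The variant names here are illustrative, not the crate's actual GroupError/MessageProcessingError variants, and `process_messages` is a simplified stand-in for the real flow.

```rust
/// Hypothetical error classification for the receive flow.
#[derive(Debug)]
enum MessageProcessingError {
    /// Garbage message or epoch conflict/race: log and move on.
    InvalidMessage,
    /// Transient I/O failure (DB write, pool checkout): must retry,
    /// or this client's group state falls out of sync.
    Storage,
}

impl MessageProcessingError {
    fn is_retryable(&self) -> bool {
        matches!(self, MessageProcessingError::Storage)
    }
}

/// Sketch of the proposed flow: stop at the first retryable error so no
/// later commit is applied and last_message_timestamp is not advanced past
/// an unprocessed message. Returns how many envelopes were consumed.
fn process_messages(
    results: Vec<Result<(), MessageProcessingError>>,
) -> Result<usize, MessageProcessingError> {
    let mut processed = 0;
    for result in results {
        match result {
            Ok(()) => processed += 1,
            // Non-retryable: log/record it and keep going.
            Err(e) if !e.is_retryable() => processed += 1,
            // Retryable: abort so the caller can resume from here later.
            Err(e) => return Err(e),
        }
    }
    Ok(processed)
}

fn main() {
    let batch = vec![
        Ok(()),
        Err(MessageProcessingError::InvalidMessage),
        Err(MessageProcessingError::Storage),
        Ok(()),
    ];
    // Aborts before the fourth envelope because the third needs a retry.
    assert!(process_messages(batch).is_err());
}
```

The attempt-counting safeguard mentioned above would sit around the caller's retry loop, demoting an envelope to non-retryable once it exceeds some attempt budget.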
This is a great point - I added it to the list of things to follow up on above.
I have a similar issue to figure out for the message publishing flow as well.
This reads in application messages, decrypts them, and stores them in the database.
There are still lots of missing pieces here:
- Updating the topic_state table separately

There's plenty of space left here also to add in the logic for other message types.