
Receive flow for application messages #326

Merged
richardhuaaa merged 16 commits into main from rich/receive-application-message on Nov 14, 2023

Conversation

richardhuaaa (Contributor) commented on Nov 10, 2023:

This reads in application messages, decrypts them, and stores them in the database.

There are still lots of missing pieces here:

  1. Sync from the last processed payload, rather than syncing all payloads - working on adding the topic_state table separately
  2. Handle concurrency of syncing
  3. Handle payloads coming from yourself
  4. Better unit tests - we can't decrypt payloads coming from ourselves, so we need to support adding members first
  5. This only reads payloads for a group and doesn't return anything - we still need a higher-level method to list all messages in a group by reading from the DB, as well as a higher-level method to read payloads for all groups.
  6. Distinguishing between retriable and non-retriable errors

There's also plenty of space left here to add in the logic for other message types.

@richardhuaaa force-pushed the rich/receive-application-message branch from 207852d to 82d1eb7 on November 10, 2023 at 01:02

```rust
let result = group.receive().await;
// Expect the receive flow to surface exactly one processing error.
if let GroupError::ReceiveError(errors) = result.err().unwrap() {
    assert_eq!(errors.len(), 1);
}
```
richardhuaaa (Contributor, Author) commented:

The OpenMLS manual states: "To guarantee the best possible Forward Secrecy, the key material used to encrypt messages is immediately discarded after encryption. This means that the message author cannot decrypt application messages. If access to the message's content is required after creating the message, a copy of the plaintext message should be kept by the application."

So I think we'll need to do a few things in the commit flow:

  1. Save the plaintext message when sending
  2. Advance the last message timestamp in the topics table, to indicate the payload has already been consumed. I think we need to correlate this based on the hash of the encrypted bytes - I didn't see any way to detect the sender without decrypting the message, which we can't do here (see the sketch below).

Saving these for later PRs.
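A minimal sketch of that correlation step from item 2, assuming a SHA-256 hash over the encrypted payload (the sha2 crate and the helper name are assumptions, not part of this PR):

```rust
use sha2::{Digest, Sha256};

// Hypothetical helper: because OpenMLS discards the sender's key material
// immediately after encryption, the only way to recognize our own payload on
// the receive path is to compare a hash of the encrypted bytes against one
// recorded at send time.
fn envelope_hash(encrypted_payload: &[u8]) -> Vec<u8> {
    Sha256::digest(encrypted_payload).to_vec()
}
```

At send time the hash would be stored alongside the plaintext; on receive, a match identifies our own payload, so decryption is skipped and the last message timestamp can be advanced.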

Contributor commented:

The plaintext and the message hash are already saved in the intent to tee this flow up.

neekolas (Contributor) commented on Nov 14, 2023:

For application messages, we can just use the plaintext saved in the intent.

For commits, we just need to merge_pending_commit. Technically, there is some ambiguity around which pending commit is actually being merged since the function doesn't take any arguments. We might want some more metadata on the intent to ensure the pending commit in the mls store matches the intent. In practice it should be fine since we can only have one pending commit at a time.

As long as we properly clear out failed intents it'll always be committing the right one, but it does feel a bit brittle to not know exactly which staged commit you are merging.
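For illustration, a sketch of that confirmation step against the OpenMLS 0.5-era API (the function, its error handling, and the surrounding intent bookkeeping are hypothetical, not this PR's code):

```rust
use openmls::prelude::MlsGroup;
use openmls_traits::OpenMlsCryptoProvider;

// Sketch: after a commit intent is confirmed published, merge the pending
// commit into the group state.
fn on_commit_published(
    group: &mut MlsGroup,
    backend: &impl OpenMlsCryptoProvider,
) -> Result<(), String> {
    // merge_pending_commit takes no argument identifying *which* commit it
    // merges; correctness rests on the invariant that at most one commit is
    // pending at a time. Extra metadata on the intent could assert the match.
    group
        .merge_pending_commit(backend)
        .map_err(|e| format!("failed to merge pending commit: {e}"))?;
    Ok(())
}
```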

richardhuaaa (Contributor, Author) commented:

@neekolas it might make sense to save the plaintext during intent publishing directly onto the messages table, with a 'status' field like 'uncommitted' - that would let us support optimistic UI on send if apps want it.
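A rough sketch of that shape (the struct, enum, and field names are hypothetical):

```rust
// Hypothetical local message record supporting optimistic send: the plaintext
// is written at publish time with status Uncommitted, then flipped to
// Committed once the payload is observed back in the receive flow.
pub enum DeliveryStatus {
    Uncommitted, // published locally, not yet confirmed on the network
    Committed,   // seen and validated by the receive flow
}

pub struct StoredMessage {
    pub id: Vec<u8>, // e.g. the hash of the encrypted payload
    pub group_id: Vec<u8>,
    pub plaintext: Vec<u8>,
    pub sent_at_ns: i64,
    pub status: DeliveryStatus,
}
```

An app could then render Uncommitted messages immediately and reconcile them once the receive flow confirms the payload.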

Contributor commented:

Agreed. We are going to need optimistic sending. Especially if we don't intend to fully sync after every message publish, which I think makes the most sense.

@richardhuaaa richardhuaaa marked this pull request as ready for review November 14, 2023 01:02
@richardhuaaa richardhuaaa requested a review from a team November 14, 2023 01:02
@richardhuaaa richardhuaaa enabled auto-merge (squash) November 14, 2023 02:14
```rust
let receive_errors: Vec<MessageProcessingError> = envelopes
    .into_iter()
    .map(|envelope| -> Result<(), MessageProcessingError> {
        let mls_message_in = MlsMessageIn::tls_deserialize_exact(&envelope.message)?;
        // ... (rest of the per-envelope processing elided in this excerpt)
```
Contributor commented:

The thing that gets messy here is error handling. There are some errors we want to just log and move on from (garbage message, conflicting epochs/race conditions).

But there are some errors that we need to retry. For example, any sort of IOError saving to the DB or getting a connection from the pool. If Client A processes a commit successfully and Client B fails on the same commit, their group state will fall out of sync and Client B will be unable to read future messages.

One way to solve this would be to have some sort of IsRetryable method available on GroupErrors. Retryable errors would abort the process_messages flow entirely, since we don't want to apply any future commits or increment the last_message_timestamp until we've fully processed the preceding message. The caller could then retry later. We'd just have to be sure that retryable errors can eventually either succeed or turn into non-retryable errors (maybe by keeping track of the number of attempts). Otherwise it could be a never-ending retry loop.

Open to other solutions.

It also might be valid to completely ignore this edge case and just have really good mechanisms for re-adding yourself to the group once the client gets into this bad state, assuming the bad state is reliably detectable.
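To make the proposal concrete, a sketch of the retry-aware loop (Envelope, process_one, is_retryable, and the timestamp helper are hypothetical names):

```rust
// Sketch of the proposed control flow: a retryable error aborts the whole
// flow so last_message_timestamp never advances past an unprocessed payload;
// non-retryable errors are logged and skipped.
fn process_messages(envelopes: Vec<Envelope>) -> Result<(), MessageProcessingError> {
    for envelope in envelopes {
        match process_one(&envelope) {
            Ok(()) => advance_last_message_timestamp(&envelope),
            // Transient failures (DB I/O, connection pool): surface to the
            // caller to retry later, ideally with an attempt counter so a
            // retryable error eventually becomes non-retryable.
            Err(e) if e.is_retryable() => return Err(e),
            // Garbage messages, epoch races: log and move on.
            Err(e) => log::warn!("skipping payload: {e}"),
        }
    }
    Ok(())
}
```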

richardhuaaa (Contributor, Author) commented:

This is a great point - I added it to the list of things to follow up on above.

Contributor commented:

I have a similar issue to figure out for the message publishing flow as well.

@richardhuaaa richardhuaaa merged commit a9dd41d into main Nov 14, 2023
4 checks passed
@richardhuaaa richardhuaaa deleted the rich/receive-application-message branch November 14, 2023 02:33