Upgrade to automerge 0.5; fix compaction #58
Conversation
@teohhanhui I think that's the one where I chose Greg's API over Alex's for compaction, and nobody has really stepped in to decide which one was more correct, but we do need them to be consistent for sure. #52 (comment) is the context
@issackelly apologies for missing the compaction API stuff. I think what you have here makes sense. I'm actually busy working on a bunch of parts of this codebase getting it to interop with the JS implementation and I might tighten up the storage API a bit as part of that.
Force-pushed 810d40a to 7aba870
Force-pushed 5760556 to ce4ed60
@@ -732,7 +766,7 @@ impl DocumentInfo {
     }
     let waker = Arc::new(RepoWaker::Storage(wake_sender.clone(), document_id));
     self.state.poll_pending_save(waker);
-    self.patches_since_last_save = 0;
+    self.last_heads = new_heads;
Can this lead to data loss? We update the heads of the last save before the storage future has resolved, which means the storage future can fail for some reason (or, more likely, get stuck in some kind of deadlock), and then the next time we save, we only save changes since heads which haven't made it to disk.
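The hazard being described can be sketched with a minimal model (hypothetical names and types, not the repo's actual code): `last_heads` acts as a cursor into a change log, an incremental save persists everything after the cursor, and the buggy scheme advances the cursor when the save is *initiated* rather than when it resolves.

```rust
// Hypothetical model of the hazard: `last_heads` is an index into a change
// log, and an incremental save persists changes[last_heads..].
struct DocInfo {
    changes: Vec<u8>,   // stand-in for the document's change history
    last_heads: usize,  // heads as of the last *initiated* save (the buggy scheme)
    disk: Vec<u8>,      // what actually made it to storage
}

impl DocInfo {
    // Buggy scheme: advance `last_heads` as soon as the save is started.
    fn save_eagerly(&mut self, write_succeeds: bool) {
        let chunk = self.changes[self.last_heads..].to_vec();
        self.last_heads = self.changes.len(); // advanced before the write resolves
        if write_succeeds {
            self.disk.extend(chunk); // the storage future may never get here
        }
    }
}

fn main() {
    let mut doc = DocInfo { changes: vec![1, 2, 3], last_heads: 0, disk: vec![] };
    doc.save_eagerly(false);       // write fails, but last_heads moved anyway
    doc.changes.push(4);
    doc.save_eagerly(true);        // only saves changes since the *unpersisted* heads
    assert_eq!(doc.disk, vec![4]); // changes 1..3 were silently lost
    println!("disk = {:?}", doc.disk);
}
```

The fix direction implied by the comment is to advance the saved-heads marker only when the storage write is known to have succeeded.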
I'm not sure whether it can lead to data loss, but in any case it does make sense to store the heads together with the future on the DocState, and then note_changes could use the heads of the last (fut, heads) pair. I think it's a good follow-up, because it's a complicated change to make.
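The proposed shape can be sketched as follows (all field and method names here are hypothetical, simplified stand-ins for the actual DocState and its futures): each pending save carries the heads it was initiated at, the next save is measured from the newest in-flight pair, and heads are only confirmed once a save resolves.

```rust
// Sketch: keep the heads a save was initiated at alongside its pending
// future, and only fold them into `last_saved_heads` once the write resolves.
struct DocState {
    last_saved_heads: usize,             // heads confirmed on disk
    pending_saves: Vec<(String, usize)>, // (stand-in for the future, heads at initiation)
}

impl DocState {
    // note_changes would measure new work from the newest initiated save,
    // falling back to the last confirmed heads when nothing is in flight.
    fn save_from(&self) -> usize {
        self.pending_saves
            .last()
            .map(|&(_, heads)| heads)
            .unwrap_or(self.last_saved_heads)
    }

    // Called when a pending save's future resolves successfully.
    fn save_resolved(&mut self) {
        if let Some((_, heads)) = self.pending_saves.pop() {
            self.last_saved_heads = self.last_saved_heads.max(heads);
        }
    }
}

fn main() {
    let mut st = DocState { last_saved_heads: 0, pending_saves: vec![] };
    st.pending_saves.push(("save-1".into(), 3)); // save initiated at heads=3
    assert_eq!(st.save_from(), 3);      // new changes measured from the in-flight save
    st.save_resolved();
    assert_eq!(st.last_saved_heads, 3); // heads=3 confirmed only after resolution
}
```

This keeps the optimistic behaviour (don't re-save what is already in flight) while never discarding the span between the confirmed heads and a failed save.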
Filed #60
Force-pushed 5e84c07 to bba686e
@teohhanhui Thanks! Will take another look...
OK, if we remove the line at #58 (comment) and fix the conflict, it looks good to go. I haven't looked at the fs store changes yet. cc @alexjg
Force-pushed bba686e to 58b5fe1
Keep a thread-local counter for the number of changes before compaction.
…c didn't change since last save
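The thread-local counter mentioned in the commit message above can be sketched like this (a hypothetical, simplified version; the threshold constant and function names are made up, not the PR's actual code): each change bumps a per-thread counter, and once it reaches the threshold the caller performs a full compacting save instead of an incremental one and the counter resets.

```rust
use std::cell::Cell;

// Hypothetical threshold; the real value would be chosen by the repo.
const CHANGES_BEFORE_COMPACTION: usize = 10;

thread_local! {
    // Per-thread count of changes since the last compacting save.
    static CHANGE_COUNT: Cell<usize> = Cell::new(0);
}

// Returns true when the caller should write a full compacted document
// rather than an incremental chunk.
fn note_change_and_check_compaction() -> bool {
    CHANGE_COUNT.with(|count| {
        let n = count.get() + 1;
        if n >= CHANGES_BEFORE_COMPACTION {
            count.set(0); // compacting: start counting afresh
            true
        } else {
            count.set(n);
            false
        }
    })
}

fn main() {
    // Over 25 changes with a threshold of 10, compaction fires twice.
    let compactions = (0..25).filter(|_| note_change_and_check_compaction()).count();
    assert_eq!(compactions, 2);
    println!("compactions = {}", compactions);
}
```

A `thread_local!` with `Cell` avoids locking entirely, at the cost of each thread compacting on its own schedule.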
Force-pushed 58b5fe1 to 8b1b3ac
Force-pushed 8b1b3ac to 8383acd
Punting on the question of ee9504d, but otherwise this should be ready to merge if everything is okay.
Replaces #21 and #50
TODO: