Attempt to write saved data in a resilient manner #1001
Conversation
Last commit published: e8ac624b542217cac433f4f15517566bf5ab851a.

**PR Publishing**

The artifacts published by this PR:

**Repository Declaration**

In order to use the artifacts published by the PR, add the following repository to your buildscript:

```gradle
repositories {
    maven {
        name 'Maven for PR #1001' // https://github.com/neoforged/NeoForge/pull/1001
        url 'https://prmaven.neoforged.net/NeoForge/pr1001'
        content {
            includeModule('net.neoforged', 'testframework')
            includeModule('net.neoforged', 'neoforge')
        }
    }
}
```

**MDK installation**

In order to set up an MDK using the latest PR version, run the following commands in a terminal:

```shell
mkdir NeoForge-pr1001
cd NeoForge-pr1001
curl -L https://prmaven.neoforged.net/NeoForge/pr1001/net/neoforged/neoforge/21.0.107-beta-pr-1001-feature-resillient-io/mdk-pr1001.zip -o mdk.zip
jar xf mdk.zip
rm mdk.zip || del mdk.zip
```

To test a production environment, you can download the installer from here.
I'm not 100% sure that putting the patch in NbtIo is the best place, as there are certain SavedData implementations which already do this kind of resilient saving themselves; the player's saved data is one example. It may be worth moving the implementation into SavedData directly, rather than NbtIo. Thoughts?
I wanted to look into how vanilla handles this for its main level data file. There is some convoluted logic to make it work in many corner cases.
Looking at it, it seems that it uses the same trick (…).

My guess is that internally this acts as a write-through for any caches and buffers for the region file that's open. Everything else on top of this is just shuffling I/O around so that it doesn't happen on the main thread, since they become blocking writes. Since we're writing to a temporary file and moving that into place, all buffers should be appropriately flushed before the move completes anyway. We could consider moving to an I/O pool, but I don't think this is necessary, and it would be a rather invasive patch in comparison.
Doesn't MC also have some recovery system where it attempts to read files that are updated but haven't been moved yet? If it's just falling back to …, the atomic move should be good enough.
I couldn't find anything fancy like that. I assume they couldn't do that anyway, because it would risk data loss: a partially completed save could still contain partially valid data and load successfully, even though the older file is more likely to be correct.
patches/net/minecraft/world/level/storage/DimensionDataStorage.java.patch
Force-pushed fbc0902 to 94148f2
Force-pushed 94148f2 to 4479ecb
Force-pushed 4479ecb to 36c2e54
Does closing the channel not flush it already?
No, not that we know of. You are thinking of a Java-level flush; we need an OS-level fsync.
OK. Just fix the extra parentheses and we're good then.
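For readers following the exchange above, a minimal sketch of the distinction being made: closing a `FileChannel` only releases the handle, and the operating system may still hold the written bytes in its page cache; `FileChannel.force(true)` issues an OS-level fsync so that content and metadata are durable on disk. The file name below is illustrative, not part of the patch.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class FsyncDemo {
    public static void main(String[] args) throws IOException {
        Path file = Path.of("demo.dat"); // hypothetical file name
        try (FileChannel channel = FileChannel.open(file,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE,
                StandardOpenOption.TRUNCATE_EXISTING)) {
            channel.write(ByteBuffer.wrap("payload".getBytes(StandardCharsets.UTF_8)));
            // close() alone only releases the handle; the OS may still hold
            // the bytes in its page cache. force(true) issues an fsync so
            // both content and metadata reach the disk before we proceed.
            channel.force(true);
        }
        System.out.println("synced " + Files.size(file) + " bytes");
        Files.deleteIfExists(file);
    }
}
```

Note that `force(false)` would sync file content only; `force(true)` also syncs metadata such as the file size.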
We no longer need to implement this ourselves for NeoForge. For Fabric, keep the existing patch. See: neoforged/NeoForge#1001
I'm implementing a naive solution here for saving NBT data in a resilient manner, to prevent data corruption when a server/game crashes during saving. This should keep data attachments (especially large ones) from corrupting, as any unfinished writes are directed to a temp file instead. These temp files are automatically cleaned up on load, if detected, using a naive algorithm which likely has poor performance characteristics.

This should resolve #775, at least partially. It would be better if we could guarantee the write goes through, but if the JVM hard-crashes (e.g. access violation, power loss), I don't see a way around needing something like this.
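To make the approach described in this PR concrete, here is a hedged sketch of the write-to-temp-then-atomic-move pattern, under the assumption that the actual patch works roughly this way; the class, method, and file names below are illustrative and not taken from the NeoForge source.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.nio.file.StandardOpenOption;

public class ResilientWriter {
    /** Write bytes to target resiliently: temp file, fsync, atomic move. */
    public static void write(Path target, byte[] data) throws IOException {
        Path temp = target.resolveSibling(target.getFileName() + ".tmp");
        try (FileChannel channel = FileChannel.open(temp,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE,
                StandardOpenOption.TRUNCATE_EXISTING)) {
            channel.write(ByteBuffer.wrap(data));
            channel.force(true); // ensure the bytes hit the disk before the move
        }
        // An atomic move means readers see either the complete old file or
        // the complete new file, never a half-written one. A crash before
        // this point leaves only a stray .tmp file to clean up on load.
        Files.move(temp, target, StandardCopyOption.ATOMIC_MOVE);
    }

    public static void main(String[] args) throws IOException {
        Path target = Path.of("saved.dat"); // hypothetical save file
        write(target, "level data".getBytes(StandardCharsets.UTF_8));
        System.out.println(new String(Files.readAllBytes(target), StandardCharsets.UTF_8));
        Files.deleteIfExists(target);
    }
}
```

The key property is that the target file is never opened for writing directly, so the hard-crash scenarios mentioned above (access violation, power loss) can only corrupt the temp file.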