Replies: 2 comments 4 replies
-
There are a few different ways to go about it. The easiest would be to start a snapshot scan over every partition in the database and write the results out to a backup directory. That also gets rid of any tombstones (and, depending on configuration, old versions), so the resulting snapshot is as compact as possible. It won't be the same database physically, though, and scanning through a database is slower than a simple file copy. On the other hand, it can run passively in the background, because it only relies on the snapshot feature: compaction can still make progress, since compaction keeps old versions around for as long as snapshots need them. For a small-ish database this option could be enough, I think, and it would probably be a good feature addition. RocksDB has this feature called …
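To make the idea concrete, here is a minimal sketch of a snapshot-scan backup over a toy key-value store. Everything here is an assumption for illustration: the partition layout, the `None`-as-tombstone convention, the JSON backup format, and the `snapshot_scan_backup` helper are not this project's real API. The point is only to show why the scan produces a compact copy: tombstones are simply skipped while writing out.

```python
import json
import os
import tempfile

TOMBSTONE = None  # assumption: deleted keys are stored as None (a tombstone)

def snapshot_scan_backup(partitions, backup_dir):
    """Scan every partition and write its live key-value pairs to a
    backup file, skipping tombstones so the backup is as compact as
    possible. `partitions` maps partition name -> dict of key -> value."""
    os.makedirs(backup_dir, exist_ok=True)
    for name, kv in partitions.items():
        # Drop tombstones: only live entries reach the backup.
        live = {k: v for k, v in kv.items() if v is not TOMBSTONE}
        with open(os.path.join(backup_dir, f"{name}.json"), "w") as f:
            json.dump(live, f)

# Usage: "bob" was deleted, so the users backup contains only "alice".
db = {
    "users": {"alice": "1", "bob": TOMBSTONE},
    "posts": {"p1": "hello"},
}
out = os.path.join(tempfile.mkdtemp(), "backup")
snapshot_scan_backup(db, out)
```

In a real engine the scan would run against a consistent snapshot handle rather than an in-memory dict, which is what lets compaction continue concurrently.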
-
I'm gonna close this in favour of #52. Realistically I think there will be two ways of doing backups: offline (trivial cloning) and checkpointing.
-
If one wanted to do a backup of the database, what's the best practice here? Is there a way to do an "online" (not shutting down the process using the database) backup?
Since there are many directories and files, I assume the safest way is to persist from the process, exit and then create a tarball of the whole directory.
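For reference, that offline approach is just a directory archive once the process has exited. A sketch, assuming the database lives in a directory named `mydb` (a placeholder, not this project's actual layout):

```shell
# Assumption: the process using the database has persisted and exited
# cleanly before this runs, so the files are in a consistent state.
DB_DIR=mydb
mkdir -p "$DB_DIR"   # stand-in here for the real database directory

# Archive the whole directory tree into one compressed tarball.
tar -czf "$DB_DIR-backup.tar.gz" "$DB_DIR"

# Restore later with: tar -xzf mydb-backup.tar.gz
```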
With SQLite, for example, it's possible to run a `VACUUM INTO` that will create a space-efficient snapshot of the database. SQLite can support multiple processes, though, so it's a different ball game.