Backup and Restore
Priam supports both snapshot and incremental backups for Cassandra SSTable files and uses S3 to save these files.
Priam leverages Cassandra's snapshot feature to take an eventually consistent backup. Cassandra's snapshot feature flushes data to disk and hard-links all SSTable files into a snapshot directory. Priam then picks up these files and uploads them to S3. Although snapshotting across the cluster is not guaranteed to produce a consistent backup of the cluster, consistency is regained upon restore by Cassandra and by running repairs.
Priam uses Quartz to schedule all tasks. Snapshot backups run as a recurring task. They can also be triggered via the REST API (see the API section), which is useful during upgrades or maintenance operations.
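As a rough illustration, an operator could trigger an on-demand snapshot by issuing an HTTP GET against the backup endpoint. The host, port, and endpoint path below are assumptions for illustration; consult the API section for the actual routes.

```python
from urllib.parse import urlunparse

def snapshot_url(host: str, port: int = 8080) -> str:
    """Build the URL for a hypothetical on-demand snapshot endpoint.

    The path is an assumption; check Priam's API documentation for the
    real route before using this.
    """
    return urlunparse(
        ("http", f"{host}:{port}", "/Priam/REST/v1/backup/do_snapshot", "", "", "")
    )

# An operator would then GET this URL (e.g. with curl or urllib).
url = snapshot_url("localhost")
```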
Snapshots are run daily for the entire cluster at a specific time, ideally during off-peak hours. Priam's backup.hour property lets you set the daily snapshot time (see the properties section). The snapshot backup orchestrates invoking Cassandra's 'snapshot' JMX command, uploading the files to S3, and cleaning up the snapshot directory.
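The daily schedule can be thought of as a recurring trigger that fires at backup.hour each day. The sketch below computes the delay until the next run for a given hour-of-day setting; the function name and logic are illustrative, not Priam's actual Quartz trigger code.

```python
from datetime import datetime, timedelta

def seconds_until_next_run(backup_hour: int, now: datetime) -> float:
    """Seconds from `now` until the next occurrence of backup_hour:00.

    Mirrors the idea of a daily recurring trigger keyed off a
    backup.hour-style property (0-23); illustrative only.
    """
    next_run = now.replace(hour=backup_hour, minute=0, second=0, microsecond=0)
    if next_run <= now:
        next_run += timedelta(days=1)  # today's slot has passed; run tomorrow
    return (next_run - now).total_seconds()
```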
meta.json
All SSTable files are uploaded individually to S3 with built-in retries on failure. Upon completion, a meta file (meta.json) is uploaded; it contains references to all files that belong to the snapshot and is used for validation during restore.
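The upload-with-retries step and the final meta file can be sketched as follows. The retry policy and the meta.json layout shown here are assumptions for illustration; Priam's actual format may differ.

```python
import json

def upload_with_retries(upload, key: str, retries: int = 3) -> None:
    """Attempt `upload(key)` up to `retries` times before giving up."""
    for attempt in range(retries):
        try:
            upload(key)
            return
        except OSError:
            if attempt == retries - 1:
                raise  # retries exhausted; surface the failure

def build_meta(snapshot_files: list[str]) -> str:
    """Produce a meta.json body referencing every file in the snapshot.

    The {"files": [...]} shape is illustrative, not Priam's real schema.
    """
    return json.dumps({"files": sorted(snapshot_files)})
```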
When incremental backups are enabled in Cassandra, hard links are created in the incremental backup directory for every new SSTable. Since SSTables are immutable files, they can be safely copied to an external store. Priam scans this directory frequently for new incremental SSTable files and uploads them to S3.
In addition, Priam allows throttling the rate (in MB/s) at which backup data is read from disk.
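A simple way to implement such a throttle is to pace reads by sleeping between fixed-size chunks. This is a minimal sketch of the idea, not Priam's actual throttler.

```python
import time

def read_throttled(f, rate_mb_per_s: float, chunk_size: int = 1 << 20):
    """Yield chunks from file object f, pacing reads to roughly rate_mb_per_s.

    Sleeps a fixed delay after each chunk so sustained throughput
    approximates the target rate; illustrative only.
    """
    delay = chunk_size / (rate_mb_per_s * 1024 * 1024)  # seconds per chunk
    while True:
        chunk = f.read(chunk_size)
        if not chunk:
            break
        yield chunk
        time.sleep(delay)
```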
When restoring data, the user provides Priam with the start and end times for the restore. Priam downloads all keyspace snapshot files, along with any incremental files (if enabled), from S3 and orchestrates stopping and starting the Cassandra process. During this process Priam also backs up the node's schema files, but removes the ring information from where the original backup happened, so the cluster is clean after the restore. This allows restoring to a cluster half the size of the original (skipping alternate nodes) and then running repair to regain the data that was skipped. Restoring to a different-sized cluster is supported only for keyspaces with a replication factor greater than 1.
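The "skip alternate nodes" idea can be sketched as selecting every other node from the original ring: with replication factor greater than 1, at least one replica of each range survives the skip, and a post-restore repair rebuilds the rest. The function below is illustrative only.

```python
def nodes_to_restore(source_nodes: list[str]) -> list[str]:
    """Keep alternate nodes from the original ring (indices 0, 2, 4, ...).

    Assumes replication factor > 1, so each token range still has at
    least one surviving replica among the kept nodes; a hypothetical
    helper, not Priam's restore logic.
    """
    return source_nodes[::2]
```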
- More on Cassandra's backup feature is available on the DataStax blog.