ci: fix codespell errors
Signed-off-by: Yang Chiu <[email protected]>
yangchiu authored and innobead committed Feb 19, 2024
1 parent 46cfe13 commit b3a6cd6
Showing 42 changed files with 82 additions and 81 deletions.
1 change: 1 addition & 0 deletions .github/workflows/codespell.yml
@@ -20,3 +20,4 @@ jobs:
with:
check_filenames: true
skip: "*/**.yaml,*/**.yml,./scripts,./vendor,MAINTAINERS,LICENSE,go.mod,go.sum"
+ignore_words_list: aks
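For orientation, here is a minimal sketch of how the adjusted workflow could look after this change, assuming the commonly used codespell-project/actions-codespell action and a standard checkout step (the workflow name, trigger, job layout, and action versions are assumptions; only the three `with:` keys are taken from the diff above):

```yaml
# Hypothetical layout of .github/workflows/codespell.yml after this commit;
# only the three `with:` keys are confirmed by the diff above.
name: Codespell
on: [push, pull_request]
jobs:
  codespell:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Check for spelling errors
        uses: codespell-project/actions-codespell@master
        with:
          check_filenames: true
          skip: "*/**.yaml,*/**.yml,./scripts,./vendor,MAINTAINERS,LICENSE,go.mod,go.sum"
          ignore_words_list: aks
```

`ignore_words_list` tells codespell to stop flagging the listed tokens, so occurrences of "aks" (the Azure Kubernetes Service abbreviation) are skipped instead of being reported as misspellings.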
2 changes: 1 addition & 1 deletion build_engine_test_images/terraform/aws/ubuntu/main.tf
@@ -99,7 +99,7 @@ resource "aws_route_table" "build_engine_aws_public_rt" {
}
}

-# Assciate public subnet to public route table
+# Associate public subnet to public route table
resource "aws_route_table_association" "build_engine_aws_public_subnet_rt_association" {
depends_on = [
aws_subnet.build_engine_aws_public_subnet,
6 changes: 3 additions & 3 deletions docs/content/manual/functional-test-cases/backup.md
@@ -28,7 +28,7 @@ Backup create operations test cases
|-----| --- | --- | --- |
| 1 | Create backup from existing snapshot | **Prerequisite:**<br><br>* Backup target is set to NFS server, or S3 compatible target.<br><br>1. Create a workload using Longhorn volume<br>2. Write data to volume, compute it’s checksum (checksum#1)<br>3. Create a snapshot (snapshot#1)<br>4. Create a backup from (snapshot#1)<br>5. Restore backup to a different volume<br>6. Attach volume to a node and check it’s data, and compute it’s checksum | * Backup should be created<br>* Restored volume data checksum should match (checksum#1) |
| 2 | Create volume backup for a volume attached to a node | **Prerequisite:**<br><br>* Backup target is set to NFS server, or S3 compatible target.<br><br>1. Create a volume, attach it to a node<br>2. Format volume using ext4/xfs filesystem and mount it to a directory on the node<br>3. Write data to volume, compute it’s checksum (checksum#1)<br>4. Create a backup<br>5. Restore backup to a different volume<br>6. Attach volume to a node and check it’s data, and compute it’s checksum<br>7. Check volume backup labels | * Backup should be created<br>* Restored volume data checksum should match (checksum#1)<br>* backup should have no backup labels |
-| 3 | Create volume backup used by Kubernetes workload | **Prerequisite:**<br><br>* Backup target is set to NFS server, or S3 compatible target.<br><br>1. Create a deployment workload with `nReplicas = 1` using Longhorn volume<br>2. Write data to volume, compute it’s checksum (checksum#1)<br>3. Create a backup<br>4. Check backup labels<br>5. Scale down deployment `nReplicas = 0`<br>6. Delete Longhorn volume<br>7. Restore backup to a volume with the same deleted volume name<br>8. Scale back deployment `nReplicas = 1`<br>9. Check volume data checksum | * Backup labels should contain the following informations about workload that was using the volume at time of backup.<br> * Namespace<br> <br> * PV Name<br> <br> * PVC Name<br> <br> * PV Status<br> <br> * Workloads Status<br> <br> * Pod Name <br> Workload Name <br> Workload Type <br> Pod Status<br> <br>* After volume restore, data checksum should match (checksum#1) |
+| 3 | Create volume backup used by Kubernetes workload | **Prerequisite:**<br><br>* Backup target is set to NFS server, or S3 compatible target.<br><br>1. Create a deployment workload with `nReplicas = 1` using Longhorn volume<br>2. Write data to volume, compute it’s checksum (checksum#1)<br>3. Create a backup<br>4. Check backup labels<br>5. Scale down deployment `nReplicas = 0`<br>6. Delete Longhorn volume<br>7. Restore backup to a volume with the same deleted volume name<br>8. Scale back deployment `nReplicas = 1`<br>9. Check volume data checksum | * Backup labels should contain the following information about workload that was using the volume at time of backup.<br> * Namespace<br> <br> * PV Name<br> <br> * PVC Name<br> <br> * PV Status<br> <br> * Workloads Status<br> <br> * Pod Name <br> Workload Name <br> Workload Type <br> Pod Status<br> <br>* After volume restore, data checksum should match (checksum#1) |
| 4 | Create volume backup with customized labels | **Prerequisite:**<br><br>* Backup target is set to NFS server, or S3 compatible target.<br><br>1. Create a volume, attach it to a node<br>2. Create a backup, add customized labels <br> key: `K1` value: `V1`<br>3. Check volume backup labels | * Backup should be created with customized labels |
| 5 | Create recurring backups | 1. Create a deployment workload with `nReplicas = 1` using Longhorn volume<br>2. Write data to volume , compute it’s checksum (checksum#1)<br>3. Create a recurring backup `every 5 minutes`. and set retain count to `5`<br>4. add customized labels key: `K1` value: `V1`<br>5. Wait for recurring backup to triggered (backup#1, backup#2 )<br>6. Scale down deployment `nReplicas = 0`<br>7. Delete the volume.<br>8. Restore backup to a volume with the same deleted volume name<br>9. Scale back deployment `nReplicas = 1`<br>10. Check volume data checksum | * backups should be created with Kubernetes status labels and customized labels<br>* After volume restore, data checksum should match (checksum#1)<br>* after restoring the backup recurring backups should continue to be created |
| 6 | Backup created using Longhorn behind proxy | **Prerequisite:**<br><br>* Setup a Proxy on an instance (Optional: use squid)<br>* Create a single node cluster in EC2<br>* Deploy Longhorn<br><br>1. Block outgoing traffic except for the proxy instance.<br>2. Create AWS secret in longhorn.<br>3. In UI Settings page, set backupstore target and backupstore credential secret<br>4. Create a volume, attach it to a node, format the volume, and mount it to a directory.<br>5. Write some data to the volume, and create a backup. | * Ensure backup is created |
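Row 5 in the table above exercises recurring backups with a retain count and customized labels. A hedged sketch of how that schedule could be expressed as a Longhorn RecurringJob manifest (the resource name, group binding, and API version are assumptions; the test may equally configure this through the Longhorn UI):

```yaml
# Illustrative only: a recurring backup every 5 minutes, retain 5, labeled K1=V1.
apiVersion: longhorn.io/v1beta2
kind: RecurringJob
metadata:
  name: backup-every-5m          # hypothetical name
  namespace: longhorn-system
spec:
  task: backup                   # take backups (as opposed to snapshots)
  cron: "*/5 * * * *"            # every 5 minutes
  retain: 5                      # keep the 5 most recent backups
  concurrency: 1
  groups:
    - default                    # assumed group; volumes can also be bound explicitly
  labels:
    K1: V1                       # the customized label checked in rows 4 and 5
```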
@@ -99,7 +99,7 @@ Disaster Recovery test cases
| DR volume across the cluster #5 | Cluster A:<br><br>* Create volume Y<br>* Attach the volume Y<br>* Create a backup of Y<br><br>Cluster B:<br><br>* Backup Volume list page, click \`Create Disaster Recovery Volume\` from volume dropdown<br>* Create two DR volumes Ydr1 and Ydr2.<br>* Attach the volume Y to any node<br>* Mount the volume Y on the node<br>* Write a file of 10Mb into it, use \`/dev/urandom\` to generate the file<br>* Calculate the checksum of the file<br>* Make a Backup<br>* Attach Ydr1 and Ydr2 to any nodes | * DR volume's last backup should be updated automatically, after settings.BackupPollInterval passed.<br>* DR volume.LastBackup should be different from DR volume's controller\[0\].LastRestoredBackup temporarily (it's restoring the last backup)<br>* During the restoration, DR volume cannot be activated.<br>* Eventually, DR volume.LastBackup should equal to controller\[0\].LastRestoredBackup. |
| DR volume across the cluster #6 | \[follow #5\] <br>Cluster A:<br><br>* In the directory mounted volume Y, write a new file of 100Mb.<br>* Record the checksum of the file<br>* Create a backup of volume Y<br><br>Cluster B:<br><br>* Wait for restoration of volume Ydr1 and Ydr2 to complete<br>* Activate Ydr1<br>* Attach it to one node and verify the content | * DR volume's last backup should be updated automatically, after settings.BackupPollInterval passed.<br>* Eventually, DR volume.LastBackup should equal to controller\[0\].LastRestoredBackup.<br>* Ydr1 should have the same file checksum of volume Y |
| DR volume across the cluster #7 | \[follow #6\] <br>Cluster A<br><br>* In the directory mounted volume Y, remove all the files. Write a file of 50Mb<br>* Record the checksum of the file<br><br>Cluster B<br><br>* Change setting.BackupPollInterval to longer e.g. 1h<br><br>Cluster A<br><br>* Create a backup of volume Y<br><br>Cluster B <br>\[DO NOT CLICK BACKUP PAGE, which will update last backup as a side effect\]<br><br>* Before Ydr2's last backup updated, activate Ydr2 | * YBdr2's last backup should be immediately updated to the last backup of volume Y<br>* Activate should fail due to restoration is in progress | When user clicks on “activate DRV”, restoration happens<br><br>And the volume goes into detached state |
-| DR volume across the cluster #8 | Cluster A<br><br>* Create volume Z<br>* Attach the volume Z<br>* Create a backup of Z<br><br>Cluster B<br><br>* Backup Volume list page, click \`Create Disaster Recovery Volume\` from volume dropdown<br>* Create DR volumes Zdr1, Zdr2 and Zdr3<br>* Attach the volume Zdr1, Zdr2 and Zdr3 to any node<br>* Change setting.BackupPollInterval to approriate interval for multiple backups e.g. 15min<br>* Make sure LastBackup of Zdr is consistent with that of Z<br><br>Cluster A<br><br>* Create multiple backups for volume Z before Zdr's last backup updated. For each backup, write or modify at least one file then record the cheksum.<br><br>Cluster B<br><br>* Wait for restoration of volume Zdr1 to complete<br>* Activate Zdr1<br>* Attach it to one node and verify the content | * Zdr1's last backup should be updated after settings.BackupPollInterval passed.<br>* Zdr1 should have the same files with the the same checksums of volume Z |
+| DR volume across the cluster #8 | Cluster A<br><br>* Create volume Z<br>* Attach the volume Z<br>* Create a backup of Z<br><br>Cluster B<br><br>* Backup Volume list page, click \`Create Disaster Recovery Volume\` from volume dropdown<br>* Create DR volumes Zdr1, Zdr2 and Zdr3<br>* Attach the volume Zdr1, Zdr2 and Zdr3 to any node<br>* Change setting.BackupPollInterval to appropriate interval for multiple backups e.g. 15min<br>* Make sure LastBackup of Zdr is consistent with that of Z<br><br>Cluster A<br><br>* Create multiple backups for volume Z before Zdr's last backup updated. For each backup, write or modify at least one file then record the checksum.<br><br>Cluster B<br><br>* Wait for restoration of volume Zdr1 to complete<br>* Activate Zdr1<br>* Attach it to one node and verify the content | * Zdr1's last backup should be updated after settings.BackupPollInterval passed.<br>* Zdr1 should have the same files with the the same checksums of volume Z |
| DR volume across the cluster #9 | \[follow #8\] <br>Cluster A<br><br>* Delete the latest backup of Volume Z | * Last backup of Zdr2 and Zdr3 should be empty after settings.BackupPollInterval passed. Field controller\[0\].LastRestoredBackup and controller\[0\].RequestedBackupRestore should retain. |
| DR volume across the cluster #10 | \[follow #9\] <br>Cluster B<br><br>* Activate Zdr2<br>* Attach it to one node and verify the content | * Zdr2 should have the same files with the the same checksums of volume Z | |
| DR volume across the cluster #11 | \[follow #10\] <br>Cluster A<br><br>* Create one more backup with at least one file modified.<br><br>Cluster B<br><br>* Wait for restoration of volume Zdr3 to complete<br>* Activate Zdr3<br>* Attach it to one node and verify the content | * Zdr3 should have the same files with the the same checksums of volume Z |
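Several of the DR cases above hinge on settings.BackupPollInterval. A minimal sketch of adjusting it declaratively, assuming Longhorn's backupstore-poll-interval setting (the setting name and the value being in seconds are assumptions; the tests may change it through the UI Settings page instead):

```yaml
# Illustrative only: lengthen the backup store poll interval to 1 hour (case #7).
apiVersion: longhorn.io/v1beta2
kind: Setting
metadata:
  name: backupstore-poll-interval
  namespace: longhorn-system
value: "3600"   # seconds; e.g. "900" for the 15 min interval used in case #8
```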
@@ -150,7 +150,7 @@ The setup requirements:
| 4 | Delete the backup with `DeletionPolicy` as delete | 1. Repeat the steps from test scenario 1.<br>2. Delete the `VolumeSnapshot` using `kubectl delete volumesnapshots test-snapshot-pvc` | 1. The `VolumeSnapshot` should be deleted.<br>2. By default the `DeletionPolicy` is delete, so the `VolumeSnapshotContent` should be deleted.<br>3. Verify in the backup store, the backup should be deleted. |
| 5 | Delete the backup with `DeletionPolicy` as retain | 1. Create a `VolumeSnapshotClass` class with `deletionPolicy` as Retain<br><pre>kind: VolumeSnapshotClass<br>apiVersion: snapshot.storage.k8s.io/v1beta1<br>metadata:<br> name: longhorn<br>driver: driver.longhorn.io<br>deletionPolicy: Retain</pre>2. Repeat the steps from test scenario 1.<br>3. Delete the `VolumeSnapshot` using `kubectl delete volumesnapshots test-snapshot-pvc` | 1. The `VolumeSnapshot` should be deleted.<br>2. `VolumeSnapshotContent` should NOT be deleted.<br>3. Verify in the backup store, the backup should NOT be deleted. |
| 6 | Take a backup from longhorn of a snapshot created by csi snapshotter. | 1. Create a volume test-vol and write into it.<br> 1. Compute the md5sum<br> <br>2. Create the below `VolumeSnapshot` object<br><pre>apiVersion: snapshot.storage.k8s.io/v1beta1<br>kind: VolumeSnapshot<br>metadata:<br> name: test-snapshot-pvc<br>spec:<br> volumeSnapshotClassName: longhorn<br> source:<br> persistentVolumeClaimName: test-vol</pre>3. Go to longhorn UI and click on the snapshot created and take another backup | 1. On creating a `VolumeSnapshot`, a backup should be created in the backup store.<br>2. On creating another backup from longhorn UI, one more backup should be created in backup store. |
-| 7 | Delete the `csi plugin` while a backup is in progress. | 1. Create a volume and write into it. <br> Compute the md5sum of the data.<br>2. Create the below `VolumeSnapshot` object<br><pre>apiVersion: snapshot.storage.k8s.io/v1beta1<br>kind: VolumeSnapshot<br>metadata:<br> <br>name: test-snapshot-pvc<br>spec:<br> volumeSnapshotClassName: longhorn<br> source:<br> persistentVolumeClaimName: test-vol</pre>3. While the backup is in progress, delete the `csi plugin` | On deleting `csi plugin` , a new pod of `csi plugin` should get created and the bacup should continue to complete. |
+| 7 | Delete the `csi plugin` while a backup is in progress. | 1. Create a volume and write into it. <br> Compute the md5sum of the data.<br>2. Create the below `VolumeSnapshot` object<br><pre>apiVersion: snapshot.storage.k8s.io/v1beta1<br>kind: VolumeSnapshot<br>metadata:<br> <br>name: test-snapshot-pvc<br>spec:<br> volumeSnapshotClassName: longhorn<br> source:<br> persistentVolumeClaimName: test-vol</pre>3. While the backup is in progress, delete the `csi plugin` | On deleting `csi plugin` , a new pod of `csi plugin` should get created and the backup should continue to complete. |
| 8 | Take a backup using csi snapshotter with backup store as NFS server. | | |
| 9 | Restore from NFS backup store. | | |
| 10 | Delete from NFS backup store. | | |
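Rows 8-10 leave their NFS steps open; the common precondition is pointing the backup target at an NFS export. A hedged sketch using the same Setting mechanism as above (the setting name backup-target and the example NFS host are assumptions for illustration only):

```yaml
# Illustrative only: switch the backup store to an NFS server before rows 8-10.
apiVersion: longhorn.io/v1beta2
kind: Setting
metadata:
  name: backup-target
  namespace: longhorn-system
value: "nfs://nfs-server.example.com:/opt/backupstore"   # hypothetical NFS export
```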
