Commit 6d767dc

chore(robot): cleanup
- Fix test case names to follow a consistent format.
- Remove redundant spaces.

Signed-off-by: Chin-Ya Huang <[email protected]>
c3y1huang committed Dec 16, 2024
1 parent c8516c1 commit 6d767dc
Showing 7 changed files with 26 additions and 26 deletions.
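The renames below all follow one pattern: test case names move from sentence case to Title Case. A minimal Robot Framework sketch of the resulting convention (the test body is a placeholder, not taken from the repository):

```robot
*** Test Cases ***
# Before: Test Longhorn components recovery
# After:  Test Longhorn Components Recovery
Test Longhorn Components Recovery
    [Documentation]    Placeholder body; the real steps live in
    ...                e2e/tests/negative/component_resilience.robot.
    Log    Title Case name, no trailing whitespace
```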
24 changes: 12 additions & 12 deletions e2e/tests/negative/component_resilience.robot
@@ -29,12 +29,12 @@ Delete instance-manager of volume ${volume_id} and wait for recover
Delete instance-manager of deployment ${deployment_id} volume and wait for recover
When Delete instance-manager of deployment ${deployment_id} volume
And Wait for volume of deployment ${deployment_id} attached and degraded
- And Wait for volume of deployment ${deployment_id} healthy
+ And Wait for volume of deployment ${deployment_id} healthy
And Wait for deployment ${deployment_id} pods stable
And Check deployment ${deployment_id} data in file data.txt is intact

*** Test Cases ***
- Test Longhorn components recovery
+ Test Longhorn Components Recovery
[Documentation] -- Manual test plan --
... Test data setup:
... Deploy Longhorn on a 3 nodes cluster.
@@ -64,19 +64,19 @@ Test Longhorn components recovery
And Attach volume 1
And Wait for volume 1 healthy
And Write data to volume 1

When Create storageclass longhorn-test-1 with dataEngine=${DATA_ENGINE}
And Create persistentvolumeclaim 1 using RWX volume with longhorn-test-1 storageclass
And Create deployment 1 with persistentvolumeclaim 1
And Write 100 MB data to file data.txt in deployment 1
END

- When Delete Longhorn DaemonSet longhorn-csi-plugin pod on node 1
+ When Delete Longhorn DaemonSet longhorn-csi-plugin pod on node 1
And Delete Longhorn Deployment csi-attacher pod on node 1
And Delete Longhorn Deployment csi-provisioner pod on node 1
And Delete Longhorn Deployment csi-resizer pod on node 1
And Delete Longhorn Deployment csi-snapshotter pod on node 1
- And Delete Longhorn DaemonSet longhorn-manager pod on node 1
+ And Delete Longhorn DaemonSet longhorn-manager pod on node 1
And Delete Longhorn DaemonSet engine-image pod on node 1
And Delete Longhorn component instance-manager pod on node 1
And Delete Longhorn Deployment longhorn-ui pod
@@ -93,7 +93,7 @@ Test Longhorn components recovery
And Check deployment 1 data in file data.txt is intact
END

- Test Longhorn volume recovery
+ Test Longhorn Volume Recovery
[Documentation] -- Manual test plan --
... Test data setup:
... Deploy Longhorn on a 3 nodes cluster.
@@ -115,7 +115,7 @@ Test Longhorn volume recovery
And Wait until volume 0 replica rebuilding started on replica node
Then Delete instance-manager of volume 0 and wait for recover

- Test Longhorn backing image volume recovery
+ Test Longhorn Backing Image Volume Recovery
[Documentation] -- Manual test plan --
... Test data setup:
... Deploy Longhorn on a 3 nodes cluster.
@@ -127,15 +127,15 @@ Test Longhorn backing image volume recovery
... Test steps:
... Delete the IM of the volume and make sure volume recovers. Check the data as well.
... Start replica rebuilding for the aforementioned volume, and delete the IM-e while it is rebuilding. Verify the recovered volume.
- ... Delete the backing image manager pod and verify the pod gets recreated.
+ ... Delete the backing image manager pod and verify the pod gets recreated.
IF '${DATA_ENGINE}' == 'v1'
When Create backing image bi with url=https://longhorn-backing-image.s3-us-west-1.amazonaws.com/parrot.qcow2
And Create volume 0 with backingImage=bi dataEngine=${DATA_ENGINE}
And Attach volume 0
And Wait for volume 0 healthy
And Write data to volume 0
Then Delete instance-manager of volume 0 and wait for recover

When Delete volume 0 replica on replica node
And Wait until volume 0 replica rebuilding started on replica node
Then Delete instance-manager of volume 0 and wait for recover
@@ -144,7 +144,7 @@ Test Longhorn backing image volume recovery
Then Wait backing image managers running
END

- Test Longhorn dynamic provisioned RWX volume recovery
+ Test Longhorn Dynamic Provisioned RWX Volume Recovery
[Documentation] -- Manual test plan --
... Test data setup:
... Deploy Longhorn on a 3 nodes cluster.
@@ -174,7 +174,7 @@ Test Longhorn dynamic provisioned RWX volume recovery
And Check deployment 0 data in file data.txt is intact
END

- Test Longhorn dynamic provisioned RWO volume recovery
+ Test Longhorn Dynamic Provisioned RWO Volume Recovery
[Documentation] -- Manual test plan --
... Test data setup:
... Deploy Longhorn on a 3 nodes cluster.
@@ -191,7 +191,7 @@ Test Longhorn dynamic provisioned RWO volume recovery
And Create deployment 0 with persistentvolumeclaim 0
And Write 500 MB data to file data.txt in deployment 0
Then Delete instance-manager of deployment 0 volume and wait for recover

When Delete replica of deployment 0 volume on replica node
And Wait until volume of deployment 0 replica rebuilding started on replica node
Then Delete instance-manager of deployment 0 volume and wait for recover
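The ${volume_id} and ${deployment_id} placeholders in the hunks above are Robot Framework embedded arguments: the caller writes the value directly inside the keyword name. A self-contained sketch of the pattern (the keyword body here is illustrative, not the suite's real implementation):

```robot
*** Keywords ***
Delete instance-manager of volume ${volume_id} and wait for recover
    # The real keyword deletes the instance-manager pod and waits for the
    # volume to become healthy again; this sketch only logs the call.
    Log    Recovering volume ${volume_id}

*** Test Cases ***
Embedded Argument Demo
    # "0" is captured into ${volume_id} by the keyword name itself.
    Delete instance-manager of volume 0 and wait for recover
```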
8 changes: 4 additions & 4 deletions e2e/tests/negative/node_drain.robot
@@ -88,7 +88,7 @@ Force Drain Replica Node While Replica Rebuilding
And Check deployment 1 data in file data.txt is intact
END

- Drain node with force
+ Drain Node With Force
[Documentation] Drain node with force
... 1. Deploy a cluster contains 3 worker nodes N1, N2, N3.
... 2. Deploy Longhorn.
@@ -117,7 +117,7 @@ Drain node with force
And Check instance-manager pod is not running on drained node
Then Check deployment 0 data in file data.txt is intact

- Drain node without force
+ Drain Node Without Force
[Documentation] Drain node without force
... 1. Cordon the node. Longhorn will automatically disable the node scheduling when a Kubernetes node is cordoned.
... 2. Evict all the replicas from the node.
@@ -139,7 +139,7 @@ Drain node without force
And Check instance-manager pod is not running on drained node
Then Check deployment 0 data in file data.txt is intact

- Test kubectl drain nodes for PVC/PV/LHV is created through Longhorn API
+ Test Kubectl Drain Nodes For PVC/PV/LHV Is Created Through Longhorn API
[Documentation] Test kubectl drain nodes for PVC/PV/LHV is created through Longhorn API
... Given 1 PVC/PV/LHV created through Longhorn API And LHV is not yet attached/replicated.
... When kubectl drain nodes.
@@ -153,7 +153,7 @@ Test kubectl drain nodes for PVC/PV/LHV is created through Longhorn API
And Create persistentvolumeclaim for volume 0
And Force drain all nodes

- Stopped replicas on deleted nodes should not be counted as healthy replicas when draining nodes
+ Stopped Replicas On Deleted Nodes Should Not Be Counted As Healthy Replicas When Draining Nodes
[Documentation] Stopped replicas on deleted nodes should not be counted as healthy replicas when draining nodes
... When draining a node, the node will be set as unscheduled and all pods should be evicted.
... By Longhorn’s default settings, the replica will only be evicted if there is another healthy replica on the running node.
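The Given/When/Then/And prefixes used throughout these suites are Robot Framework's BDD-style prefixes: they are ignored when the keyword name is matched, so they exist purely for readability. A minimal sketch:

```robot
*** Test Cases ***
Bdd Prefix Demo
    Given a volume exists
    When the node is drained
    Then the volume data is intact

*** Keywords ***
A volume exists
    Log    setup step (placeholder)

The node is drained
    Log    action step (placeholder)

The volume data is intact
    Log    verification step (placeholder)
```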
6 changes: 3 additions & 3 deletions e2e/tests/negative/pull_backup_from_another_longhorn.robot
@@ -20,7 +20,7 @@ Test Setup Set test environment
Test Teardown Cleanup test resources

*** Test Cases ***
- Pull backup created by another Longhorn system
+ Pull Backup Created By Another Longhorn System
[Documentation] Pull backup created by another Longhorn system
... 1. Install test version of Longhorn.
... 2. Create volume, write data, and take backup.
@@ -32,7 +32,7 @@ Pull backup created by another Longhorn system
... 8. Create volume, write data, and take backup.
... 9. Uninstall Longhorn.
... 10. Install test version of Longhorn.
- ... 11. Restore the backup create in step 8 and verify the data.
+ ... 11. Restore the backup create in step 8 and verify the data.
...
... Important
... - This test case need have set environment variable manually first if not run on Jenkins
@@ -49,7 +49,7 @@ Pull backup created by another Longhorn system
And Attach volume 0
And Wait for volume 0 healthy
And Write data 0 300 MB to volume 0
- When Create backup 0 for volume 0
+ When Create backup 0 for volume 0
Then Verify backup list contains no error for volume 0
And Verify backup list contains backup 0 of volume 0
Then Uninstall Longhorn
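The note above about setting environment variables manually when not running on Jenkins refers to values the suite reads from its environment. Robot Framework can read an environment variable with a fallback via the %{NAME=default} syntax; a sketch with hypothetical variable names (the real names come from the Jenkins job configuration):

```robot
*** Variables ***
# Hypothetical variables for illustration only.
${LONGHORN_STABLE_VERSION}    %{LONGHORN_STABLE_VERSION=v1.7.2}
${BACKUP_STORE_TYPE}          %{BACKUP_STORE_TYPE=s3}

*** Test Cases ***
Environment Variable Demo
    Log    stable=${LONGHORN_STABLE_VERSION} store=${BACKUP_STORE_TYPE}
```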
4 changes: 2 additions & 2 deletions e2e/tests/negative/replica_rebuilding.robot
@@ -66,7 +66,7 @@ Reboot Replica Node While Replica Rebuilding
And Check volume 0 data is intact
END

- Delete replicas one by one after the volume is healthy
+ Delete Replicas One By One After The Volume Is Healthy
Given Create storageclass longhorn-test with dataEngine=${DATA_ENGINE}
And Create persistentvolumeclaim 0 using RWO volume with longhorn-test storageclass
And Create deployment 0 with persistentvolumeclaim 0
@@ -90,7 +90,7 @@ Delete replicas one by one after the volume is healthy
Then Check deployment 0 data in file data.txt is intact
END

- Delete replicas one by one regardless of the volume health
+ Delete Replicas One By One Regardless Of The Volume Health
[Documentation] Currently v2 data engine have a chance to hit
... https://github.com/longhorn/longhorn/issues/9216 and will be fixed
... in v1.9.0
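Several of these suites gate steps on the data engine, as in the IF '${DATA_ENGINE}' == 'v1' block visible in component_resilience.robot above and the END lines in this file. A minimal sketch of that construct (Robot Framework 4+ syntax):

```robot
*** Variables ***
${DATA_ENGINE}    v1

*** Test Cases ***
Data Engine Gate Demo
    IF    '${DATA_ENGINE}' == 'v1'
        Log    v1-only steps run here
    ELSE
        Log    skipped for data engine ${DATA_ENGINE}
    END
```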
4 changes: 2 additions & 2 deletions e2e/tests/negative/test_backup_listing.robot
@@ -127,7 +127,7 @@ Pod ${pod_id} data should same as volume ${source_volume_id} backup ${backup_id}
... msg="expected ${expected_checksum}, got ${current_checksum}!"

*** Test Cases ***
- Backup listing with more than 1000 backups
+ Backup Listing With More Than 1000 Backups
[Tags] manual longhorn-8355
[Documentation] Test backup listing
Given Create persistentvolumeclaim 0 using RWO volume
@@ -139,7 +139,7 @@ Backup listing with more than 1000 backups
Then Get deployment 1 volume data in file data
And Volume 1 data should same as deployment 0 volume

- Backup listing of volume bigger than 200 Gi
+ Backup Listing Of Volume Bigger Than 200 Gi
[Tags] manual longhorn-8355 large-size
[Documentation] Test backup bigger than 200 Gi
Given Create persistentvolumeclaim 0 using RWO volume
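The keyword in the hunk above ends with an assertion that carries a custom msg=, as the "expected ..., got ...!" line shows. A self-contained sketch of that comparison pattern (keyword and test names are illustrative):

```robot
*** Keywords ***
Checksums should match
    [Arguments]    ${expected_checksum}    ${current_checksum}
    Should Be Equal    ${expected_checksum}    ${current_checksum}
    ...    msg=expected ${expected_checksum}, got ${current_checksum}!

*** Test Cases ***
Checksum Assertion Demo
    Checksums should match    abc123    abc123
```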
2 changes: 1 addition & 1 deletion e2e/tests/regression/test_persistentvolumeclaim.robot
@@ -20,7 +20,7 @@ ${RETRY_INTERVAL} 1

*** Test Cases ***

- Test persistentvolumeclaim expand more than storage maximum size should fail
+ Test PersistentVolumeClaim Expand More Than Storage Maximum Size Should Fail
[Tags] volume expansion
[Documentation] Verify that a PersistentVolumeClaim cannot be expanded beyond
... the storage maximum size.
4 changes: 2 additions & 2 deletions e2e/tests/regression/test_volume.robot
@@ -24,7 +24,7 @@ Create volume with invalid name should fail

*** Test Cases ***

- Test RWX volume data integrity after CSI plugin pod restart
+ Test RWX Volume Data Integrity After CSI Plugin Pod Restart
[Tags] volume rwx storage-network
[Documentation] Test RWX volume data directory is accessible after Longhorn CSI plugin pod restart.
...
@@ -41,7 +41,7 @@ Test RWX volume data integrity after CSI plugin pod restart

Then Check deployment 0 data in file data.txt is intact

- Test detached volume should not reattach after node eviction
+ Test Detached Volume Should Not Reattach After Node Eviction
[Tags] volume node-eviction
[Documentation] Test detached volume should not reattach after node eviction.
...
