test(robot): v2 volume should block trim when volume is degraded #2114

Merged (1 commit) on Nov 10, 2024
11 changes: 11 additions & 0 deletions e2e/keywords/workload.resource
@@ -189,3 +189,14 @@ Check ${workload_kind} ${workload_id} pod is ${expect_state} on another node
Delete Longhorn ${workload_kind} ${workload_name} pod on node ${node_id}
    ${node_name} =    get_node_by_index    ${node_id}
    delete_workload_pod_on_node    ${workload_name}    ${node_name}    longhorn-system

Trim ${workload_kind} ${workload_id} volume should ${condition}
    ${workload_name} =    generate_name_with_suffix    ${workload_kind}    ${workload_id}

    IF    $condition == "fail"
        trim_workload_volume_filesystem    ${workload_name}    is_expect_fail=True
    ELSE IF    $condition == "pass"
        trim_workload_volume_filesystem    ${workload_name}    is_expect_fail=False
    ELSE
        Fail    "Invalid condition value: ${condition}"
    END
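The keyword above dispatches on its `${condition}` argument, translating the human-readable `fail`/`pass` into the library's `is_expect_fail` flag and rejecting anything else. A minimal sketch of that dispatch in plain Python (the helper name `condition_to_expect_fail` is hypothetical, not part of the repo):

```python
def condition_to_expect_fail(condition):
    """Map the keyword's ${condition} argument to the is_expect_fail flag.

    "fail" means the trim is expected to be rejected (degraded volume);
    "pass" means it is expected to succeed; anything else is a test bug.
    """
    if condition == "fail":
        return True
    if condition == "pass":
        return False
    raise ValueError(f"Invalid condition value: {condition}")
```

Failing fast on an unknown condition keeps a typo in a test case from silently running the wrong expectation.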
4 changes: 4 additions & 0 deletions e2e/libs/keywords/workload_keywords.py
@@ -192,3 +192,7 @@ def is_workloads_pods_has_annotations(self, workload_names, annotation_key, name
            if not is_workload_pods_has_annotations(workload_name, annotation_key, namespace=namespace, label_selector=label_selector):
                return False
        return True

    def trim_workload_volume_filesystem(self, workload_name, is_expect_fail=False):
        volume_name = get_workload_volume_name(workload_name)
        self.volume.trim_filesystem(volume_name, is_expect_fail=is_expect_fail)
3 changes: 3 additions & 0 deletions e2e/libs/volume/crd.py
@@ -511,3 +511,6 @@ def validate_volume_setting(self, volume_name, setting_name, value):
        volume = self.get(volume_name)
        assert str(volume["spec"][setting_name]) == value, \
            f"Expected volume {volume_name} setting {setting_name} is {value}, but it's {str(volume['spec'][setting_name])}"

    def trim_filesystem(self, volume_name, is_expect_fail=False):
        return Rest(self).trim_filesystem(volume_name, is_expect_fail=is_expect_fail)
17 changes: 17 additions & 0 deletions e2e/libs/volume/rest.py
@@ -370,3 +370,20 @@ def wait_for_replica_ready_to_rw(self, volume_name):
                break
            time.sleep(self.retry_interval)
        assert ready, f"Failed to get volume {volume_name} replicas ready: {replicas}"

    def trim_filesystem(self, volume_name, is_expect_fail=False):
        is_unexpected_pass = False
        try:
            self.get(volume_name).trimFilesystem(name=volume_name)

            if is_expect_fail:
                is_unexpected_pass = True

        except Exception as e:
            if is_expect_fail:
                logging(f"Failed to trim filesystem: {e}")
            else:
                raise e

        if is_unexpected_pass:
            raise Exception(f"Expected volume {volume_name} trim filesystem to fail")
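The method above implements an expect-fail wrapper: when the volume is degraded, the trim call itself is required to raise, and a trim that unexpectedly succeeds must fail the test. The same pattern, isolated into a self-contained sketch (the helper `run_with_expectation` and the `print` logging are illustrative stand-ins, not repo code):

```python
def run_with_expectation(operation, is_expect_fail=False):
    """Run `operation`; if is_expect_fail, treat a success as a test failure.

    Mirrors the trim_filesystem logic: swallow the exception when a failure
    was expected, re-raise when it was not, and raise when an expected
    failure never happened.
    """
    is_unexpected_pass = False
    try:
        operation()
        if is_expect_fail:
            is_unexpected_pass = True
    except Exception as e:
        if not is_expect_fail:
            raise
        print(f"Operation failed as expected: {e}")
    if is_unexpected_pass:
        raise Exception("Expected the operation to fail, but it passed")
```

Tracking the unexpected pass with a flag and raising outside the `try` block keeps the "should have failed" exception from being caught by the wrapper's own `except` clause.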
3 changes: 3 additions & 0 deletions e2e/libs/volume/volume.py
@@ -154,3 +154,6 @@ def wait_for_engine_image_upgrade_completed(self, volume_name, engine_image_name

    def validate_volume_setting(self, volume_name, setting_name, value):
        return self.volume.validate_volume_setting(volume_name, setting_name, value)

    def trim_filesystem(self, volume_name, is_expect_fail=False):
        return self.volume.trim_filesystem(volume_name, is_expect_fail=is_expect_fail)
22 changes: 22 additions & 0 deletions e2e/tests/regression/test_v2.robot
@@ -11,6 +11,8 @@ Resource    ../keywords/workload.resource
Resource    ../keywords/volume.resource
Resource    ../keywords/setting.resource
Resource    ../keywords/node.resource
Resource    ../keywords/host.resource
Resource    ../keywords/longhorn.resource

Test Setup    Set test environment
Test Teardown    Cleanup test resources
@@ -50,3 +52,23 @@ Degraded Volume Replica Rebuilding
        And Wait for deployment 0 pods stable
        Then Check deployment 0 data in file data.txt is intact
    END

V2 Volume Should Block Trim When Volume Is Degraded
    Given Set setting auto-salvage to true
    And Create storageclass longhorn-test with dataEngine=v2
    And Create persistentvolumeclaim 0 using RWO volume with longhorn-test storageclass
    And Create deployment 0 with persistentvolumeclaim 0

    FOR    ${i}    IN RANGE    ${LOOP_COUNT}
        And Keep writing data to pod of deployment 0

        When Restart cluster
        And Wait for longhorn ready
        And Wait for volume of deployment 0 attached and degraded
        Then Trim deployment 0 volume should fail

        When Wait for workloads pods stable
        ...    deployment 0
        And Check deployment 0 works
        Then Trim deployment 0 volume should pass
    END