
refactor(negative): replace robot test variable assignments with runtime queries #1604

Merged (12 commits) on Mar 8, 2024
28 changes: 10 additions & 18 deletions e2e/keywords/common.resource
@@ -2,37 +2,29 @@
Documentation Common keywords

Library ../libs/keywords/common_keywords.py
Library ../libs/keywords/deployment_keywords.py
Library ../libs/keywords/network_keywords.py
Library ../libs/keywords/recurringjob_keywords.py
Library ../libs/keywords/statefulset_keywords.py
Library ../libs/keywords/stress_keywords.py
Library ../libs/keywords/volume_keywords.py
Library ../libs/keywords/recurring_job_keywords.py
Library ../libs/keywords/workload_keywords.py
Library ../libs/keywords/network_keywords.py


*** Variables ***


*** Keywords ***
Set test environment
init_k8s_api_client
init_node_exec ${SUITE NAME.rsplit('.')[1]}
init_storageclasses
@{volume_list} = Create List
Set Test Variable ${volume_list}
@{deployment_list} = Create List
Set Test Variable ${deployment_list}
@{statefulset_list} = Create List
Set Test Variable ${statefulset_list}
@{persistentvolumeclaim_list} = Create List
Set Test Variable ${persistentvolumeclaim_list}

setup_control_plane_network_latency

Cleanup test resources
Reviewer comment:

Cleanup test resources appears unable to execute concurrently if we intend to run tests in parallel.

Currently this is fine, but later, for concurrent test execution, we should tear down only the resources belonging to the tests that are currently running.
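A minimal sketch of what test-scoped teardown could look like, assuming resources are labeled with a test identifier at creation time. The label key, and the use of the Kubernetes Python client against the Longhorn CRDs, are assumptions for illustration, not this repository's implementation:

```python
# Hypothetical sketch: delete only the Longhorn volumes created by the
# current test, identified by a label applied at creation time.
# The label key "test.longhorn.io/id" is an assumed convention.
from kubernetes import client, config

def cleanup_volumes_for_test(test_id):
    config.load_kube_config()
    api = client.CustomObjectsApi()
    volumes = api.list_namespaced_custom_object(
        group="longhorn.io", version="v1beta2",
        namespace="longhorn-system", plural="volumes",
        label_selector=f"test.longhorn.io/id={test_id}")
    for volume in volumes["items"]:
        api.delete_namespaced_custom_object(
            group="longhorn.io", version="v1beta2",
            namespace="longhorn-system", plural="volumes",
            name=volume["metadata"]["name"])
```

With labels like this, concurrently running suites would delete only their own resources instead of everything in the cluster.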

cleanup_control_plane_network_latency
cleanup_node_exec
cleanup_stress_helper
cleanup_recurring_jobs ${volume_list}
cleanup_volumes ${volume_list}
cleanup_deployments ${deployment_list}
cleanup_statefulsets ${statefulset_list}
cleanup_recurringjobs
cleanup_deployments
cleanup_statefulsets
cleanup_persistentvolumeclaims
cleanup_volumes
cleanup_storageclasses
21 changes: 21 additions & 0 deletions e2e/keywords/deployment.resource
@@ -0,0 +1,21 @@
*** Settings ***
Documentation Deployment Keywords

Library Collections
Library ../libs/keywords/common_keywords.py
Reviewer comment:

TODO: Suggest adding libs to the Python lib path, so that we can import our keyword libraries directly rather than via relative paths.

Reviewer reply:

Chinya, please help create a ticket for this improvement.
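For illustration, one possible wiring (the flag value and module path are assumptions, not something this PR implements): invoking Robot Framework with `--pythonpath e2e/libs` would let resource files declare `Library    keywords.deployment_keywords` in place of the relative `Library    ../libs/keywords/deployment_keywords.py`.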

Library ../libs/keywords/deployment_keywords.py

*** Keywords ***
Create deployment ${deployment_id} with persistentvolumeclaim ${claim_id}
${deployment_name} = generate_name_with_suffix deployment ${deployment_id}
${claim_name} = generate_name_with_suffix claim ${claim_id}
create_deployment ${deployment_name} ${claim_name}

Check deployment ${deployment_id} works
${deployment_name} = generate_name_with_suffix deployment ${deployment_id}
write_workload_pod_random_data ${deployment_name} 1024 random-data
check_workload_pod_data_checksum ${deployment_name} random-data

Wait for deployment ${deployment_id} pods stable
${deployment_name} = generate_name_with_suffix deployment ${deployment_id}
wait_for_workload_pods_stable ${deployment_name}
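These keywords derive resource names from short ids via generate_name_with_suffix from common_keywords.py. A minimal sketch of such a helper, assuming this naming scheme (the real implementation may differ):

```python
# Hypothetical sketch: map a resource kind and a short id to a
# deterministic per-test name, e.g. ("deployment", "0") -> "deployment-0".
def generate_name_with_suffix(kind, id):
    return f"{kind}-{id}"
```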
1 change: 0 additions & 1 deletion e2e/keywords/engine.resource
@@ -4,7 +4,6 @@ Documentation Longhorn engine related keywords
Library ../libs/keywords/common_keywords.py
Library ../libs/keywords/engine_keywords.py


*** Keywords ***
Engine state should eventually be ${expected_engine_state}
Run keyword And Continue On Failure
33 changes: 33 additions & 0 deletions e2e/keywords/host.resource
@@ -0,0 +1,33 @@
*** Settings ***
Documentation Physical Node Keywords

Library ../libs/keywords/common_keywords.py
Library ../libs/keywords/host_keywords.py
Library ../libs/keywords/network_keywords.py
Library ../libs/keywords/volume_keywords.py
Library ../libs/keywords/workload_keywords.py

*** Keywords ***
Reboot volume ${volume_id} volume node
${volume_name} = generate_name_with_suffix volume ${volume_id}
reboot_volume_node ${volume_name}

Reboot volume ${volume_id} replica node
${volume_name} = generate_name_with_suffix volume ${volume_id}
reboot_replica_node ${volume_name}

Reboot node ${idx}
reboot_node_by_index ${idx}

Restart all worker nodes
reboot_all_worker_nodes

Power off node ${idx} for ${power_off_time_in_min} mins
reboot_node_by_index ${idx} ${power_off_time_in_min}

Power off all worker nodes for ${power_off_time_in_min} mins
reboot_all_worker_nodes ${power_off_time_in_min}

Restart cluster
reboot_all_nodes
setup_control_plane_network_latency
15 changes: 0 additions & 15 deletions e2e/keywords/kubelet.resource

This file was deleted.

21 changes: 21 additions & 0 deletions e2e/keywords/longhorn.resource
@@ -0,0 +1,21 @@
*** Settings ***
Documentation Longhorn Keywords

Library ../libs/keywords/instancemanager_keywords.py
Library ../libs/keywords/workload_keywords.py

*** Variables ***
@{longhorn_workloads}
... csi-attacher
... csi-provisioner
... csi-resizer
... csi-snapshotter
... longhorn-driver-deployer
... longhorn-csi-plugin
... longhorn-manager
... longhorn-ui

*** Keywords ***
Wait for longhorn ready
wait_for_all_instance_manager_running
wait_for_workloads_pods_running ${longhorn_workloads} longhorn-system
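A sketch of the readiness polling this keyword implies, assuming pods are matched to the listed workloads by name prefix; the real checks live in instancemanager_keywords.py and workload_keywords.py:

```python
# Hypothetical sketch: poll until every pod belonging to the listed
# Longhorn workloads is Running in the longhorn-system namespace.
import time
from kubernetes import client, config

def wait_for_workloads_pods_running(workloads, namespace, timeout=300):
    config.load_kube_config()
    core = client.CoreV1Api()
    deadline = time.time() + timeout
    pending = list(workloads)
    while time.time() < deadline:
        pods = core.list_namespaced_pod(namespace).items
        pending = [
            w for w in workloads
            if not any(p.metadata.name.startswith(w)
                       and p.status.phase == "Running" for p in pods)
        ]
        if not pending:
            return
        time.sleep(5)
    raise AssertionError(f"workloads not running: {pending}")
```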
68 changes: 0 additions & 68 deletions e2e/keywords/node.resource

This file was deleted.

15 changes: 15 additions & 0 deletions e2e/keywords/persistentvolumeclaim.resource
@@ -0,0 +1,15 @@
*** Settings ***
Documentation PersistentVolumeClaim Keywords

Library Collections
Library ../libs/keywords/common_keywords.py
Library ../libs/keywords/persistentvolumeclaim_keywords.py

*** Keywords ***
Create persistentvolumeclaim ${claim_id} using ${volume_type} volume
${claim_name} = generate_name_with_suffix claim ${claim_id}
create_persistentvolumeclaim ${claim_name} ${volume_type}

Create persistentvolumeclaim ${claim_id} using ${volume_type} volume with ${option} storageclass
${claim_name} = generate_name_with_suffix claim ${claim_id}
create_persistentvolumeclaim ${claim_name} ${volume_type} ${option}
13 changes: 0 additions & 13 deletions e2e/keywords/recurring_job.resource

This file was deleted.

16 changes: 16 additions & 0 deletions e2e/keywords/recurringjob.resource
@@ -0,0 +1,16 @@
*** Settings ***
Documentation Recurring Job Keywords

Library Collections
Library ../libs/keywords/common_keywords.py
Library ../libs/keywords/recurringjob_keywords.py

*** Keywords ***
Create snapshot and backup recurringjob for volume ${volume_id}
${volume_name} = generate_name_with_suffix volume ${volume_id}
create_snapshot_recurringjob_for_volume ${volume_name}
create_backup_recurringjob_for_volume ${volume_name}

Check recurringjobs for volume ${volume_id} work
${volume_name} = generate_name_with_suffix volume ${volume_id}
check_recurringjobs_work ${volume_name}
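For context, a sketch of what creating such a job could involve, assuming Longhorn's RecurringJob CRD and its volume-label convention; the cron schedule, retain count, and job name here are illustrative, not taken from this repository's libraries:

```python
# Hypothetical sketch: create a Longhorn RecurringJob that takes
# snapshots every minute, then opt the volume into it via the
# recurring-job label convention.
from kubernetes import client, config

def create_snapshot_recurringjob_for_volume(volume_name):
    config.load_kube_config()
    api = client.CustomObjectsApi()
    job_name = f"snapshot-{volume_name}"
    api.create_namespaced_custom_object(
        group="longhorn.io", version="v1beta2",
        namespace="longhorn-system", plural="recurringjobs",
        body={
            "apiVersion": "longhorn.io/v1beta2",
            "kind": "RecurringJob",
            "metadata": {"name": job_name},
            "spec": {"task": "snapshot", "cron": "* * * * *",
                     "retain": 2, "concurrency": 1, "groups": []},
        })
    # Merge-patch the volume's labels so Longhorn applies the job to it.
    api.patch_namespaced_custom_object(
        group="longhorn.io", version="v1beta2",
        namespace="longhorn-system", plural="volumes",
        name=volume_name,
        body={"metadata": {"labels": {
            f"recurring-job.longhorn.io/{job_name}": "enabled"}}})
```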
56 changes: 56 additions & 0 deletions e2e/keywords/statefulset.resource
@@ -0,0 +1,56 @@
*** Settings ***
Documentation StatefulSet Keywords

Library Collections
Library ../libs/keywords/common_keywords.py
Library ../libs/keywords/statefulset_keywords.py

*** Keywords ***
Create statefulset ${statefulset_id} using ${volume_type} volume
${statefulset_name} = generate_name_with_suffix statefulset ${statefulset_id}
create_statefulset ${statefulset_name} ${volume_type}

Create statefulset ${statefulset_id} using ${volume_type} volume with ${option} storageclass
${statefulset_name} = generate_name_with_suffix statefulset ${statefulset_id}
create_statefulset ${statefulset_name} ${volume_type} ${option}

Scale statefulset ${statefulset_id} to ${replicaset_size}
${statefulset_name} = generate_name_with_suffix statefulset ${statefulset_id}
scale_statefulset ${statefulset_name} ${replicaset_size}

Scale down statefulset ${statefulset_id} to detach volume
${statefulset_name} = generate_name_with_suffix statefulset ${statefulset_id}
scale_statefulset_down ${statefulset_name}

Scale up statefulset ${statefulset_id} to attach volume
${statefulset_name} = generate_name_with_suffix statefulset ${statefulset_id}
scale_statefulset_up ${statefulset_name}

Expand statefulset ${statefulset_id} volume by ${size} MiB
${statefulset_name} = generate_name_with_suffix statefulset ${statefulset_id}
expand_workload_claim_size_by_mib ${statefulset_name} ${size}

Write ${size} MB data to file ${file_name} in statefulset ${statefulset_id}
${statefulset_name} = generate_name_with_suffix statefulset ${statefulset_id}
write_workload_pod_random_data ${statefulset_name} ${size} ${file_name}

Check statefulset ${statefulset_id} works
${statefulset_name} = generate_name_with_suffix statefulset ${statefulset_id}
write_workload_pod_random_data ${statefulset_name} 1024 random-data
check_workload_pod_data_checksum ${statefulset_name} random-data

Check statefulset ${statefulset_id} data in file ${file_name} is intact
${statefulset_name} = generate_name_with_suffix statefulset ${statefulset_id}
check_workload_pod_data_checksum ${statefulset_name} ${file_name}

Wait for statefulset ${statefulset_id} volume size expanded
${statefulset_name} = generate_name_with_suffix statefulset ${statefulset_id}
wait_for_workload_claim_size_expanded ${statefulset_name}

Wait for statefulset ${statefulset_id} volume detached
${statefulset_name} = generate_name_with_suffix statefulset ${statefulset_id}
wait_for_workload_volume_detached ${statefulset_name}

Wait for statefulset ${statefulset_id} pods stable
${statefulset_name} = generate_name_with_suffix statefulset ${statefulset_id}
wait_for_workload_pods_stable ${statefulset_name}
20 changes: 14 additions & 6 deletions e2e/keywords/stress.resource
@@ -4,14 +4,22 @@ Documentation Stress Node Keywords
Library ../libs/keywords/stress_keywords.py

*** Keywords ***
Stress the CPU of all ${role} nodes
Stress CPU of all ${role} nodes
stress_node_cpu_by_role ${role}

Stress the CPU of all volume nodes
stress_node_cpu_by_volumes ${volume_list}
Stress CPU of node with volume ${volume_id}
${volume_name} = generate_name_with_suffix volume ${volume_id}
stress_node_cpu_by_volume ${volume_name}

Stress the memory of all ${role} nodes
Stress CPU of volume nodes
stress_node_cpu_of_all_volumes

Stress memory of all ${role} nodes
stress_node_memory_by_role ${role}

Stress the memory of all volume nodes
stress_node_memory_by_volumes ${volume_list}
Stress memory of node with volume ${volume_id}
${volume_name} = generate_name_with_suffix volume ${volume_id}
stress_node_memory_by_volume ${volume_name}

Stress memory of volume nodes
stress_node_memory_of_all_volumes
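This hunk illustrates the PR's theme: keywords such as stress_node_cpu_of_all_volumes take no ${volume_list} argument and instead discover volumes at runtime. A minimal sketch of that discovery, assuming a label selector for test volumes and a helper that stresses a node's CPU (both are assumptions for illustration):

```python
# Hypothetical sketch: find all test volumes at runtime and stress the
# CPU of the node each one is attached to, instead of consuming a
# ${volume_list} variable maintained by the Robot tests.
from kubernetes import client, config

def stress_node_cpu_of_all_volumes():
    config.load_kube_config()
    api = client.CustomObjectsApi()
    volumes = api.list_namespaced_custom_object(
        group="longhorn.io", version="v1beta2",
        namespace="longhorn-system", plural="volumes",
        label_selector="test.longhorn.io/e2e=true")  # assumed label
    node_names = {v["status"]["currentNodeID"] for v in volumes["items"]
                  if v["status"].get("currentNodeID")}
    for node_name in node_names:
        stress_node_cpu(node_name)  # assumed helper in stress_keywords

def stress_node_cpu(node_name):
    ...  # e.g. schedule a stress-ng pod pinned to node_name
```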