ci: support cifs and azurite on pytest and e2e #2155
base: master
@@ -16,10 +16,14 @@ Requirement: | |
6. Make sure `nfs-common` or equivalent has been installed on the node to allow the NFS client to work. | ||
|
||
Run the test: | ||
1. Deploy all backupstore servers(including `NFS` server and `Minio` as s3 server) for test purposes. | ||
1. Deploy all backupstore servers (including the `NFS` server, `Minio` as the S3 server, and the `CIFS` and `Azurite` servers) for test purposes. | ||
|
||
For Azurite, there are some manual steps that need to be done after the manifest is deployed (https://confluence.suse.com/display/LON/Setup+Azurite+Backupstore+For+Testing). | ||
``` | ||
kubectl create -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/minio-backupstore.yaml \ | ||
-f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/nfs-backupstore.yaml | ||
-f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/nfs-backupstore.yaml \ | ||
-f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/cifs-backupstore.yaml \ | ||
-f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/azurite-backupstore.yaml | ||
Comment on lines +24 to +26
💡 Codebase verification: The referenced manifest files are not available in the repository. The verification shows that:

🔗 Analysis chain: Verify the existence of the new manifest files. Let's ensure that the CIFS and Azurite manifest files are available in the repository. Let me try another verification approach by checking the deploy/backupstores directory specifically and examining its contents.

🏁 Scripts executed — the following scripts were executed for the analysis:

Script:
#!/bin/bash
# Description: Check if the new manifest files exist in the repository
# Expected: Both manifest files should be found
echo "Checking for CIFS and Azurite manifest files..."
fd -t f "cifs-backupstore.yaml|azurite-backupstore.yaml"

Length of output: 163

Script:
#!/bin/bash
# Check the contents of deploy/backupstores directory
echo "Checking deploy/backupstores directory..."
fd . "deploy/backupstores" -t f

# Also check if these files exist in any other location
echo -e "\nChecking entire repository for these manifest files..."
rg -l "kind: Deployment.*[cifs|azurite].*backupstore"

Length of output: 412
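As a possible follow-up to this finding, a small pre-flight check could fail fast before running `kubectl create`. This is only a sketch, not part of the PR: it assumes the same raw manifest URLs used above and that `curl` is available on the host.

```
# Sketch: verify each backupstore manifest URL is reachable before applying it.
# The URLs mirror the kubectl create command above; adjust if the manifests move.
set -e
for url in \
  "https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/cifs-backupstore.yaml" \
  "https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/azurite-backupstore.yaml"; do
  if ! curl -sfI -o /dev/null "$url"; then
    echo "missing backupstore manifest: $url" >&2
    exit 1
  fi
done
echo "all backupstore manifests are reachable"
```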
||
``` | ||
2. Deploy the test script to the Kubernetes cluster. | ||
``` | ||
|
@@ -42,7 +42,7 @@ spec: | |
- name: LONGHORN_JUNIT_REPORT_PATH | ||
value: /tmp/test-report/longhorn-test-junit-report.xml | ||
- name: LONGHORN_BACKUPSTORES | ||
value: "s3://backupbucket@us-east-1/backupstore$minio-secret, nfs://longhorn-test-nfs-svc.default:/opt/backupstore" | ||
value: "s3://backupbucket@us-east-1/backupstore$minio-secret, nfs://longhorn-test-nfs-svc.default:/opt/backupstore, cifs://longhorn-test-cifs-svc.default/backupstore$cifs-secret, azblob://[email protected]/$azblob-secret" | ||
- name: LONGHORN_BACKUPSTORE_POLL_INTERVAL | ||
value: "30" | ||
- name: LONGHORN_DISK_TYPE | ||
|
@@ -17,6 +17,8 @@ | |
from common import cleanup_all_volumes | ||
from common import is_backupTarget_s3 | ||
from common import is_backupTarget_nfs | ||
from common import is_backupTarget_cifs | ||
from common import is_backupTarget_azurite | ||
from common import get_longhorn_api_client | ||
from common import delete_backup_volume | ||
from common import delete_backup_backing_image | ||
|
@@ -64,8 +66,12 @@ def set_random_backupstore(request, client): | |
elif request.param == "nfs": | ||
set_backupstore_nfs(client) | ||
mount_nfs_backupstore(client) | ||
elif request.param == "cifs": | ||
set_backupstore_cifs(client) | ||
elif request.param == "azblob": | ||
If I run other backup related test cases that manipulate the backup store, like

This is exactly what I'm concerned about: #2155 (comment)

Updated, change
|
||
set_backupstore_azurite(client) | ||
|
||
yield | ||
yield request.param | ||
cleanup_all_volumes(client) | ||
backupstore_cleanup(client) | ||
system_backups_cleanup(client) | ||
|
@@ -116,6 +122,30 @@ def set_backupstore_nfs(client): | |
break | ||
|
||
|
||
def set_backupstore_cifs(client): | ||
backupstores = get_backupstore_url() | ||
poll_interval = get_backupstore_poll_interval() | ||
for backupstore in backupstores: | ||
if is_backupTarget_cifs(backupstore): | ||
backupsettings = backupstore.split("$") | ||
set_backupstore_url(client, backupsettings[0]) | ||
set_backupstore_credential_secret(client, backupsettings[1]) | ||
set_backupstore_poll_interval(client, poll_interval) | ||
break | ||
|
||
Comment on lines +125 to +135
💡 Codebase verification: Missing CIFS implementations confirmed for backup store operations. The codebase has several functions that handle backup operations with explicit implementations for S3 and NFS but raise NotImplementedError for other types. While CIFS support is partially implemented (detection and basic setup), the following functions need CIFS-specific implementations:

Each of these functions currently has an if-elif block for S3 and NFS but falls through to raise NotImplementedError, which will break CIFS backup operations.

🔗 Analysis chain: Implement CIFS-specific functions for backup operations. Several functions in the file handle different backup store types but lack CIFS implementations. This could lead to NotImplementedError when these functions are called with a CIFS backup store. Consider implementing CIFS-specific versions of the following functions:

Let's verify the functions that need CIFS implementation. Consider creating a backup store interface or abstract class to ensure consistent implementation across different backup store types.

🏁 Scripts executed — the following scripts were executed for the analysis:

Script:
#!/bin/bash
# Find all functions that handle different backup store types
echo "Functions potentially needing CIFS implementation:"
rg "if is_backupTarget_s3.*elif is_backupTarget_nfs.*else.*raise NotImplementedError" -A 1

Length of output: 203

Script:
#!/bin/bash
# Let's search for functions that handle backup store types with a broader pattern
echo "Functions handling backup store types:"
rg "is_backupTarget_(s3|nfs|cifs)" -B 2 -A 5

# Also search for NotImplementedError to find potential gaps
echo -e "\nFunctions with NotImplementedError:"
rg "NotImplementedError.*backup" -B 2 -A 2

# Look for existing CIFS implementations
echo -e "\nExisting CIFS implementations:"
rg "is_backupTarget_cifs" -B 2 -A 2

Length of output: 19333
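To illustrate the interface idea suggested above, here is a minimal, hypothetical sketch. The names (`BackupStore`, `register`, `store_for`) are illustrative only and are not part of the repository's actual API:

```
# Hypothetical sketch of the suggested backup store abstraction; the names
# below are illustrative and do not exist in the Longhorn test codebase.
from abc import ABC, abstractmethod

import pytest


class BackupStore(ABC):
    """One implementation per backup target scheme (s3, nfs, cifs, azblob)."""

    scheme = ""  # e.g. "nfs"

    @abstractmethod
    def get_backup_volume_prefix(self, volume_name):
        """Return the path prefix for a volume's backups in this store."""

    @abstractmethod
    def delete_file(self, file_path):
        """Delete a single file from the backup store."""


_REGISTRY = {}


def register(store_cls):
    """Class decorator registering a BackupStore implementation by its scheme."""
    _REGISTRY[store_cls.scheme] = store_cls()
    return store_cls


def store_for(backup_target_url):
    """Pick the implementation for a backup target URL, or skip the test."""
    scheme = backup_target_url.split("://", 1)[0]
    store = _REGISTRY.get(scheme)
    if store is None:
        pytest.skip("Skip test case because the backup store type is not supported")
    return store
```

With a registry like this, module-level helpers such as `backupstore_delete_file` could delegate to `store_for(...)` instead of repeating if/elif chains, and unsupported store types would skip uniformly rather than raising NotImplementedError.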
||
|
||
def set_backupstore_azurite(client): | ||
backupstores = get_backupstore_url() | ||
poll_interval = get_backupstore_poll_interval() | ||
for backupstore in backupstores: | ||
if is_backupTarget_azurite(backupstore): | ||
backupsettings = backupstore.split("$") | ||
set_backupstore_url(client, backupsettings[0]) | ||
set_backupstore_credential_secret(client, backupsettings[1]) | ||
set_backupstore_poll_interval(client, poll_interval) | ||
break | ||
|
||
|
||
def set_backupstore_url(client, url): | ||
backup_target_setting = client.by_id_setting(SETTING_BACKUP_TARGET) | ||
backup_target_setting = client.update(backup_target_setting, | ||
|
@@ -274,7 +304,7 @@ def backupstore_get_backup_volume_prefix(client, volume_name): | |
return nfs_get_backup_volume_prefix(client, volume_name) | ||
|
||
else: | ||
raise NotImplementedError | ||
pytest.skip("Skip test case because the backup store type is not supported") # NOQA | ||
|
||
|
||
def minio_get_backup_volume_prefix(volume_name): | ||
|
@@ -311,7 +341,7 @@ def backupstore_get_backup_cfg_file_path(client, volume_name, backup_name): | |
return nfs_get_backup_cfg_file_path(client, volume_name, backup_name) | ||
|
||
else: | ||
raise NotImplementedError | ||
pytest.skip("Skip test case because the backup store type is not supported") # NOQA | ||
|
||
|
||
def minio_get_backup_cfg_file_path(volume_name, backup_name): | ||
|
@@ -334,7 +364,7 @@ def backupstore_get_volume_cfg_file_path(client, volume_name): | |
return nfs_get_volume_cfg_file_path(client, volume_name) | ||
|
||
else: | ||
raise NotImplementedError | ||
pytest.skip("Skip test case because the backup store type is not supported") # NOQA | ||
|
||
|
||
def nfs_get_volume_cfg_file_path(client, volume_name): | ||
|
@@ -357,7 +387,7 @@ def backupstore_get_backup_blocks_dir(client, volume_name): | |
return nfs_get_backup_blocks_dir(client, volume_name) | ||
|
||
else: | ||
raise NotImplementedError | ||
pytest.skip("Skip test case because the backup store type is not supported") # NOQA | ||
|
||
|
||
def minio_get_backup_blocks_dir(volume_name): | ||
|
@@ -383,7 +413,7 @@ def backupstore_create_file(client, core_api, file_path, data={}): | |
return nfs_create_file_in_backupstore(file_path, data={}) | ||
|
||
else: | ||
raise NotImplementedError | ||
pytest.skip("Skip test case because the backup store type is not supported") # NOQA | ||
|
||
|
||
def mino_create_file_in_backupstore(client, core_api, file_path, data={}): # NOQA | ||
|
@@ -433,7 +463,7 @@ def backupstore_write_backup_cfg_file(client, core_api, volume_name, backup_name | |
data) | ||
|
||
else: | ||
raise NotImplementedError | ||
pytest.skip("Skip test case because the backup store type is not supported") # NOQA | ||
|
||
|
||
def nfs_write_backup_cfg_file(client, volume_name, backup_name, data): | ||
|
@@ -481,7 +511,7 @@ def backupstore_delete_file(client, core_api, file_path): | |
return nfs_delete_file_in_backupstore(file_path) | ||
|
||
else: | ||
raise NotImplementedError | ||
pytest.skip("Skip test case because the backup store type is not supported") # NOQA | ||
|
||
|
||
def mino_delete_file_in_backupstore(client, core_api, file_path): | ||
|
@@ -521,7 +551,7 @@ def backupstore_delete_backup_cfg_file(client, core_api, volume_name, backup_nam | |
nfs_delete_backup_cfg_file(client, volume_name, backup_name) | ||
|
||
else: | ||
raise NotImplementedError | ||
pytest.skip("Skip test case because the backup store type is not supported") # NOQA | ||
|
||
|
||
def nfs_delete_backup_cfg_file(client, volume_name, backup_name): | ||
|
@@ -563,7 +593,7 @@ def backupstore_delete_volume_cfg_file(client, core_api, volume_name): # NOQA | |
nfs_delete_volume_cfg_file(client, volume_name) | ||
|
||
else: | ||
raise NotImplementedError | ||
pytest.skip("Skip test case because the backup store type is not supported") # NOQA | ||
|
||
|
||
def nfs_delete_volume_cfg_file(client, volume_name): | ||
|
@@ -632,7 +662,7 @@ def backupstore_delete_random_backup_block(client, core_api, volume_name): | |
nfs_delete_random_backup_block(client, volume_name) | ||
|
||
else: | ||
raise NotImplementedError | ||
pytest.skip("Skip test case because the backup store type is not supported") # NOQA | ||
|
||
|
||
def nfs_delete_random_backup_block(client, volume_name): | ||
|
@@ -681,7 +711,7 @@ def backupstore_count_backup_block_files(client, core_api, volume_name): | |
return nfs_count_backup_block_files(client, volume_name) | ||
|
||
else: | ||
raise NotImplementedError | ||
pytest.skip("Skip test case because the backup store type is not supported") # NOQA | ||
|
||
|
||
def nfs_count_backup_block_files(client, volume_name): | ||
|
@@ -1409,6 +1409,10 @@ def test_all_replica_restore_failure(set_random_backupstore, client, core_api, v | |
15. Verify the faulted volume cannot be attached to a node. | ||
16. Verify this faulted volume can be deleted. | ||
""" | ||
backup_store_type = set_random_backupstore | ||
if backup_store_type not in ["nfs", "s3"]: | ||
pytest.skip("Skip test case because the backup store type is not supported") # NOQA | ||
|
||
Comment on lines +1412 to +1415
🛠️ Refactor suggestion: Use

Inside the function

Apply this diff:
- backup_store_type = set_random_backupstore
- if backup_store_type not in ["nfs", "s3"]:
- pytest.skip("Skip test case because the backup store type is not supported") # NOQA
+@pytest.mark.skipif(set_random_backupstore not in ["nfs", "s3"], reason="Backup store type not supported")
def test_all_replica_restore_failure(set_random_backupstore, client, core_api, volume_name, csi_pv, pvc, pod_make): # NOQA
|
||
auto_salvage_setting = client.by_id_setting(SETTING_AUTO_SALVAGE) | ||
assert auto_salvage_setting.name == SETTING_AUTO_SALVAGE | ||
assert auto_salvage_setting.value == "true" | ||
|
@@ -1,6 +1,34 @@ | ||||||||||||||||||||||||||||||||||||||||||||
install_backupstores(){ | ||||||||||||||||||||||||||||||||||||||||||||
MINIO_BACKUPSTORE_URL="https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/minio-backupstore.yaml" | ||||||||||||||||||||||||||||||||||||||||||||
NFS_BACKUPSTORE_URL="https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/nfs-backupstore.yaml" | ||||||||||||||||||||||||||||||||||||||||||||
CIFS_BACKUPSTORE_URL="https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/cifs-backupstore.yaml" | ||||||||||||||||||||||||||||||||||||||||||||
AZURITE_BACKUPSTORE_URL="https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/azurite-backupstore.yaml" | ||||||||||||||||||||||||||||||||||||||||||||
kubectl create -f ${MINIO_BACKUPSTORE_URL} \ | ||||||||||||||||||||||||||||||||||||||||||||
-f ${NFS_BACKUPSTORE_URL} | ||||||||||||||||||||||||||||||||||||||||||||
-f ${NFS_BACKUPSTORE_URL} \ | ||||||||||||||||||||||||||||||||||||||||||||
-f ${CIFS_BACKUPSTORE_URL} \ | ||||||||||||||||||||||||||||||||||||||||||||
-f ${AZURITE_BACKUPSTORE_URL} | ||||||||||||||||||||||||||||||||||||||||||||
setup_azuitize_backup_store | ||||||||||||||||||||||||||||||||||||||||||||
} | ||||||||||||||||||||||||||||||||||||||||||||
Comment on lines +4 to +11
Fix typo in function name: "azuitize" should be "azurite". The function name contains a typo that should be corrected for consistency with the rest of the codebase and documentation. Apply this diff to fix the typo:
- setup_azuitize_backup_store
+ setup_azurite_backup_store
||||||||||||||||||||||||||||||||||||||||||||
|
||||||||||||||||||||||||||||||||||||||||||||
setup_azuitize_backup_store(){ | ||||||||||||||||||||||||||||||||||||||||||||
RETRY=0 | ||||||||||||||||||||||||||||||||||||||||||||
MAX_RETRY=60 | ||||||||||||||||||||||||||||||||||||||||||||
until (kubectl get pods | grep 'longhorn-test-azblob' | grep 'Running'); do | ||||||||||||||||||||||||||||||||||||||||||||
echo 'Waiting azurite pod running' | ||||||||||||||||||||||||||||||||||||||||||||
sleep 5 | ||||||||||||||||||||||||||||||||||||||||||||
if [ $RETRY -eq $MAX_RETRY ]; then | ||||||||||||||||||||||||||||||||||||||||||||
break | ||||||||||||||||||||||||||||||||||||||||||||
fi | ||||||||||||||||||||||||||||||||||||||||||||
RETRY=$((RETRY+1)) | ||||||||||||||||||||||||||||||||||||||||||||
done | ||||||||||||||||||||||||||||||||||||||||||||
Comment on lines +14 to +23
🛠️ Refactor suggestion: Enhance retry mechanism with proper error handling. The current retry implementation has several areas for improvement:
Consider this improved implementation:
-  RETRY=0
-  MAX_RETRY=60
+  readonly AZURITE_POD_WAIT_SECONDS=300
+  readonly AZURITE_CHECK_INTERVAL=5
+  end_time=$(($(date +%s) + ${AZURITE_POD_WAIT_SECONDS}))
   until (kubectl get pods | grep 'longhorn-test-azblob' | grep 'Running'); do
     echo 'Waiting azurite pod running'
-    sleep 5
-    if [ $RETRY -eq $MAX_RETRY ]; then
-      break
-    fi
-    RETRY=$((RETRY+1))
+    sleep ${AZURITE_CHECK_INTERVAL}
+    if [ $(date +%s) -gt $end_time ]; then
+      echo "Error: Timeout waiting for Azurite pod to be ready"
+      exit 1
+    fi
   done
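As a side note, a kubectl-native wait could replace the hand-rolled polling loop entirely. This is a sketch only; the label selector is an assumption and should be checked against the azurite-backupstore manifest:

```
# Sketch: let kubectl do the polling instead of a manual retry loop.
# The selector app=longhorn-test-azblob is hypothetical; use the labels
# actually set in azurite-backupstore.yaml.
kubectl wait pod \
  --selector=app=longhorn-test-azblob \
  --for=condition=Ready \
  --timeout=300s
```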
||||||||||||||||||||||||||||||||||||||||||||
|
||||||||||||||||||||||||||||||||||||||||||||
AZBLOB_ENDPOINT=$(echo -n "http://$(kubectl get svc azblob-service -o jsonpath='{.spec.clusterIP}'):10000/" | base64) | ||||||||||||||||||||||||||||||||||||||||||||
kubectl -n longhorn-system patch secret azblob-secret \ | ||||||||||||||||||||||||||||||||||||||||||||
--type=json \ | ||||||||||||||||||||||||||||||||||||||||||||
-p="[{'op': 'replace', 'path': '/data/AZBLOB_ENDPOINT', 'value': \"${AZBLOB_ENDPOINT}\"}]" | ||||||||||||||||||||||||||||||||||||||||||||
|
||||||||||||||||||||||||||||||||||||||||||||
CONTROL_PLANE_PUBLIC_IP=$(cat /tmp/controlplane_public_ip) | ||||||||||||||||||||||||||||||||||||||||||||
# port forward and az container create need to be run on control node | ||||||||||||||||||||||||||||||||||||||||||||
ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "nohup kubectl port-forward --address 0.0.0.0 service/azblob-service 20001:10000 > /dev/null 2>&1 &" | ||||||||||||||||||||||||||||||||||||||||||||
Do you know why we need to ssh into the control plane to run the

It should be able to perform the remote create, but since we need to ssh into the control plane node to perform the port forward (the workaround to set up the azurite backupstore), using one ssh command to create the az storage on the control plane node directly is more efficient. I'd love to hear your thoughts on this, thank you.

Isn't the port-forward workaround using a
Additionally, the username

Yes, I did a try: if the port forward is running on the jenkins agent, the backup store can not be accessed. For the local port description, it actually means the kubernetes node; I have updated the confluence content. And the
||||||||||||||||||||||||||||||||||||||||||||
ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "az storage container create -n longhorn-test-azurite --connection-string 'DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://0.0.0.0:20001/devstoreaccount1;'" | ||||||||||||||||||||||||||||||||||||||||||||
Comment on lines +32 to +33
Security concerns in Azurite setup.

Example improvement for port forwarding:
- ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "nohup kubectl port-forward --address 0.0.0.0 service/azblob-service 20001:10000 > /dev/null 2>&1 &"
+ ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "nohup kubectl port-forward service/azblob-service 20001:10000 > /dev/null 2>&1 &"

Example improvement for credentials:
- ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "az storage container create -n longhorn-test-azurite --connection-string 'DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://0.0.0.0:20001/devstoreaccount1;'"
+ CONNECTION_STRING=$(kubectl get secret azurite-credentials -n longhorn-system -o jsonpath='{.data.connection-string}' | base64 -d)
+ ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "az storage container create -n longhorn-test-azurite --connection-string \"\${CONNECTION_STRING}\""
||||||||||||||||||||||||||||||||||||||||||||
} |
@@ -1,5 +1,7 @@ | ||||||||||||||||||
S3_BACKUP_STORE='s3://backupbucket@us-east-1/backupstore$minio-secret' | ||||||||||||||||||
NFS_BACKUP_STORE='nfs://longhorn-test-nfs-svc.default:/opt/backupstore' | ||||||||||||||||||
CIFS_BACKUP_STORE='cifs://longhorn-test-cifs-svc.default/backupstore$cifs-secret' | ||||||||||||||||||
AZURITE_BACKUP_STORE='azblob://[email protected]/$azblob-secret' | ||||||||||||||||||
|
||||||||||||||||||
run_longhorn_e2e_test(){ | ||||||||||||||||||
|
||||||||||||||||||
|
@@ -22,6 +24,10 @@ run_longhorn_e2e_test(){ | |||||||||||||||||
yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${S3_BACKUP_STORE}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH} | ||||||||||||||||||
elif [[ $BACKUP_STORE_TYPE = "nfs" ]]; then | ||||||||||||||||||
yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${NFS_BACKUP_STORE}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH} | ||||||||||||||||||
elif [[ $BACKUP_STORE_TYPE = "cifs" ]]; then | ||||||||||||||||||
yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${CIFS_BACKUP_STORE}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH} | ||||||||||||||||||
Comment on lines +27 to +28
💡 Codebase verification: The yq command updates the wrong environment variable name. The script modifies

Consider updating the yq command to:
yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[] |= select(.name == "LONGHORN_BACKUPSTORE").value = "'${CIFS_BACKUP_STORE}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}

🔗 Analysis chain: Verify environment variable ordering in test manifest. The implementation assumes the backup store environment variable is at index 1 in the container spec. This implicit ordering dependency could be fragile. Let's verify the environment variable structure. Let me try a different approach to verify the environment variables structure in the test manifest. This will help us:

Environment variable ordering is safe but variable name differs. The review concern about environment variable ordering can be resolved, but there's a different issue. In both test manifests, the backup store environment variable is indeed at index 1, but the variable names are different:

Let's verify which manifest file is actually being used in the script:

🏁 Scripts executed — the following scripts were executed for the analysis:

Script:
#!/bin/bash
# Check the environment variables structure in the test manifest
echo "Checking environment variables in test manifest..."
yq e '.spec.containers[0].env' e2e/deploy/test.yaml

Length of output: 263

Script:
#!/bin/bash
# First, let's locate the test manifest file
fd -e yaml -e yml test.yaml
# Then let's examine its content focusing on environment variables
for file in $(fd -e yaml -e yml test.yaml); do
echo "=== Examining $file ==="
cat "$file"
done

Length of output: 7419

Script:
#!/bin/bash
# Check which manifest file is referenced in the script
rg "LONGHORN_TESTS_MANIFEST_FILE_PATH.*=.*test.yaml" pipelines/utilities/run_longhorn_e2e_test.sh

Length of output: 158
||||||||||||||||||
elif [[ $BACKUP_STORE_TYPE = "azurite" ]]; then | ||||||||||||||||||
yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${AZURITE_BACKUP_STORE}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH} | ||||||||||||||||||
Comment on lines +27 to +30
Use name-based selection for environment variables. The current implementation uses index-based access (

Apply this diff to fix the environment variable selection:
- yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${CIFS_BACKUP_STORE}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
+ yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[] |= select(.name == "LONGHORN_BACKUPSTORE").value = "'${CIFS_BACKUP_STORE}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
- yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${AZURITE_BACKUP_STORE}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
+ yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[] |= select(.name == "LONGHORN_BACKUPSTORE").value = "'${AZURITE_BACKUP_STORE}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}

This change also needs to be applied to the existing S3 and NFS cases for consistency.
||||||||||||||||||
fi | ||||||||||||||||||
|
||||||||||||||||||
if [[ "${TF_VAR_use_hdd}" == true ]]; then | ||||||||||||||||||
|
@@ -75,6 +81,10 @@ run_longhorn_e2e_test_out_of_cluster(){ | |||||||||||||||||
LONGHORN_BACKUPSTORES=${S3_BACKUP_STORE} | ||||||||||||||||||
elif [[ $BACKUP_STORE_TYPE = "nfs" ]]; then | ||||||||||||||||||
LONGHORN_BACKUPSTORES=${NFS_BACKUP_STORE} | ||||||||||||||||||
elif [[ $BACKUP_STORE_TYPE = "cifs" ]]; then | ||||||||||||||||||
LONGHORN_BACKUPSTORES=${CIFS_BACKUP_STORE} | ||||||||||||||||||
elif [[ $BACKUP_STORE_TYPE = "azurite" ]]; then | ||||||||||||||||||
LONGHORN_BACKUPSTORES=${AZURITE_BACKUP_STORE} | ||||||||||||||||||
fi | ||||||||||||||||||
LONGHORN_BACKUPSTORE_POLL_INTERVAL="30" | ||||||||||||||||||
|
||||||||||||||||||
|
@@ -24,6 +24,12 @@ run_longhorn_test(){ | |||||||||||||||||||||||||||||||||||||||||||
elif [[ $BACKUP_STORE_TYPE = "nfs" ]]; then | ||||||||||||||||||||||||||||||||||||||||||||
BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $2}' | sed 's/ *//'` | ||||||||||||||||||||||||||||||||||||||||||||
yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH} | ||||||||||||||||||||||||||||||||||||||||||||
elif [[ $BACKUP_STORE_TYPE = "cifs" ]]; then | ||||||||||||||||||||||||||||||||||||||||||||
BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $3}' | sed 's/ *//'` | ||||||||||||||||||||||||||||||||||||||||||||
yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH} | ||||||||||||||||||||||||||||||||||||||||||||
Comment on lines +27 to +29
🛠️ Refactor suggestion: Consider adding validation for the CIFS backup store value. The implementation follows the existing pattern for S3 and NFS, but consider adding validation to ensure the extracted CIFS value exists and is valid. This would prevent silent failures if the third comma-separated value is missing or malformed.

elif [[ $BACKUP_STORE_TYPE = "cifs" ]]; then
BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $3}' | sed 's/ *//'`
+ if [[ -z "${BACKUP_STORE_FOR_TEST}" ]]; then
+ echo "Error: CIFS backup store value not found in env[1].value" >&2
+ exit 1
+ fi
yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
||||||||||||||||||||||||||||||||||||||||||||
elif [[ $BACKUP_STORE_TYPE = "azurite" ]]; then | ||||||||||||||||||||||||||||||||||||||||||||
BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $4}' | sed 's/ *//'` | ||||||||||||||||||||||||||||||||||||||||||||
yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH} | ||||||||||||||||||||||||||||||||||||||||||||
Comment on lines +30 to +32
Fix inconsistent naming of "azurite". The PR title uses "azurtize" but the code uses "azurite". This inconsistency should be resolved by using the correct term "azurite" consistently throughout the PR.
Comment on lines +27 to +32
💡 Codebase verification: Based on the test manifest content, I can now generate the final response: Environment variable structure verified and issues found. The code assumes an incorrect environment variable name. In the manifest, the backup stores are defined in

Issues found:

🔗 Analysis chain: Verify the environment variable structure. The code assumes a specific order and number of comma-separated values in

Also applies to: 116-121

Let me try a different approach to verify the environment variable structure using

🏁 Scripts executed — the following scripts were executed for the analysis:

Script:
#!/bin/bash
# Description: Verify the structure of env[1].value in test manifests
# Expected: Should show the complete env[1].value string with all backup store values
echo "Checking test.yaml manifest:"
yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1]' "manager/integration/deploy/test.yaml"
echo -e "\nChecking upgrade_test.yaml manifest (if exists):"
find . -name "upgrade_test.yaml" -exec yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1]' {} \;
Length of output: 474

Script:
#!/bin/bash
# Search for test manifest files and show their content
echo "=== Content of test.yaml ==="
find . -name "test.yaml" -exec cat {} \;
echo -e "\n=== Content of upgrade_test.yaml ==="
find . -name "upgrade_test.yaml" -exec cat {} \;
# Also search for any references to backup store configuration
echo -e "\n=== References to backup store configuration ==="
rg -A 5 -B 5 "BACKUP_STORE" --type yaml
Length of output: 7402

🛠️ Refactor suggestion: Consider using associative arrays for backup store types. The current implementation using hardcoded array indices (1,2,3,4) is brittle and error-prone. Consider using an associative array to map backup store types to their respective positions:

+declare -A STORE_POSITIONS=(
+ ["s3"]=1
+ ["nfs"]=2
+ ["cifs"]=3
+ ["azurite"]=4
+)
if [[ $BACKUP_STORE_TYPE = "s3" ]]; then
- BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $1}' | sed 's/ *//'`
+ position="${STORE_POSITIONS[$BACKUP_STORE_TYPE]}"
+ BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $'${position}'}' | sed 's/ *//'`
||||||||||||||||||||||||||||||||||||||||||||
fi | ||||||||||||||||||||||||||||||||||||||||||||
|
||||||||||||||||||||||||||||||||||||||||||||
if [[ "${TF_VAR_use_hdd}" == true ]]; then | ||||||||||||||||||||||||||||||||||||||||||||
|
@@ -107,6 +113,12 @@ run_longhorn_upgrade_test(){ | |||||||||||||||||||||||||||||||||||||||||||
elif [[ $BACKUP_STORE_TYPE = "nfs" ]]; then | ||||||||||||||||||||||||||||||||||||||||||||
BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $2}' | sed 's/ *//'` | ||||||||||||||||||||||||||||||||||||||||||||
yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH} | ||||||||||||||||||||||||||||||||||||||||||||
elif [[ $BACKUP_STORE_TYPE = "cifs" ]]; then | ||||||||||||||||||||||||||||||||||||||||||||
BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $3}' | sed 's/ *//'` | ||||||||||||||||||||||||||||||||||||||||||||
yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH} | ||||||||||||||||||||||||||||||||||||||||||||
Comment on lines +116 to +118
🛠️ Refactor suggestion: Consider extracting backup store logic to a shared function. The CIFS implementation is identical in both

+get_backup_store_value() {
+ local manifest_file="$1"
+ local store_type="$2"
+ local value_index
+
+ case "${store_type}" in
+ "s3") value_index=1 ;;
+ "nfs") value_index=2 ;;
+ "cifs") value_index=3 ;;
+ *) echo "Error: Unknown backup store type: ${store_type}" >&2; exit 1 ;;
+ esac
+
+ local store_value=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' "${manifest_file}" | awk -F ',' '{print $'${value_index}'}' | sed 's/ *//'`
+ if [[ -z "${store_value}" ]]; then
+ echo "Error: ${store_type} backup store value not found in env[1].value" >&2
+ exit 1
+ fi
+ echo "${store_value}"
+}
+
+set_backup_store_value() {
+ local manifest_file="$1"
+ local store_value="$2"
+ yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${store_value}'"' "${manifest_file}"
+}
run_longhorn_test(){
# ... existing code ...
if [[ $BACKUP_STORE_TYPE = "s3" || $BACKUP_STORE_TYPE = "nfs" || $BACKUP_STORE_TYPE = "cifs" ]]; then
- BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $3}' | sed 's/ *//'`
- yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
+ BACKUP_STORE_FOR_TEST=$(get_backup_store_value "${LONGHORN_TESTS_MANIFEST_FILE_PATH}" "${BACKUP_STORE_TYPE}")
+ set_backup_store_value "${LONGHORN_TESTS_MANIFEST_FILE_PATH}" "${BACKUP_STORE_FOR_TEST}"
fi
# ... rest of the function ...
}
run_longhorn_upgrade_test(){
# ... existing code ...
if [[ $BACKUP_STORE_TYPE = "s3" || $BACKUP_STORE_TYPE = "nfs" || $BACKUP_STORE_TYPE = "cifs" ]]; then
- BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $3}' | sed 's/ *//'`
- yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH}
+ BACKUP_STORE_FOR_TEST=$(get_backup_store_value "${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH}" "${BACKUP_STORE_TYPE}")
+ set_backup_store_value "${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH}" "${BACKUP_STORE_FOR_TEST}"
fi
# ... rest of the function ...
}
|
||||||||||||||||||||||||||||||||||||||||||||
elif [[ $BACKUP_STORE_TYPE = "azurite" ]]; then | ||||||||||||||||||||||||||||||||||||||||||||
BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $4}' | sed 's/ *//'` | ||||||||||||||||||||||||||||||||||||||||||||
yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH} | ||||||||||||||||||||||||||||||||||||||||||||
fi | ||||||||||||||||||||||||||||||||||||||||||||
|
||||||||||||||||||||||||||||||||||||||||||||
yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[4].value="'${LONGHORN_UPGRADE_TYPE}'"' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH} | ||||||||||||||||||||||||||||||||||||||||||||
|
🛠️ Refactor suggestion
Documentation needs improvement for external users.
The documentation has several areas that need attention:
Consider adding:
🧰 Tools: 🪛 Markdownlint
19-19: Bare URL used (MD034, no-bare-urls)