ci: support cifs and azurite on pytest and e2e #2155

Open · wants to merge 2 commits into base: master
8 changes: 6 additions & 2 deletions e2e/README.md
```diff
@@ -14,10 +14,14 @@ curl -sSfL https://raw.githubusercontent.com/longhorn/longhorn/master/scripts/en

 ### Run the test

-1. Deploy all backupstore servers (including `NFS` server and `Minio` as s3 server) for test purposes.
+1. Deploy all backupstore servers (including `NFS` server and `Minio` as s3 server, `CIFS` and `Azurite` server) for test purposes.
+
+   For Azurite, there are some manual steps need to be done after manifest deployed(https://confluence.suse.com/display/LON/Setup+Azurite+Backupstore+For+Testing).
```
Comment on lines +17 to +19

🛠️ Refactor suggestion

Documentation needs improvement for external users.

The documentation has several areas that need attention:

  1. The Confluence URL is internal and not accessible to external users. Consider documenting the Azurite setup steps directly in the README or in public documentation.
  2. Similar to Azurite, please document any specific setup requirements or prerequisites for CIFS.

Consider adding:

```diff
 1. Deploy all backupstore servers (including `NFS` server and `Minio` as s3 server, `CIFS` and `Azurite` server) for test purposes.

-   For Azurite, there are some manual steps need to be done after manifest deployed(https://confluence.suse.com/display/LON/Setup+Azurite+Backupstore+For+Testing).
+   ### Prerequisites
+   - For CIFS:
+     - <document CIFS requirements>
+   - For Azurite:
+     1. <document step 1>
+     2. <document step 2>
+     ...
```
Committable suggestion skipped: line range outside the PR's diff.

🧰 Tools
🪛 Markdownlint

19-19: null
Bare URL used

(MD034, no-bare-urls)

```diff
 kubectl create -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/minio-backupstore.yaml \
-  -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/nfs-backupstore.yaml
+  -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/nfs-backupstore.yaml \
+  -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/cifs-backupstore.yaml \
+  -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/azurite-backupstore.yaml
```

1. Expose Longhorn API:
8 changes: 6 additions & 2 deletions manager/integration/README.md
```diff
@@ -16,10 +16,14 @@ Requirement:
 6. Make sure `nfs-common` or equivalent has been installed on the node to allow the NFS client to work.

 Run the test:
-1. Deploy all backupstore servers(including `NFS` server and `Minio` as s3 server) for test purposes.
+1. Deploy all backupstore servers(including `NFS` server and `Minio` as s3 server `CIFS` and `Azurite` server) for test purposes.
+
+   For Azurite, there are some manual steps need to be done after manifest deployed(https://confluence.suse.com/display/LON/Setup+Azurite+Backupstore+For+Testing).
```
```diff
 kubectl create -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/minio-backupstore.yaml \
-  -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/nfs-backupstore.yaml
+  -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/nfs-backupstore.yaml \
+  -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/cifs-backupstore.yaml \
+  -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/azurite-backupstore.yaml
```

Comment on lines +24 to +26

💡 Codebase verification

The referenced manifest files are not available in the repository

The verification shows that:

  • The deploy/backupstores directory does not exist in the repository
  • The manifest files cifs-backupstore.yaml and azurite-backupstore.yaml are not found anywhere in the codebase
  • The README references files from the master branch of the repository through raw GitHub URLs, but these files should exist in the repository itself

🔗 Analysis chain

Verify the existence of new manifest files.

Let's ensure that the CIFS and Azurite manifest files are available in the repository.

Let me try another verification approach by checking the deploy/backupstores directory specifically and examining its contents.

🏁 Scripts executed

The following scripts were executed for the analysis:

```
#!/bin/bash
# Description: Check if the new manifest files exist in the repository
# Expected: Both manifest files should be found

echo "Checking for CIFS and Azurite manifest files..."
fd -t f "cifs-backupstore.yaml|azurite-backupstore.yaml"
```

Length of output: 163

```
#!/bin/bash
# Check the contents of deploy/backupstores directory
echo "Checking deploy/backupstores directory..."
fd . "deploy/backupstores" -t f

# Also check if these files exist in any other location
echo -e "\nChecking entire repository for these manifest files..."
rg -l "kind: Deployment.*[cifs|azurite].*backupstore"
```

Length of output: 412

2. Deploy the test script to the Kubernetes cluster.
2 changes: 1 addition & 1 deletion manager/integration/deploy/test.yaml
```diff
@@ -42,7 +42,7 @@ spec:
         - name: LONGHORN_JUNIT_REPORT_PATH
           value: /tmp/test-report/longhorn-test-junit-report.xml
         - name: LONGHORN_BACKUPSTORES
-          value: "s3://backupbucket@us-east-1/backupstore$minio-secret, nfs://longhorn-test-nfs-svc.default:/opt/backupstore"
+          value: "s3://backupbucket@us-east-1/backupstore$minio-secret, nfs://longhorn-test-nfs-svc.default:/opt/backupstore, cifs://longhorn-test-cifs-svc.default/backupstore$cifs-secret, azblob://[email protected]/$azblob-secret"
         - name: LONGHORN_BACKUPSTORE_POLL_INTERVAL
           value: "30"
         - name: LONGHORN_DISK_TYPE
```
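The `LONGHORN_BACKUPSTORES` value is a comma-separated list in which each target URL may be followed by `$` and a credential secret name, the same `split("$")` convention the test code applies. A minimal, hypothetical parser sketch (the `BackupStore` type and `parse_backupstores` helper are illustrative, not part of the Longhorn codebase):

```python
from typing import NamedTuple, Optional


class BackupStore(NamedTuple):
    url: str
    secret: Optional[str]


def parse_backupstores(value):
    """Split a LONGHORN_BACKUPSTORES-style string into (url, secret) pairs."""
    stores = []
    for entry in value.split(","):
        entry = entry.strip()
        if not entry:
            continue
        # "url$secret" carries a credential secret after "$"; a plain "url" has none
        url, sep, secret = entry.partition("$")
        stores.append(BackupStore(url, secret if sep else None))
    return stores


value = ("s3://backupbucket@us-east-1/backupstore$minio-secret, "
         "nfs://longhorn-test-nfs-svc.default:/opt/backupstore, "
         "cifs://longhorn-test-cifs-svc.default/backupstore$cifs-secret")
stores = parse_backupstores(value)
print(stores[0].secret)  # minio-secret
print(stores[1].secret)  # None
```

The NFS entry has no `$`, so it carries no secret, matching how only the s3, cifs, and azblob setup paths call `set_backupstore_credential_secret`.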
54 changes: 42 additions & 12 deletions manager/integration/tests/backupstore.py
```diff
@@ -17,6 +17,8 @@
 from common import cleanup_all_volumes
 from common import is_backupTarget_s3
 from common import is_backupTarget_nfs
+from common import is_backupTarget_cifs
+from common import is_backupTarget_azurite
 from common import get_longhorn_api_client
 from common import delete_backup_volume
 from common import delete_backup_backing_image
@@ -64,8 +66,12 @@ def set_random_backupstore(request, client):
     elif request.param == "nfs":
         set_backupstore_nfs(client)
         mount_nfs_backupstore(client)
+    elif request.param == "cifs":
+        set_backupstore_cifs(client)
+    elif request.param == "azblob":
+        set_backupstore_azurite(client)

-    yield
+    yield request.param
     cleanup_all_volumes(client)
     backupstore_cleanup(client)
     system_backups_cleanup(client)
@@ -116,6 +122,30 @@ def set_backupstore_nfs(client):
             break
```

Member:

If I run other backup-related test cases that manipulate the backup store, like test_backup_volume_list, test_backup_block_deletion, test_backup_metadata_deletion, test_all_replica_restore_failure, etc., will they pass? If not, should they be skipped?

Member:

This is exactly what I'm concerned about: #2155 (comment)

Contributor Author @chriscchien Nov 7, 2024:

Updated: changed `raise NotImplementedError` to `pytest.skip("Skip test case because the backup store type is not supported") # NOQA` in the related functions and test cases, so the teardown function can still run after the skip:

  • test_backup_block_deletion
  • test_backup_metadata_deletion
  • test_backup_volume_list
  • test_all_replica_restore_failure


```python
def set_backupstore_cifs(client):
    backupstores = get_backupstore_url()
    poll_interval = get_backupstore_poll_interval()
    for backupstore in backupstores:
        if is_backupTarget_cifs(backupstore):
            backupsettings = backupstore.split("$")
            set_backupstore_url(client, backupsettings[0])
            set_backupstore_credential_secret(client, backupsettings[1])
            set_backupstore_poll_interval(client, poll_interval)
            break
```

Comment on lines +125 to +135

💡 Codebase verification

Missing CIFS implementations confirmed for backup store operations

The codebase has several functions that handle backup operations with explicit implementations for S3 and NFS but raise NotImplementedError for other types. While CIFS support is partially implemented (detection and basic setup), the following functions need CIFS-specific implementations:

  • backupstore_get_backup_volume_prefix
  • backupstore_get_backup_cfg_file_path
  • backupstore_get_volume_cfg_file_path
  • backupstore_get_backup_blocks_dir
  • backupstore_create_file_in_backupstore
  • backupstore_delete_file_in_backupstore
  • backupstore_delete_backup_cfg_file
  • backupstore_delete_volume_cfg_file
  • backupstore_delete_random_backup_block
  • backupstore_count_backup_block_files

Each of these functions currently has an if-elif block for S3 and NFS but falls through to raise NotImplementedError, which will break CIFS backup operations.

🔗 Analysis chain

Implement CIFS-specific functions for backup operations.

Several functions in the file handle different backup store types but lack CIFS implementations. This could lead to NotImplementedError when these functions are called with CIFS backup store. Consider implementing CIFS-specific versions of the following functions:

  1. backupstore_get_backup_cfg_file_path
  2. backupstore_get_volume_cfg_file_path
  3. backupstore_get_backup_blocks_dir
  4. backupstore_create_file
  5. backupstore_delete_file
  6. Other similar functions that have S3 and NFS implementations

Let's verify the functions that need CIFS implementation:

Consider creating a backup store interface or abstract class to ensure consistent implementation across different backup store types.

🏁 Scripts executed

The following scripts were executed for the analysis:

```
#!/bin/bash
# Find all functions that handle different backup store types
echo "Functions potentially needing CIFS implementation:"
rg "if is_backupTarget_s3.*elif is_backupTarget_nfs.*else.*raise NotImplementedError" -A 1
```

Length of output: 203

```
#!/bin/bash
# Let's search for functions that handle backup store types with a broader pattern
echo "Functions handling backup store types:"
rg "is_backupTarget_(s3|nfs|cifs)" -B 2 -A 5

# Also search for NotImplementedError to find potential gaps
echo -e "\nFunctions with NotImplementedError:"
rg "NotImplementedError.*backup" -B 2 -A 2

# Look for existing CIFS implementations
echo -e "\nExisting CIFS implementations:"
rg "is_backupTarget_cifs" -B 2 -A 2
```

Length of output: 19333
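The backup store interface the review suggests could look like the following sketch. All class, method, and path names here are illustrative, not taken from the Longhorn codebase; the point is that scheme-based dispatch to one driver object per backend replaces the repeated `if is_backupTarget_s3 / elif is_backupTarget_nfs / else raise NotImplementedError` chains:

```python
from abc import ABC, abstractmethod


class BackupStoreDriver(ABC):
    """One driver per backup target scheme; lookups replace if/elif chains."""

    scheme = ""  # e.g. "s3://", "nfs://", "cifs://", "azblob://"

    @abstractmethod
    def backup_cfg_file_path(self, volume_name, backup_name):
        ...

    @abstractmethod
    def delete_file(self, file_path):
        ...


class NfsDriver(BackupStoreDriver):
    scheme = "nfs://"

    def backup_cfg_file_path(self, volume_name, backup_name):
        # hypothetical layout under the mounted NFS export
        return f"/opt/backupstore/volumes/{volume_name}/backups/backup_{backup_name}.cfg"

    def delete_file(self, file_path):
        pass  # would remove the file on the mounted NFS export


DRIVERS = [NfsDriver()]


def driver_for(target_url):
    """Dispatch on the URL scheme; unsupported schemes fail in one place."""
    for driver in DRIVERS:
        if target_url.startswith(driver.scheme):
            return driver
    raise NotImplementedError(target_url)
```

Adding CIFS support then means writing one `CifsDriver` subclass and registering it, instead of extending ten separate if/elif blocks.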


```python
def set_backupstore_azurite(client):
    backupstores = get_backupstore_url()
    poll_interval = get_backupstore_poll_interval()
    for backupstore in backupstores:
        if is_backupTarget_azurite(backupstore):
            backupsettings = backupstore.split("$")
            set_backupstore_url(client, backupsettings[0])
            set_backupstore_credential_secret(client, backupsettings[1])
            set_backupstore_poll_interval(client, poll_interval)
            break
```


```python
def set_backupstore_url(client, url):
    backup_target_setting = client.by_id_setting(SETTING_BACKUP_TARGET)
    backup_target_setting = client.update(backup_target_setting,
```

```diff
@@ -274,7 +304,7 @@ def backupstore_get_backup_volume_prefix(client, volume_name):
         return nfs_get_backup_volume_prefix(client, volume_name)

     else:
-        raise NotImplementedError
+        pytest.skip("Skip test case because the backup store type is not supported") # NOQA


 def minio_get_backup_volume_prefix(volume_name):
@@ -311,7 +341,7 @@ def backupstore_get_backup_cfg_file_path(client, volume_name, backup_name):
         return nfs_get_backup_cfg_file_path(client, volume_name, backup_name)

     else:
-        raise NotImplementedError
+        pytest.skip("Skip test case because the backup store type is not supported") # NOQA


 def minio_get_backup_cfg_file_path(volume_name, backup_name):
@@ -334,7 +364,7 @@ def backupstore_get_volume_cfg_file_path(client, volume_name):
         return nfs_get_volume_cfg_file_path(client, volume_name)

     else:
-        raise NotImplementedError
+        pytest.skip("Skip test case because the backup store type is not supported") # NOQA


 def nfs_get_volume_cfg_file_path(client, volume_name):
@@ -357,7 +387,7 @@ def backupstore_get_backup_blocks_dir(client, volume_name):
         return nfs_get_backup_blocks_dir(client, volume_name)

     else:
-        raise NotImplementedError
+        pytest.skip("Skip test case because the backup store type is not supported") # NOQA


 def minio_get_backup_blocks_dir(volume_name):
@@ -383,7 +413,7 @@ def backupstore_create_file(client, core_api, file_path, data={}):
         return nfs_create_file_in_backupstore(file_path, data={})

     else:
-        raise NotImplementedError
+        pytest.skip("Skip test case because the backup store type is not supported") # NOQA


 def mino_create_file_in_backupstore(client, core_api, file_path, data={}):  # NOQA
@@ -433,7 +463,7 @@ def backupstore_write_backup_cfg_file(client, core_api, volume_name, backup_name
                                       data)

     else:
-        raise NotImplementedError
+        pytest.skip("Skip test case because the backup store type is not supported") # NOQA


 def nfs_write_backup_cfg_file(client, volume_name, backup_name, data):
@@ -481,7 +511,7 @@ def backupstore_delete_file(client, core_api, file_path):
         return nfs_delete_file_in_backupstore(file_path)

     else:
-        raise NotImplementedError
+        pytest.skip("Skip test case because the backup store type is not supported") # NOQA


 def mino_delete_file_in_backupstore(client, core_api, file_path):
@@ -521,7 +551,7 @@ def backupstore_delete_backup_cfg_file(client, core_api, volume_name, backup_nam
         nfs_delete_backup_cfg_file(client, volume_name, backup_name)

     else:
-        raise NotImplementedError
+        pytest.skip("Skip test case because the backup store type is not supported") # NOQA


 def nfs_delete_backup_cfg_file(client, volume_name, backup_name):
@@ -563,7 +593,7 @@ def backupstore_delete_volume_cfg_file(client, core_api, volume_name):  # NOQA
         nfs_delete_volume_cfg_file(client, volume_name)

     else:
-        raise NotImplementedError
+        pytest.skip("Skip test case because the backup store type is not supported") # NOQA


 def nfs_delete_volume_cfg_file(client, volume_name):
@@ -632,7 +662,7 @@ def backupstore_delete_random_backup_block(client, core_api, volume_name):
         nfs_delete_random_backup_block(client, volume_name)

     else:
-        raise NotImplementedError
+        pytest.skip("Skip test case because the backup store type is not supported") # NOQA


 def nfs_delete_random_backup_block(client, volume_name):
@@ -681,7 +711,7 @@ def backupstore_count_backup_block_files(client, core_api, volume_name):
         return nfs_count_backup_block_files(client, volume_name)

     else:
-        raise NotImplementedError
+        pytest.skip("Skip test case because the backup store type is not supported") # NOQA


 def nfs_count_backup_block_files(client, volume_name):
```
8 changes: 8 additions & 0 deletions manager/integration/tests/common.py
```diff
@@ -3895,6 +3895,14 @@ def is_backupTarget_nfs(s):
     return s.startswith("nfs://")


+def is_backupTarget_cifs(s):
+    return s.startswith("cifs://")
+
+
+def is_backupTarget_azurite(s):
+    return s.startswith("azblob://")
+
+
 def wait_for_backup_volume(client, vol_name, backing_image=""):
     for _ in range(RETRY_BACKUP_COUNTS):
         bv = client.by_id_backupVolume(vol_name)
```
14 changes: 13 additions & 1 deletion manager/integration/tests/test_basic.py
```diff
@@ -673,6 +673,10 @@ def test_backup_block_deletion(set_random_backupstore, client, core_api, volume_
     17. Delete the backup volume
     18. Cleanup the volume
     """
+    backup_store_type = set_random_backupstore
+    if backup_store_type not in ["nfs", "s3"]:
+        pytest.skip("Skip test case because the backup store type is not supported") # NOQA
+
     backupstore_cleanup(client)

     volume = create_and_check_volume(client, volume_name)
@@ -1106,6 +1110,10 @@ def test_backup_volume_list(set_random_backupstore, client, core_api):  # NOQA
     11. delete backup volumes(1 & 2)
     12. cleanup
     """
+    backup_store_type = set_random_backupstore
+    if backup_store_type not in ["nfs", "s3"]:
+        pytest.skip("Skip test case because the backup store type is not supported") # NOQA
+
     backupstore_cleanup(client)

     # create 2 volumes.
@@ -1200,6 +1208,10 @@ def test_backup_metadata_deletion(set_random_backupstore, client, core_api, volu
     18. verify that volume(1) has been deleted in the backupstore.
     19. cleanup
     """
+    backup_store_type = set_random_backupstore
+    if backup_store_type not in ["nfs", "s3"]:
+        pytest.skip("Skip test case because the backup store type is not supported") # NOQA
+
     backupstore_cleanup(client)

     volume1_name = volume_name + "-1"
@@ -4392,7 +4404,7 @@ def test_backuptarget_available_during_engine_image_not_ready(client, apps_api):
         url = backupstore
         cred_secret = ""
     else:
-        raise NotImplementedError
+        pytest.skip("Skip test case because the backup store type is not supported") # NOQA

     poll_intervals = ["0", "300"]
     for poll_interval in poll_intervals:
```
4 changes: 4 additions & 0 deletions manager/integration/tests/test_ha.py
```diff
@@ -1409,6 +1409,10 @@ def test_all_replica_restore_failure(set_random_backupstore, client, core_api, v
     15. Verify the faulted volume cannot be attached to a node.
     16. Verify this faulted volume can be deleted.
     """
+    backup_store_type = set_random_backupstore
+    if backup_store_type not in ["nfs", "s3"]:
+        pytest.skip("Skip test case because the backup store type is not supported") # NOQA
+
```
Comment on lines +1412 to +1415

🛠️ Refactor suggestion

Use pytest.mark.skipif decorator for conditional test skipping

Inside the function test_all_replica_restore_failure, the test is conditionally skipped using pytest.skip. For better practice, use the @pytest.mark.skipif decorator at the function definition to conditionally skip the test based on the backup store type.

Apply this diff:

```diff
-    backup_store_type = set_random_backupstore
-    if backup_store_type not in ["nfs", "s3"]:
-        pytest.skip("Skip test case because the backup store type is not supported") # NOQA
+@pytest.mark.skipif(set_random_backupstore not in ["nfs", "s3"], reason="Backup store type not supported")
 def test_all_replica_restore_failure(set_random_backupstore, client, core_api, volume_name, csi_pv, pvc, pod_make):  # NOQA
```

Committable suggestion skipped: line range outside the PR's diff.
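One caveat with the decorator approach: `set_random_backupstore` is a fixture, and `skipif` conditions are evaluated at collection time, before fixtures resolve, so a decorator cannot see the fixture's yielded value. A minimal, hypothetical sketch of the in-body skip pattern the PR uses (fixture setup is elided; names mirror the PR but the bodies are illustrative):

```python
import pytest

SUPPORTED_BACKUPSTORES = ("s3", "nfs")  # types these legacy tests can manipulate directly


@pytest.fixture(params=["s3", "nfs", "cifs", "azblob"])
def set_random_backupstore(request):
    # real backup store setup elided; the fixture yields the active type,
    # as in the PR's `yield request.param` change
    yield request.param


def test_backup_block_deletion(set_random_backupstore):
    # fixture values are only known at run time, so the skip lives in the body
    if set_random_backupstore not in SUPPORTED_BACKUPSTORES:
        pytest.skip("backup store type is not supported")
    # ... the rest of the test would run only for s3/nfs ...
```

Skipping inside the body also lets the fixture's teardown (cleanup after `yield`) still run, which matches the author's note above about teardown working after the skip.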

```python
    auto_salvage_setting = client.by_id_setting(SETTING_AUTO_SALVAGE)
    assert auto_salvage_setting.name == SETTING_AUTO_SALVAGE
    assert auto_salvage_setting.value == "true"
```
30 changes: 29 additions & 1 deletion pipelines/utilities/install_backupstores.sh
```diff
@@ -1,6 +1,34 @@
 install_backupstores(){
   MINIO_BACKUPSTORE_URL="https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/minio-backupstore.yaml"
   NFS_BACKUPSTORE_URL="https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/nfs-backupstore.yaml"
+  CIFS_BACKUPSTORE_URL="https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/cifs-backupstore.yaml"
+  AZURITE_BACKUPSTORE_URL="https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/azurite-backupstore.yaml"
   kubectl create -f ${MINIO_BACKUPSTORE_URL} \
-    -f ${NFS_BACKUPSTORE_URL}
+    -f ${NFS_BACKUPSTORE_URL} \
+    -f ${CIFS_BACKUPSTORE_URL} \
+    -f ${AZURITE_BACKUPSTORE_URL}
+  setup_azuitize_backup_store
 }
```
Comment on lines +4 to +11

⚠️ Potential issue

Fix typo in function name: "azuitize" should be "azurite".

The function name contains a typo that should be corrected for consistency with the rest of the codebase and documentation.

Apply this diff to fix the typo:

```diff
-  setup_azuitize_backup_store
+  setup_azurite_backup_store
```
📝 Committable suggestion

‼️ IMPORTANT: carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change:

```
  CIFS_BACKUPSTORE_URL="https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/cifs-backupstore.yaml"
  AZURITE_BACKUPSTORE_URL="https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/azurite-backupstore.yaml"
  kubectl create -f ${MINIO_BACKUPSTORE_URL} \
    -f ${NFS_BACKUPSTORE_URL} \
    -f ${CIFS_BACKUPSTORE_URL} \
    -f ${AZURITE_BACKUPSTORE_URL}
  setup_azurite_backup_store
}
```


```
setup_azuitize_backup_store(){
  RETRY=0
  MAX_RETRY=60
  until (kubectl get pods | grep 'longhorn-test-azblob' | grep 'Running'); do
    echo 'Waiting azurite pod running'
    sleep 5
    if [ $RETRY -eq $MAX_RETRY ]; then
      break
    fi
    RETRY=$((RETRY+1))
  done
```
Comment on lines +14 to +23

🛠️ Refactor suggestion

Enhance retry mechanism with proper error handling.

The current retry implementation has several areas for improvement:

  1. No error handling when max retries are reached
  2. Magic numbers in retry constants

Consider this improved implementation:

```diff
-  RETRY=0
-  MAX_RETRY=60
+  readonly AZURITE_POD_WAIT_SECONDS=300
+  readonly AZURITE_CHECK_INTERVAL=5
+  end_time=$(($(date +%s) + ${AZURITE_POD_WAIT_SECONDS}))
   until (kubectl get pods | grep 'longhorn-test-azblob' | grep 'Running'); do
     echo 'Waiting azurite pod running'
-    sleep 5
-    if [ $RETRY -eq $MAX_RETRY ]; then
-      break
-    fi
-    RETRY=$((RETRY+1))
+    sleep ${AZURITE_CHECK_INTERVAL}
+    if [ $(date +%s) -gt $end_time ]; then
+      echo "Error: Timeout waiting for Azurite pod to be ready"
+      exit 1
+    fi
   done
```
📝 Committable suggestion

‼️ IMPORTANT: carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change:

```
  readonly AZURITE_POD_WAIT_SECONDS=300
  readonly AZURITE_CHECK_INTERVAL=5
  end_time=$(($(date +%s) + ${AZURITE_POD_WAIT_SECONDS}))
  until (kubectl get pods | grep 'longhorn-test-azblob' | grep 'Running'); do
    echo 'Waiting azurite pod running'
    sleep ${AZURITE_CHECK_INTERVAL}
    if [ $(date +%s) -gt $end_time ]; then
      echo "Error: Timeout waiting for Azurite pod to be ready"
      exit 1
    fi
  done
```
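The deadline-based wait in the suggestion can be exercised outside the cluster by swapping the kubectl check for any callable condition. A minimal, hypothetical sketch (`wait_until` and the simulated pod state are illustrative):

```python
import time


def wait_until(condition, timeout_seconds=300, check_interval=5.0):
    """Poll `condition` until it returns True or the deadline passes."""
    deadline = time.monotonic() + timeout_seconds
    while not condition():
        if time.monotonic() > deadline:
            raise TimeoutError("timed out waiting for condition")
        time.sleep(check_interval)


# stand-in for `kubectl get pods | grep Running`: state flips to ready on the 3rd poll
state = {"ready": False, "polls": 0}


def pod_is_running():
    state["polls"] += 1
    if state["polls"] >= 3:
        state["ready"] = True
    return state["ready"]


wait_until(pod_is_running, timeout_seconds=2, check_interval=0.01)
print(state["polls"])  # 3
```

Unlike a fixed retry counter that silently `break`s, the deadline form makes the failure mode explicit: callers either observe success or get a `TimeoutError` they must handle.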


```
  AZBLOB_ENDPOINT=$(echo -n "http://$(kubectl get svc azblob-service -o jsonpath='{.spec.clusterIP}'):10000/" | base64)
  kubectl -n longhorn-system patch secret azblob-secret \
    --type=json \
    -p="[{'op': 'replace', 'path': '/data/AZBLOB_ENDPOINT', 'value': \"${AZBLOB_ENDPOINT}\"}]"

  CONTROL_PLANE_PUBLIC_IP=$(cat /tmp/controlplane_public_ip)
  # port forward and az container create need to be run on control node
  ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "nohup kubectl port-forward --address 0.0.0.0 service/azblob-service 20001:10000 > /dev/null 2>&1 &"
```
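The `AZBLOB_ENDPOINT` patch works because Kubernetes secret `.data` values must be base64-encoded; the script pipes the endpoint URL through `base64` before patching. The encoding round-trip in isolation (the cluster IP here is a made-up example, not a value from the PR):

```python
import base64

cluster_ip = "10.43.0.10"  # hypothetical azblob-service clusterIP
endpoint = f"http://{cluster_ip}:10000/"

# mirror `echo -n "..." | base64` from the script (no trailing newline encoded)
encoded = base64.b64encode(endpoint.encode()).decode()
print(encoded)

# consumers of the secret decode it back to the endpoint URL
assert base64.b64decode(encoded).decode() == endpoint
```

Using `echo -n` in the script matters for the same reason the sketch encodes the bare string: a trailing newline would end up inside the decoded endpoint URL.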
Member:

Do you know why we need to ssh into the control plane to run the az command? Can it be directly run on the Jenkins agent like other kubectl commands to skip the ssh step?

Contributor Author:

It should be possible to perform the create remotely, but since we need to ssh into the control plane node to perform the port forward (the workaround to set up the Azurite backupstore), using one ssh command to create the az storage container on the control plane node directly is more efficient. I'd love to hear your thoughts on this, thank you.

Member:

Isn't the port-forward workaround using a local port? Not sure, but I assume that means your local machine rather than the control plane node? So do you mean that if we port-forward on the local machine, the workaround will not work?

> 3. Forward azblob-service
>    Forward local port 20001 traffic to port 10000 on the service

Additionally, the username ec2-user in ec2-user@${CONTROL_PLANE_PUBLIC_IP} limits the script to EC2 sles or sle-micro instances. It won't work on other distros like rocky, rhel, etc., or in other environments like a local VM or a harvester HAL cluster.

Contributor Author @chriscchien Nov 6, 2024:

Yes, I gave it a try: if the port forward runs on the Jenkins agent, the backup store cannot be accessed. As for the "local port" description, it actually means the Kubernetes node; I have updated the Confluence content.

Also, Azurite is currently supported on sles only in this PR; I can create another ticket to implement it on other distros.

```
  ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "az storage container create -n longhorn-test-azurite --connection-string 'DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://0.0.0.0:20001/devstoreaccount1;'"
```
Comment on lines +32 to +33

⚠️ Potential issue

Security concerns in Azurite setup.

  1. Port forwarding is exposed to all interfaces (0.0.0.0), which is broader than necessary. Consider limiting it to specific interfaces or using localhost.
  2. The connection string contains sensitive credentials in plaintext. Consider:
    • Moving credentials to a Kubernetes secret
    • Using environment variables
    • Using a more secure method to pass credentials

Example improvement for port forwarding:

```diff
-  ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "nohup kubectl port-forward --address 0.0.0.0 service/azblob-service 20001:10000 > /dev/null 2>&1 &"
+  ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "nohup kubectl port-forward service/azblob-service 20001:10000 > /dev/null 2>&1 &"
```

Example improvement for credentials:

```diff
-  ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "az storage container create -n longhorn-test-azurite --connection-string 'DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://0.0.0.0:20001/devstoreaccount1;'"
+  CONNECTION_STRING=$(kubectl get secret azurite-credentials -n longhorn-system -o jsonpath='{.data.connection-string}' | base64 -d)
+  ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "az storage container create -n longhorn-test-azurite --connection-string \"\${CONNECTION_STRING}\""
```

Committable suggestion skipped: line range outside the PR's diff.

}
10 changes: 10 additions & 0 deletions pipelines/utilities/run_longhorn_e2e_test.sh
```diff
@@ -1,5 +1,7 @@
 S3_BACKUP_STORE='s3://backupbucket@us-east-1/backupstore$minio-secret'
 NFS_BACKUP_STORE='nfs://longhorn-test-nfs-svc.default:/opt/backupstore'
+CIFS_BACKUP_STORE='cifs://longhorn-test-cifs-svc.default/backupstore$cifs-secret'
+AZURITE_BACKUP_STORE='azblob://[email protected]/$azblob-secret'

 run_longhorn_e2e_test(){
```

```diff
@@ -22,6 +24,10 @@ run_longhorn_e2e_test(){
     yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${S3_BACKUP_STORE}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
   elif [[ $BACKUP_STORE_TYPE = "nfs" ]]; then
     yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${NFS_BACKUP_STORE}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
+  elif [[ $BACKUP_STORE_TYPE = "cifs" ]]; then
+    yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${CIFS_BACKUP_STORE}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
```
Comment on lines +27 to +28

💡 Codebase verification

The yq command updates wrong environment variable name

The script modifies e2e/deploy/test.yaml which uses LONGHORN_BACKUPSTORE, but the CIFS modification is targeting the wrong index. The environment variable at index 1 is LONGHORN_BACKUPSTORE and should be selected by name rather than index for reliability.

Consider updating the yq command to:

```
yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[] |= select(.name == "LONGHORN_BACKUPSTORE").value = "'${CIFS_BACKUP_STORE}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
```
🔗 Analysis chain

Verify environment variable ordering in test manifest.

The implementation assumes the backup store environment variable is at index 1 in the container spec. This implicit ordering dependency could be fragile.

Let's verify the environment variable structure:


Let me try a different approach to verify the environment variables structure in the test manifest.

This will help us:

  1. Find the correct test manifest file location
  2. Examine its content to verify the environment variable structure
  3. Determine if the index-based access in the yq command is reliable

Environment variable ordering is safe but variable name differs

The review concern about environment variable ordering can be resolved, but there's a different issue. In both test manifests, the backup store environment variable is indeed at index 1, but the variable names are different:

  • e2e/deploy/test.yaml uses LONGHORN_BACKUPSTORE
  • manager/integration/deploy/test.yaml uses LONGHORN_BACKUPSTORES

Let's verify which manifest file is actually being used in the script:

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Check the environment variables structure in the test manifest
echo "Checking environment variables in test manifest..."
yq e '.spec.containers[0].env' e2e/deploy/test.yaml

Length of output: 263


Script:

#!/bin/bash
# First, let's locate the test manifest file
fd -e yaml -e yml test.yaml

# Then let's examine its content focusing on environment variables
for file in $(fd -e yaml -e yml test.yaml); do
    echo "=== Examining $file ==="
    cat "$file"
done

Length of output: 7419


Script:

#!/bin/bash
# Check which manifest file is referenced in the script
rg "LONGHORN_TESTS_MANIFEST_FILE_PATH.*=.*test.yaml" pipelines/utilities/run_longhorn_e2e_test.sh

Length of output: 158

elif [[ $BACKUP_STORE_TYPE = "azurite" ]]; then
yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${AZURITE_BACKUP_STORE}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
Comment on lines +27 to +30

⚠️ Potential issue

Use name-based selection for environment variables.

The current implementation uses index-based access (env[1]), which is fragile. As noted in previous reviews, the environment variable should be selected by name for reliability.

Apply this diff to fix the environment variable selection:

-    yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${CIFS_BACKUP_STORE}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
+    yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[] |= select(.name == "LONGHORN_BACKUPSTORE").value = "'${CIFS_BACKUP_STORE}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}

-    yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${AZURITE_BACKUP_STORE}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
+    yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[] |= select(.name == "LONGHORN_BACKUPSTORE").value = "'${AZURITE_BACKUP_STORE}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}

This change also needs to be applied to the existing S3 and NFS cases for consistency.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
elif [[ $BACKUP_STORE_TYPE = "cifs" ]]; then
yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${CIFS_BACKUP_STORE}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
elif [[ $BACKUP_STORE_TYPE = "azurite" ]]; then
yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${AZURITE_BACKUP_STORE}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
elif [[ $BACKUP_STORE_TYPE = "cifs" ]]; then
yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[] |= select(.name == "LONGHORN_BACKUPSTORE").value = "'${CIFS_BACKUP_STORE}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
elif [[ $BACKUP_STORE_TYPE = "azurite" ]]; then
yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[] |= select(.name == "LONGHORN_BACKUPSTORE").value = "'${AZURITE_BACKUP_STORE}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}

fi

if [[ "${TF_VAR_use_hdd}" == true ]]; then
@@ -75,6 +81,10 @@ run_longhorn_e2e_test_out_of_cluster(){
LONGHORN_BACKUPSTORES=${S3_BACKUP_STORE}
elif [[ $BACKUP_STORE_TYPE = "nfs" ]]; then
LONGHORN_BACKUPSTORES=${NFS_BACKUP_STORE}
elif [[ $BACKUP_STORE_TYPE = "cifs" ]]; then
LONGHORN_BACKUPSTORES=${CIFS_BACKUP_STORE}
elif [[ $BACKUP_STORE_TYPE = "azurite" ]]; then
LONGHORN_BACKUPSTORES=${AZURITE_BACKUP_STORE}
fi
LONGHORN_BACKUPSTORE_POLL_INTERVAL="30"

12 changes: 12 additions & 0 deletions pipelines/utilities/run_longhorn_test.sh
@@ -24,6 +24,12 @@ run_longhorn_test(){
elif [[ $BACKUP_STORE_TYPE = "nfs" ]]; then
BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $2}' | sed 's/ *//'`
yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
elif [[ $BACKUP_STORE_TYPE = "cifs" ]]; then
BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $3}' | sed 's/ *//'`
yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
Comment on lines +27 to +29

🛠️ Refactor suggestion

Consider adding validation for the CIFS backup store value.

The implementation follows the existing pattern for S3 and NFS, but consider adding validation to ensure the extracted CIFS value exists and is valid. This would prevent silent failures if the third comma-separated value is missing or malformed.

  elif [[ $BACKUP_STORE_TYPE = "cifs" ]]; then
    BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $3}' | sed 's/ *//'`
+   if [[ -z "${BACKUP_STORE_FOR_TEST}" ]]; then
+     echo "Error: CIFS backup store value not found in env[1].value" >&2
+     exit 1
+   fi
    yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}

Suggested change
elif [[ $BACKUP_STORE_TYPE = "cifs" ]]; then
BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $3}' | sed 's/ *//'`
yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
elif [[ $BACKUP_STORE_TYPE = "cifs" ]]; then
BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $3}' | sed 's/ *//'`
if [[ -z "${BACKUP_STORE_FOR_TEST}" ]]; then
echo "Error: CIFS backup store value not found in env[1].value" >&2
exit 1
fi
yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
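The positional extraction above can be sketched as a small helper with the validation folded in. `extract_backup_store` and the sample `STORES` string are illustrative assumptions, mirroring the manifest's fixed s3, nfs, cifs, azblob ordering:

```shell
#!/bin/bash
# Hypothetical helper: pull the Nth comma-separated entry out of a
# LONGHORN_BACKUPSTORES-style value and fail loudly when it is absent.
extract_backup_store() {
  local stores="$1" position="$2" value
  value=$(echo "${stores}" | awk -F ',' -v n="${position}" '{print $n}' | sed 's/^ *//;s/ *$//')
  if [[ -z "${value}" ]]; then
    echo "Error: no backup store at position ${position}" >&2
    return 1
  fi
  echo "${value}"
}

# Sample value in the manifest's order: s3, nfs, cifs, azblob.
STORES='s3://backupbucket@us-east-1/backupstore$minio-secret, nfs://longhorn-test-nfs-svc.default:/opt/backupstore, cifs://longhorn-test-cifs-svc.default/backupstore$cifs-secret, azblob://sample-account/backupstore'

extract_backup_store "${STORES}" 3   # prints the cifs entry
```

A shared helper like this would serve both run_longhorn_test and run_longhorn_upgrade_test, keeping the awk/sed splitting and the empty-value check in one place.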

elif [[ $BACKUP_STORE_TYPE = "azurite" ]]; then
BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $4}' | sed 's/ *//'`
yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
Comment on lines +30 to +32

⚠️ Potential issue

Fix inconsistent naming of "azurite".

The PR title uses "azurtize" but the code uses "azurite". This inconsistency should be resolved by using the correct term "azurite" consistently throughout the PR.

Comment on lines +27 to +32

💡 Codebase verification

Based on the test manifest content, I can now generate the final response:

Environment variable structure verified and issues found

The code assumes an incorrect environment variable name. In the manifest, the backup stores are defined in LONGHORN_BACKUPSTORES (plural), while the script looks for LONGHORN_BACKUPSTORE (singular). The manifest shows the correct structure:

value: "s3://backupbucket@us-east-1/backupstore$minio-secret, nfs://longhorn-test-nfs-svc.default:/opt/backupstore, cifs://longhorn-test-cifs-svc.default/backupstore$cifs-secret, azblob://[email protected]/$azblob-secret"

Issues found:

  • The script uses the wrong environment variable name (env[1].value belongs to LONGHORN_BACKUPSTORES, not LONGHORN_BACKUPSTORE)
  • The order of values is fixed in the manifest: s3, nfs, cifs, azblob
  • The script needs to be updated to use the correct environment variable name and to remain compatible with this structure
🔗 Analysis chain

Verify the environment variable structure.

The code assumes a specific order and number of comma-separated values in env[1].value. Please verify:

  1. The order of backup store values in the environment variable
  2. The presence of all required values
  3. The impact on existing tests when adding new values

Also applies to: 116-121


Let me try a different approach to verify the environment variable structure using grep and cat instead of yq.

🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Description: Verify the structure of env[1].value in test manifests
# Expected: Should show the complete env[1].value string with all backup store values

echo "Checking test.yaml manifest:"
yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1]' "manager/integration/deploy/test.yaml"

echo -e "\nChecking upgrade_test.yaml manifest (if exists):"
find . -name "upgrade_test.yaml" -exec yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1]' {} \;

Length of output: 474


Script:

#!/bin/bash
# Search for test manifest files and show their content
echo "=== Content of test.yaml ==="
find . -name "test.yaml" -exec cat {} \;

echo -e "\n=== Content of upgrade_test.yaml ==="
find . -name "upgrade_test.yaml" -exec cat {} \;

# Also search for any references to backup store configuration
echo -e "\n=== References to backup store configuration ==="
rg -A 5 -B 5 "BACKUP_STORE" --type yaml

Length of output: 7402


🛠️ Refactor suggestion

Consider using associative arrays for backup store types.

The current implementation using hardcoded array indices (1,2,3,4) is brittle and error-prone. Consider using an associative array to map backup store types to their respective positions:

+declare -A STORE_POSITIONS=(
+  ["s3"]=1
+  ["nfs"]=2
+  ["cifs"]=3
+  ["azurite"]=4
+)

 if [[ $BACKUP_STORE_TYPE = "s3" ]]; then
-  BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $1}' | sed 's/ *//'`
+  position="${STORE_POSITIONS[$BACKUP_STORE_TYPE]}"
+  BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $'${position}'}' | sed 's/ *//'`

Suggested change
elif [[ $BACKUP_STORE_TYPE = "cifs" ]]; then
BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $3}' | sed 's/ *//'`
yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
elif [[ $BACKUP_STORE_TYPE = "azurite" ]]; then
BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $4}' | sed 's/ *//'`
yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
declare -A STORE_POSITIONS=(
["s3"]=1
["nfs"]=2
["cifs"]=3
["azurite"]=4
)
elif [[ $BACKUP_STORE_TYPE = "cifs" ]]; then
position="${STORE_POSITIONS[$BACKUP_STORE_TYPE]}"
BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $'${position}'}' | sed 's/ *//'`
yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
elif [[ $BACKUP_STORE_TYPE = "azurite" ]]; then
position="${STORE_POSITIONS[$BACKUP_STORE_TYPE]}"
BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $'${position}'}' | sed 's/ *//'`
yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
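The lookup-table idea above can be exercised end to end. The `STORE_POSITIONS` map and the sample `STORES` value are assumptions mirroring the fixed s3, nfs, cifs, azblob order, and `store_for_type` is a hypothetical wrapper name:

```shell
#!/bin/bash
# Map each backup store type to its fixed field position in the
# comma-separated LONGHORN_BACKUPSTORES value (assumed order: s3, nfs, cifs, azblob).
declare -A STORE_POSITIONS=(
  ["s3"]=1
  ["nfs"]=2
  ["cifs"]=3
  ["azurite"]=4
)

# Hypothetical sample value in the same order as the manifest.
STORES='s3://bucket/store, nfs://nfs-svc:/opt/backupstore, cifs://cifs-svc/backupstore, azblob://account/container'

store_for_type() {
  local type="$1" position="${STORE_POSITIONS[$1]:-}"
  if [[ -z "${position}" ]]; then
    echo "Error: unknown backup store type: ${type}" >&2
    return 1
  fi
  echo "${STORES}" | awk -F ',' -v n="${position}" '{print $n}' | sed 's/^ *//;s/ *$//'
}

store_for_type cifs   # prints the cifs entry
```

This removes the per-type elif branches: adding a new store type becomes one line in the map rather than another copy of the yq/awk pipeline.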

fi

if [[ "${TF_VAR_use_hdd}" == true ]]; then
@@ -107,6 +113,12 @@ run_longhorn_upgrade_test(){
elif [[ $BACKUP_STORE_TYPE = "nfs" ]]; then
BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $2}' | sed 's/ *//'`
yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH}
elif [[ $BACKUP_STORE_TYPE = "cifs" ]]; then
BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $3}' | sed 's/ *//'`
yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH}
Comment on lines +116 to +118

🛠️ Refactor suggestion

Consider extracting backup store logic to a shared function.

The CIFS implementation is identical in both run_longhorn_test and run_longhorn_upgrade_test. Consider extracting this logic into a shared function to improve maintainability and reduce duplication.

+get_backup_store_value() {
+  local manifest_file="$1"
+  local store_type="$2"
+  local value_index
+
+  case "${store_type}" in
+    "s3")  value_index=1 ;;
+    "nfs") value_index=2 ;;
+    "cifs") value_index=3 ;;
+    *) echo "Error: Unknown backup store type: ${store_type}" >&2; exit 1 ;;
+  esac
+
+  local store_value=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' "${manifest_file}" | awk -F ',' '{print $'${value_index}'}' | sed 's/ *//'`
+  if [[ -z "${store_value}" ]]; then
+    echo "Error: ${store_type} backup store value not found in env[1].value" >&2
+    exit 1
+  fi
+  echo "${store_value}"
+}
+
+set_backup_store_value() {
+  local manifest_file="$1"
+  local store_value="$2"
+  yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${store_value}'"' "${manifest_file}"
+}

 run_longhorn_test(){
   # ... existing code ...
   if [[ $BACKUP_STORE_TYPE = "s3" || $BACKUP_STORE_TYPE = "nfs" || $BACKUP_STORE_TYPE = "cifs" ]]; then
-    BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $3}' | sed 's/ *//'`
-    yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
+    BACKUP_STORE_FOR_TEST=$(get_backup_store_value "${LONGHORN_TESTS_MANIFEST_FILE_PATH}" "${BACKUP_STORE_TYPE}")
+    set_backup_store_value "${LONGHORN_TESTS_MANIFEST_FILE_PATH}" "${BACKUP_STORE_FOR_TEST}"
   fi
   # ... rest of the function ...
 }

 run_longhorn_upgrade_test(){
   # ... existing code ...
   if [[ $BACKUP_STORE_TYPE = "s3" || $BACKUP_STORE_TYPE = "nfs" || $BACKUP_STORE_TYPE = "cifs" ]]; then
-    BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $3}' | sed 's/ *//'`
-    yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH}
+    BACKUP_STORE_FOR_TEST=$(get_backup_store_value "${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH}" "${BACKUP_STORE_TYPE}")
+    set_backup_store_value "${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH}" "${BACKUP_STORE_FOR_TEST}"
   fi
   # ... rest of the function ...
 }

Committable suggestion was skipped due to low confidence.

elif [[ $BACKUP_STORE_TYPE = "azurite" ]]; then
BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $4}' | sed 's/ *//'`
yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH}
fi

yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[4].value="'${LONGHORN_UPGRADE_TYPE}'"' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH}