ci: support cifs and azurite on pytest and e2e #2155
base: master
Conversation
longhorn/longhorn-9699 Signed-off-by: Chris <[email protected]>
Walkthrough
The changes in this pull request enhance the deployment and management of backup store servers by adding support for CIFS and Azurite alongside existing options like NFS and Minio. Updates include modifications to documentation, Kubernetes configurations, and testing scripts to accommodate the new backup store types. Additionally, new functions and error handling improvements were introduced to facilitate the integration of these backup stores into the existing system.
Actionable comments posted: 8
🧹 Outside diff range and nitpick comments (8)
pipelines/utilities/install_backupstores.sh (2)
4-4: Consider version pinning instead of using the master branch.
While the URL follows the established pattern, pointing to the master branch could lead to instability if the upstream YAML changes unexpectedly.
Consider using a specific release tag or commit hash in the URL for better stability:
- CIFS_BACKUPSTORE_URL="https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/cifs-backupstore.yaml"
+ CIFS_BACKUPSTORE_URL="https://raw.githubusercontent.com/longhorn/longhorn/v1.5.1/deploy/backupstores/cifs-backupstore.yaml"
6-8: Add error handling for kubectl operations.
The kubectl create command looks good, but consider adding error handling to ensure all resources are created successfully.
Consider adding error checking:
- kubectl create -f ${MINIO_BACKUPSTORE_URL} \
- -f ${NFS_BACKUPSTORE_URL} \
- -f ${CIFS_BACKUPSTORE_URL}
+ if ! kubectl create -f ${MINIO_BACKUPSTORE_URL} \
+ -f ${NFS_BACKUPSTORE_URL} \
+ -f ${CIFS_BACKUPSTORE_URL}; then
+ echo "Failed to create backup stores"
+ return 1
+ fi
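The guard pattern suggested above can be sketched with stand-in commands (`true`/`false` substitute for the real `kubectl create -f ...` invocation, which is not assumed to be available here):

```shell
# A sketch of the suggested guard pattern. 'true'/'false' stand in for the
# real `kubectl create -f ...` invocation.
create_backupstores() {
  if ! "$@"; then
    echo "Failed to create backup stores" >&2
    return 1
  fi
}

create_backupstores true && OK_RESULT="created"    # success path
create_backupstores false || FAIL_RESULT="reported"  # failure path returns 1
```

The same shape works for any sequence of critical setup commands: the caller sees a non-zero return instead of a silently ignored failure.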
manager/integration/README.md (1)
Line range hint 1-23: Add CIFS prerequisites to the requirements section
While the NFS prerequisites are documented (point 6 mentions `nfs-common`), there's no mention of CIFS client requirements. Please add a new requirement point specifying any necessary CIFS client packages or configurations needed on the nodes.
Example addition:
6. Make sure `nfs-common` or equivalent has been installed on the node to allow the NFS client to work.
+7. Make sure `cifs-utils` or equivalent has been installed on the node to allow the CIFS client to work.
🧰 Tools
🪛 Markdownlint
20-20: Fenced code blocks should have a language specified (MD040, fenced-code-language)
26-26: Fenced code blocks should have a language specified (MD040, fenced-code-language)
test_framework/terraform/aws/sles/user-data-scripts/provision_k3s_agent.sh.tpl (1)
Line range hint 1-3: Improve error handling strategy.
The script currently uses `set -e` but then disables it with `set +e` for the k3s installation. This mixed approach could lead to silent failures in critical setup steps. Consider implementing proper error handling throughout the script.
Consider this more robust approach:
#!/bin/bash
-set -e
+set -euo pipefail
+
+# Error handling function
+handle_error() {
+ local exit_code=$?
+ local line_number=$1
+ echo "Error occurred in line ${line_number}, exit code: ${exit_code}"
+ exit $exit_code
+}
+
+trap 'handle_error ${LINENO}' ERR
test_framework/terraform/aws/sles/user-data-scripts/provision_rke2_agent.sh.tpl (1)
8-8: Consider adding basic Samba configuration for CIFS.
Since this script is preparing the node for CIFS support, it might be beneficial to include basic Samba configuration to ensure it's ready for use.
Consider adding these configurations after the package installation:
# Basic Samba configuration for CIFS client
sudo cat > /etc/samba/smb.conf <<EOF
[global]
workgroup = WORKGROUP
client min protocol = SMB2
client max protocol = SMB3
security = user
EOF
# Ensure Samba service is enabled and started
sudo systemctl enable --now smb
e2e/README.md (1)
20-21: Consider enhancing CIFS-related documentation.
While the CIFS backupstore deployment command has been added, the documentation would benefit from:
- A brief explanation of CIFS and its purpose
- Any prerequisites or configuration needed for CIFS
- Updates to the environment variables section if CIFS requires specific configuration
Example addition:
1. Deploy all backupstore servers (including `NFS` server and `Minio` as s3 server) for test purposes.
+ - Note: CIFS (Common Internet File System) backup store provides Windows-compatible file sharing support.
+ - CIFS Prerequisites:
+ - Ensure SMB/CIFS server is accessible from the cluster
+ - Configure necessary credentials if required
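To illustrate the "configure necessary credentials" point, a hypothetical `cifs-secret` manifest of the shape such a setup would use — the `CIFS_USERNAME`/`CIFS_PASSWORD` key names follow the Longhorn backupstore convention, and the values are placeholders, not real credentials:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: cifs-secret          # name referenced by the backup target URL
  namespace: longhorn-system
type: Opaque
stringData:
  CIFS_USERNAME: longhorn            # placeholder credential
  CIFS_PASSWORD: change-me-password  # placeholder credential
```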
manager/integration/tests/backupstore.py (1)
122-132: Add documentation and improve error handling.
The function implementation is correct but could benefit from the following improvements:
- Add a docstring explaining the function's purpose and parameters
- Add error handling for cases where no CIFS backupstore is found
- Validate the backupsettings array after split
- Add comments explaining the expected format of the backupstore string
Here's a suggested improvement:
 def set_backupstore_cifs(client):
+    """Configure the backup store for CIFS.
+
+    Args:
+        client: The Longhorn client instance
+
+    Expected backupstore format: 'cifs://server/share$credential'
+
+    Raises:
+        ValueError: If no valid CIFS backupstore is found
+    """
     backupstores = get_backupstore_url()
     poll_interval = get_backupstore_poll_interval()
+    cifs_found = False
     for backupstore in backupstores:
         if is_backupTarget_cifs(backupstore):
             backupsettings = backupstore.split("$")
+            if len(backupsettings) != 2:
+                raise ValueError(f"Invalid CIFS backupstore format: {backupstore}")
             set_backupstore_url(client, backupsettings[0])
             set_backupstore_credential_secret(client, backupsettings[1])
             set_backupstore_poll_interval(client, poll_interval)
+            cifs_found = True
             break
+    if not cifs_found:
+        raise ValueError("No valid CIFS backupstore found in configuration")
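The `$`-separated backup target format can be illustrated with a small shell sketch; the URL value matches the one used in the test manifest, and the `cut`-based split mirrors what the Python `backupstore.split("$")` does:

```shell
# The test manifest packs the share URL and the credential secret into one
# string separated by '$'; cut on that delimiter recovers both parts.
BACKUPSTORE='cifs://longhorn-test-cifs-svc.default/backupstore$cifs-secret'
BACKUP_URL=$(echo "$BACKUPSTORE" | cut -d '$' -f 1)    # the share URL
SECRET_NAME=$(echo "$BACKUPSTORE" | cut -d '$' -f 2)   # the credential secret
```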
manager/integration/tests/common.py (1)
3898-3900: Consider renaming function to follow Python naming conventions
The function name should use snake_case instead of camelCase to be consistent with PEP 8 naming conventions. Consider renaming to `is_backup_target_cifs`.
-def is_backupTarget_cifs(s):
+def is_backup_target_cifs(s):
return s.startswith("cifs://")
The implementation is correct and follows the same pattern as other backup target validation functions.
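For illustration, a shell analogue of that prefix-check predicate (a sketch only; the real helper is the Python function in manager/integration/tests/common.py, and the nfs URL below is an illustrative value):

```shell
# Shell analogue of is_backupTarget_cifs: true iff the target URL
# starts with the cifs:// scheme.
is_backup_target_cifs() {
  case "$1" in
    cifs://*) echo "true" ;;
    *) echo "false" ;;
  esac
}

CIFS_CHECK=$(is_backup_target_cifs 'cifs://longhorn-test-cifs-svc.default/backupstore$cifs-secret')
NFS_CHECK=$(is_backup_target_cifs 'nfs://longhorn-test-nfs-svc.default:/opt/backupstore')
```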
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (12)
- e2e/README.md (1 hunks)
- manager/integration/README.md (1 hunks)
- manager/integration/deploy/test.yaml (1 hunks)
- manager/integration/tests/backupstore.py (3 hunks)
- manager/integration/tests/common.py (1 hunks)
- pipelines/gke/scripts/longhorn-setup.sh (3 hunks)
- pipelines/utilities/install_backupstores.sh (1 hunks)
- pipelines/utilities/run_longhorn_e2e_test.sh (3 hunks)
- pipelines/utilities/run_longhorn_test.sh (2 hunks)
- test_framework/scripts/longhorn-setup.sh (3 hunks)
- test_framework/terraform/aws/sles/user-data-scripts/provision_k3s_agent.sh.tpl (1 hunks)
- test_framework/terraform/aws/sles/user-data-scripts/provision_rke2_agent.sh.tpl (1 hunks)
🔇 Additional comments (16)
manager/integration/README.md (1)
22-23: LGTM: CIFS backupstore deployment added correctly
The addition of the CIFS backupstore deployment command is consistent with the existing structure and properly uses continuation characters.
test_framework/terraform/aws/sles/user-data-scripts/provision_k3s_agent.sh.tpl (1)
Line range hint 27-44: Consider adding CIFS-specific infrastructure setup.
For complete CIFS support in the CI environment, consider adding:
- Directory structure for CIFS mounts
- Network connectivity verification for CIFS ports (TCP 445)
- SELinux/AppArmor configurations if applicable
Let's verify if other CI configuration files handle CIFS setup:
test_framework/terraform/aws/sles/user-data-scripts/provision_rke2_agent.sh.tpl (1)
8-8: Verify CIFS mount capability in the CI environment.
Since this is for CI testing with CIFS, we should verify that the system can mount CIFS shares after setup.
✅ Verification successful
CIFS mount capability is properly set up in the CI environment
The verification shows that CIFS support is properly configured in the CI environment:
- The CIFS package (samba) is being installed in the provisioning script
- The CIFS backupstore is configured in the test environment with proper mount points and secrets
- The installation script includes CIFS backupstore deployment alongside other backup stores
- The test configuration includes CIFS backupstore URL in the environment variables for testing
The system has all the necessary components to mount and use CIFS shares during CI testing.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify CIFS capabilities are properly set up
# Test 1: Check if required packages are installed
echo "Checking required packages..."
rg -l "BACKUP_STORE_TYPE.*CIFS" | while read -r file; do
echo "Found CIFS configuration in: $file"
done
# Test 2: Check for mount.cifs capability
if ! which mount.cifs >/dev/null 2>&1; then
echo "ERROR: mount.cifs not found"
exit 1
fi
# Test 3: Check for proper kernel module
if ! lsmod | grep -q '^cifs\s'; then
echo "WARNING: CIFS kernel module not loaded"
fi
Length of output: 210
Script:
#!/bin/bash
# Let's check if samba package is being installed in other similar provisioning scripts
echo "Checking for samba package installation across provisioning scripts..."
rg -l "zypper.*install.*samba" --type sh
# Check for CIFS related configurations or mount commands
echo -e "\nChecking for CIFS/SMB related configurations..."
rg "mount.*cifs|mount.*smb" --type sh
# Check for any CIFS/SMB related test cases
echo -e "\nChecking for CIFS/SMB related test files..."
fd -e test.js -e test.sh -e spec.js -e spec.sh | xargs rg "cifs|smb|samba" -l
Length of output: 1055
Script:
#!/bin/bash
# Let's examine the test files that mention CIFS/SMB to understand the testing requirements
echo "Checking test files content..."
cat manager/integration/tests/backupstore.py manager/integration/tests/common.py
# Also check the installation script to see how CIFS is being handled
echo -e "\nChecking installation script..."
cat pipelines/utilities/install_backupstores.sh
# Check the test deployment configuration
echo -e "\nChecking test deployment configuration..."
cat manager/integration/deploy/test.yaml
Length of output: 238188
manager/integration/deploy/test.yaml (3)
Line range hint 18-44: LGTM: Pod configuration is appropriate for testing purposes.
The pod specification includes necessary configurations for storage testing:
- Privileged containers for storage operations
- Required volume mounts
- Test reporting container
Also applies to: 46-106
45-45: Verify CIFS dependencies and configuration.
The CIFS backup store configuration requires several prerequisites:
- The service `longhorn-test-cifs-svc.default` must be deployed
- The `cifs-secret` must be created with appropriate credentials
- The CIFS URL format should be validated
Let's verify these dependencies:
✅ Verification successful
Let me gather more information about the CIFS backupstore configuration by checking the referenced deployment files.
CIFS backup store configuration is properly defined and supported
The verification shows that all CIFS dependencies are properly configured:
- The `longhorn-test-cifs-svc` service is defined in the CIFS backupstore deployment manifest with the correct namespace and ports (139, 445)
- The `cifs-secret` is created in both `longhorn-system` and `default` namespaces with the required CIFS credentials
- The CIFS URL format `cifs://longhorn-test-cifs-svc.default/backupstore$cifs-secret` matches the deployment configuration
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for required CIFS-related resources
echo "Checking for CIFS service definition..."
rg -g '*.yaml' -g '*.yml' 'longhorn-test-cifs-svc'
echo "Checking for CIFS secret creation..."
rg -g '*.yaml' -g '*.yml' -g '*.sh' 'cifs-secret'
echo "Checking for CIFS mount documentation or examples..."
rg -g '*.md' 'cifs|CIFS' -C 2
Length of output: 1997
Script:
#!/bin/bash
# Check the content of CIFS backupstore deployment file and related configurations
echo "Checking CIFS backupstore deployment file..."
curl -s https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/cifs-backupstore.yaml | cat -
echo -e "\nChecking for CIFS test setup in integration tests..."
rg -g '*.go' 'cifs|CIFS' -C 2
echo -e "\nChecking for CIFS secret creation in test setup..."
fd -e yaml -e yml -e sh setup test | xargs rg 'cifs-secret'
Length of output: 3241
Line range hint 1-17: Verify if cluster-admin permissions are necessary for testing.
While cluster-admin access might be required for comprehensive testing, consider creating a custom role with only the necessary permissions following the principle of least privilege. This would better reflect real-world deployment scenarios where users typically don't have cluster-admin access.
✅ Verification successful
Cluster-admin role is justified for integration tests
Based on the codebase analysis, the cluster-admin role is necessary for the test service account. The test suite performs extensive operations requiring broad permissions:
- Direct Kubernetes API access across multiple resource types (Pods, PVs, PVCs, StatefulSets, etc.)
- Custom Resource operations for Longhorn resources
- Node-level operations and monitoring
- Cross-namespace operations for testing various scenarios
- Cluster-wide operations for backup, HA, and infrastructure testing
The test environment specifically requires these elevated permissions to validate Longhorn's functionality comprehensively. This is a standard practice for integration testing where the test suite needs to manipulate and verify cluster-wide resources.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Analyze the test code to identify required Kubernetes permissions
# Search for Kubernetes API calls in test files to understand permission requirements
rg -g '*.py' -B 2 -A 2 'api\..*\.create|api\..*\.delete|api\..*\.patch|api\..*\.get|api\..*\.list|api\..*\.watch'
Length of output: 115
Script:
#!/bin/bash
# Let's try a different approach to find Kubernetes API usage patterns
# Search for kubernetes imports and client usage in Python files
rg -g '*.py' -B 2 -A 2 'from kubernetes import|kubernetes\.client|core_v1_api|apps_v1_api|custom_objects_api'
# Also search for specific test files that might contain Kubernetes operations
fd -g '*test*.py' --exec grep -l 'kubernetes\|K8s\|k8s'
Length of output: 16445
e2e/README.md (1)
20-21: LGTM! Command structure is well-formatted.
The addition of the CIFS backupstore deployment maintains consistent formatting with the existing command structure.
pipelines/utilities/run_longhorn_e2e_test.sh (2)
3-3: Verify CIFS dependencies are documented.
The CIFS backup store configuration assumes the existence of:
- A Kubernetes service `longhorn-test-cifs-svc` in the `default` namespace
- A Kubernetes secret `cifs-secret` for authentication
Let's verify if these dependencies are documented:
81-82: Verify CIFS authentication for out-of-cluster testing.
While the CIFS backup store is configured, there don't appear to be any environment variables passed to the Docker container for CIFS authentication (unlike AWS credentials for S3).
Let's check if CIFS authentication is handled elsewhere:
Consider adding CIFS-specific environment variables to the Docker run command for authentication, similar to how AWS credentials are handled for S3:
docker run --pull=always \
--network=container:"${IMAGE_NAME}" \
--name "${CONTAINER_NAME}" \
-e LONGHORN_BACKUPSTORE="${LONGHORN_BACKUPSTORES}" \
+ -e CIFS_USERNAME="${CIFS_USERNAME}" \
+ -e CIFS_PASSWORD="${CIFS_PASSWORD}" \
...
pipelines/utilities/run_longhorn_test.sh (1)
27-29: Verify CIFS implementation across related files.
The CIFS implementation looks correct, but let's verify the complete setup:
Also applies to: 113-115
✅ Verification successful
CIFS implementation is properly integrated across the codebase
The verification shows complete CIFS implementation:
- Test manifest correctly configures CIFS backupstore URL and credentials
- Integration tests have proper CIFS support:
  - `common.py` includes CIFS URL validation
  - `backupstore.py` implements CIFS backupstore configuration
- CIFS backupstore deployment is documented in both e2e and integration test READMEs
- The changes in `run_longhorn_test.sh` align with the existing CIFS infrastructure
The implementation is consistent and complete across all related components.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify CIFS implementation across related files
# Test 1: Check if CIFS is properly configured in the test manifest
echo "Checking test manifest for CIFS configuration..."
yq e '.spec.containers[0].env[] | select(.name == "LONGHORN_BACKUPSTORES")' manager/integration/deploy/test.yaml
# Test 2: Check if CIFS-related functions are implemented in test files
echo "Checking test files for CIFS support..."
rg -l "cifs" manager/integration/tests/
# Test 3: Check if CIFS is documented in README files
echo "Checking documentation for CIFS..."
rg -l "cifs" -g "README.md" .
Length of output: 652
Script:
#!/bin/bash
# Let's check the actual CIFS-related code in the test files and documentation
# Check CIFS implementation in common.py
echo "=== CIFS implementation in common.py ==="
rg -A 5 -B 5 "cifs" manager/integration/tests/common.py
# Check CIFS implementation in backupstore.py
echo "=== CIFS implementation in backupstore.py ==="
rg -A 5 -B 5 "cifs" manager/integration/tests/backupstore.py
# Check CIFS documentation content
echo "=== CIFS documentation in e2e/README.md ==="
rg -A 5 -B 5 "cifs" e2e/README.md
echo "=== CIFS documentation in manager/integration/README.md ==="
rg -A 5 -B 5 "cifs" manager/integration/README.md
# Let's also check the manifest file directly
echo "=== Content of test manifest file ==="
cat manager/integration/deploy/test.yaml
Length of output: 6196
pipelines/gke/scripts/longhorn-setup.sh (3)
144-147: LGTM: CIFS backupstore configuration added correctly.
The CIFS backupstore URL and its integration into the kubectl create command follow the established pattern consistently with the existing NFS and MinIO configurations.
183-185: LGTM: CIFS support properly integrated in upgrade test configuration.
The CIFS case is correctly implemented, following the same pattern as S3 and NFS, with proper extraction and configuration of the CIFS backup store value.
144-147: Verify CIFS prerequisites in the cluster.
Consider adding checks to verify that CIFS support is properly enabled in the cluster before proceeding with the tests. This could include:
- Verifying CIFS kernel modules are loaded
- Checking if required CIFS utilities are installed
- Validating CIFS mount capabilities
manager/integration/tests/backupstore.py (2)
20-20: LGTM!
The import follows the established pattern for backup target type checks.
68-69: LGTM!
The CIFS condition follows the established pattern and correctly integrates with the existing backup store type handling.
test_framework/scripts/longhorn-setup.sh (1)
336-336: LGTM: CIFS backupstore integration
The CIFS backupstore URL and deployment integration is properly implemented alongside the existing NFS and S3 backup stores.
Also applies to: 338-339
@@ -5,7 +5,7 @@ set -e
 sudo systemctl restart guestregister # Sometimes registration fails on first boot.
 sudo zypper ref
 sudo zypper install -y -t pattern devel_basis
-sudo zypper install -y open-iscsi nfs-client cryptsetup device-mapper
+sudo zypper install -y open-iscsi nfs-client cryptsetup device-mapper samba
Add cifs-utils package for complete CIFS support.
While adding `samba` is necessary, the `cifs-utils` package is also required for mounting CIFS shares. This package provides the essential `mount.cifs` utility.
Apply this diff:
-sudo zypper install -y open-iscsi nfs-client cryptsetup device-mapper samba
+sudo zypper install -y open-iscsi nfs-client cryptsetup device-mapper samba cifs-utils
@@ -5,7 +5,7 @@ set -e
 sudo systemctl restart guestregister # Sometimes registration fails on first boot.
 sudo zypper ref
 sudo zypper install -y -t pattern devel_basis
-sudo zypper install -y open-iscsi nfs-client cryptsetup device-mapper
+sudo zypper install -y open-iscsi nfs-client cryptsetup device-mapper samba
🛠️ Refactor suggestion
Add error handling and verification for package installation.
While adding the `samba` package is necessary for CIFS support, consider these improvements:
- Add error handling to ensure the package installation succeeds
- Verify the installation and required services
Consider this more robust implementation:
-sudo zypper install -y open-iscsi nfs-client cryptsetup device-mapper samba
+# Install required packages
+PACKAGES="open-iscsi nfs-client cryptsetup device-mapper samba"
+if ! sudo zypper install -y $PACKAGES; then
+ echo "Failed to install required packages"
+ exit 1
+fi
+
+# Verify samba installation
+if ! rpm -q samba > /dev/null; then
+ echo "Samba package verification failed"
+ exit 1
+fi
elif [[ $BACKUP_STORE_TYPE = "cifs" ]]; then
  yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${CIFS_BACKUP_STORE}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
💡 Codebase verification
The yq command updates the wrong environment variable name
The script modifies `e2e/deploy/test.yaml`, which uses `LONGHORN_BACKUPSTORE`, but the CIFS modification is targeting the wrong index. The environment variable at index 1 is `LONGHORN_BACKUPSTORE` and should be selected by name rather than index for reliability.
Consider updating the yq command to:
yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[] |= select(.name == "LONGHORN_BACKUPSTORE").value = "'${CIFS_BACKUP_STORE}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
🔗 Analysis chain
Verify environment variable ordering in test manifest.
The implementation assumes the backup store environment variable is at index 1 in the container spec. This implicit ordering dependency could be fragile.
Let's verify the environment variable structure:
Let me try a different approach to verify the environment variables structure in the test manifest.
This will help us:
- Find the correct test manifest file location
- Examine its content to verify the environment variable structure
- Determine if the index-based access in the yq command is reliable
Environment variable ordering is safe but the variable name differs
The review concern about environment variable ordering can be resolved, but there's a different issue. In both test manifests, the backup store environment variable is indeed at index 1, but the variable names are different:
- `e2e/deploy/test.yaml` uses `LONGHORN_BACKUPSTORE`
- `manager/integration/deploy/test.yaml` uses `LONGHORN_BACKUPSTORES`
Let's verify which manifest file is actually being used in the script:
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check the environment variables structure in the test manifest
echo "Checking environment variables in test manifest..."
yq e '.spec.containers[0].env' e2e/deploy/test.yaml
Length of output: 263
Script:
#!/bin/bash
# First, let's locate the test manifest file
fd -e yaml -e yml test.yaml
# Then let's examine its content focusing on environment variables
for file in $(fd -e yaml -e yml test.yaml); do
echo "=== Examining $file ==="
cat "$file"
done
Length of output: 7419
Script:
#!/bin/bash
# Check which manifest file is referenced in the script
rg "LONGHORN_TESTS_MANIFEST_FILE_PATH.*=.*test.yaml" pipelines/utilities/run_longhorn_e2e_test.sh
Length of output: 158
elif [[ $BACKUP_STORE_TYPE = "cifs" ]]; then
  BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $3}' | sed 's/ *//'`
  yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH}
🛠️ Refactor suggestion
Consider extracting backup store logic to a shared function.
The CIFS implementation is identical in both `run_longhorn_test` and `run_longhorn_upgrade_test`. Consider extracting this logic into a shared function to improve maintainability and reduce duplication.
+get_backup_store_value() {
+ local manifest_file="$1"
+ local store_type="$2"
+ local value_index
+
+ case "${store_type}" in
+ "s3") value_index=1 ;;
+ "nfs") value_index=2 ;;
+ "cifs") value_index=3 ;;
+ *) echo "Error: Unknown backup store type: ${store_type}" >&2; exit 1 ;;
+ esac
+
+ local store_value=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' "${manifest_file}" | awk -F ',' '{print $'${value_index}'}' | sed 's/ *//'`
+ if [[ -z "${store_value}" ]]; then
+ echo "Error: ${store_type} backup store value not found in env[1].value" >&2
+ exit 1
+ fi
+ echo "${store_value}"
+}
+
+set_backup_store_value() {
+ local manifest_file="$1"
+ local store_value="$2"
+ yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${store_value}'"' "${manifest_file}"
+}
run_longhorn_test(){
# ... existing code ...
if [[ $BACKUP_STORE_TYPE = "s3" || $BACKUP_STORE_TYPE = "nfs" || $BACKUP_STORE_TYPE = "cifs" ]]; then
- BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $3}' | sed 's/ *//'`
- yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
+ BACKUP_STORE_FOR_TEST=$(get_backup_store_value "${LONGHORN_TESTS_MANIFEST_FILE_PATH}" "${BACKUP_STORE_TYPE}")
+ set_backup_store_value "${LONGHORN_TESTS_MANIFEST_FILE_PATH}" "${BACKUP_STORE_FOR_TEST}"
fi
# ... rest of the function ...
}
run_longhorn_upgrade_test(){
# ... existing code ...
if [[ $BACKUP_STORE_TYPE = "s3" || $BACKUP_STORE_TYPE = "nfs" || $BACKUP_STORE_TYPE = "cifs" ]]; then
- BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $3}' | sed 's/ *//'`
- yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH}
+ BACKUP_STORE_FOR_TEST=$(get_backup_store_value "${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH}" "${BACKUP_STORE_TYPE}")
+ set_backup_store_value "${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH}" "${BACKUP_STORE_FOR_TEST}"
fi
# ... rest of the function ...
}
elif [[ $BACKUP_STORE_TYPE = "cifs" ]]; then
  BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $3}' | sed 's/ *//'`
  yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
🛠️ Refactor suggestion
Consider adding validation for the CIFS backup store value.
The implementation follows the existing pattern for S3 and NFS, but consider adding validation to ensure the extracted CIFS value exists and is valid. This would prevent silent failures if the third comma-separated value is missing or malformed.
elif [[ $BACKUP_STORE_TYPE = "cifs" ]]; then
BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $3}' | sed 's/ *//'`
+ if [[ -z "${BACKUP_STORE_FOR_TEST}" ]]; then
+ echo "Error: CIFS backup store value not found in env[1].value" >&2
+ exit 1
+ fi
yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
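As a sketch of what the `awk`/`sed` pipeline extracts, using an illustrative `LONGHORN_BACKUPSTORES` value — the cifs entry matches the test manifest, while the s3 and nfs entries are placeholders for the other stores:

```shell
# Illustrative comma-separated backupstores value; the CIFS target is the
# third field.
BACKUPSTORES='s3://backupbucket@us-east-1/backupstore$minio-secret, nfs://longhorn-test-nfs-svc.default:/opt/backupstore, cifs://longhorn-test-cifs-svc.default/backupstore$cifs-secret'

# Same extraction as the script: third comma-separated field, leading
# spaces stripped.
BACKUP_STORE_FOR_TEST=$(echo "$BACKUPSTORES" | awk -F ',' '{print $3}' | sed 's/ *//')

# The suggested guard: fail loudly when the field is missing or empty.
if [ -z "$BACKUP_STORE_FOR_TEST" ]; then
  echo "Error: CIFS backup store value not found" >&2
  exit 1
fi
```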
elif [[ $BACKUP_STORE_TYPE = "cifs" ]]; then
BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $3}' | sed 's/ *//'`
yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
🛠️ Refactor suggestion
Consider refactoring duplicated backup store handling logic.
The CIFS handling logic is duplicated between the upgrade test and regular test sections. Consider extracting this into a shared function to improve maintainability.
Example refactor:
+ configure_backup_store() {
+ local manifest_file="$1"
+ local store_type="$2"
+
+ if [[ $store_type = "s3" ]]; then
+ BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${manifest_file} | awk -F ',' '{print $1}' | sed 's/ *//'`
+ elif [[ $store_type = "nfs" ]]; then
+ BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${manifest_file} | awk -F ',' '{print $2}' | sed 's/ *//'`
+ elif [[ $store_type = "cifs" ]]; then
+ BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${manifest_file} | awk -F ',' '{print $3}' | sed 's/ *//'`
+ fi
+
+ yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${manifest_file}
+ }
run_longhorn_upgrade_test(){
# ... existing code ...
- if [[ $BACKUP_STORE_TYPE = "s3" ]]; then
- BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $1}' | sed 's/ *//'`
- yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH}
- elif [[ $BACKUP_STORE_TYPE = "nfs" ]]; then
- BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $2}' | sed 's/ *//'`
- yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH}
- elif [[ $BACKUP_STORE_TYPE = "cifs" ]]; then
- BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $3}' | sed 's/ *//'`
- yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH}
- fi
+ configure_backup_store "${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH}" "${BACKUP_STORE_TYPE}"
# ... rest of the function ...
}
run_longhorn_tests(){
# ... existing code ...
- if [[ $BACKUP_STORE_TYPE = "s3" ]]; then
- BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $1}' | sed 's/ *//'`
- yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
- elif [[ $BACKUP_STORE_TYPE = "nfs" ]]; then
- BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $2}' | sed 's/ *//'`
- yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
- elif [[ $BACKUP_STORE_TYPE = "cifs" ]]; then
- BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $3}' | sed 's/ *//'`
- yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
- fi
+ configure_backup_store "${LONGHORN_TESTS_MANIFEST_FILE_PATH}" "${BACKUP_STORE_TYPE}"
# ... rest of the function ...
}
Committable suggestion was skipped due to low confidence.
def set_backupstore_cifs(client):
    backupstores = get_backupstore_url()
    poll_interval = get_backupstore_poll_interval()
    for backupstore in backupstores:
        if is_backupTarget_cifs(backupstore):
            backupsettings = backupstore.split("$")
            set_backupstore_url(client, backupsettings[0])
            set_backupstore_credential_secret(client, backupsettings[1])
            set_backupstore_poll_interval(client, poll_interval)
            break
💡 Codebase verification
Missing CIFS implementations confirmed for backup store operations
The codebase has several functions that handle backup operations with explicit implementations for S3 and NFS but raise NotImplementedError for other types. While CIFS support is partially implemented (detection and basic setup), the following functions need CIFS-specific implementations:
- backupstore_get_backup_volume_prefix
- backupstore_get_backup_cfg_file_path
- backupstore_get_volume_cfg_file_path
- backupstore_get_backup_blocks_dir
- backupstore_create_file_in_backupstore
- backupstore_delete_file_in_backupstore
- backupstore_delete_backup_cfg_file
- backupstore_delete_volume_cfg_file
- backupstore_delete_random_backup_block
- backupstore_count_backup_block_files
Each of these functions currently has an if-elif block for S3 and NFS but falls through to raise NotImplementedError, which will break CIFS backup operations.
🔗 Analysis chain
Implement CIFS-specific functions for backup operations.
Several functions in the file handle different backup store types but lack CIFS implementations. This could lead to NotImplementedError when these functions are called with CIFS backup store. Consider implementing CIFS-specific versions of the following functions:
- backupstore_get_backup_cfg_file_path
- backupstore_get_volume_cfg_file_path
- backupstore_get_backup_blocks_dir
- backupstore_create_file
- backupstore_delete_file
- Other similar functions that have S3 and NFS implementations
Let's verify the functions that need CIFS implementation:
Consider creating a backup store interface or abstract class to ensure consistent implementation across different backup store types.
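One way to realize that suggestion is a small scheme-to-driver registry. Every name and mount path below is illustrative (these are not the test suite's actual helpers); the point is that adding CIFS becomes one new driver class instead of another branch in each of the ten `if-elif` chains.

```python
from abc import ABC, abstractmethod
from urllib.parse import urlparse

class BackupStoreDriver(ABC):
    """One driver per backup target scheme (s3, nfs, cifs, azblob, ...)."""
    scheme = ""

    @abstractmethod
    def backup_cfg_file_path(self, volume_name, backup_name):
        """Return the path of a backup's .cfg file inside the store."""

class NfsDriver(BackupStoreDriver):
    scheme = "nfs"
    def backup_cfg_file_path(self, volume_name, backup_name):
        # Hypothetical mount point; the real layout lives in the test helpers.
        return f"/mnt/nfs/backupstore/volumes/{volume_name}/backups/backup_{backup_name}.cfg"

class CifsDriver(BackupStoreDriver):
    scheme = "cifs"
    def backup_cfg_file_path(self, volume_name, backup_name):
        return f"/mnt/cifs/backupstore/volumes/{volume_name}/backups/backup_{backup_name}.cfg"

# Registry keyed by URL scheme; registering a driver enables all operations.
DRIVERS = {cls.scheme: cls for cls in (NfsDriver, CifsDriver)}

def driver_for(backup_target_url):
    """Pick a driver by URL scheme; unknown schemes still fail loudly."""
    scheme = urlparse(backup_target_url).scheme
    if scheme not in DRIVERS:
        raise NotImplementedError(f"no backup store driver for scheme {scheme!r}")
    return DRIVERS[scheme]()
```

With this shape, the `NotImplementedError` lives in exactly one place instead of being repeated at the bottom of every helper function.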
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Find all functions that handle different backup store types
echo "Functions potentially needing CIFS implementation:"
rg "if is_backupTarget_s3.*elif is_backupTarget_nfs.*else.*raise NotImplementedError" -A 1
Length of output: 203
Script:
#!/bin/bash
# Let's search for functions that handle backup store types with a broader pattern
echo "Functions handling backup store types:"
rg "is_backupTarget_(s3|nfs|cifs)" -B 2 -A 5
# Also search for NotImplementedError to find potential gaps
echo -e "\nFunctions with NotImplementedError:"
rg "NotImplementedError.*backup" -B 2 -A 2
# Look for existing CIFS implementations
echo -e "\nExisting CIFS implementations:"
rg "is_backupTarget_cifs" -B 2 -A 2
Length of output: 19333
elif [[ $BACKUP_STORE_TYPE = "cifs" ]]; then
BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $3}' | sed 's/ *//'`
yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH}
🛠️ Refactor suggestion
Consider extracting duplicated CIFS handling logic
The CIFS backup store type handling logic is duplicated in both `run_longhorn_upgrade_test()` and `run_longhorn_tests()` functions. Consider extracting this into a helper function to improve maintainability and reduce code duplication.
+get_backup_store_for_test() {
+ local manifest_file="$1"
+ local store_type="$2"
+ local store_index=1
+
+ case "$store_type" in
+ "s3") store_index=1 ;;
+ "nfs") store_index=2 ;;
+ "cifs") store_index=3 ;;
+ esac
+
+ yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' "$manifest_file" | \
+ awk -F ',' "{print \$$store_index}" | sed 's/ *//'
+}
+
run_longhorn_upgrade_test(){
# ... existing code ...
if [[ $BACKUP_STORE_TYPE = "s3" ]]; then
- BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $1}' | sed 's/ *//'`
+ BACKUP_STORE_FOR_TEST=$(get_backup_store_for_test "$LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH" "s3")
yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH}
elif [[ $BACKUP_STORE_TYPE = "nfs" ]]; then
- BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $2}' | sed 's/ *//'`
+ BACKUP_STORE_FOR_TEST=$(get_backup_store_for_test "$LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH" "nfs")
yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH}
elif [[ $BACKUP_STORE_TYPE = "cifs" ]]; then
- BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $3}' | sed 's/ *//'`
+ BACKUP_STORE_FOR_TEST=$(get_backup_store_for_test "$LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH" "cifs")
yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH}
fi
This refactoring:
- Reduces code duplication
- Makes it easier to add new backup store types
- Centralizes the backup store extraction logic
- Improves maintainability
Also applies to: 458-460
This pull request is now in conflict. Could you fix it @chriscchien? 🙏
Force-pushed from 06626a3 to 679b07c (Compare)
Caution
Inline review comments failed to post. This is likely due to GitHub's limits when posting large numbers of comments.
Actionable comments posted: 14
🧹 Outside diff range and nitpick comments (9)
test_framework/terraform/aws/sles/user-data-scripts/provision_k3s_server.sh.tpl (1)
30-30: Add newline at end of file. It's recommended to maintain a newline at the end of shell scripts as per POSIX standards. This ensures proper behavior when concatenating or appending to the file.
fi +
test_framework/terraform/aws/sles/user-data-scripts/provision_rke2_server.sh.tpl (1)
Line range hint 1-3: Improve error handling strategy. The current use of `set -e` could cause premature script termination during critical setup steps. A more robust approach would be to implement proper error handling for each critical section.
Consider this approach:
```diff
 #!/bin/bash
-set -e
+# Helper function for commands that need retries
+retry_command() {
+  local -r cmd="$1"
+  local -r description="$2"
+  local -r max_attempts=3
+  local -r sleep_time=5
+
+  for ((i=1; i<=max_attempts; i++)); do
+    if eval "$cmd"; then
+      return 0
+    fi
+    echo "Attempt $i/$max_attempts: $description failed. Retrying in ${sleep_time}s..."
+    sleep $sleep_time
+  done
+
+  echo "Error: $description failed after $max_attempts attempts"
+  return 1
+}

 # Replace critical commands with retry mechanism
-sudo systemctl restart guestregister
+retry_command "sudo systemctl restart guestregister" "guest registration"

-sudo zypper ref
-sudo zypper install -y -t pattern devel_basis
+retry_command "sudo zypper ref" "repository refresh"
+retry_command "sudo zypper install -y -t pattern devel_basis" "development pattern installation"
```
This approach:
- Removes the global `set -e`
- Implements a retry mechanism for critical commands
- Provides better error visibility and recovery
- Maintains script execution even if non-critical commands fail
manager/integration/README.md (2)
19-19: Fix typos and improve documentation clarity.
- Fix the typo "Azurtize" → "Azurite"
- Improve the grammar and formatting of the manual steps note
Apply this diff:
```diff
-1. Deploy all backupstore servers(including `NFS` server and `Minio` as s3 server `CIFS` and `Azurtize` server) for test purposes.
+1. Deploy all backupstore servers (including `NFS` server, `Minio` as S3 server, `CIFS`, and `Azurite` server) for test purposes.
-   For Azurtize, there are some manual steps need to be done after manifest deployed(https://confluence.suse.com/display/LON/Setup+Azurite+Backupstore+For+Testing).
+   For Azurite, there are some manual steps that need to be done after the manifest is deployed. See [Azurite Backupstore Setup Guide](https://confluence.suse.com/display/LON/Setup+Azurite+Backupstore+For+Testing) for details.
```
Also applies to: 21-21
24-26: Improve command block formatting and reliability.
- Add language specification to the code block
- Consider using versioned URLs or documenting the version requirement
- Improve command readability
Apply this diff:
````diff
-```
+```bash
 kubectl create -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/minio-backupstore.yaml \
-    -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/nfs-backupstore.yaml \
-    -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/cifs-backupstore.yaml \
-    -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/azurite-backupstore.yaml
+    -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/nfs-backupstore.yaml \
+    -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/cifs-backupstore.yaml \
+    -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/azurite-backupstore.yaml
````
Consider adding a note about the required Longhorn version compatibility for these manifests.
pipelines/utilities/install_backupstores.sh (1)
4-11
: Fix typo in variable name and add error handling. The variable name contains a typo: `AZURTIZE_BACKUPSTORE_URL` should be `AZURITE_BACKUPSTORE_URL`.
- AZURTIZE_BACKUPSTORE_URL="https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/azurite-backupstore.yaml" + AZURITE_BACKUPSTORE_URL="https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/azurite-backupstore.yaml" + + # Verify URLs are accessible + for url in "${MINIO_BACKUPSTORE_URL}" "${NFS_BACKUPSTORE_URL}" "${CIFS_BACKUPSTORE_URL}" "${AZURITE_BACKUPSTORE_URL}"; do + if ! curl --output /dev/null --silent --head --fail "$url"; then + echo "Error: Unable to access $url" + exit 1 + fi + done + kubectl create -f ${MINIO_BACKUPSTORE_URL} \ -f ${NFS_BACKUPSTORE_URL} \ -f ${CIFS_BACKUPSTORE_URL} \ - -f ${AZURTIZE_BACKUPSTORE_URL} + -f ${AZURITE_BACKUPSTORE_URL} || { + echo "Error: Failed to create backup stores" + exit 1 + }e2e/README.md (2)
17-24: Documentation improvements needed for backup store setup.
The Azurite setup link is an internal Confluence URL that may not be accessible to all users. Consider:
- Moving the setup instructions to a public documentation
- Adding the essential setup steps directly in this README
Add prerequisites for CIFS and Azurite setup:
- Required credentials or configurations
- Network requirements
- Storage requirements
Apply these improvements:
````diff
-1. Deploy all backupstore servers (including `NFS` server and `Minio` as s3 server, `CIFS` and `Azurtize` server) for test purposes.
+1. Deploy all backupstore servers for test purposes:
+   - NFS server
+   - Minio (S3 compatible)
+   - CIFS server (requires: <list prerequisites>)
+   - Azurite server (requires: <list prerequisites>)
-   For Azurtize, there are some manual steps need to be done after manifest deployed(https://confluence.suse.com/display/LON/Setup+Azurite+Backupstore+For+Testing).
+   For Azurite setup:
+   1. <Step 1>
+   2. <Step 2>
+   3. For detailed instructions, see [Azurite Setup Guide](public-url-to-documentation)
 ```bash
 kubectl create -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/minio-backupstore.yaml \
 -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/nfs-backupstore.yaml \
 -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/cifs-backupstore.yaml \
 -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/azurite-backupstore.yaml
````
🧰 Tools
🪛 LanguageTool
[uncategorized] ~19-~19: A determiner appears to be missing. Consider inserting it.
Context: ...r Azurtize, there are some manual steps need to be done after manifest deployed(http...(AI_EN_LECTOR_MISSING_DETERMINER)
🪛 Markdownlint
19-19: null
Bare URL used(MD034, no-bare-urls)
20-20: null
Fenced code blocks should have a language specified(MD040, fenced-code-language)
19-20: Fix grammar and markdown formatting.
- Grammar: "after manifest deployed" should be "after the manifest is deployed"
- The URL should be properly formatted as a markdown link
- Code block should specify the language
Apply these fixes:
````diff
- For Azurtize, there are some manual steps need to be done after manifest deployed(https://confluence.suse.com/display/LON/Setup+Azurite+Backupstore+For+Testing).
+ For Azurite, there are some manual steps that need to be done after the manifest is deployed. See [Setup Guide](https://confluence.suse.com/display/LON/Setup+Azurite+Backupstore+For+Testing).
-```
+```bash
````
🧰 Tools
🪛 LanguageTool
[uncategorized] ~19-~19: A determiner appears to be missing. Consider inserting it.
Context: ...r Azurtize, there are some manual steps need to be done after manifest deployed(http...(AI_EN_LECTOR_MISSING_DETERMINER)
🪛 Markdownlint
19-19: null
Bare URL used(MD034, no-bare-urls)
20-20: null
Fenced code blocks should have a language specified(MD040, fenced-code-language)
pipelines/utilities/run_longhorn_test.sh (1)
27-32: Add documentation for the backup store value format.
The code assumes a specific comma-separated format in env[1].value but this format is not documented. Consider adding a comment explaining the expected format:
Add this comment before the if-elif block:
```diff
+ # env[1].value format: "<s3_value>,<nfs_value>,<cifs_value>,<azurite_value>"
  if [[ $BACKUP_STORE_TYPE = "s3" ]]; then
```
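To make that format concrete, the field extraction these scripts perform can be exercised standalone. The URLs below are made up for illustration; only the `awk`/`sed` pipeline is taken from the scripts themselves.

```shell
#!/bin/sh
# Hypothetical env[1].value: one backup target per store type, comma-separated.
value="s3://backupbucket@us-east-1/, nfs://longhorn-test-nfs-svc.default:/opt/backupstore, cifs://longhorn-test-cifs-svc.default/backupstore, azblob://devstoreaccount1@core.windows.net/"

# Same pipeline as the scripts: take the Nth comma field, strip leading spaces.
cifs=$(echo "$value" | awk -F ',' '{print $3}' | sed 's/ *//')
echo "$cifs"
```

Because the selection is purely positional (`$3` for CIFS, `$4` for Azurite), the order of entries in the manifest is load-bearing, which is why documenting the format matters.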
test_framework/scripts/longhorn-setup.sh (1)
567-570: Remove Dead Code: Commented-Out Block
Lines 567-570 contain code that is commented out. Unless needed for future reference, it's best to remove dead code to enhance readability.
Consider removing or explaining the necessity of this commented-out block.
🛑 Comments failed to post (14)
test_framework/terraform/aws/ubuntu/user-data-scripts/provision_k3s_server.sh.tpl (1)
8-10:
⚠️ Potential issue
Fix incomplete and potentially unsafe Azure CLI installation.
The current Azure CLI installation command is incomplete and needs improvement:
- The `curl` command only downloads but doesn't execute or save the script
- Missing error handling and installation verification
- No version pinning for better reproducibility
Apply this diff to implement a safer Azure CLI installation:
```diff
 # install azure cli
-curl -sL https://aka.ms/InstallAzureCLIDeb
+echo "Installing Azure CLI..."
+if ! command -v az &> /dev/null; then
+  curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
+  if ! command -v az &> /dev/null; then
+    echo "Azure CLI installation failed"
+    exit 1
+  fi
+  echo "Azure CLI installation completed"
+fi
```
This implementation:
- Checks if Azure CLI is already installed
- Properly pipes the installation script to bash
- Verifies successful installation
- Includes error handling
pipelines/utilities/run_longhorn_test.sh (2)
30-32:
⚠️ Potential issue
Fix typo in backup store type name.
The backup store type "azurtize" appears to be a typo and should be "azurite" (Azure Storage Emulator).
Apply this diff to fix the typo:
```diff
- elif [[ $BACKUP_STORE_TYPE = "azurtize" ]]; then
+ elif [[ $BACKUP_STORE_TYPE = "azurite" ]]; then
```
119-121:
⚠️ Potential issue
Fix typo in backup store type name.
The backup store type "azurtize" appears to be a typo and should be "azurite" (Azure Storage Emulator).
Apply this diff to fix the typo:
```diff
- elif [[ $BACKUP_STORE_TYPE = "azurtize" ]]; then
+ elif [[ $BACKUP_STORE_TYPE = "azurite" ]]; then
```
pipelines/gke/scripts/longhorn-setup.sh (5)
150-150:
⚠️ Potential issue
Correct the function call to `setup_azurite_backup_store`. The function call `setup_azuitize_backup_store` on line 150 has a typo. The correct function name should be `setup_azurite_backup_store`.
Apply this diff to fix the function call:
```diff
-setup_azuitize_backup_store
+setup_azurite_backup_store
```
144-145:
⚠️ Potential issue
Correct the typo in variable name for Azurite backup store URL. The variable `AZURTIZE_BACKUPSTORE_URL` appears to have a typo. It should be `AZURITE_BACKUPSTORE_URL` to correctly reference the Azurite backup store.
Apply this diff to fix the typo:
```diff
 CIFS_BACKUPSTORE_URL="https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/cifs-backupstore.yaml"
-AZURTIZE_BACKUPSTORE_URL="https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/azurite-backupstore.yaml"
+AZURITE_BACKUPSTORE_URL="https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/azurite-backupstore.yaml"
```
262-267:
⚠️ Potential issue
Ensure consistent spelling of 'Azurite' in test setup. In the test setup, the backup store type and related variables have typos. Please correct `azurtize` to `azurite`.
Apply this diff to fix the typos:
153-174:
⚠️ Potential issueFix typos in function name and variables in
setup_azurite_backup_store
.The function
setup_azuitize_backup_store
contains typos in its name and within its body. Please correct all instances ofazuitize
andazurtize
toazurite
.Apply these diffs to correct the typos:
-setup_azuitize_backup_store(){ +setup_azurite_backup_store(){# In line 156: -until (kubectl get pods | grep 'longhorn-test-azblob' | grep 'Running'); do +until (kubectl get pods | grep 'longhorn-test-azurite' | grep 'Running'); do# In line 157: -echo 'Waiting azurtize pod running' +echo 'Waiting for Azurite pod to be running'# In line 165: -nohup kubectl port-forward --address 0.0.0.0 service/azblob-service 20001:10000 & +nohup kubectl port-forward --address 0.0.0.0 service/azurite-service 20001:10000 &# In line 168: -AZBLOB_ENDPOINT=$(echo -n "http://$(kubectl get svc azblob-service -o jsonpath='{.spec.clusterIP}'):10000/" | base64) +AZURITE_ENDPOINT=$(echo -n "http://$(kubectl get svc azurite-service -o jsonpath='{.spec.clusterIP}'):10000/" | base64)# In line 169: -kubectl -n longhorn-system patch secret azblob-secret \ +kubectl -n longhorn-system patch secret azurite-secret \# In line 173: -az storage container create -n longhorn-test-azurtize --connection-string "DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=..." +az storage container create -n longhorn-test-azurite --connection-string "DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=..."📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.setup_azurite_backup_store(){ RETRY=0 MAX_RETRY=60 until (kubectl get pods | grep 'longhorn-test-azurite' | grep 'Running'); do echo 'Waiting for Azurite pod to be running' sleep 5 if [ $RETRY -eq $MAX_RETRY ]; then break fi RETRY=$((RETRY+1)) done nohup kubectl port-forward --address 0.0.0.0 service/azurite-service 20001:10000 & sleep 5 AZURITE_ENDPOINT=$(echo -n "http://$(kubectl get svc azurite-service -o jsonpath='{.spec.clusterIP}'):10000/" | base64) kubectl -n longhorn-system patch secret azurite-secret \ --type=json \ -p="[{\"op\": \"replace\", \"path\": \"/data/AZBLOB_ENDPOINT\", \"value\": \"${AZURITE_ENDPOINT}\"}]" az storage container create -n longhorn-test-azurite --connection-string "DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://0.0.0.0:20001/devstoreaccount1;" }
208-213:
⚠️ Potential issueCorrect the backup store type comparison for Azurite.
The comparison
$BACKUP_STORE_TYPE = "azurtize"
has a typo. It should be"azurite"
to correctly match the backup store type.Apply this diff to fix the typo:
elif [[ $BACKUP_STORE_TYPE = "cifs" ]]; then BACKUP_STORE_FOR_TEST=`...` yq e -i '...' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH} -elif [[ $BACKUP_STORE_TYPE = "azurtize" ]]; then +elif [[ $BACKUP_STORE_TYPE = "azurite" ]]; then BACKUP_STORE_FOR_TEST=`...` yq e -i '...' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH}📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.elif [[ $BACKUP_STORE_TYPE = "cifs" ]]; then BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $3}' | sed 's/ *//'` yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH} elif [[ $BACKUP_STORE_TYPE = "azurite" ]]; then BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $4}' | sed 's/ *//'` yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH}
test_framework/scripts/longhorn-setup.sh (6)
342-342:
⚠️ Potential issueTypo in Function Call: Correct 'setup_azuitize_backup_store' to 'setup_azurite_backup_store'
At line 342, the function
setup_azuitize_backup_store
is called, but there seems to be a typo in the function name. It should besetup_azurite_backup_store
to match the intended functionality.Apply this diff to correct the function call:
-setup_azuitize_backup_store +setup_azurite_backup_store📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.setup_azurite_backup_store
336-337:
⚠️ Potential issueTypo in Variable Name: Correct 'AZURTIZE' to 'AZURITE'
The variables
AZURTIZE_BACKUPSTORE_URL
should be renamed toAZURITE_BACKUPSTORE_URL
to match the correct spelling of Azurite.Apply this diff to correct the variable name:
CIFS_BACKUPSTORE_URL="https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/cifs-backupstore.yaml" -AZURTIZE_BACKUPSTORE_URL="https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/azurite-backupstore.yaml" +AZURITE_BACKUPSTORE_URL="https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/azurite-backupstore.yaml"📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.CIFS_BACKUPSTORE_URL="https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/cifs-backupstore.yaml" AZURITE_BACKUPSTORE_URL="https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/azurite-backupstore.yaml"
430-432:
⚠️ Potential issue

Typo in Backup Store Type: Correct 'azurtize' to 'azurite'

In the conditional statement at line 430, the backup store type is misspelled as `azurtize`. It should be `azurite` to correctly match the intended backup store.

Apply this diff to correct the backup store type:

 elif [[ $BACKUP_STORE_TYPE = "cifs" ]]; then
   # ...
-elif [[ $BACKUP_STORE_TYPE = "azurtize" ]]; then
+elif [[ $BACKUP_STORE_TYPE = "azurite" ]]; then

Committable suggestion skipped: line range outside the PR's diff.
487-492: 🛠️ Refactor suggestion
⚠️ Potential issue

Consistent Handling of Backup Store Types in Tests

The conditional blocks handling different `BACKUP_STORE_TYPE` values in `run_longhorn_tests()` contain a typo in the backup store type `azurtize`. It should be `azurite`.

Correct the backup store type and consider refactoring to reduce code duplication.

Apply this diff:

 elif [[ $BACKUP_STORE_TYPE = "nfs" ]]; then
   # ...
-elif [[ $BACKUP_STORE_TYPE = "cifs" ]]; then
+elif [[ $BACKUP_STORE_TYPE = "cifs" ]]; then
   # ...
-elif [[ $BACKUP_STORE_TYPE = "azurtize" ]]; then
+elif [[ $BACKUP_STORE_TYPE = "azurite" ]]; then

To improve maintainability, consider extracting the duplicated logic for handling `BACKUP_STORE_FOR_TEST` into a helper function, similar to the prior suggestion.

Committable suggestion skipped: line range outside the PR's diff.
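One way the duplicated branches could be collapsed, sketched under the assumption that the manifest's comma-separated BACKUP_STORE value keeps the order s3, nfs, cifs, azblob (the helper name and ordering are hypothetical, not existing script APIs):

```shell
# Map a backup store type to its field index in the comma-separated
# BACKUP_STORE env value, so every branch can share one yq/awk pipeline.
backup_store_field() {
  case "$1" in
    s3)      echo 1 ;;
    nfs)     echo 2 ;;
    cifs)    echo 3 ;;
    azurite) echo 4 ;;
    *) echo "unknown BACKUP_STORE_TYPE: $1" >&2; return 1 ;;
  esac
}

# The per-type branches would then reduce to something like:
#   FIELD=$(backup_store_field "$BACKUP_STORE_TYPE")
#   BACKUP_STORE_FOR_TEST=$(yq e '...env[1].value' "$MANIFEST" |
#     awk -F ',' -v n="$FIELD" '{print $n}' | sed 's/ *//')
```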
362-367:
⚠️ Potential issue

Security Concern: Avoid Hardcoding Credentials in Scripts

At line 366, the `az storage container create` command includes hardcoded credentials in the connection string. Storing sensitive information like account keys directly in scripts poses a security risk.

Consider the following actions:
- Retrieve credentials securely from environment variables or a secret management system.
- Replace hardcoded values with secure references.

For example:

-ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "az storage container create -n longhorn-test-azurtize --connection-string 'DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://${CONTROL_PLANE_PRIVATE_IP}:20001/devstoreaccount1;'"
+ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "az storage container create -n longhorn-test-azurite --connection-string '${AZURITE_CONNECTION_STRING}'"

Ensure `AZURITE_CONNECTION_STRING` is securely provided.
345-367:
⚠️ Potential issueTypo in Function Name and Consistent Naming
The function
setup_azuitize_backup_store()
has a typo in the name. It should besetup_azurite_backup_store()
to correctly reference Azurite.Apply this diff to correct the function name:
-function setup_azuitize_backup_store(){ +function setup_azurite_backup_store(){Additionally, review all occurrences of
azuitize
in the function and replace them withazurite
for consistency.Committable suggestion skipped: line range outside the PR's diff.
Force-pushed from 6dd4978 to d12551d (Compare)
This ticket is ready for review, thanks. cc @longhorn/qa @longhorn/dev
There are lots of `azurtize` in this PR. Should it be `azurite`?
CONTROL_PLANE_PUBLIC_IP=$(cat /tmp/controlplane_public_ip)
CONTROL_PLANE_PRIVATE_IP=$(kubectl get nodes -o wide | awk '/control-plane/ {print $6}')
# port forward and az container create need to be run on control node
ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "nohup kubectl port-forward --address 0.0.0.0 service/azblob-service 20001:10000 > /dev/null 2>&1 &" |
Do you know why we need to ssh into the control plane to run the `az` command? Can it be directly run on the Jenkins agent like other `kubectl` commands to skip the `ssh` step?
It should be able to perform a remote create, but since we need to ssh into the control plane node to perform the port forward (the workaround to set up the Azurite backupstore), using one ssh command to create the az storage container on the control plane node directly is more efficient. I'd love to hear your thoughts on this, thank you.
Isn't the port-forward workaround using a `local port`? Not sure, but I assume it means your local machine rather than the control plane node? So do you mean if we port-forward on the local machine, the workaround will not work?
3. Forward azblob-service
Forward local port 20001 traffic to port 10000 on the service
Additionally, the username `ec2-user` in `ec2-user@${CONTROL_PLANE_PUBLIC_IP}` limits the script to EC2 `sles` or `sle-micro` instances. It won't work on other distros like rocky, rhel, etc., and other environments like local VM or harvester HAL cluster.
Yes, I gave it a try: if the port forward runs on the Jenkins agent, the backup store cannot be accessed. As for the local port description, it actually means the Kubernetes node; I have updated the Confluence content.
And `azurite` is currently supported on sles only in this PR; I can create another ticket to implement it on other distros.
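The workaround's two control-plane steps could be collected into one helper so a single ssh invocation runs both; this sketch only assembles the commands (the service name, ports, and container name come from the thread above, everything else is an assumption):

```shell
# Emit the two commands that must run on the control plane node:
# the port-forward workaround and the Azurite container creation.
remote_azurite_setup_cmds() {
  local connection_string="$1"
  printf '%s\n' \
    'nohup kubectl port-forward --address 0.0.0.0 service/azblob-service 20001:10000 >/dev/null 2>&1 &' \
    "az storage container create -n longhorn-test-azurite --connection-string '${connection_string}'"
}
```

The output could then be passed to one `ssh <user>@${CONTROL_PLANE_PUBLIC_IP} "$(remote_azurite_setup_cmds "$CONN")"` call, with the user name made distro-specific as discussed above.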
@@ -5,7 +5,7 @@ set -e
 sudo systemctl restart guestregister # Sometimes registration fails on first boot.
 sudo zypper ref
 sudo zypper install -y -t pattern devel_basis
-sudo zypper install -y open-iscsi nfs-client cryptsetup device-mapper
+sudo zypper install -y open-iscsi nfs-client cryptsetup device-mapper samba
Would you like to add `samba` to terraform scripts for other distros as well?
e2e/README.md
Outdated
1. Deploy all backupstore servers (including `NFS` server and `Minio` as s3 server) for test purposes.
1. Deploy all backupstore servers (including `NFS` server and `Minio` as s3 server, `CIFS` and `Azurtize` server) for test purposes.

   For Azurtize, there are some manual steps need to be done after manifest deployed (https://confluence.suse.com/display/LON/Setup+Azurite+Backupstore+For+Testing).
Could we have a test report for the CIFS and Azurite robot test cases run on our Jenkins?
Test case "Test Incremental Restore" passed on my Jenkins with `cifs` and `azurite`. Because our Jenkins would need to point to my branch to run the scripts, it currently cannot produce a report on our Jenkins.
@@ -64,6 +66,10 @@ def set_random_backupstore(request, client):
     elif request.param == "nfs":
         set_backupstore_nfs(client)
         mount_nfs_backupstore(client)
+    elif request.param == "cifs":
+        set_backupstore_cifs(client)
+    elif request.param == "azblob":
If I run other backup related test cases that manipulate the backup store, like `test_backup_volume_list`, `test_backup_block_deletion`, `test_backup_metadata_deletion`, `test_all_replica_restore_failure`, etc., will they pass? If not, should they be skipped?
This is exactly what I'm concerned about: #2155 (comment)
Updated: changed `raise NotImplementedError` to `pytest.skip("Skip test case because the backup store type is not supported")  # NOQA` in the related functions, so the teardown function can still work after the skip, for these test cases:
- test_backup_block_deletion
- test_backup_metadata_deletion
- test_backup_volume_list
- test_all_replica_restore_failure
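The pattern described above, as a hedged sketch (the helper name and the supported-store tuple are illustrative; the real code lives in backupstore.py):

```python
import pytest

SUPPORTED_BACKUP_STORES = ("s3", "nfs")  # illustrative set of stores the helper can manipulate

def backupstore_delete_random_backup_block(backup_store_type):
    # Skipping (instead of raising NotImplementedError) marks the test as
    # skipped while still letting the fixture teardown run afterwards.
    if backup_store_type not in SUPPORTED_BACKUP_STORES:
        pytest.skip("Skip test case because the backup store type is not supported")  # NOQA
    # ... store-specific block deletion would go here ...
```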
e2e/README.md
Outdated
1. Deploy all backupstore servers (including `NFS` server and `Minio` as s3 server) for test purposes.
1. Deploy all backupstore servers (including `NFS` server and `Minio` as s3 server, `CIFS` and `Azurtize` server) for test purposes.

   For Azurtize, there are some manual steps need to be done after manifest deployed (https://confluence.suse.com/display/LON/Setup+Azurite+Backupstore+For+Testing).
@mantissahz @chriscchien Could this be resolved to eliminate the tedious manual steps?
There is a discussion in longhorn/longhorn#9340; currently the manual steps are the workaround to set up Azurite in the test environment. If the discussion reaches a further conclusion, I will follow up and modify the code to match the behavior.
cc @mantissahz
See if it's possible to resolve this.
Force-pushed from d12551d to 3c57f3b (Compare)
Actionable comments posted: 5
🧹 Outside diff range and nitpick comments (12)
test_framework/terraform/aws/ubuntu/user-data-scripts/provision_k3s_agent.sh.tpl (1)
4-4
: Verify package installation error handling.The script should handle potential package installation failures gracefully, especially in CI environments where network issues might occur.
Consider adding error handling:
-apt-get install -y nfs-common cryptsetup dmsetup samba linux-modules-extra-`uname -r`
+if ! apt-get install -y nfs-common cryptsetup dmsetup samba linux-modules-extra-`uname -r`; then
+  echo "Failed to install required packages"
+  exit 1
+fi

test_framework/terraform/equinix/ubuntu/user-data-scripts/provision_k3s_agent.sh.tpl (1)
7-7
: Consider adding package installation verification.For improved robustness in CI environments, consider adding package installation verification.
-apt-get install -y nfs-common cryptsetup dmsetup samba linux-modules-extra-`uname -r`
+apt-get install -y nfs-common cryptsetup dmsetup samba linux-modules-extra-`uname -r`
+
+# Verify critical packages are installed
+for pkg in nfs-common cryptsetup dmsetup samba; do
+  if ! dpkg -l | grep -q "^ii $pkg "; then
+    echo "Failed to install $pkg"
+    exit 1
+  fi
+done

test_framework/terraform/aws/centos/user-data-scripts/provision_rke2_agent.sh.tpl (1)
Line range hint
1-7
: Consider adding SELinux configuration for CIFS.Since this script configures SELinux and installs samba, it would be beneficial to add the necessary SELinux boolean settings for CIFS functionality.
Consider adding these configurations after the SELinux mode setting:
 elif [[ ${selinux_mode} == "permissive" ]]; then
   setenforce 0
 fi
+
+# Configure SELinux for CIFS if in enforcing mode
+if [[ ${selinux_mode} == "enforcing" ]] ; then
+  # Enable SELinux boolean for CIFS
+  setsebool -P samba_enable_home_dirs on
+  setsebool -P samba_share_t on
+fi

test_framework/terraform/equinix/ubuntu/user-data-scripts/provision_rke2_agent.sh.tpl (1)
Line range hint
7-24
: Consider adding system tuning parameters for CIFS performance.While the script correctly sets up kernel modules and system parameters for various functionalities, it might benefit from additional CIFS-specific tuning parameters for optimal performance.
Consider adding these CIFS-related system parameters:
 touch /etc/modules-load.d/modules.conf
 cat > /etc/modules-load.d/modules.conf <<EOF
 uio
 uio_pci_generic
 vfio_pci
 nvme-tcp
 dm_crypt
+cifs
 EOF
 echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
 echo "vm.nr_hugepages=1024" >> /etc/sysctl.conf
+# CIFS performance tuning
+echo "net.ipv4.tcp_window_scaling = 1" >> /etc/sysctl.conf
+echo "net.core.rmem_max = 16777216" >> /etc/sysctl.conf
+echo "net.core.wmem_max = 16777216" >> /etc/sysctl.conf

test_framework/terraform/aws/rockylinux/user-data-scripts/provision_k3s_agent.sh.tpl (2)
13-13
: Consider version pinning for reproducible builds.

The addition of the `samba` package is appropriate for CIFS support. However, consider pinning the version for better reproducibility across environments.

-sudo dnf install -y iscsi-initiator-utils nfs-utils nfs4-acl-tools cryptsetup device-mapper samba
+sudo dnf install -y iscsi-initiator-utils nfs-utils nfs4-acl-tools cryptsetup device-mapper samba-4.15.0
13-13
: Consider additional Samba configurations for robust CIFS support.While the Samba package is installed, the script might benefit from additional configurations to ensure reliable CIFS operations, especially in SELinux enforcing mode:
- SELinux context settings for Samba
- Basic SMB client configurations
- Service enablement if server functionality is needed
Consider adding these configurations after the package installation:
 sudo dnf install -y iscsi-initiator-utils nfs-utils nfs4-acl-tools cryptsetup device-mapper samba
+# Configure SELinux for Samba
+if [[ ${selinux_mode} == "enforcing" ]] ; then
+  sudo setsebool -P samba_enable_home_dirs on
+  sudo setsebool -P samba_share_nfs on
+fi
+
+# Basic SMB client configurations
+sudo cat >> /etc/samba/smb.conf <<EOF
+[global]
+client min protocol = SMB2
+client max protocol = SMB3
+EOF

test_framework/terraform/aws/rockylinux/user-data-scripts/provision_rke2_agent.sh.tpl (1)
13-13
: Consider adding security hardening for Samba.Since CIFS/Samba can be a potential attack vector, consider implementing these security enhancements:
- Ensure SMBv1 is disabled
- Configure minimum SMB protocol version
- Set appropriate file permissions
Add these security configurations to strengthen Samba:
 sudo dnf install -y iscsi-initiator-utils nfs-utils nfs4-acl-tools cryptsetup device-mapper jq nmap-ncat samba
+# Disable SMBv1 and set minimum protocol version to SMBv2
+sudo tee -a /etc/samba/smb.conf > /dev/null <<EOF
+[global]
+    client min protocol = SMB2
+    server min protocol = SMB2
+    security = user
+EOF

manager/integration/README.md (2)
19-21
: Improve documentation formatting and clarity.
- The URL should be properly formatted as a markdown link.
- Consider adding more details about the manual steps required for Azurite setup.
Apply this diff to improve the formatting:
-1. Deploy all backupstore servers(including `NFS` server and `Minio` as s3 server `CIFS` and `Azurite` server) for test purposes.
+1. Deploy all backupstore servers (including `NFS` server, `Minio` as S3 server, `CIFS`, and `Azurite` server) for test purposes.
- For Azurite, there are some manual steps need to be done after manifest deployed(https://confluence.suse.com/display/LON/Setup+Azurite+Backupstore+For+Testing).
+ For Azurite, manual steps are required after deploying the manifest. See [Azurite Backupstore Setup Guide](https://confluence.suse.com/display/LON/Setup+Azurite+Backupstore+For+Testing) for details.

🧰 Tools
🪛 Markdownlint
21-21: null
Bare URL used(MD034, no-bare-urls)
24-26
: Add code block language specification and improve command readability.The kubectl command block should specify the shell language and be properly formatted for better readability.
Apply this diff to improve the formatting:
-```
+```shell
 kubectl create -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/minio-backupstore.yaml \
-  -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/nfs-backupstore.yaml \
-  -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/cifs-backupstore.yaml \
-  -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/azurite-backupstore.yaml
+  -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/nfs-backupstore.yaml \
+  -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/cifs-backupstore.yaml \
+  -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/azurite-backupstore.yaml

pipelines/utilities/install_backupstores.sh (1)
25-28
: Improve secret patching with validation.The current implementation doesn't validate the service IP or secret patching success.
Consider adding validation:
- AZBLOB_ENDPOINT=$(echo -n "http://$(kubectl get svc azblob-service -o jsonpath='{.spec.clusterIP}'):10000/" | base64)
- kubectl -n longhorn-system patch secret azblob-secret \
-   --type=json \
-   -p="[{'op': 'replace', 'path': '/data/AZBLOB_ENDPOINT', 'value': \"${AZBLOB_ENDPOINT}\"}]"
+ # Get service IP with validation
+ SERVICE_IP=$(kubectl get svc azblob-service -o jsonpath='{.spec.clusterIP}')
+ if [ -z "$SERVICE_IP" ]; then
+   echo "Error: Failed to get Azurite service IP"
+   exit 1
+ fi
+
+ # Patch secret with validation
+ AZBLOB_ENDPOINT=$(echo -n "http://${SERVICE_IP}:10000/" | base64)
+ if ! kubectl -n longhorn-system patch secret azblob-secret \
+   --type=json \
+   -p="[{'op': 'replace', 'path': '/data/AZBLOB_ENDPOINT', 'value': \"${AZBLOB_ENDPOINT}\"}]"; then
+   echo "Error: Failed to patch azblob-secret"
+   exit 1
+ fi

e2e/README.md (2)
17-19
: Enhance documentation and consider automation.
- Format the Confluence URL as a proper markdown link.
- Consider automating the manual Azurite setup steps to improve CI/CD efficiency.
Apply this diff to fix the markdown formatting:
- For Azurite, there are some manual steps need to be done after manifest deployed(https://confluence.suse.com/display/LON/Setup+Azurite+Backupstore+For+Testing).
+ For Azurite, there are some manual steps need to be done after manifest deployed. See [Azurite Backupstore Setup Guide](https://confluence.suse.com/display/LON/Setup+Azurite+Backupstore+For+Testing).

Would you like me to help create a script to automate the Azurite setup steps? This could eliminate the manual intervention currently required.
🧰 Tools
🪛 Markdownlint
19-19: null
Bare URL used(MD034, no-bare-urls)
22-24
: Add language specifier to the code block.The deployment commands look good, but the code block should specify the language for better syntax highlighting.
Add `bash` as the language specifier to the code block:

-```
+```bash
 kubectl create -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/minio-backupstore.yaml \
   -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/nfs-backupstore.yaml \
   -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/cifs-backupstore.yaml \
   -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/azurite-backupstore.yaml

📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR and between 4400e94cf45562e2770921746201a69d68df7d60 and 3c57f3b1454f3613d146da9a105b3c66170206d8.

📒 Files selected for processing (23)

* e2e/README.md (1 hunks)
* manager/integration/README.md (1 hunks)
* manager/integration/deploy/test.yaml (1 hunks)
* manager/integration/tests/backupstore.py (14 hunks)
* manager/integration/tests/common.py (1 hunks)
* pipelines/utilities/install_backupstores.sh (1 hunks)
* pipelines/utilities/run_longhorn_e2e_test.sh (3 hunks)
* pipelines/utilities/run_longhorn_test.sh (2 hunks)
* test_framework/scripts/longhorn-setup.sh (3 hunks)
* test_framework/terraform/aws/centos/user-data-scripts/provision_k3s_agent.sh.tpl (1 hunks)
* test_framework/terraform/aws/centos/user-data-scripts/provision_rke2_agent.sh.tpl (1 hunks)
* test_framework/terraform/aws/oracle/user-data-scripts/provision_k3s_agent.sh.tpl (1 hunks)
* test_framework/terraform/aws/oracle/user-data-scripts/provision_rke2_agent.sh.tpl (1 hunks)
* test_framework/terraform/aws/rhel/user-data-scripts/provision_k3s_agent.sh.tpl (1 hunks)
* test_framework/terraform/aws/rhel/user-data-scripts/provision_rke2_agent.sh.tpl (1 hunks)
* test_framework/terraform/aws/rockylinux/user-data-scripts/provision_k3s_agent.sh.tpl (1 hunks)
* test_framework/terraform/aws/rockylinux/user-data-scripts/provision_rke2_agent.sh.tpl (1 hunks)
* test_framework/terraform/aws/sles/user-data-scripts/provision_k3s_server.sh.tpl (2 hunks)
* test_framework/terraform/aws/sles/user-data-scripts/provision_rke2_server.sh.tpl (1 hunks)
* test_framework/terraform/aws/ubuntu/user-data-scripts/provision_k3s_agent.sh.tpl (1 hunks)
* test_framework/terraform/aws/ubuntu/user-data-scripts/provision_rke2_agent.sh.tpl (1 hunks)
* test_framework/terraform/equinix/ubuntu/user-data-scripts/provision_k3s_agent.sh.tpl (1 hunks)
* test_framework/terraform/equinix/ubuntu/user-data-scripts/provision_rke2_agent.sh.tpl (1 hunks)

🚧 Files skipped from review as they are similar to previous changes (5)

* manager/integration/deploy/test.yaml
* pipelines/utilities/run_longhorn_e2e_test.sh
* pipelines/utilities/run_longhorn_test.sh
* test_framework/terraform/aws/sles/user-data-scripts/provision_k3s_server.sh.tpl
* test_framework/terraform/aws/sles/user-data-scripts/provision_rke2_server.sh.tpl

🧰 Additional context used

🪛 Markdownlint

e2e/README.md

19-19: null
Bare URL used (MD034, no-bare-urls)

20-20: null
Fenced code blocks should have a language specified (MD040, fenced-code-language)

manager/integration/README.md

21-21: null
Bare URL used (MD034, no-bare-urls)

22-22: null
Fenced code blocks should have a language specified (MD040, fenced-code-language)

🔇 Additional comments (31)

test_framework/terraform/aws/ubuntu/user-data-scripts/provision_k3s_agent.sh.tpl (1)

4-4: LGTM! Consider additional CIFS configuration.

The addition of the `samba` package is correct and necessary for CIFS support. However, for complete CIFS functionality, you might want to consider adding basic CIFS configuration.

Let's verify if any CIFS configuration files are present in the repository:

test_framework/terraform/aws/centos/user-data-scripts/provision_k3s_agent.sh.tpl (2)

17-17: LGTM: Addition of samba package for CIFS support

The addition of the samba package aligns with the PR objective to support CIFS as a backup store type.

---

17-17: Consider adding package installation verification and service configuration

While the samba package installation is correct, consider:
1. Verifying successful package installation
2. Configuring and starting the samba service if required for CIFS backup store functionality

Consider adding these verifications:

 sudo yum install -y iscsi-initiator-utils nfs-utils nfs4-acl-tools samba
+# Verify samba package installation
+if ! rpm -q samba > /dev/null; then
+  echo "Error: samba package installation failed"
+  exit 1
+fi
+
 sudo systemctl -q enable iscsid
 sudo systemctl start iscsid
+
+# Configure and start samba service if needed for CIFS backup store
+if [[ "${BACKUP_STORE_TYPE}" == "CIFS" ]]; then
+  sudo systemctl enable smb
+  sudo systemctl start smb
+fi
test_framework/terraform/equinix/ubuntu/user-data-scripts/provision_k3s_agent.sh.tpl (2)
7-7
: LGTM: Addition of samba package aligns with CIFS support requirements.

The addition of the `samba` package is appropriate for enabling CIFS support in the testing environment.
Line range hint
40-43
: Verify K3s version compatibility with CIFS workloads.The script installs K3s using a specified version. Ensure the chosen K3s version supports the CIFS-related workloads you plan to run.
✅ Verification successful
Let me gather more information about the CIFS workload requirements and version specifications.
Based on the gathered information, I can now provide the final response:
CIFS workload compatibility is already verified through extensive testing infrastructure
The codebase shows comprehensive CIFS support through Longhorn's backup store functionality, with:
- Dedicated CIFS backup store deployment and configuration
- Integration tests specifically covering CIFS scenarios
- E2E test infrastructure that validates CIFS workloads
The K3s version is properly managed through the
distro_version
variable, which is consistently used across different infrastructure deployments. The CIFS functionality is part of the standard test suite and doesn't require special K3s version considerations.🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Check if the K3s version variable is defined and not empty # This helps ensure we're using a specific, tested version # Test: Check if INSTALL_K3S_VERSION is used in the template rg "INSTALL_K3S_VERSION.*distro_version" -A 2 -B 2 # Test: Look for any K3s version specifications in related config files fd -e yaml -e json -e tf | xargs rg "k3s.*version"Length of output: 2976
Script:
#!/bin/bash # Check for CIFS-related configurations and requirements rg -i "cifs|smb" -A 3 -B 3 # Look for kubernetes version variables and their values rg "k8s_distro_version|kubernetes.*version" --type terraform # Check for any storage class or volume definitions that might use CIFS fd -e yaml -e yml | xargs rg -l "storageClass|persistentVolume.*|csi"Length of output: 15455
test_framework/terraform/aws/oracle/user-data-scripts/provision_k3s_agent.sh.tpl (2)
5-5
: LGTM! Verify samba configuration.

The addition of the `samba` package aligns with the PR objective to support CIFS backup stores. However, ensure that the samba configuration is properly handled elsewhere in the codebase.

✅ Verification successful
LGTM! The samba package installation is correctly integrated.
The search results confirm that the `samba` package installation is part of a well-integrated CIFS backup store implementation:
- CIFS backup store is properly configured through the `cifs-backupstore.yaml` manifest
- The configuration is managed at the Kubernetes level using secrets (`cifs-secret`)
- Integration tests include CIFS backup store testing alongside other storage backends
- No direct Samba configuration is needed as it's handled through Kubernetes/Longhorn's backup store implementation
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for samba configuration files in the codebase
# Expected: Find related configuration files for samba setup

# Search for potential samba config files
echo "Searching for samba configuration files..."
rg -g '!{*.md,*.txt}' -i 'smb\.conf|samba.*\.conf'

# Search for samba-related setup code
echo "Searching for samba setup code..."
rg -g '!{*.md,*.txt}' -i '(configure|setup).*samba'

Length of output: 346
Script:
#!/bin/bash
# Let's check if there are any other files in the codebase that might be configuring or using CIFS/SMB
# This will help us understand the broader context of Samba usage

# Search for CIFS/SMB related code
echo "Searching for CIFS/SMB usage..."
rg -i "cifs|smb" -A 3 -B 3

# Check for mount commands or configurations
echo "Searching for mount configurations..."
rg -i "mount.*cifs|mount.*smb" -A 3 -B 3

# Look for storage class or PV/PVC configurations that might use CIFS
echo "Searching for Kubernetes storage configurations..."
rg -g '*.{yaml,yml}' -i "cifs|smb"

Length of output: 15468
5-5
: Consider security implications of samba installation.

While adding the samba package is necessary for CIFS support, it increases the attack surface. Consider:
- Ensuring samba service is only started when needed
- Implementing proper firewall rules
- Applying security hardening configurations
test_framework/terraform/aws/ubuntu/user-data-scripts/provision_rke2_agent.sh.tpl (2)
Line range hint
1-47
: Verify systemd service dependencies.Since CIFS support is being added, we should ensure that the RKE2 agent service starts after the SMB/CIFS services are ready.
#!/bin/bash
# Description: Check if there are any existing systemd service dependencies
# Expected: Find systemd unit files or service configurations

# Search for systemd service configurations
rg -A 5 "systemctl|\.service"

# Search for specific RKE2 agent service configurations
fd "rke2-agent.service"
4-4
: LGTM: Samba package installation for CIFS support.

The addition of the `samba` package aligns with the PR objective to support CIFS in the CI environment.

Let's verify if similar changes are needed for other distributions:
✅ Verification successful
Based on the search results, I can now verify the review comment. The search shows that the `samba` package installation has been consistently added across different distributions:
- For Ubuntu (apt-based):
apt-get install -y nfs-common cryptsetup dmsetup samba linux-modules-extra-`uname -r`
- For SLES/SUSE (zypper-based):
sudo zypper install -y open-iscsi nfs-client cryptsetup device-mapper samba
- For RHEL/CentOS/Oracle Linux (yum-based):
sudo yum install -y iscsi-initiator-utils nfs-utils nfs4-acl-tools cryptsetup device-mapper nc samba
LGTM: Samba package installation is consistently handled across distributions
The addition of the
samba
package in the Ubuntu RKE2 agent provisioning script aligns with similar changes made in other distribution-specific provisioning scripts, ensuring consistent CIFS support across all supported platforms.🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Check if samba package installation is consistently handled across different distributions # Expected: Find other provisioning scripts that might need similar updates # Search for similar provisioning scripts fd provision_rke2_agent.sh.tpl # Search for package installation patterns in these files rg -A 5 "apt-get install|yum install|zypper install"Length of output: 117685
test_framework/terraform/equinix/ubuntu/user-data-scripts/provision_rke2_agent.sh.tpl (1)
Line range hint
1-55
: Verify error handling for CIFS-related operations.The script uses
set -e
for error handling, which is good. However, since we're adding CIFS support, we should ensure that any CIFS-related failures are properly handled and don't silently fail.Let's check for error handling patterns in similar scripts:
test_framework/terraform/aws/rhel/user-data-scripts/provision_k3s_agent.sh.tpl (1)
13-13
: LGTM! Addition of samba package aligns with CIFS support requirements.The addition of the samba package is necessary for CIFS functionality and is correctly placed in the system package installation step.
Let's verify the package installation and service configuration:
Note: Consider adding samba service management (enable/start) if required by your CIFS implementation.
✅ Verification successful
LGTM! Samba package installation is consistent across agent nodes and aligns with CIFS support requirements.
The verification shows:
- Samba package is consistently added to agent node provisioning scripts across different OS distributions:
- RHEL:
yum install -y ... samba
- Ubuntu:
apt-get install -y ... samba
- SLES:
zypper install -y ... samba
- CentOS/RockyLinux:
yum/dnf install -y ... samba
- CIFS backupstore configuration exists in the test framework
- No explicit samba service management is required as CIFS functionality only needs the package installed
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify samba package installation and configuration in other scripts

# Test 1: Check if samba configuration is handled in other scripts
rg -A 5 'smb\.conf|samba\.conf'

# Test 2: Check if samba service is managed in other scripts
rg -A 5 'systemctl.*smb|systemctl.*samba'

# Test 3: Check for any samba-related security configurations
rg -A 5 'SELinux.*samba|selinux.*smb'
Length of output: 117
Script:
#!/bin/bash
# Let's verify if samba package is consistently added across all similar provisioning scripts
# and check for any existing samba-related configurations

# Check all provisioning scripts for package installations
echo "=== Checking provisioning scripts for package installations ==="
fd "provision.*\.(sh|tpl)" | xargs rg "yum install|apt-get install"

# Check for any samba/CIFS related configurations or mount points
echo -e "\n=== Checking for CIFS/samba configurations ==="
rg -i "cifs|samba|smb" -A 5

# Check for any systemd service configurations
echo -e "\n=== Checking systemd service configurations ==="
fd ".*\.service" | xargs cat 2>/dev/null

# Check for any security or SELinux related configurations
echo -e "\n=== Checking security configurations ==="
rg "selinux|apparmor|security" -A 5
Length of output: 123821
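The consistency check above boils down to asking, for each provisioning script, whether a given install line already lists the required packages. A minimal sketch of that membership test (the function name and package list are illustrative, not part of the repository):

```shell
# Report which required packages are absent from an install line.
# Surrounding the line with spaces lets us match whole package names only.
missing_pkgs() {
  line=" $1 "
  shift
  for pkg in "$@"; do
    case "$line" in
      *" $pkg "*) ;;                 # package present in the install line
      *) printf '%s\n' "$pkg" ;;     # package not found: report it
    esac
  done
}

missing_pkgs "yum install -y nfs-utils samba" samba cifs-utils
# prints: cifs-utils
```

Running the example against a line that omits cifs-utils reports exactly that package, which mirrors the gap the verification found in the RHEL script.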
test_framework/terraform/aws/oracle/user-data-scripts/provision_rke2_agent.sh.tpl (2)
5-5
: Consider security implications of Samba installation.
While adding the
samba
package is necessary for CIFS support, consider:
- Ensuring Samba service doesn't auto-start unless explicitly needed
- Documenting any required Samba configurations for CIFS usage
- Adding security hardening guidelines if Samba service needs to run
Let's check if Samba service is being managed elsewhere:
#!/bin/bash
# Description: Check for Samba service management
# Test: Search for smb/samba service management in scripts
# Expect: Find any explicit Samba service configurations

# Search for Samba service management
rg "systemctl.*smb|systemctl.*samba"
5-5
: LGTM! Addition of samba package aligns with CIFS support requirements.
The addition of the
samba
package is correctly implemented and necessary for CIFS support.
Let's verify similar changes across other provisioning scripts:
test_framework/terraform/aws/rhel/user-data-scripts/provision_rke2_agent.sh.tpl (2)
13-13
: Verify SELinux policies for CIFS
Since this script configures SELinux, ensure that appropriate SELinux policies are in place for CIFS operations.
Run this script to check for SELinux-related CIFS configurations:
13-13
: LGTM: Samba package addition for CIFS support
The addition of the samba package is correct and necessary for enabling CIFS support.
Let's verify if any additional CIFS-related packages or configurations might be needed:
test_framework/terraform/aws/rockylinux/user-data-scripts/provision_rke2_agent.sh.tpl (1)
13-13
: Verify Samba configuration requirements.
The addition of the
samba
package aligns with CIFS support requirements. However, ensure that any necessary Samba configurations (like SMB protocol version, security settings) are handled elsewhere in the deployment pipeline.
Let's verify if there are any Samba configurations in the codebase:
✅ Verification successful
Samba package installation is appropriate and well-integrated
The addition of the
samba
package is verified to be correctly integrated with the existing CIFS backup store functionality. The codebase shows:
- CIFS backup store configuration is managed through Kubernetes manifests downloaded from the Longhorn repository
- CIFS endpoints are properly configured in test environments
- The backup store implementation handles authentication through secrets (
cifs-secret
)
- No direct Samba configuration is needed in the agent nodes as they only need the client capabilities
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for Samba/CIFS configurations in the codebase
# Expected: Find files containing Samba/CIFS configurations

# Search for potential Samba config files
fd -e conf -e cfg -e yaml -e yml | xargs rg -l 'smb|cifs|samba'

# Search for Samba/CIFS related environment variables or configurations
rg -g '!*.md' -i '(smb|cifs|samba).*=.*'
Length of output: 609
Script:
#!/bin/bash
# Let's examine the content of these files to understand the CIFS configurations

# Check the test configuration
echo "=== Checking test.yaml configuration ==="
cat manager/integration/deploy/test.yaml

# Check the CIFS backupstore setup
echo -e "\n=== Checking longhorn-setup.sh ==="
cat test_framework/scripts/longhorn-setup.sh | grep -A 5 -B 5 CIFS

# Check the backup stores installation script
echo -e "\n=== Checking install_backupstores.sh ==="
cat pipelines/utilities/install_backupstores.sh | grep -A 5 -B 5 CIFS

# Check the e2e test script
echo -e "\n=== Checking run_longhorn_e2e_test.sh ==="
cat pipelines/utilities/run_longhorn_e2e_test.sh | grep -A 5 -B 5 CIFS
Length of output: 6543
Script:
#!/bin/bash
# Let's check the CIFS backupstore configuration that's being downloaded
# and any other Samba/CIFS related configurations

# Search for any smb.conf or related configuration files
fd -e conf | grep -i "smb\|samba"

# Check for any security or mount related configurations
rg -g '!*.md' "mount.*cifs|smbclient|security\s*=\s*|vers\s*=\s*"

# Check if there are any other scripts that might configure Samba
fd -e sh -e bash | xargs rg -l "mount.*cifs|smbclient|smb.conf"
Length of output: 856
manager/integration/README.md (1)
19-26
: Verify the availability of new backup store manifests.
The documentation references new YAML files for CIFS and Azurite backup stores. Let's verify their existence in the repository.
✅ Verification successful
Backup store manifests are available and accessible
The verification confirms that both
cifs-backupstore.yaml
andazurite-backupstore.yaml
are available at the specified URLs in the Longhorn repository, validating the documentation's accuracy.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the existence of new backup store manifests
# Test: Check if the YAML files exist in the longhorn repository
for store in cifs azurite; do
  echo "Checking ${store}-backupstore.yaml..."
  curl -s -f "https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/${store}-backupstore.yaml" > /dev/null
  if [ $? -eq 0 ]; then
    echo "✓ ${store}-backupstore.yaml exists"
  else
    echo "✗ ${store}-backupstore.yaml not found"
  fi
done
Length of output: 630
🧰 Tools
🪛 Markdownlint
21-21: null
Bare URL used(MD034, no-bare-urls)
22-22: null
Fenced code blocks should have a language specified(MD040, fenced-code-language)
pipelines/utilities/install_backupstores.sh (2)
4-11
: LGTM! Clean integration of new backup stores.
The changes properly integrate CIFS and Azurite backup stores alongside existing MinIO and NFS stores, maintaining consistent URL patterns and deployment structure.
30-34
:⚠️ Potential issue
Security and reliability concerns in Azurite setup.
Several issues need attention:
- Hardcoded credentials in the Azure connection string pose a security risk
- No cleanup of the port-forward process
- No validation of container creation success
- Using SSH with public IP might not be necessary (as noted in previous review)
Let's verify the necessity of SSH:
Suggested improvements:
- Move credentials to Kubernetes secrets
- Add proper process management
- Add error handling
- Consider removing SSH requirement
- CONTROL_PLANE_PUBLIC_IP=$(cat /tmp/controlplane_public_ip)
- CONTROL_PLANE_PRIVATE_IP=$(kubectl get nodes -o wide | awk '/control-plane/ {print $6}')
- # port forward and az container create need to be run on control node
- ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "nohup kubectl port-forward --address 0.0.0.0 service/azblob-service 20001:10000 > /dev/null 2>&1 &"
- ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "az storage container create -n longhorn-test-azurite --connection-string 'DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://${CONTROL_PLANE_PRIVATE_IP}:20001/devstoreaccount1;'"
+ # Start port forwarding with proper process management
+ PORT_FORWARD_PID=""
+ cleanup_port_forward() {
+   if [ ! -z "$PORT_FORWARD_PID" ]; then
+     kill $PORT_FORWARD_PID
+   fi
+ }
+ trap cleanup_port_forward EXIT
+
+ # Get connection details from secret
+ AZURITE_CONN_STR=$(kubectl get secret azurite-credentials -o jsonpath='{.data.connection-string}' | base64 -d)
+
+ # Start port forwarding
+ kubectl port-forward service/azblob-service 20001:10000 &
+ PORT_FORWARD_PID=$!
+
+ # Wait for port-forward to be ready
+ sleep 5
+
+ # Create container with error handling
+ if ! az storage container create -n longhorn-test-azurite --connection-string "$AZURITE_CONN_STR"; then
+   echo "Failed to create Azurite container"
+   exit 1
+ fi

e2e/README.md (1)
17-24
: Provide test reports for CIFS and Azurite.
As requested in the previous review, please provide test reports from Jenkins for the CIFS and Azurite robot test cases to validate the implementation.
🧰 Tools
🪛 Markdownlint
19-19: null
Bare URL used(MD034, no-bare-urls)
20-20: null
Fenced code blocks should have a language specified(MD040, fenced-code-language)
manager/integration/tests/backupstore.py (6)
20-21
: LGTM: Import statements are correctly added.
The new imports for CIFS and Azurite backup targets are properly placed with other backup target imports.
69-72
: LGTM: Backup store type handling is properly extended.
The
set_random_backupstore
fixture correctly handles the new CIFS and Azure blob storage types.
125-135
: LGTM: CIFS backup store setup is implemented correctly.
The implementation follows the same pattern as other backup store types, properly handling URL and credentials.
137-146
: LGTM: Azurite backup store setup is implemented correctly.
The implementation follows the same pattern as other backup store types, properly handling URL and credentials.
71-71
: Verify test coverage for CIFS operations.
Based on the comment from @yangchiu, we need to verify if backup-related test cases like
test_backup_volume_list
,test_backup_block_deletion
,test_backup_metadata_deletion
, andtest_all_replica_restore_failure
are properly handled or need to be skipped for CIFS.
#!/bin/bash
# Description: Find test cases that might need updates for CIFS support
# Test: Search for backup-related test cases
echo "Backup-related test cases that might need updates:"
rg "def test_backup.*" -B 2 -A 5
307-307
:⚠️ Potential issue
Critical: Missing implementations for CIFS and Azurite backup operations.
While the basic setup for CIFS and Azurite is implemented, several critical backup operations are still missing implementations and currently skip the tests:
- backupstore_get_backup_volume_prefix
- backupstore_get_backup_cfg_file_path
- backupstore_get_volume_cfg_file_path
- backupstore_get_backup_blocks_dir
- backupstore_create_file
- backupstore_delete_file
- backupstore_delete_backup_cfg_file
- backupstore_delete_volume_cfg_file
- backupstore_delete_random_backup_block
- backupstore_count_backup_block_files
This matches the concerns raised in the past review comments. These operations are essential for backup functionality.
Also applies to: 344-344, 367-367, 390-390, 416-416, 466-466, 514-514, 554-554, 596-596, 665-665, 714-714
test_framework/scripts/longhorn-setup.sh (3)
336-337
: LGTM: Backup store URL definitions
The new URL variables for CIFS and Azurite backup stores follow the existing pattern and convention.
339-342
: LGTM: Backup store installation
The installation of CIFS and Azurite backup stores is properly integrated with the existing backup store deployment process.
427-432
: Duplicate code in CIFS handling logic
This issue was previously identified in the past review comments. The suggestion to extract the duplicated CIFS handling logic into a helper function remains valid.
Also applies to: 487-492
manager/integration/tests/common.py (2)
3898-3900
: LGTM!
The
is_backupTarget_cifs
function follows the established pattern for backup target validation functions and correctly checks for the CIFS URL prefix.
3902-3904
: LGTM!
The
is_backupTarget_azurite
function follows the established pattern for backup target validation functions and correctly checks for the Azure Blob Storage URL prefix.
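Both helpers follow the same scheme-prefix pattern as the existing `is_backupTarget_*` predicates in `common.py`. The dispatch can be sketched in shell as a single `case` over the URL scheme; the function name is illustrative, and the `azblob://` prefix for Azure Blob Storage is an assumption based on the review's description, not quoted from the source:

```shell
# Classify a backup target URL by its scheme prefix, mirroring the
# is_backupTarget_* helper pattern described in the review above.
backup_target_type() {
  case "$1" in
    s3://*)     echo s3 ;;
    nfs://*)    echo nfs ;;
    cifs://*)   echo cifs ;;
    azblob://*) echo azblob ;;
    *)          echo unknown ;;
  esac
}

backup_target_type "cifs://longhorn-test-cifs-svc.default/backupstore"
# prints: cifs
```

Keeping the classification in one place avoids scattering prefix string comparisons through the test fixtures.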
@@ -14,7 +14,7 @@ sed -i 's|#baseurl=http://mirror.centos.org|baseurl=http://vault.centos.org|g' /
 sudo yum update -y
 sudo yum group install -y "Development Tools"
-sudo yum install -y iscsi-initiator-utils nfs-utils nfs4-acl-tools nc
+sudo yum install -y iscsi-initiator-utils nfs-utils nfs4-acl-tools nc samba
💡 Codebase verification
Missing cifs-utils package for CIFS mount support
The samba
package installation is good, but for complete CIFS functionality, the cifs-utils
package is also required as it provides essential mount.cifs utilities. Please add cifs-utils
to the package installation line:
sudo yum install -y iscsi-initiator-utils nfs-utils nfs4-acl-tools nc samba cifs-utils
🔗 Analysis chain
LGTM on samba package installation.
The addition of the samba
package is correct and aligns with the PR objective to support CIFS in the testing environment.
Let's verify if any additional samba-related packages or configurations might be needed:
Let me gather more information about the CIFS configuration to ensure we have all necessary dependencies.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for common samba dependencies and configuration patterns in other files
# Test 1: Search for other samba-related packages in provisioning scripts
echo "Checking for samba package patterns in other provisioning scripts..."
rg -g '*/provision*.sh*' 'yum.*install.*samba'
# Test 2: Search for samba/cifs configuration patterns
echo "Checking for samba/cifs configuration patterns..."
rg -g '*.sh*' -g '*.yaml' -g '*.yml' 'smb\.conf|cifs'
Length of output: 1795
Script:
#!/bin/bash
# Check for any samba-related configuration files or mount utilities
echo "Checking for samba/cifs mount utilities and configurations..."
rg -g '*.sh*' -g '*.yaml' -g '*.yml' 'mount\.cifs|cifs-utils'
# Check if cifs-utils is installed alongside samba in any provisioning scripts
echo "Checking for cifs-utils package installation..."
fd -g '*provision*.sh*' -x grep -l 'cifs-utils'
# Look for any CIFS/SMB related environment variables or configurations
echo "Checking for CIFS/SMB related configurations..."
rg -g '*.sh*' -g '*.yaml' -g '*.yml' 'SMB_|CIFS_|smbclient'
Length of output: 1436
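A quick way to confirm the CIFS client stack is actually present on a provisioned node is to probe for the binaries that the `samba` and `cifs-utils` packages ship. This is a hedged sketch for manual verification, not part of the provisioning scripts; the tool names are the usual ones on RHEL-family and Debian-family systems:

```shell
# Probe for the CIFS client tools discussed in this review:
# mount.cifs comes from cifs-utils, smbclient from the samba client packages.
required_tools="mount.cifs smbclient"
for tool in $required_tools; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: missing"
  fi
done
```

On a node provisioned with only `samba`, `mount.cifs` typically shows as missing, which is exactly the gap the `cifs-utils` suggestion addresses.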
@@ -4,7 +4,7 @@ set -e
 set -x
 apt-get update
-apt-get install -y nfs-common cryptsetup dmsetup linux-modules-extra-`uname -r`
+apt-get install -y nfs-common cryptsetup dmsetup samba linux-modules-extra-`uname -r`
💡 Codebase verification
Additional CIFS-related package required for complete functionality
The codebase shows active usage of CIFS for backup stores and mounting operations. While samba
is being added, the cifs-utils
package is also required for full CIFS functionality, especially for mounting CIFS shares. Consider adding cifs-utils
to the package installation list:
apt-get install -y nfs-common cryptsetup dmsetup samba cifs-utils linux-modules-extra-`uname -r`
Key evidence from codebase:
- CIFS backup store configuration:
cifs://longhorn-test-cifs-svc.default/backupstore
- Multiple scripts handling CIFS operations in test and integration environments
🔗 Analysis chain
LGTM: Package installation aligns with CIFS support requirements.
The addition of samba
package is necessary for CIFS support and aligns with the PR objectives. The package is correctly added alongside other essential system packages.
Let's verify if any additional CIFS-related packages might be needed:
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for common CIFS-related package dependencies in Ubuntu
# Expected: Should find references to cifs-utils or similar packages in other provisioning scripts
# Search for CIFS-related package installations in other provisioning scripts
rg -l 'cifs-utils|smbclient'
Length of output: 68
Script:
#!/bin/bash
# Let's check for any CIFS-related configurations or mount operations in the codebase
rg -A 3 'cifs|samba|smb' --type sh
# Also check for any documentation or configuration files mentioning CIFS setup
rg -A 3 'cifs|samba|smb' --type yaml --type md
Length of output: 6078
@@ -10,7 +10,7 @@ fi
 sudo yum update -y
 sudo yum group install -y "Development Tools"
-sudo yum install -y iscsi-initiator-utils nfs-utils nfs4-acl-tools cryptsetup device-mapper nc
+sudo yum install -y iscsi-initiator-utils nfs-utils nfs4-acl-tools cryptsetup device-mapper nc samba
🛠️ Refactor suggestion
Consider adding cifs-utils package
While samba provides the core functionality, consider adding cifs-utils package which provides additional tools for mounting and managing CIFS shares.
Apply this diff to add the cifs-utils package:
-sudo yum install -y iscsi-initiator-utils nfs-utils nfs4-acl-tools cryptsetup device-mapper nc samba
+sudo yum install -y iscsi-initiator-utils nfs-utils nfs4-acl-tools cryptsetup device-mapper nc samba cifs-utils
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
sudo yum install -y iscsi-initiator-utils nfs-utils nfs4-acl-tools cryptsetup device-mapper nc samba cifs-utils
RETRY=0
MAX_RETRY=60
until (kubectl get pods | grep 'longhorn-test-azblob' | grep 'Running'); do
  echo 'Waiting azurite pod running'
  sleep 5
  if [ $RETRY -eq $MAX_RETRY ]; then
    break
  fi
  RETRY=$((RETRY+1))
done
🛠️ Refactor suggestion
Enhance retry mechanism with proper error handling.
The current retry implementation has several areas for improvement:
- No error handling when max retries are reached
- Magic numbers in retry constants
Consider this improved implementation:
- RETRY=0
- MAX_RETRY=60
+ readonly AZURITE_POD_WAIT_SECONDS=300
+ readonly AZURITE_CHECK_INTERVAL=5
+ end_time=$(($(date +%s) + ${AZURITE_POD_WAIT_SECONDS}))
until (kubectl get pods | grep 'longhorn-test-azblob' | grep 'Running'); do
echo 'Waiting azurite pod running'
- sleep 5
- if [ $RETRY -eq $MAX_RETRY ]; then
+ sleep ${AZURITE_CHECK_INTERVAL}
+ if [ $(date +%s) -gt $end_time ]; then
+ echo "Error: Timeout waiting for Azurite pod to be ready"
+ exit 1
+ fi
- break
fi
- RETRY=$((RETRY+1))
done
📝 Committable suggestion
readonly AZURITE_POD_WAIT_SECONDS=300
readonly AZURITE_CHECK_INTERVAL=5
end_time=$(($(date +%s) + ${AZURITE_POD_WAIT_SECONDS}))
until (kubectl get pods | grep 'longhorn-test-azblob' | grep 'Running'); do
  echo 'Waiting azurite pod running'
  sleep ${AZURITE_CHECK_INTERVAL}
  if [ $(date +%s) -gt $end_time ]; then
    echo "Error: Timeout waiting for Azurite pod to be ready"
    exit 1
  fi
done
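The deadline pattern in the suggestion generalizes to a small reusable helper: poll an arbitrary command until it succeeds or a wall-clock deadline passes. A sketch under that idea (the helper name and arguments are illustrative, not taken from the scripts under review):

```shell
# wait_for TIMEOUT_SECONDS INTERVAL_SECONDS COMMAND [ARGS...]
# Returns 0 as soon as COMMAND succeeds, 1 if the deadline passes first.
wait_for() {
  _timeout="$1"; _interval="$2"; shift 2
  _end=$(( $(date +%s) + _timeout ))
  until "$@"; do
    if [ "$(date +%s)" -ge "$_end" ]; then
      echo "timeout waiting for: $*" >&2
      return 1
    fi
    sleep "$_interval"
  done
}

# Example: poll for a marker file (hypothetical path) for up to 10 seconds
# wait_for 10 1 test -e /tmp/ready-marker
```

With a helper like this, both the Azurite pod wait and any future backup store readiness checks can share one tested loop instead of copy-pasted RETRY counters.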
setup_azuitize_backup_store(){
  RETRY=0
  MAX_RETRY=60
  until (kubectl get pods | grep 'longhorn-test-azblob' | grep 'Running'); do
    echo 'Waiting azurite pod running'
    sleep 5
    if [ $RETRY -eq $MAX_RETRY ]; then
      break
    fi
    RETRY=$((RETRY+1))
  done

  AZBLOB_ENDPOINT=$(echo -n "http://$(kubectl get svc azblob-service -o jsonpath='{.spec.clusterIP}'):10000/" | base64)
  kubectl -n longhorn-system patch secret azblob-secret \
    --type=json \
    -p="[{'op': 'replace', 'path': '/data/AZBLOB_ENDPOINT', 'value': \"${AZBLOB_ENDPOINT}\"}]"

  CONTROL_PLANE_PUBLIC_IP=$(cat /tmp/controlplane_public_ip)
  CONTROL_PLANE_PRIVATE_IP=$(kubectl get nodes -o wide | awk '/control-plane/ {print $6}')
  # port forward and az container create need to be run on control node
  ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "nohup kubectl port-forward --address 0.0.0.0 service/azblob-service 20001:10000 > /dev/null 2>&1 &"
  ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "az storage container create -n longhorn-test-azurite --connection-string 'DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://${CONTROL_PLANE_PRIVATE_IP}:20001/devstoreaccount1;'"
}
Several improvements needed in Azurite setup function
The function has several areas that need attention:
- Magic numbers in retry logic
- Missing error handling for SSH commands
- Background port-forward process may persist
- Hardcoded storage account credentials in command
Suggested improvements:
setup_azuitize_backup_store(){
+ # Maximum wait time for Azurite pod: 5 minutes
RETRY=0
- MAX_RETRY=60
+ MAX_RETRY=30
+ RETRY_INTERVAL=10
until (kubectl get pods | grep 'longhorn-test-azblob' | grep 'Running'); do
echo 'Waiting azurite pod running'
- sleep 5
+ sleep $RETRY_INTERVAL
if [ $RETRY -eq $MAX_RETRY ]; then
+ echo "Error: Timeout waiting for Azurite pod"
break
fi
RETRY=$((RETRY+1))
done
+ # Verify pod is actually running
+ if ! kubectl get pods | grep 'longhorn-test-azblob' | grep -q 'Running'; then
+ echo "Error: Azurite pod failed to start"
+ return 1
+ fi
AZBLOB_ENDPOINT=$(echo -n "http://$(kubectl get svc azblob-service -o jsonpath='{.spec.clusterIP}'):10000/" | base64)
kubectl -n longhorn-system patch secret azblob-secret \
--type=json \
-p="[{'op': 'replace', 'path': '/data/AZBLOB_ENDPOINT', 'value': \"${AZBLOB_ENDPOINT}\"}]"
CONTROL_PLANE_PUBLIC_IP=$(cat /tmp/controlplane_public_ip)
CONTROL_PLANE_PRIVATE_IP=$(kubectl get nodes -o wide | awk '/control-plane/ {print $6}')
+
+ # Store port-forward PID for cleanup
+ PORT_FORWARD_PID_FILE="/tmp/azurite_port_forward.pid"
+
# port forward and az container create need to be run on control node
- ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "nohup kubectl port-forward --address 0.0.0.0 service/azblob-service 20001:10000 > /dev/null 2>&1 &"
+ if ! ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "nohup kubectl port-forward --address 0.0.0.0 service/azblob-service 20001:10000 > /dev/null 2>&1 & echo \$! > ${PORT_FORWARD_PID_FILE}"; then
+ echo "Error: Failed to start port forwarding"
+ return 1
+ fi
+
+ # Load connection string from secret or config to avoid hardcoding
+ STORAGE_CONN_STRING="DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=\${AZURITE_ACCOUNT_KEY};BlobEndpoint=http://${CONTROL_PLANE_PRIVATE_IP}:20001/devstoreaccount1;"
- ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "az storage container create -n longhorn-test-azurite --connection-string 'DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://${CONTROL_PLANE_PRIVATE_IP}:20001/devstoreaccount1;'"
+ if ! ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "az storage container create -n longhorn-test-azurite --connection-string '${STORAGE_CONN_STRING}'"; then
+ echo "Error: Failed to create storage container"
+ # Cleanup port-forward
+ ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "kill \$(cat ${PORT_FORWARD_PID_FILE})"
+ return 1
+ fi
}
📝 Committable suggestion
setup_azuitize_backup_store(){
  # Maximum wait time for Azurite pod: 5 minutes
  RETRY=0
  MAX_RETRY=30
  RETRY_INTERVAL=10
  until (kubectl get pods | grep 'longhorn-test-azblob' | grep 'Running'); do
    echo 'Waiting azurite pod running'
    sleep $RETRY_INTERVAL
    if [ $RETRY -eq $MAX_RETRY ]; then
      echo "Error: Timeout waiting for Azurite pod"
      break
    fi
    RETRY=$((RETRY+1))
  done

  # Verify pod is actually running
  if ! kubectl get pods | grep 'longhorn-test-azblob' | grep -q 'Running'; then
    echo "Error: Azurite pod failed to start"
    return 1
  fi

  AZBLOB_ENDPOINT=$(echo -n "http://$(kubectl get svc azblob-service -o jsonpath='{.spec.clusterIP}'):10000/" | base64)
  kubectl -n longhorn-system patch secret azblob-secret \
    --type=json \
    -p="[{'op': 'replace', 'path': '/data/AZBLOB_ENDPOINT', 'value': \"${AZBLOB_ENDPOINT}\"}]"

  CONTROL_PLANE_PUBLIC_IP=$(cat /tmp/controlplane_public_ip)
  CONTROL_PLANE_PRIVATE_IP=$(kubectl get nodes -o wide | awk '/control-plane/ {print $6}')

  # Store port-forward PID for cleanup
  PORT_FORWARD_PID_FILE="/tmp/azurite_port_forward.pid"

  # port forward and az container create need to be run on control node
  if ! ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "nohup kubectl port-forward --address 0.0.0.0 service/azblob-service 20001:10000 > /dev/null 2>&1 & echo \$! > ${PORT_FORWARD_PID_FILE}"; then
    echo "Error: Failed to start port forwarding"
    return 1
  fi

  # Load connection string from secret or config to avoid hardcoding
  STORAGE_CONN_STRING="DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=\${AZURITE_ACCOUNT_KEY};BlobEndpoint=http://${CONTROL_PLANE_PRIVATE_IP}:20001/devstoreaccount1;"
  if ! ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "az storage container create -n longhorn-test-azurite --connection-string '${STORAGE_CONN_STRING}'"; then
    echo "Error: Failed to create storage container"
    # Cleanup port-forward
    ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "kill \$(cat ${PORT_FORWARD_PID_FILE})"
    return 1
  fi
}
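One subtle step in this function is the secret patch: Kubernetes secret `data` values must be base64-encoded, and the endpoint has to be encoded without a trailing newline (hence the `echo -n` in the script), otherwise the decoded value carries a stray newline. A tiny round-trip sketch of that encoding step (the cluster IP is a made-up example, not taken from any environment):

```shell
endpoint="http://10.43.0.5:10000/"            # illustrative cluster-IP endpoint
encoded=$(printf '%s' "$endpoint" | base64)   # value suitable for a secret's data field
decoded=$(printf '%s' "$encoded" | base64 -d) # what a consumer reads back
echo "$decoded"
# prints: http://10.43.0.5:10000/
```

Using `printf '%s'` (or `echo -n`) keeps the newline out of the encoded payload, which is easy to miss when patching secrets by hand.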
Force-pushed from 3c57f3b to 19842e9 (Compare)
Actionable comments posted: 0
🧹 Outside diff range and nitpick comments (3)
manager/integration/README.md (2)
19-19
: Improve documentation formatting and accessibility
- The bare URL should be properly formatted as a markdown link
- Consider making the documentation publicly accessible or providing alternative public documentation
Apply this diff to improve the formatting:
-For Azurite, there are some manual steps need to be done after manifest deployed(https://confluence.suse.com/display/LON/Setup+Azurite+Backupstore+For+Testing).
+For Azurite, additional manual steps are required after deploying the manifest. See [Azurite Backupstore Setup Guide](https://confluence.suse.com/display/LON/Setup+Azurite+Backupstore+For+Testing) for details.

Also applies to: 21-21
24-26
: Improve command block formatting and readability
- Add language specification to the code block
- Consider improving command readability with line continuations
Apply this diff to improve the formatting:
-```
+```bash
 kubectl create -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/minio-backupstore.yaml \
- -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/nfs-backupstore.yaml \
- -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/cifs-backupstore.yaml \
- -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/azurite-backupstore.yaml
+ -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/nfs-backupstore.yaml \
+ -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/cifs-backupstore.yaml \
+ -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/azurite-backupstore.yaml

e2e/README.md (1)
20-20
: Add language specifications to code blocks.
For better readability and syntax highlighting, specify the language for all code blocks. For example:
-```
+```shell
 kubectl create -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/minio-backupstore.yaml \

🧰 Tools
🪛 Markdownlint
20-20: null
Fenced code blocks should have a language specified(MD040, fenced-code-language)
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (23)
- `e2e/README.md` (1 hunks)
- `manager/integration/README.md` (1 hunks)
- `manager/integration/deploy/test.yaml` (1 hunks)
- `manager/integration/tests/backupstore.py` (14 hunks)
- `manager/integration/tests/common.py` (1 hunks)
- `pipelines/utilities/install_backupstores.sh` (1 hunks)
- `pipelines/utilities/run_longhorn_e2e_test.sh` (3 hunks)
- `pipelines/utilities/run_longhorn_test.sh` (2 hunks)
- `test_framework/scripts/longhorn-setup.sh` (3 hunks)
- `test_framework/terraform/aws/centos/user-data-scripts/provision_k3s_agent.sh.tpl` (1 hunks)
- `test_framework/terraform/aws/centos/user-data-scripts/provision_rke2_agent.sh.tpl` (1 hunks)
- `test_framework/terraform/aws/oracle/user-data-scripts/provision_k3s_agent.sh.tpl` (1 hunks)
- `test_framework/terraform/aws/oracle/user-data-scripts/provision_rke2_agent.sh.tpl` (1 hunks)
- `test_framework/terraform/aws/rhel/user-data-scripts/provision_k3s_agent.sh.tpl` (1 hunks)
- `test_framework/terraform/aws/rhel/user-data-scripts/provision_rke2_agent.sh.tpl` (1 hunks)
- `test_framework/terraform/aws/rockylinux/user-data-scripts/provision_k3s_agent.sh.tpl` (1 hunks)
- `test_framework/terraform/aws/rockylinux/user-data-scripts/provision_rke2_agent.sh.tpl` (1 hunks)
- `test_framework/terraform/aws/sles/user-data-scripts/provision_k3s_server.sh.tpl` (2 hunks)
- `test_framework/terraform/aws/sles/user-data-scripts/provision_rke2_server.sh.tpl` (1 hunks)
- `test_framework/terraform/aws/ubuntu/user-data-scripts/provision_k3s_agent.sh.tpl` (1 hunks)
- `test_framework/terraform/aws/ubuntu/user-data-scripts/provision_rke2_agent.sh.tpl` (1 hunks)
- `test_framework/terraform/equinix/ubuntu/user-data-scripts/provision_k3s_agent.sh.tpl` (1 hunks)
- `test_framework/terraform/equinix/ubuntu/user-data-scripts/provision_rke2_agent.sh.tpl` (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (19)
- manager/integration/deploy/test.yaml
- manager/integration/tests/backupstore.py
- pipelines/utilities/install_backupstores.sh
- pipelines/utilities/run_longhorn_e2e_test.sh
- pipelines/utilities/run_longhorn_test.sh
- test_framework/terraform/aws/centos/user-data-scripts/provision_k3s_agent.sh.tpl
- test_framework/terraform/aws/centos/user-data-scripts/provision_rke2_agent.sh.tpl
- test_framework/terraform/aws/oracle/user-data-scripts/provision_k3s_agent.sh.tpl
- test_framework/terraform/aws/oracle/user-data-scripts/provision_rke2_agent.sh.tpl
- test_framework/terraform/aws/rhel/user-data-scripts/provision_k3s_agent.sh.tpl
- test_framework/terraform/aws/rhel/user-data-scripts/provision_rke2_agent.sh.tpl
- test_framework/terraform/aws/rockylinux/user-data-scripts/provision_k3s_agent.sh.tpl
- test_framework/terraform/aws/rockylinux/user-data-scripts/provision_rke2_agent.sh.tpl
- test_framework/terraform/aws/sles/user-data-scripts/provision_k3s_server.sh.tpl
- test_framework/terraform/aws/sles/user-data-scripts/provision_rke2_server.sh.tpl
- test_framework/terraform/aws/ubuntu/user-data-scripts/provision_k3s_agent.sh.tpl
- test_framework/terraform/aws/ubuntu/user-data-scripts/provision_rke2_agent.sh.tpl
- test_framework/terraform/equinix/ubuntu/user-data-scripts/provision_k3s_agent.sh.tpl
- test_framework/terraform/equinix/ubuntu/user-data-scripts/provision_rke2_agent.sh.tpl
🧰 Additional context used
🪛 Markdownlint
e2e/README.md
19-19: null
Bare URL used
(MD034, no-bare-urls)
20-20: null
Fenced code blocks should have a language specified
(MD040, fenced-code-language)
manager/integration/README.md
21-21: null
Bare URL used
(MD034, no-bare-urls)
22-22: null
Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🔇 Additional comments (11)
manager/integration/README.md (1)
`24-26`: Verify manifest URLs are accessible
Let's ensure all the manifest URLs are valid and accessible.
✅ Verification successful
All manifest URLs are accessible and valid
All the referenced manifest URLs in the README.md file are verified to be accessible:
- nfs-backupstore.yaml
- cifs-backupstore.yaml
- azurite-backupstore.yaml
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify that all manifest URLs are accessible
echo "Checking manifest URLs..."
for manifest in minio-backupstore.yaml nfs-backupstore.yaml cifs-backupstore.yaml azurite-backupstore.yaml; do
url="https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/$manifest"
if curl --output /dev/null --silent --head --fail "$url"; then
echo "✓ $manifest is accessible"
else
echo "✗ $manifest is not accessible"
fi
done
Length of output: 1876
e2e/README.md (3)
`17-17`: LGTM!
The description clearly indicates the addition of CIFS and Azurite backup store servers.

`19-19`: 🛠️ Refactor suggestion
Improve documentation and consider automation.
- Format the URL as a proper markdown link:
-For Azurite, there are some manual steps need to be done after manifest deployed(https://confluence.suse.com/display/LON/Setup+Azurite+Backupstore+For+Testing).
+For Azurite, some manual steps are needed after the manifest is deployed. See the [Azurite Backupstore Setup Guide](https://confluence.suse.com/display/LON/Setup+Azurite+Backupstore+For+Testing).
- Consider adding:
- Brief explanation of why manual steps are required
- Estimated time for manual setup
- Impact on automation pipelines
Note: This aligns with the previous review comment about eliminating manual steps. Consider automating this process to improve user experience and CI reliability.
🧰 Tools
🪛 Markdownlint
19-19: null
Bare URL used
(MD034, no-bare-urls)
`22-24`: LGTM! The deployment commands are clear and well-structured.
The addition of CIFS and Azurite YAML files aligns with the PR objectives.
Let's verify the existence of the new YAML files:
✅ Verification successful
Let me verify the URLs directly since the local files weren't found.
All backup store YAML files are accessible and valid
The verification confirms that all three YAML files (nfs-backupstore.yaml, cifs-backupstore.yaml, and azurite-backupstore.yaml) are accessible at the specified URLs in the Longhorn repository, with HTTP 200 responses and valid content lengths. The deployment commands in the README are correct and functional.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the existence of backup store YAML files
# Expected: Both YAML files should exist in the repository
echo "Checking for CIFS and Azurite YAML files..."
for file in "cifs-backupstore.yaml" "azurite-backupstore.yaml"; do
fd -t f "$file" || echo "Warning: $file not found"
done
Length of output: 293
Script:
#!/bin/bash
# Check if the YAML files are accessible at the specified URLs
for url in "https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/nfs-backupstore.yaml" \
"https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/cifs-backupstore.yaml" \
"https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/azurite-backupstore.yaml"; do
echo "Checking URL: $url"
curl -s -f -I "$url" || echo "Warning: Failed to access $url"
done
Length of output: 4666
test_framework/scripts/longhorn-setup.sh (5)
`336-337`: LGTM: New backup store URLs added.
The new URLs for CIFS and Azurite backup stores follow the established pattern and correctly point to the master branch.

`339-342`: LGTM: Backup stores installation updated.
The changes correctly add CIFS and Azurite backup stores to the installation process, maintaining consistency with the existing pattern.

`345-367`: Reusing existing review comment.
The previous review comment about improvements needed in the Azurite setup function is still valid and applicable.

`427-432`: Reusing existing review comment.
The previous review comment about extracting duplicated CIFS handling logic is still valid and applicable to these changes.

`487-492`: Reusing existing review comment.
The previous review comment about extracting duplicated CIFS handling logic is still valid and applicable to these changes.

manager/integration/tests/common.py (2)

`3898-3900`: LGTM! Clean implementation for CIFS backup target check.
The function follows the established pattern for backup target validation and correctly checks for the "cifs://" prefix.

`3902-3904`: LGTM! Clean implementation for Azurite backup target check.
The function follows the established pattern for backup target validation and correctly checks for the "azblob://" prefix.
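For illustration only (these snake_case names mirror the reviewed helpers but are not the suite's actual identifiers), prefix checks of this kind can be exercised standalone:

```python
def is_backup_target_cifs(s):
    # True when the backup target URL uses the CIFS scheme
    return s.startswith("cifs://")


def is_backup_target_azurite(s):
    # True when the backup target URL uses the Azure Blob scheme
    return s.startswith("azblob://")


print(is_backup_target_cifs("cifs://longhorn-test-cifs-svc.default/backupstore"))    # True
print(is_backup_target_azurite("azblob://longhorn-test-azurite@core.windows.net/"))  # True
print(is_backup_target_azurite("s3://backupbucket@us-east-1/backupstore"))           # False
```

Because the checks are pure string-prefix tests, adding another store type only requires one more predicate.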
Updated!

The title and description of this PR are still incorrect.
c6d9aaf
to
0ff93c2
Compare
b815e95
to
54891df
Compare
Actionable comments posted: 7
🧹 Outside diff range and nitpick comments (6)
manager/integration/README.md (2)
`19-21`: Improve documentation formatting and clarity.
- Format the Confluence URL as a proper markdown link
- Consider adding a brief summary of the manual steps required for Azurite setup
Apply this diff to improve the documentation:
```diff
 1. Deploy all backupstore servers(including `NFS` server and `Minio` as s3 server `CIFS` and `Azurite` server) for test purposes.
-   For Azurite, there are some manual steps need to be done after manifest deployed(https://confluence.suse.com/display/LON/Setup+Azurite+Backupstore+For+Testing).
+   For Azurite, there are some manual steps needed after deploying the manifest. See [Azurite Backupstore Setup Guide](https://confluence.suse.com/display/LON/Setup+Azurite+Backupstore+For+Testing) for detailed instructions.
```
🪛 Markdownlint
21-21: null
Bare URL used(MD034, no-bare-urls)
`22-26`: Improve code block formatting.
- Add shell language specification to the code block
- Align the backslashes for better readability
Apply this diff to improve the formatting:
````diff
-```
+```shell
 kubectl create -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/minio-backupstore.yaml \
-                  -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/nfs-backupstore.yaml \
-                  -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/cifs-backupstore.yaml \
-                  -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/azurite-backupstore.yaml
+  -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/nfs-backupstore.yaml \
+  -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/cifs-backupstore.yaml \
+  -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/azurite-backupstore.yaml
````

🧰 Tools
🪛 Markdownlint
22-22: Fenced code blocks should have a language specified (MD040, fenced-code-language)

manager/integration/tests/backupstore.py (1)

Line range hint `1-766`: Architecture Advice: Consider implementing a Backup Store Interface.

The current implementation duplicates similar patterns across different backup store types. Consider implementing a backup store interface or abstract class to:
1. Ensure consistent implementation across different backup store types
2. Reduce code duplication
3. Make it easier to add new backup store types
4. Enforce implementation of all required methods

Example interface structure:

```python
from abc import ABC, abstractmethod

class BackupStore(ABC):
    @abstractmethod
    def get_backup_volume_prefix(self, volume_name: str) -> str:
        pass

    @abstractmethod
    def get_backup_cfg_file_path(self, volume_name: str, backup_name: str) -> str:
        pass

    # ... other abstract methods ...

class CIFSBackupStore(BackupStore):
    def get_backup_volume_prefix(self, volume_name: str) -> str:
        # CIFS-specific implementation
        pass

class AzuriteBackupStore(BackupStore):
    def get_backup_cfg_file_path(self, volume_name: str, backup_name: str) -> str:
        # Azurite-specific implementation
        pass
```
test_framework/scripts/longhorn-setup.sh (1)
`339-342`: Fix typo in function name and approve changes.

The backup store installation changes look good, but there's a typo in the function name: "setup_azuitize_backup_store" should be "setup_azurite_backup_store".

```diff
- setup_azuitize_backup_store
+ setup_azurite_backup_store
```

manager/integration/tests/common.py (2)

`3898-3900`: Consider renaming function and adding docstring to follow Python conventions.

The function should be renamed from camelCase to snake_case as `is_backup_target_cifs()` to follow Python naming conventions. Also add a docstring to document the function's purpose.

```diff
-def is_backupTarget_cifs(s):
+def is_backup_target_cifs(s):
+    """
+    Check if the backup target URL is using CIFS protocol.
+
+    Args:
+        s: The backup target URL string to check
+    Returns:
+        bool: True if URL starts with "cifs://", False otherwise
+    """
     return s.startswith("cifs://")
```

`3902-3904`: Consider renaming function and adding docstring to follow Python conventions.

The function should be renamed from camelCase to snake_case as `is_backup_target_azurite()` to follow Python naming conventions. Also add a docstring to document the function's purpose.

```diff
-def is_backupTarget_azurite(s):
+def is_backup_target_azurite(s):
+    """
+    Check if the backup target URL is using Azure Blob Storage protocol.
+
+    Args:
+        s: The backup target URL string to check
+    Returns:
+        bool: True if URL starts with "azblob://", False otherwise
+    """
     return s.startswith("azblob://")
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (23)
- `e2e/README.md` (1 hunks)
- `manager/integration/README.md` (1 hunks)
- `manager/integration/deploy/test.yaml` (1 hunks)
- `manager/integration/tests/backupstore.py` (14 hunks)
- `manager/integration/tests/common.py` (1 hunks)
- `pipelines/utilities/install_backupstores.sh` (1 hunks)
- `pipelines/utilities/run_longhorn_e2e_test.sh` (3 hunks)
- `pipelines/utilities/run_longhorn_test.sh` (2 hunks)
- `test_framework/scripts/longhorn-setup.sh` (3 hunks)
- `test_framework/terraform/aws/centos/user-data-scripts/provision_k3s_agent.sh.tpl` (1 hunks)
- `test_framework/terraform/aws/centos/user-data-scripts/provision_rke2_agent.sh.tpl` (1 hunks)
- `test_framework/terraform/aws/oracle/user-data-scripts/provision_k3s_agent.sh.tpl` (1 hunks)
- `test_framework/terraform/aws/oracle/user-data-scripts/provision_rke2_agent.sh.tpl` (1 hunks)
- `test_framework/terraform/aws/rhel/user-data-scripts/provision_k3s_agent.sh.tpl` (1 hunks)
- `test_framework/terraform/aws/rhel/user-data-scripts/provision_rke2_agent.sh.tpl` (1 hunks)
- `test_framework/terraform/aws/rockylinux/user-data-scripts/provision_k3s_agent.sh.tpl` (1 hunks)
- `test_framework/terraform/aws/rockylinux/user-data-scripts/provision_rke2_agent.sh.tpl` (1 hunks)
- `test_framework/terraform/aws/sles/user-data-scripts/provision_k3s_server.sh.tpl` (2 hunks)
- `test_framework/terraform/aws/sles/user-data-scripts/provision_rke2_server.sh.tpl` (1 hunks)
- `test_framework/terraform/aws/ubuntu/user-data-scripts/provision_k3s_agent.sh.tpl` (1 hunks)
- `test_framework/terraform/aws/ubuntu/user-data-scripts/provision_rke2_agent.sh.tpl` (1 hunks)
- `test_framework/terraform/equinix/ubuntu/user-data-scripts/provision_k3s_agent.sh.tpl` (1 hunks)
- `test_framework/terraform/equinix/ubuntu/user-data-scripts/provision_rke2_agent.sh.tpl` (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (16)
- manager/integration/deploy/test.yaml
- pipelines/utilities/run_longhorn_e2e_test.sh
- test_framework/terraform/aws/centos/user-data-scripts/provision_k3s_agent.sh.tpl
- test_framework/terraform/aws/centos/user-data-scripts/provision_rke2_agent.sh.tpl
- test_framework/terraform/aws/oracle/user-data-scripts/provision_k3s_agent.sh.tpl
- test_framework/terraform/aws/oracle/user-data-scripts/provision_rke2_agent.sh.tpl
- test_framework/terraform/aws/rhel/user-data-scripts/provision_k3s_agent.sh.tpl
- test_framework/terraform/aws/rhel/user-data-scripts/provision_rke2_agent.sh.tpl
- test_framework/terraform/aws/rockylinux/user-data-scripts/provision_k3s_agent.sh.tpl
- test_framework/terraform/aws/rockylinux/user-data-scripts/provision_rke2_agent.sh.tpl
- test_framework/terraform/aws/sles/user-data-scripts/provision_k3s_server.sh.tpl
- test_framework/terraform/aws/sles/user-data-scripts/provision_rke2_server.sh.tpl
- test_framework/terraform/aws/ubuntu/user-data-scripts/provision_k3s_agent.sh.tpl
- test_framework/terraform/aws/ubuntu/user-data-scripts/provision_rke2_agent.sh.tpl
- test_framework/terraform/equinix/ubuntu/user-data-scripts/provision_k3s_agent.sh.tpl
- test_framework/terraform/equinix/ubuntu/user-data-scripts/provision_rke2_agent.sh.tpl
🧰 Additional context used
🪛 Markdownlint
e2e/README.md
19-19: null
Bare URL used
(MD034, no-bare-urls)
20-20: null
Fenced code blocks should have a language specified
(MD040, fenced-code-language)
manager/integration/README.md
21-21: null
Bare URL used
(MD034, no-bare-urls)
22-22: null
Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🔇 Additional comments (13)
pipelines/utilities/install_backupstores.sh (3)
`14-23`: Improve retry mechanism with proper error handling.
Since this is still a valid concern from the previous review, I'll skip generating a duplicate comment.

`25-28`: LGTM: Secret patching implementation.
The secret patching implementation for the Azurite endpoint is well-structured and follows Kubernetes best practices.

`30-33`: Security and portability concerns in SSH implementation.
Several issues need to be addressed:
- Hardcoded credentials in the connection string pose a security risk
- The `ec2-user` username limits compatibility to specific distributions
- No error handling for SSH commands
- Background process management could be improved
Consider these improvements:
- Move credentials to a secret or environment variable
- Use a distribution-agnostic approach for SSH user
- Add error handling for SSH commands
- Implement proper process management
- CONTROL_PLANE_PUBLIC_IP=$(cat /tmp/controlplane_public_ip)
- # port forward and az container create need to be run on control node
- ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "nohup kubectl port-forward --address 0.0.0.0 service/azblob-service 20001:10000 > /dev/null 2>&1 &"
- ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "az storage container create -n longhorn-test-azurite --connection-string 'DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://0.0.0.0:20001/devstoreaccount1;'"
+ # Load configuration
+ CONTROL_PLANE_PUBLIC_IP=$(cat /tmp/controlplane_public_ip)
+ SSH_USER=${SSH_USER:-"ec2-user"} # Allow override via environment
+ AZURITE_CONNECTION_STRING=${AZURITE_CONNECTION_STRING:-"DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=..."}
+
+ # Function to check SSH command status
+ check_ssh_error() {
+ if [ $? -ne 0 ]; then
+ echo "Error: SSH command failed: $1"
+ exit 1
+ fi
+ }
+
+ # Start port forwarding with proper process management
+ echo "Starting port forwarding on control plane node..."
+ ssh ${SSH_USER}@${CONTROL_PLANE_PUBLIC_IP} "pkill -f 'kubectl port-forward.*azblob-service' || true"
+ ssh ${SSH_USER}@${CONTROL_PLANE_PUBLIC_IP} "nohup kubectl port-forward --address 0.0.0.0 service/azblob-service 20001:10000 > /tmp/azurite-portforward.log 2>&1 & echo \$! > /tmp/azurite-portforward.pid"
+ check_ssh_error "Failed to start port forwarding"
+
+ # Wait for port forwarding to be ready
+ sleep 5
+
+ # Create storage container
+ echo "Creating Azurite storage container..."
+ ssh ${SSH_USER}@${CONTROL_PLANE_PUBLIC_IP} "az storage container create -n longhorn-test-azurite --connection-string '${AZURITE_CONNECTION_STRING}'"
+ check_ssh_error "Failed to create storage container"
Let's verify the SSH user assumptions across different environments:
e2e/README.md (1)
`22-24`: Verify manifest files and improve code block formatting.
The code block should specify the shell language and we should verify the existence of the new manifest files.
manager/integration/tests/backupstore.py (6)
`20-21`: LGTM: Import statements are correctly added.
The new imports for CIFS and Azurite backup target detection are properly placed with other similar imports.

`69-72`: LGTM: Backup store type handling is properly extended.
The `set_random_backupstore` fixture correctly handles the new CIFS and Azurite backup store types.

`125-135`: LGTM: CIFS backup store setup is implemented correctly.
The `set_backupstore_cifs` function follows the same pattern as the existing S3 and NFS implementations, properly handling URL and credentials.

`137-146`: LGTM: Azurite backup store setup is implemented correctly.
The `set_backupstore_azurite` function follows the same pattern as other backup store implementations.

`307-307`: Critical: Missing implementations for CIFS and Azurite backup operations.
The changes replace `raise NotImplementedError` with `pytest.skip()` for unsupported backup store types. However, this doesn't address the need for actual CIFS and Azurite implementations of these critical backup operations:
- backupstore_get_backup_volume_prefix
- backupstore_get_backup_cfg_file_path
- backupstore_get_volume_cfg_file_path
- backupstore_get_backup_blocks_dir
- backupstore_create_file
- backupstore_delete_file
- backupstore_delete_backup_cfg_file
- backupstore_delete_volume_cfg_file
- backupstore_delete_random_backup_block
- backupstore_count_backup_block_files
These functions need CIFS and Azurite-specific implementations similar to the existing S3 and NFS implementations. Simply skipping the tests will mask potential issues in production.
Also applies to: 344-344, 367-367, 390-390, 416-416, 466-466, 514-514, 554-554, 596-596, 665-665, 714-714
`307-307`: Verify the impact of skipping tests.
The change from raising `NotImplementedError` to skipping tests might hide issues in CI.
Let's check which test cases would be affected:
Also applies to: 344-344, 367-367, 390-390, 416-416, 466-466, 514-514, 554-554, 596-596, 665-665, 714-714
✅ Verification successful
Let me gather more information about the specific test cases that would be affected.
Let me check the context around these skip conditions to understand the control flow better.
The change from `NotImplementedError` to skip is appropriate and safe.
The skip implementation is correct because:
- These are utility functions used to test different backup store types (S3, NFS)
- The skip is only triggered when the specific backup store type being tested is not supported/configured
- Tests are already properly marked with conditions (e.g. `@pytest.mark.skipif('s3' not in BACKUPSTORE)`)
- The change maintains test isolation by skipping unsupported scenarios rather than failing with errors
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Find test cases that use these functions
echo "Test cases potentially affected by skipped backup operations:"
rg -l "def test_.*" | xargs rg -l "backupstore_(get_backup_volume_prefix|get_backup_cfg_file_path|get_volume_cfg_file_path|get_backup_blocks_dir|create_file|delete_file|delete_backup_cfg_file|delete_volume_cfg_file|delete_random_backup_block|count_backup_block_files)"
Length of output: 480
Script:
#!/bin/bash
# Check the actual test cases in the affected files
echo "Affected test cases in test_basic.py:"
rg "def test_.*" manager/integration/tests/test_basic.py -B1 -A1 | grep -A1 -B1 "backupstore_"
echo -e "\nAffected test cases in test_ha.py:"
rg "def test_.*" manager/integration/tests/test_ha.py -B1 -A1 | grep -A1 -B1 "backupstore_"
# Check the actual skip condition
echo -e "\nSkip condition context:"
rg -B2 -A2 "Skip test case because the backup store type is not supported" manager/integration/tests/backupstore.py
Length of output: 1923
Script:
#!/bin/bash
# Check the full context of the backupstore functions
echo "Context of backupstore functions:"
rg -B5 -A2 "def backupstore_" manager/integration/tests/backupstore.py
# Check if there are any existing skip patterns for backup tests
echo -e "\nExisting skip patterns for backup tests:"
rg "skip.*backup" manager/integration/tests/
Length of output: 7298
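To make the dispatch-or-skip control flow discussed above concrete, here is a minimal stand-alone model; `skip` is a stand-in for `pytest.skip`, and the helper name and store value are illustrative, not the suite's real implementation:

```python
class SkipTest(Exception):
    """Stand-in for the exception pytest.skip() raises."""


def skip(reason):
    raise SkipTest(reason)


# Example backup target: an Azurite/Azure Blob store
BACKUPSTORE = "azblob://longhorn-test-azurite@core.windows.net/"


def backupstore_count_backup_block_files():
    # Dispatch on the backup target scheme; unsupported store types
    # skip instead of raising NotImplementedError.
    if BACKUPSTORE.startswith("s3://"):
        raise NotImplementedError  # placeholder for the S3 helper
    elif BACKUPSTORE.startswith("nfs://"):
        raise NotImplementedError  # placeholder for the NFS helper
    skip("Skip test case because the backup store type is not supported")


try:
    backupstore_count_backup_block_files()
except SkipTest as exc:
    print(exc)  # Skip test case because the backup store type is not supported
```

Under pytest, the raised skip exception marks the current test as skipped rather than failed, which is why an unsupported store type no longer turns up as an error in CI.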
test_framework/scripts/longhorn-setup.sh (3)
`336-337`: LGTM: New backup store URLs are well-defined.
The new URLs for CIFS and Azurite backup stores follow the existing pattern and naming convention.

`486-491`: LGTM: Backup store type handling is correct.
The implementation for CIFS and Azurite backup stores is correct, though it could benefit from the refactoring suggested earlier.

`426-431`: 🛠️ Refactor suggestion
Consider extracting duplicated backup store handling logic.
The backup store type handling logic is duplicated between `run_longhorn_upgrade_test` and `run_longhorn_tests`. Consider extracting this into a helper function.
+get_backup_store_for_test() {
+ local manifest_file="$1"
+ local store_type="$2"
+ local store_index=1
+
+ case "$store_type" in
+ "s3") store_index=1 ;;
+ "nfs") store_index=2 ;;
+ "cifs") store_index=3 ;;
+ "azurite") store_index=4 ;;
+ esac
+
+ yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' "$manifest_file" | \
+ awk -F ',' "{print \$$store_index}" | sed 's/ *//'
+}
This helper function would simplify both `run_longhorn_upgrade_test` and `run_longhorn_tests`.
Likely invalid or redundant comment.
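The type-to-column mapping at the heart of the suggested helper can be sketched and tried on its own; this pure-bash variant mirrors the case statement above and is illustrative only:

```shell
# Map a backup store type to its column index in the comma-separated
# LONGHORN_BACKUPSTORES list (mirrors the suggested helper's case statement).
store_index_for_type() {
  case "$1" in
    s3)      echo 1 ;;
    nfs)     echo 2 ;;
    cifs)    echo 3 ;;
    azurite) echo 4 ;;
    *)       echo 1 ;;  # default to the first (s3) entry
  esac
}

echo "$(store_index_for_type cifs)"     # 3
echo "$(store_index_for_type azurite)"  # 4
```

Keeping the mapping in one function means adding a fifth store type touches a single case branch instead of every call site.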
  -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/nfs-backupstore.yaml \
  -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/cifs-backupstore.yaml \
  -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/azurite-backupstore.yaml
💡 Codebase verification
The referenced manifest files are not available in the repository
The verification shows that:
- The
deploy/backupstores
directory does not exist in the repository - The manifest files
cifs-backupstore.yaml
andazurite-backupstore.yaml
are not found anywhere in the codebase - The README is referencing files from the master branch of the repository through raw GitHub URLs, but these files should exist in the repository itself
🔗 Analysis chain
Verify the existence of new manifest files.
Let's ensure that the CIFS and Azurite manifest files are available in the repository.
Let me try another verification approach by checking the deploy/backupstores directory specifically and examining its contents.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check if the new manifest files exist in the repository
# Expected: Both manifest files should be found
echo "Checking for CIFS and Azurite manifest files..."
fd -t f "cifs-backupstore.yaml|azurite-backupstore.yaml"
Length of output: 163
Script:
#!/bin/bash
# Check the contents of deploy/backupstores directory
echo "Checking deploy/backupstores directory..."
fd . "deploy/backupstores" -t f
# Also check if these files exist in any other location
echo -e "\nChecking entire repository for these manifest files..."
rg -l "kind: Deployment.*[cifs|azurite].*backupstore"
Length of output: 412
CIFS_BACKUPSTORE_URL="https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/cifs-backupstore.yaml"
AZURITE_BACKUPSTORE_URL="https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/azurite-backupstore.yaml"
kubectl create -f ${MINIO_BACKUPSTORE_URL} \
               -f ${NFS_BACKUPSTORE_URL} \
               -f ${CIFS_BACKUPSTORE_URL} \
               -f ${AZURITE_BACKUPSTORE_URL}
setup_azuitize_backup_store
}
Fix typo in function name: "azuitize" should be "azurite".
The function name contains a typo that should be corrected for consistency with the rest of the codebase and documentation.
Apply this diff to fix the typo:
- setup_azuitize_backup_store
+ setup_azurite_backup_store
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
CIFS_BACKUPSTORE_URL="https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/cifs-backupstore.yaml"
AZURITE_BACKUPSTORE_URL="https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/azurite-backupstore.yaml"
kubectl create -f ${MINIO_BACKUPSTORE_URL} \
               -f ${NFS_BACKUPSTORE_URL} \
               -f ${CIFS_BACKUPSTORE_URL} \
               -f ${AZURITE_BACKUPSTORE_URL}
setup_azurite_backup_store
}
1. Deploy all backupstore servers (including `NFS` server and `Minio` as s3 server, `CIFS` and `Azurite` server) for test purposes.

   For Azurite, there are some manual steps need to be done after manifest deployed(https://confluence.suse.com/display/LON/Setup+Azurite+Backupstore+For+Testing).
🛠️ Refactor suggestion
Documentation needs improvement for external users.
The documentation has several areas that need attention:
- The Confluence URL is internal and not accessible to external users. Consider documenting the Azurite setup steps directly in the README or in a public documentation.
- Similar to Azurite, please document any specific setup requirements or prerequisites for CIFS.
Consider adding:
1. Deploy all backupstore servers (including `NFS` server and `Minio` as s3 server, `CIFS` and `Azurite` server) for test purposes.
- For Azurite, there are some manual steps need to be done after manifest deployed(https://confluence.suse.com/display/LON/Setup+Azurite+Backupstore+For+Testing).
+ ### Prerequisites
+ - For CIFS:
+ - <document CIFS requirements>
+ - For Azurite:
+ 1. <document step 1>
+ 2. <document step 2>
+ ...
Committable suggestion skipped: line range outside the PR's diff.
🧰 Tools
🪛 Markdownlint
19-19: null
Bare URL used
(MD034, no-bare-urls)
elif [[ $BACKUP_STORE_TYPE = "azurite" ]]; then
  BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $4}' | sed 's/ *//'`
  yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
Fix inconsistent naming of "azurite".
The PR title uses "azurtize" but the code uses "azurite". This inconsistency should be resolved by using the correct term "azurite" consistently throughout the PR.
elif [[ $BACKUP_STORE_TYPE = "cifs" ]]; then
  BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $3}' | sed 's/ *//'`
  yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
elif [[ $BACKUP_STORE_TYPE = "azurite" ]]; then
  BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $4}' | sed 's/ *//'`
  yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
💡 Codebase verification
Based on the test manifest content, I can now generate the final response:
Environment variable structure verified and issues found
The code assumes an incorrect environment variable name: in the manifest, the backup stores are defined in `LONGHORN_BACKUPSTORES` (plural), while the script looks for `LONGHORN_BACKUPSTORE` (singular). The manifest shows the correct structure:
value: "s3://backupbucket@us-east-1/backupstore$minio-secret, nfs://longhorn-test-nfs-svc.default:/opt/backupstore, cifs://longhorn-test-cifs-svc.default/backupstore$cifs-secret, azblob://[email protected]/$azblob-secret"
Issues found:
- The script uses the wrong environment variable name (`env[1].value` contains `LONGHORN_BACKUPSTORES`, not `LONGHORN_BACKUPSTORE`)
- The order of values is fixed in the manifest: s3, nfs, cifs, azblob
- The script needs to be updated to use the correct environment variable name and to ensure compatibility with this structure
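The positional fragility called out above can be sidestepped by selecting on URL scheme rather than field position; a minimal Python sketch (the helper name and the shortened sample URLs are illustrative, not taken from the test code):

```python
def pick_backupstore(backupstores_value: str, scheme: str) -> str:
    """Return the entry of a comma-separated LONGHORN_BACKUPSTORES value
    whose URL starts with the given scheme (e.g. 's3', 'nfs', 'cifs')."""
    for entry in backupstores_value.split(","):
        entry = entry.strip()
        if entry.startswith(scheme + "://"):
            return entry
    raise ValueError(f"no backupstore entry with scheme {scheme!r}")

value = ("s3://backupbucket@us-east-1/backupstore$minio-secret, "
         "nfs://longhorn-test-nfs-svc.default:/opt/backupstore, "
         "cifs://longhorn-test-cifs-svc.default/backupstore$cifs-secret")

print(pick_backupstore(value, "cifs"))
# cifs://longhorn-test-cifs-svc.default/backupstore$cifs-secret
```

Selection then keeps working even if the manifest order changes or new stores are appended.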
🔗 Analysis chain
Verify the environment variable structure.
The code assumes a specific order and number of comma-separated values in `env[1].value`. Please verify:
- The order of backup store values in the environment variable
- The presence of all required values
- The impact on existing tests when adding new values
Also applies to: 116-121
Let me try a different approach to verify the environment variable structure using `grep` and `cat` instead of `yq`.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the structure of env[1].value in test manifests
# Expected: Should show the complete env[1].value string with all backup store values
echo "Checking test.yaml manifest:"
yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1]' "manager/integration/deploy/test.yaml"
echo -e "\nChecking upgrade_test.yaml manifest (if exists):"
find . -name "upgrade_test.yaml" -exec yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1]' {} \;
Length of output: 474
Script:
#!/bin/bash
# Search for test manifest files and show their content
echo "=== Content of test.yaml ==="
find . -name "test.yaml" -exec cat {} \;
echo -e "\n=== Content of upgrade_test.yaml ==="
find . -name "upgrade_test.yaml" -exec cat {} \;
# Also search for any references to backup store configuration
echo -e "\n=== References to backup store configuration ==="
rg -A 5 -B 5 "BACKUP_STORE" --type yaml
Length of output: 7402
🛠️ Refactor suggestion
Consider using associative arrays for backup store types.
The current implementation using hardcoded array indices (1,2,3,4) is brittle and error-prone. Consider using an associative array to map backup store types to their respective positions:
+declare -A STORE_POSITIONS=(
+ ["s3"]=1
+ ["nfs"]=2
+ ["cifs"]=3
+ ["azurite"]=4
+)
if [[ $BACKUP_STORE_TYPE = "s3" ]]; then
- BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $1}' | sed 's/ *//'`
+ position="${STORE_POSITIONS[$BACKUP_STORE_TYPE]}"
+ BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $'${position}'}' | sed 's/ *//'`
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
declare -A STORE_POSITIONS=(
  ["s3"]=1
  ["nfs"]=2
  ["cifs"]=3
  ["azurite"]=4
)
elif [[ $BACKUP_STORE_TYPE = "cifs" ]]; then
  position="${STORE_POSITIONS[$BACKUP_STORE_TYPE]}"
  BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $'${position}'}' | sed 's/ *//'`
  yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
elif [[ $BACKUP_STORE_TYPE = "azurite" ]]; then
  position="${STORE_POSITIONS[$BACKUP_STORE_TYPE]}"
  BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $'${position}'}' | sed 's/ *//'`
  yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
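The associative-array suggestion above maps each store type to its field position in the comma-separated value; the same table-driven lookup is easy to sanity-check in Python (the URLs are shortened stand-ins, not the real manifest value):

```python
# Mirrors the bash associative array: store type -> 1-based awk field position.
STORE_POSITIONS = {"s3": 1, "nfs": 2, "cifs": 3, "azurite": 4}

def backupstore_at(value: str, store_type: str) -> str:
    """Mimic `awk -F ',' '{print $N}'` plus stripping of surrounding spaces."""
    position = STORE_POSITIONS[store_type]  # KeyError for unknown store types
    fields = value.split(",")
    return fields[position - 1].strip()  # awk fields are 1-based

value = ("s3://bucket$s3-secret, nfs://server:/opt/backupstore, "
         "cifs://server/share$cifs-secret, azblob://container/$azblob-secret")
print(backupstore_at(value, "azurite"))  # azblob://container/$azblob-secret
```

The dict centralizes the positions in one place, so adding a fifth store type is a one-line change instead of a new hardcoded `awk '{print $5}'` branch.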
7d6e464 to d12e677 (Compare)
Actionable comments posted: 4
🧹 Outside diff range and nitpick comments (20)
manager/integration/README.md (2)
19-21
: Fix grammar and formatting in the Azurite setup instructions. The Azurite setup instructions need grammatical improvements and better formatting:
Apply these changes:
-1. Deploy all backupstore servers(including `NFS` server and `Minio` as s3 server `CIFS` and `Azurite` server) for test purposes.
+1. Deploy all backupstore servers (including `NFS` server, `Minio` as S3 server, `CIFS`, and `Azurite` server) for test purposes.
-For Azurite, there are some manual steps need to be done after manifest deployed(https://confluence.suse.com/display/LON/Setup+Azurite+Backupstore+For+Testing).
+For Azurite, there are some manual steps that need to be completed after the manifest is deployed. See [Azurite Backupstore Setup Guide](https://confluence.suse.com/display/LON/Setup+Azurite+Backupstore+For+Testing).
🧰 Tools
🪛 LanguageTool
[uncategorized] ~21-~21: A determiner appears to be missing. Consider inserting it.
Context: ...or Azurite, there are some manual steps need to be done after manifest deployed(http...(AI_EN_LECTOR_MISSING_DETERMINER)
[uncategorized] ~21-~21: You might be missing the article “the” here.
Context: ...some manual steps need to be done after manifest deployed(https://confluence.suse.com/di...(AI_EN_LECTOR_MISSING_DETERMINER_THE)
🪛 Markdownlint
21-21: null
Bare URL used(MD034, no-bare-urls)
22-26
: Add language identifier to the code block.The fenced code block should specify the shell language for better syntax highlighting.
-```
+```shell
kubectl create -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/minio-backupstore.yaml \
-f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/nfs-backupstore.yaml \
-f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/cifs-backupstore.yaml \
-f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/azurite-backupstore.yaml
🧰 Tools
🪛 Markdownlint
22-22: null
Fenced code blocks should have a language specified
(MD040, fenced-code-language)
e2e/README.md (1)
`19-19`: Fix grammar and formatting in the Azurite setup note.
The sentence structure needs improvement, and the URL should be properly formatted in Markdown.
Apply this diff:
- For Azurite, there are some manual steps need to be done after manifest deployed(https://confluence.suse.com/display/LON/Setup+Azurite+Backupstore+For+Testing).
+ For Azurite, there are some manual steps that need to be done after the manifest is deployed. See [Azurite Setup Guide](https://confluence.suse.com/display/LON/Setup+Azurite+Backupstore+For+Testing).
🧰 Tools
🪛 LanguageTool
[uncategorized] ~19-~19: A determiner appears to be missing. Consider inserting it.
Context: ...or Azurite, there are some manual steps need to be done after manifest deployed(http...(AI_EN_LECTOR_MISSING_DETERMINER)
[uncategorized] ~19-~19: You might be missing the article “the” here.
Context: ...some manual steps need to be done after manifest deployed(https://confluence.suse.com/di...(AI_EN_LECTOR_MISSING_DETERMINER_THE)
🪛 Markdownlint
19-19: null
Bare URL used(MD034, no-bare-urls)
manager/integration/tests/backupstore.py (2)
137-146
: Consider adding error handling for Azure credentials. The Azurite setup function should validate the credential format and handle potential errors when splitting the backupstore URL.
 def set_backupstore_azurite(client):
     backupstores = get_backupstore_url()
     poll_interval = get_backupstore_poll_interval()
     for backupstore in backupstores:
         if is_backupTarget_azurite(backupstore):
+            if '$' not in backupstore:
+                raise ValueError("Invalid Azure backup store URL format. Expected format: <url>$<credentials>")
             backupsettings = backupstore.split("$")
+            if len(backupsettings) != 2:
+                raise ValueError("Invalid Azure credentials format")
             set_backupstore_url(client, backupsettings[0])
             set_backupstore_credential_secret(client, backupsettings[1])
             set_backupstore_poll_interval(client, poll_interval)
             break
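The guarded `<url>$<secret>` convention can be exercised on its own; a minimal sketch (the function name and sample target are illustrative, not the suite's helper):

```python
def split_backup_target(backupstore: str) -> tuple[str, str]:
    """Split an '<url>$<secret>' backup target into (url, secret_name)."""
    if "$" not in backupstore:
        raise ValueError("expected '<url>$<credentials>'")
    url, _, secret = backupstore.partition("$")
    if not url or not secret or "$" in secret:
        raise ValueError("expected exactly one '$' between url and secret")
    return url, secret

print(split_backup_target("azblob://container/$azblob-secret"))
# ('azblob://container/', 'azblob-secret')
```

Validating up front turns a confusing `IndexError` deep in the fixture into an immediate, descriptive failure.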
Line range hint
748-752
: Consider updating the TODO comment. The TODO comment about implementing delete functions for lock removal should be updated to reflect the current state of the implementation and any new requirements for CIFS and Azurite support.
-    """
-    waits 150 seconds which is the lock duration
-    TODO: once we have implemented the delete functions,
-    we can switch to removing the locks directly
-    """
+    """
+    Waits for the lock duration (150 seconds) to expire.
+    TODO: Implement direct lock removal for all supported backup store types
+    (S3, NFS, CIFS, and Azurite) to avoid waiting.
+    """
manager/integration/tests/test_ha.py (9)
Line range hint
76-119
: Use descriptive variable names for improved readability
In the function `ha_rebuild_replica_test`, the variable `v` is used to represent the volume within the loop starting at line 93. Using a single-letter variable name can reduce code clarity. Consider renaming `v` to a more descriptive name like `volume` or `current_volume`.

 def ha_rebuild_replica_test(client, volname):  # NOQA
     volume = client.by_id_volume(volname)
     check_volume_endpoint(volume)
     assert len(volume.replicas) == 2
     replica0 = volume.replicas[0]
     assert replica0.name != ""
     replica1 = volume.replicas[1]
     assert replica1.name != ""
     data = write_volume_random_data(volume)
     volume = volume.replicaRemove(name=replica0.name)
     # wait until we saw a replica starts rebuilding
     new_replica_found = False
     for i in range(RETRY_COUNTS):
-        v = client.by_id_volume(volname)
-        for r in v.replicas:
+        volume = client.by_id_volume(volname)
+        for r in volume.replicas:
             if r.name != replica0.name and \
                     r.name != replica1.name:
                 new_replica_found = True
                 break
         if new_replica_found:
             break
         time.sleep(RETRY_INTERVAL)
     wait_for_rebuild_complete(client, volname)
     assert new_replica_found
     volume = wait_for_volume_healthy(client, volname)
     volume = client.by_id_volume(volname)
     assert volume.state == common.VOLUME_STATE_ATTACHED
     assert volume.robustness == common.VOLUME_ROBUSTNESS_HEALTHY
     assert len(volume.replicas) >= 2
     found = False
     for replica in volume.replicas:
         if replica.name == replica1.name:
             found = True
             break
     assert found
     check_volume_data(volume, data)
Line range hint
124-132
: Implement all test cases outlined in the TODO
The docstring for `test_ha_salvage` mentions four test cases that need to be covered, but currently only a subset is implemented. To ensure comprehensive test coverage, please implement all the outlined test cases.
Would you like assistance in implementing the remaining test cases? I can help draft the code for the missing scenarios.
Line range hint
152-219
: Refactor to reduce code duplication in `ha_salvage_test`
The `ha_salvage_test` function contains repeated code blocks for different cases, such as setting the auto salvage setting and creating volumes. Consider refactoring these common steps into helper functions or loops to enhance maintainability and reduce redundancy.
474-479
: Cleanup commented-out code for clarityThere are several lines of commented-out code that reference issue comments and temporary workarounds. While it's helpful to reference external issues, consider removing or updating these comments to reflect the current state of the code, ensuring future maintainers are not confused.
Line range hint
900-901
: Ensurepytest.mark.skipif
is used correctlyThe
@pytest.mark.skipif
decorator intest_restore_volume_with_invalid_backupstore
uses a string condition's3' not in BACKUPSTORE
, which may not evaluate as intended. The condition should reference the variable directly without quotes.Apply this diff to fix the condition:
-@pytest.mark.skipif('s3' not in BACKUPSTORE, reason='This test is only applicable for s3') # NOQA
+@pytest.mark.skipif('s3' not in BACKUPSTORE, reason='This test is only applicable for s3') # NOQA
Alternatively, ensure that
BACKUPSTORE
is properly referenced.
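The two `skipif` styles differ only in where the condition is evaluated; a standalone illustration (the `BACKUPSTORE` value here is a stand-in, not the suite's configured target):

```python
import pytest

# Stand-in value; in the suite BACKUPSTORE comes from the test environment.
BACKUPSTORE = "nfs://longhorn-test-nfs-svc.default:/opt/backupstore"

# Boolean form: the expression is evaluated once, at import time.
@pytest.mark.skipif('s3' not in BACKUPSTORE, reason='This test is only applicable for s3')
def test_s3_only():
    assert 's3' in BACKUPSTORE

# String form: pytest eval()s the condition string in the test module's
# namespace, so BACKUPSTORE must be resolvable as a module-level name there.
@pytest.mark.skipif("'s3' not in BACKUPSTORE", reason='This test is only applicable for s3')
def test_s3_only_string_condition():
    assert 's3' in BACKUPSTORE
```

With the NFS value above both tests are skipped; with an `s3://` value both run.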
Line range hint
1340-1359
: Improve docstring formatting for better readabilityThe docstring in
test_replica_failure_during_attaching
has inconsistent indentation and formatting, which can make it difficult to read. Properly formatted docstrings enhance code readability and maintainability.Consider reformatting the docstring as follows:
def test_replica_failure_during_attaching(settings_reset, client, core_api, volume_name):  # NOQA
    """
    Steps:
    1. Set a short interval for setting `replica-replenishment-wait-interval`.
    2. Disable the setting `soft-node-anti-affinity`.
    3. Create `volume1` with 1 replica and attach it to the host node.
    4. Mount `volume1` to a new mount point, then use it as an extra node disk.
    5. Disable the scheduling for the default disk of the host node, ensuring
       the extra disk is the only available disk on the node.
    6. Create and attach `volume2`, then write data to `volume2`.
    7. Detach `volume2`.
    8. Directly unmount `volume1` and remove the related mount point directory.
       Verify the extra disk becomes unavailable.
    9. Attach `volume2`. Verify the volume will be attached with state `Degraded`.
    10. Wait for the replenishment interval. Verify a new replica cannot be scheduled.
    11. Enable the default disk for the host node.
    12. Wait for `volume2` to become `Healthy`.
    13. Verify data content and R/W capability for `volume2`.
    """
Line range hint
1537-1545
: Avoid hardcoding node names and use dynamic retrievalIn
test_engine_image_not_fully_deployed_perform_volume_operations
, the code assumesnodes[0]
is the node to taint. Hardcoding indices can lead to issues if the node order changes. Consider retrieving the node dynamically based on attributes or roles.
- nodes = client.list_node()
- core_api.patch_node(nodes[0].id, {
+ node_to_taint = client.by_id_node(taint_node_id)
+ core_api.patch_node(node_to_taint.id, {
      "spec": {
          "taints": [{"effect": "NoSchedule", "key": "key", "value": "value"}]
      }
  })
Line range hint
1982-1996
: Refactor long comments into documentation or separate functionsThe long comment block in
test_retain_potentially_useful_replicas_in_autosalvage_loop
provides detailed test steps. Consider converting these steps into a docstring or, if the test is not yet implemented, create a placeholder and track it separately to keep the code clean.
Line range hint
900-901
: Correct the condition inpytest.mark.skipif
The condition in the
@pytest.mark.skipif
decorator should not be enclosed in quotes; otherwise, it will always evaluate toTrue
, and the test may be inadvertently skipped.Apply this diff:
-@pytest.mark.skipif('s3' not in BACKUPSTORE, reason='This test is only applicable for s3') # NOQA +@pytest.mark.skipif('s3' not in BACKUPSTORE, reason='This test is only applicable for s3') # NOQAEnsure that
BACKUPSTORE
is correctly referenced and that the condition properly checks if's3'
is not inBACKUPSTORE
.manager/integration/tests/test_basic.py (6)
676-678
: Simplify fixture usage by removing unnecessary assignmentYou can directly use
set_random_backupstore
without assigning it tobackup_store_type
, sinceset_random_backupstore
already provides the backup store type.Apply this diff to simplify the code:
- backup_store_type = set_random_backupstore
- if backup_store_type not in ["nfs", "s3"]:
+ if set_random_backupstore not in ["nfs", "s3"]:
      pytest.skip("Skip test case because the backup store type is not supported") # NOQA
678-678
: Usepytest.mark.skipif
decorator for conditional test skippingInstead of calling
pytest.skip()
at runtime, consider using the@pytest.mark.skipif
decorator to conditionally skip the test at collection time. This approach provides clearer test reports and is generally preferred in pytest.Apply this diff:
+@pytest.mark.skipif(set_random_backupstore not in ["nfs", "s3"], reason="Skip test case because the backup store type is not supported") def test_backup_block_deletion(set_random_backupstore, client, core_api, volume_name): # NOQA - if set_random_backupstore not in ["nfs", "s3"]: - pytest.skip("Skip test case because the backup store type is not supported") # NOQA # existing code
1113-1115
: Simplify fixture usage by removing unnecessary assignmentAs with previous instances, you can directly use
set_random_backupstore
without assigning it tobackup_store_type
.Apply this diff:
- backup_store_type = set_random_backupstore
- if backup_store_type not in ["nfs", "s3"]:
+ if set_random_backupstore not in ["nfs", "s3"]:
      pytest.skip("Skip test case because the backup store type is not supported") # NOQA
1114-1115
: Usepytest.mark.skipif
decorator for conditional test skippingConsistently use the
@pytest.mark.skipif
decorator to skip tests based on conditions.Apply this diff:
+@pytest.mark.skipif(set_random_backupstore not in ["nfs", "s3"], reason="Skip test case because the backup store type is not supported") def test_backup_volume_list(set_random_backupstore, client, core_api): # NOQA - if set_random_backupstore not in ["nfs", "s3"]: - pytest.skip("Skip test case because the backup store type is not supported") # NOQA # existing code
1211-1213
: Simplify fixture usage by removing unnecessary assignmentAgain, you can remove the assignment to
backup_store_type
and useset_random_backupstore
directly.Apply this diff:
- backup_store_type = set_random_backupstore
- if backup_store_type not in ["nfs", "s3"]:
+ if set_random_backupstore not in ["nfs", "s3"]:
      pytest.skip("Skip test case because the backup store type is not supported") # NOQA
1212-1213
: Usepytest.mark.skipif
decorator for conditional test skippingApply the
@pytest.mark.skipif
decorator to improve test skipping logic.Apply this diff:
+@pytest.mark.skipif(set_random_backupstore not in ["nfs", "s3"], reason="Skip test case because the backup store type is not supported") def test_backup_metadata_deletion(set_random_backupstore, client, core_api, volume_name): # NOQA - if set_random_backupstore not in ["nfs", "s3"]: - pytest.skip("Skip test case because the backup store type is not supported") # NOQA # existing code
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (25)
e2e/README.md (1 hunks)
manager/integration/README.md (1 hunks)
manager/integration/deploy/test.yaml (1 hunks)
manager/integration/tests/backupstore.py (14 hunks)
manager/integration/tests/common.py (1 hunks)
manager/integration/tests/test_basic.py (3 hunks)
manager/integration/tests/test_ha.py (1 hunks)
pipelines/utilities/install_backupstores.sh (1 hunks)
pipelines/utilities/run_longhorn_e2e_test.sh (3 hunks)
pipelines/utilities/run_longhorn_test.sh (2 hunks)
test_framework/scripts/longhorn-setup.sh (3 hunks)
test_framework/terraform/aws/centos/user-data-scripts/provision_k3s_agent.sh.tpl (1 hunks)
test_framework/terraform/aws/centos/user-data-scripts/provision_rke2_agent.sh.tpl (1 hunks)
test_framework/terraform/aws/oracle/user-data-scripts/provision_k3s_agent.sh.tpl (1 hunks)
test_framework/terraform/aws/oracle/user-data-scripts/provision_rke2_agent.sh.tpl (1 hunks)
test_framework/terraform/aws/rhel/user-data-scripts/provision_k3s_agent.sh.tpl (1 hunks)
test_framework/terraform/aws/rhel/user-data-scripts/provision_rke2_agent.sh.tpl (1 hunks)
test_framework/terraform/aws/rockylinux/user-data-scripts/provision_k3s_agent.sh.tpl (1 hunks)
test_framework/terraform/aws/rockylinux/user-data-scripts/provision_rke2_agent.sh.tpl (1 hunks)
test_framework/terraform/aws/sles/user-data-scripts/provision_k3s_server.sh.tpl (2 hunks)
test_framework/terraform/aws/sles/user-data-scripts/provision_rke2_server.sh.tpl (1 hunks)
test_framework/terraform/aws/ubuntu/user-data-scripts/provision_k3s_agent.sh.tpl (1 hunks)
test_framework/terraform/aws/ubuntu/user-data-scripts/provision_rke2_agent.sh.tpl (1 hunks)
test_framework/terraform/equinix/ubuntu/user-data-scripts/provision_k3s_agent.sh.tpl (1 hunks)
test_framework/terraform/equinix/ubuntu/user-data-scripts/provision_rke2_agent.sh.tpl (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (16)
- manager/integration/deploy/test.yaml
- pipelines/utilities/run_longhorn_test.sh
- test_framework/terraform/aws/centos/user-data-scripts/provision_k3s_agent.sh.tpl
- test_framework/terraform/aws/centos/user-data-scripts/provision_rke2_agent.sh.tpl
- test_framework/terraform/aws/oracle/user-data-scripts/provision_k3s_agent.sh.tpl
- test_framework/terraform/aws/oracle/user-data-scripts/provision_rke2_agent.sh.tpl
- test_framework/terraform/aws/rhel/user-data-scripts/provision_k3s_agent.sh.tpl
- test_framework/terraform/aws/rhel/user-data-scripts/provision_rke2_agent.sh.tpl
- test_framework/terraform/aws/rockylinux/user-data-scripts/provision_k3s_agent.sh.tpl
- test_framework/terraform/aws/rockylinux/user-data-scripts/provision_rke2_agent.sh.tpl
- test_framework/terraform/aws/sles/user-data-scripts/provision_k3s_server.sh.tpl
- test_framework/terraform/aws/sles/user-data-scripts/provision_rke2_server.sh.tpl
- test_framework/terraform/aws/ubuntu/user-data-scripts/provision_k3s_agent.sh.tpl
- test_framework/terraform/aws/ubuntu/user-data-scripts/provision_rke2_agent.sh.tpl
- test_framework/terraform/equinix/ubuntu/user-data-scripts/provision_k3s_agent.sh.tpl
- test_framework/terraform/equinix/ubuntu/user-data-scripts/provision_rke2_agent.sh.tpl
🧰 Additional context used
🪛 LanguageTool
e2e/README.md
[uncategorized] ~19-~19: A determiner appears to be missing. Consider inserting it.
Context: ...or Azurite, there are some manual steps need to be done after manifest deployed(http...
(AI_EN_LECTOR_MISSING_DETERMINER)
[uncategorized] ~19-~19: You might be missing the article “the” here.
Context: ...some manual steps need to be done after manifest deployed(https://confluence.suse.com/di...
(AI_EN_LECTOR_MISSING_DETERMINER_THE)
manager/integration/README.md
[uncategorized] ~21-~21: A determiner appears to be missing. Consider inserting it.
Context: ...or Azurite, there are some manual steps need to be done after manifest deployed(http...
(AI_EN_LECTOR_MISSING_DETERMINER)
[uncategorized] ~21-~21: You might be missing the article “the” here.
Context: ...some manual steps need to be done after manifest deployed(https://confluence.suse.com/di...
(AI_EN_LECTOR_MISSING_DETERMINER_THE)
🪛 Markdownlint
e2e/README.md
19-19: null
Bare URL used
(MD034, no-bare-urls)
20-20: null
Fenced code blocks should have a language specified
(MD040, fenced-code-language)
manager/integration/README.md
21-21: null
Bare URL used
(MD034, no-bare-urls)
22-22: null
Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🔇 Additional comments (16)
manager/integration/README.md (1)
24-26
:
Verify the availability of manifest files.
The kubectl command references manifest files that don't appear to be available in the repository.
Please ensure that:
- The manifest files are added to the repository
- The URLs point to the correct location
- Consider using relative paths instead of raw GitHub URLs to ensure version consistency
✅ Verification successful
Let me verify the URLs directly to ensure they are accessible and check the repository structure in the longhorn/longhorn repository.
The manifest URLs are valid and accessible
The verification shows that all three manifest URLs (nfs-backupstore.yaml, cifs-backupstore.yaml, and azurite-backupstore.yaml) are accessible and return HTTP 200 status codes. While these files don't exist in the current repository, they are correctly referenced from the longhorn/longhorn repository's master branch, which is the intended source.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check if the new manifest files exist in the repository
# Expected: Both manifest files should be found
echo "Checking for CIFS and Azurite manifest files..."
fd -t f "cifs-backupstore.yaml|azurite-backupstore.yaml"
echo -e "\nChecking if these manifests exist in any branch..."
git ls-remote --heads origin | cut -f2 | \
while read -r branch; do
echo "Checking branch: ${branch##refs/heads/}"
git ls-tree -r --name-only "origin/${branch##refs/heads/}" 2>/dev/null | \
grep -E "cifs-backupstore.yaml|azurite-backupstore.yaml" || true
done
Length of output: 3290
Script:
#!/bin/bash
# Check if the URLs are accessible and verify the repository structure
echo "Checking URLs accessibility..."
for url in "https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/nfs-backupstore.yaml" \
"https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/cifs-backupstore.yaml" \
"https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/azurite-backupstore.yaml"; do
curl -s -I "$url" | head -n 1
done
echo -e "\nChecking local repository structure..."
fd -t d "deploy" -x ls -la {} \;
echo -e "\nChecking if nfs-backupstore.yaml exists locally..."
fd -t f "nfs-backupstore.yaml"
Length of output: 2648
pipelines/utilities/install_backupstores.sh (1)
4-11
: LGTM! New backup store configurations added correctly.
The changes properly integrate CIFS and Azurite backup stores alongside existing MinIO and NFS stores. The URLs and kubectl commands follow the established pattern.
e2e/README.md (1)
20-24
: Add language specification to code block and verify YAML files.
The code block should specify the language, and we should verify the existence of the new YAML files.
Apply this diff to specify the language:
-```
+```bash
kubectl create -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/minio-backupstore.yaml \
-f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/nfs-backupstore.yaml \
-f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/cifs-backupstore.yaml \
-f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/azurite-backupstore.yaml
Let's verify the existence of the new YAML files:
✅ Verification successful
The YAML files are correctly referenced but exist in a different repository
The CIFS and Azurite backupstore YAML files are referenced from the main Longhorn repository (longhorn/longhorn
) but we are in the test repository (longhorn/longhorn-tests
). The URLs in the documentation are correct as they point to the actual deployment files in the main repository.
- The files are referenced consistently across multiple test files:
test_framework/scripts/longhorn-setup.sh
pipelines/utilities/install_backupstores.sh
manager/integration/README.md
e2e/README.md
The code block language specification should still be added, but the YAML file references are valid.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the existence of CIFS and Azurite backupstore YAML files
# Test: Check if the YAML files exist in the repository
fd -e yaml -p "deploy/backupstores/(cifs|azurite)-backupstore.yaml"
# Test: Verify the content of these files contains the correct kind and configuration
rg -A 5 "kind: (Deployment|Service)" "deploy/backupstores/(cifs|azurite)-backupstore.yaml"
Length of output: 254
Script:
#!/bin/bash
# Let's verify if these files exist in the Longhorn repository
# First, check if we're in the right repository
git remote -v
# Check the repository structure
fd -t d "deploy"
# Look for any backup store related files
fd -e yaml "backupstore"
# Search for any mentions of these backup stores in the codebase
rg -i "cifs.*backupstore|azurite.*backupstore"
Length of output: 2878
🧰 Tools
🪛 Markdownlint
20-20: null
Fenced code blocks should have a language specified
(MD040, fenced-code-language)
pipelines/utilities/run_longhorn_e2e_test.sh (2)
84-87
: LGTM!
The implementation correctly sets the LONGHORN_BACKUPSTORES
environment variable for CIFS and Azurite backup store types, maintaining consistency with the existing S3 and NFS cases.
3-4
: Verify the existence of required Kubernetes resources.
The backup store URLs reference Kubernetes services and secrets that must exist:
- CIFS:
longhorn-test-cifs-svc
service andcifs-secret
secret indefault
namespace - Azurite:
azblob-secret
secret for Azure Blob Storage credentials
✅ Verification successful
Let me gather more information about the test configuration and resources.
Let me search for any setup or test files that might create these resources dynamically during test execution.
Based on the search results, I can now provide a final response:
The backup store URLs are correctly configured with proper resource references
The CIFS and Azurite backup store URLs are part of a well-established test infrastructure where:
-
The resources are created during test setup via external manifests:
- CIFS:
https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/cifs-backupstore.yaml
- Azurite:
https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/azurite-backupstore.yaml
- CIFS:
-
The setup process is handled by dedicated scripts:
test_framework/scripts/longhorn-setup.sh
pipelines/utilities/install_backupstores.sh
-
The integration tests have proper support:
- Dedicated test helpers in
manager/integration/tests/backupstore.py
- URL validation functions in
manager/integration/tests/common.py
- Dedicated test helpers in
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Verify the existence of required Kubernetes resources
# Check for CIFS service and secret definitions in the repository
echo "Checking for CIFS resources..."
rg -A 5 "longhorn-test-cifs-svc" --type yaml
rg -A 5 "cifs-secret" --type yaml
# Check for Azurite secret definitions
echo "Checking for Azurite resources..."
rg -A 5 "azblob-secret" --type yaml
Length of output: 2122
Script:
#!/bin/bash
# Check the complete test configuration and related resources
echo "Checking test configuration..."
cat manager/integration/deploy/test.yaml
# Look for any CIFS/Azurite service definitions
echo -e "\nChecking for service definitions..."
rg -l "kind:\s+Service" --type yaml | xargs rg -A 10 "longhorn-test-(cifs-svc|azurite)"
# Look for secret definitions
echo -e "\nChecking for secret definitions..."
rg -l "kind:\s+Secret" --type yaml | xargs rg -A 10 "(cifs-secret|azblob-secret)"
Length of output: 4041
Script:
#!/bin/bash
# Look for test setup files that might create the services and secrets
echo "Checking for test setup files..."
rg -l "integration.*test" --type go | xargs rg -A 5 "(cifs|azurite|azblob)"
# Look for any Kubernetes manifest templates or setup scripts
echo -e "\nChecking for setup scripts and templates..."
fd -e yaml -e sh -e json . manager/integration | xargs rg "(createSecret|kubectl create secret|kubectl apply)"
# Look specifically for CIFS/Azurite related test code
echo -e "\nChecking for CIFS/Azurite test code..."
rg -A 10 "func.*Test.*(CIFS|Azurite|AzBlob)" --type go
Length of output: 19036
manager/integration/tests/backupstore.py (4)
20-21
: LGTM: Import statements are correctly added.
The new imports for CIFS and Azurite backup target support are properly placed with other similar imports.
69-72
: LGTM: Backup store type handling is properly extended.
The set_random_backupstore fixture correctly handles the new backup store types (CIFS and Azure Blob) by calling their respective setup functions.
125-135
:
Implement missing CIFS-specific functions.
While the CIFS setup function is implemented correctly, several CIFS-specific functions are still missing. These functions are required for full CIFS support:
- backupstore_get_backup_volume_prefix
- backupstore_get_backup_cfg_file_path
- backupstore_get_volume_cfg_file_path
- backupstore_get_backup_blocks_dir
- backupstore_create_file_in_backupstore
- backupstore_delete_file_in_backupstore
307-307
: Consider implementing CIFS and Azurite support in these functions.
The current implementation skips tests for unsupported backup store types. While this is a valid temporary solution, consider implementing proper support for CIFS and Azurite in these functions to ensure full test coverage.
Let's verify which functions need implementation:
Also applies to: 344-344, 367-367, 390-390, 416-416, 466-466, 514-514, 554-554, 596-596, 665-665, 714-714
test_framework/scripts/longhorn-setup.sh (4)
336-337
: LGTM!
The new backup store URLs for CIFS and Azurite follow the same pattern as the existing ones and point to the correct repository paths.
345-366
: Reusing existing review comment.
The previous review already identified critical improvements needed in this function, including:
- Security concerns with hardcoded credentials
- Process management issues with background port forwarding
- Missing error handling
- Magic numbers in retry logic
426-431
: Reusing existing review comment.
The previous review already suggested extracting the duplicated backup store type handling logic into a helper function.
486-491
: Reusing existing review comment.
The previous review already suggested extracting the duplicated backup store type handling logic into a helper function.
manager/integration/tests/common.py (2)
3898-3900
: LGTM! Clean implementation of CIFS backup target check.
The function follows the established pattern for backup target validation and correctly checks for the "cifs://" prefix.
3902-3904
: LGTM! Clean implementation of Azurite backup target check.
The function follows the established pattern for backup target validation and correctly checks for the "azblob://" prefix.
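The two prefix checks can be sketched as plain helpers (the function names and target URLs below are illustrative, not the exact ones in common.py):

```python
def is_backup_target_cifs(url):
    # CIFS backup targets use the cifs:// scheme
    return url.startswith("cifs://")

def is_backup_target_azurite(url):
    # Azurite (Azure Blob) backup targets use the azblob:// scheme
    return url.startswith("azblob://")

# Illustrative target strings following the "<url>$<credential-secret>" convention
print(is_backup_target_cifs("cifs://longhorn-test-cifs-svc.default/backupstore$cifs-secret"))      # → True
print(is_backup_target_azurite("azblob://longhorn-test-azurite@core.windows.net/$azblob-secret"))  # → True
print(is_backup_target_azurite("nfs://longhorn-test-nfs-svc.default:/opt/backupstore"))            # → False
```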
manager/integration/tests/test_ha.py (1)
Line range hint 453-494
: Handle exceptions when crashing the engine process
In the test_engine_crash_for_restore_volume function, the engine process is crashed using crash_engine_process_with_sigkill. Ensure that the function handles exceptions appropriately in case the crash does not behave as expected, to prevent the test from failing unexpectedly.
ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "nohup kubectl port-forward --address 0.0.0.0 service/azblob-service 20001:10000 > /dev/null 2>&1 &" | ||
ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "az storage container create -n longhorn-test-azurite --connection-string 'DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://0.0.0.0:20001/devstoreaccount1;'" |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Security concerns in Azurite setup.
- Port forwarding is exposed to all interfaces (0.0.0.0), which is broader than necessary. Consider limiting it to specific interfaces or using localhost.
- The connection string contains sensitive credentials in plaintext. Consider:
- Moving credentials to a Kubernetes secret
- Using environment variables
- Using a more secure method to pass credentials
Example improvement for port forwarding:
- ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "nohup kubectl port-forward --address 0.0.0.0 service/azblob-service 20001:10000 > /dev/null 2>&1 &"
+ ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "nohup kubectl port-forward service/azblob-service 20001:10000 > /dev/null 2>&1 &"
Example improvement for credentials:
- ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "az storage container create -n longhorn-test-azurite --connection-string 'DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://0.0.0.0:20001/devstoreaccount1;'"
+ CONNECTION_STRING=$(kubectl get secret azurite-credentials -n longhorn-system -o jsonpath='{.data.connection-string}' | base64 -d)
+ ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "az storage container create -n longhorn-test-azurite --connection-string \"\${CONNECTION_STRING}\""
Committable suggestion skipped: line range outside the PR's diff.
elif [[ $BACKUP_STORE_TYPE = "cifs" ]]; then
  yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${CIFS_BACKUP_STORE}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
elif [[ $BACKUP_STORE_TYPE = "azurite" ]]; then
  yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${AZURITE_BACKUP_STORE}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Use name-based selection for environment variables.
The current implementation uses index-based access (env[1]), which is fragile. As noted in previous reviews, the environment variable should be selected by name for reliability.
Apply this diff to fix the environment variable selection:
- yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${CIFS_BACKUP_STORE}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
+ yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[] |= select(.name == "LONGHORN_BACKUPSTORE").value = "'${CIFS_BACKUP_STORE}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
- yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${AZURITE_BACKUP_STORE}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
+ yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[] |= select(.name == "LONGHORN_BACKUPSTORE").value = "'${AZURITE_BACKUP_STORE}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
This change also needs to be applied to the existing S3 and NFS cases for consistency.
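To illustrate why name-based selection is more robust than a positional `env[1]`, here is a minimal Python model of the same manifest edit (the dict is a stand-in for the parsed test.yaml; the real scripts do this with yq):

```python
manifest = {
    "spec": {"containers": [{
        "name": "longhorn-test",
        "env": [
            {"name": "LONGHORN_JUNIT_REPORT_PATH", "value": "/tmp/report.xml"},
            {"name": "LONGHORN_BACKUPSTORE", "value": ""},
        ],
    }]}
}

def set_env_by_name(container, name, value):
    # Look the entry up by its name instead of a positional index,
    # so reordering the env list cannot silently update the wrong variable.
    for env in container["env"]:
        if env["name"] == name:
            env["value"] = value
            return
    raise KeyError("no env entry named %s" % name)

set_env_by_name(manifest["spec"]["containers"][0], "LONGHORN_BACKUPSTORE",
                "cifs://longhorn-test-cifs-svc.default/backupstore$cifs-secret")
print(manifest["spec"]["containers"][0]["env"][1]["value"])
```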
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
elif [[ $BACKUP_STORE_TYPE = "cifs" ]]; then
  yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[] |= select(.name == "LONGHORN_BACKUPSTORE").value = "'${CIFS_BACKUP_STORE}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
elif [[ $BACKUP_STORE_TYPE = "azurite" ]]; then
  yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[] |= select(.name == "LONGHORN_BACKUPSTORE").value = "'${AZURITE_BACKUP_STORE}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
  -f ${NFS_BACKUPSTORE_URL} \
  -f ${CIFS_BACKUPSTORE_URL} \
  -f ${AZURITE_BACKUPSTORE_URL}
setup_azuitize_backup_store
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Fix typo in function name.
The function name setup_azuitize_backup_store contains a typo. It should be setup_azurite_backup_store to match the backup store type.
- setup_azuitize_backup_store
+ setup_azurite_backup_store
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
  -f ${NFS_BACKUPSTORE_URL} \
  -f ${CIFS_BACKUPSTORE_URL} \
  -f ${AZURITE_BACKUPSTORE_URL}
setup_azurite_backup_store
backup_store_type = set_random_backupstore
if backup_store_type not in ["nfs", "s3"]:
    pytest.skip("Skip test case because the backup store type is not supported")  # NOQA
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
🛠️ Refactor suggestion
Use pytest.mark.skipif decorator for conditional test skipping
Inside the function test_all_replica_restore_failure, the test is conditionally skipped using pytest.skip. For better practice, use the @pytest.mark.skipif decorator at the function definition to conditionally skip the test based on the backup store type.
Apply this diff:
- backup_store_type = set_random_backupstore
- if backup_store_type not in ["nfs", "s3"]:
- pytest.skip("Skip test case because the backup store type is not supported") # NOQA
+@pytest.mark.skipif(set_random_backupstore not in ["nfs", "s3"], reason="Backup store type not supported")
def test_all_replica_restore_failure(set_random_backupstore, client, core_api, volume_name, csi_pv, pvc, pod_make): # NOQA
Committable suggestion skipped: line range outside the PR's diff.
longhorn/longhorn-9699 Signed-off-by: Chris <[email protected]>
d12e677
to
20a758d
Compare
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Actionable comments posted: 1
🧹 Outside diff range and nitpick comments (10)
manager/integration/README.md (2)
19-19
: Improve documentation formatting and clarity.
The documentation needs some refinements:
- The Azurite setup note should be properly formatted:

-For Azurite, there are some manual steps need to be done after manifest deployed(https://confluence.suse.com/display/LON/Setup+Azurite+Backupstore+For+Testing).
+For Azurite, there are some manual steps that need to be done after the manifest is deployed. See [Azurite Backupstore Setup Guide](https://confluence.suse.com/display/LON/Setup+Azurite+Backupstore+For+Testing) for details.

Also applies to: 21-21
22-26
: Add language specifier to code block.
Add `shell` as the language specifier to the code block for proper syntax highlighting:

-```
+```shell
 kubectl create -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/minio-backupstore.yaml \
 -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/nfs-backupstore.yaml \
 -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/cifs-backupstore.yaml \
 -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/azurite-backupstore.yaml
 ```
🪛 Markdownlint
22-22: null
Fenced code blocks should have a language specified(MD040, fenced-code-language)
e2e/README.md (1)
19-19
: Improve grammar in the Azurite setup note.
The sentence structure needs improvement. Apply this change:

- For Azurite, there are some manual steps need to be done after manifest deployed(https://confluence.suse.com/display/LON/Setup+Azurite+Backupstore+For+Testing).
+ For Azurite, there are some manual steps that need to be completed after the manifest is deployed (https://confluence.suse.com/display/LON/Setup+Azurite+Backupstore+For+Testing).
🪛 LanguageTool
[uncategorized] ~19-~19: A determiner appears to be missing. Consider inserting it.
Context: ...or Azurite, there are some manual steps need to be done after manifest deployed(http...(AI_EN_LECTOR_MISSING_DETERMINER)
[uncategorized] ~19-~19: You might be missing the article “the” here.
Context: ...some manual steps need to be done after manifest deployed(https://confluence.suse.com/di...(AI_EN_LECTOR_MISSING_DETERMINER_THE)
🪛 Markdownlint
19-19: null
Bare URL used(MD034, no-bare-urls)
pipelines/utilities/run_longhorn_e2e_test.sh (1)
29-30
: Document Azurite's SLES-only support.
According to the PR objectives, Azurite currently only supports the SLES pipeline. Consider adding a comment above the Azurite condition to document this limitation and prevent misuse on unsupported distributions.

+  # Note: Azurite currently only supports SLES pipeline
   elif [[ $BACKUP_STORE_TYPE = "azurite" ]]; then
     yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${AZURITE_BACKUP_STORE}'"' ${LONGHORN_TESTS_MANIFEST_FILE_PATH}
manager/integration/tests/backupstore.py (3)
125-135
: Consider adding error handling for missing credentials.
The CIFS setup function should validate the credential secret format and handle cases where the credential secret is missing or malformed.

 def set_backupstore_cifs(client):
     backupstores = get_backupstore_url()
     poll_interval = get_backupstore_poll_interval()
     for backupstore in backupstores:
         if is_backupTarget_cifs(backupstore):
             backupsettings = backupstore.split("$")
+            if len(backupsettings) != 2:
+                raise ValueError("Invalid CIFS backup target format. Expected format: <url>$<credential>")
             set_backupstore_url(client, backupsettings[0])
             set_backupstore_credential_secret(client, backupsettings[1])
             set_backupstore_poll_interval(client, poll_interval)
             break
137-147
: Consider adding error handling for missing credentials.
Similar to the CIFS setup, the Azurite setup function should validate the credential secret format and handle cases where the credential secret is missing or malformed.

 def set_backupstore_azurite(client):
     backupstores = get_backupstore_url()
     poll_interval = get_backupstore_poll_interval()
     for backupstore in backupstores:
         if is_backupTarget_azurite(backupstore):
             backupsettings = backupstore.split("$")
+            if len(backupsettings) != 2:
+                raise ValueError("Invalid Azurite backup target format. Expected format: <url>$<credential>")
             set_backupstore_url(client, backupsettings[0])
             set_backupstore_credential_secret(client, backupsettings[1])
             set_backupstore_poll_interval(client, poll_interval)
             break
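The `split("$")` convention both setup functions rely on can be factored into one small parser (a sketch; the function name is hypothetical, not an actual backupstore.py helper):

```python
def parse_backup_target(target):
    # Backup store entries are encoded as "<url>$<credential-secret>".
    # Targets without credentials (e.g. NFS) carry no "$" separator.
    parts = target.split("$")
    if len(parts) == 1:
        return parts[0], None
    if len(parts) == 2 and parts[1]:
        return parts[0], parts[1]
    raise ValueError(
        "Invalid backup target, expected <url>$<credential-secret>: " + target)

print(parse_backup_target("azblob://longhorn-test-azurite@core.windows.net/$azblob-secret"))
print(parse_backup_target("nfs://longhorn-test-nfs-svc.default:/opt/backupstore"))
```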
307-307
: Consider implementing CIFS operations first.
Since CIFS is a file-based protocol similar to NFS, you could implement CIFS operations by adapting the existing NFS implementations. This would provide a good foundation before tackling the Azurite implementations.
Would you like me to help generate the CIFS implementations for these operations? We can start with the most critical ones first.
Also applies to: 344-344, 367-367, 390-390, 416-416, 466-466, 514-514, 554-554, 596-596, 665-665, 714-714
manager/integration/tests/test_basic.py (3)
676-679
: Consider extracting backup store type validation to a helper function.
The backup store type validation logic is duplicated across multiple test functions. Consider extracting it to a reusable helper function for better maintainability and consistency.

def skip_if_unsupported_backupstore(backup_store_type):
    """Helper function to skip tests for unsupported backup store types"""
    if backup_store_type not in ["nfs", "s3"]:
        pytest.skip("Test requires NFS or S3 backup store type")
1113-1116
: Duplicated backup store validation code.
This is the same validation logic as in test_backup_block_deletion(). Consider using the suggested helper function to reduce code duplication.
1211-1214
: Duplicated backup store validation code.
This is the same validation logic as in previous test functions. Consider using the suggested helper function to reduce code duplication.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (25)
- e2e/README.md (1 hunks)
- manager/integration/README.md (1 hunks)
- manager/integration/deploy/test.yaml (1 hunks)
- manager/integration/tests/backupstore.py (14 hunks)
- manager/integration/tests/common.py (1 hunks)
- manager/integration/tests/test_basic.py (4 hunks)
- manager/integration/tests/test_ha.py (1 hunks)
- pipelines/utilities/install_backupstores.sh (1 hunks)
- pipelines/utilities/run_longhorn_e2e_test.sh (3 hunks)
- pipelines/utilities/run_longhorn_test.sh (2 hunks)
- test_framework/scripts/longhorn-setup.sh (3 hunks)
- test_framework/terraform/aws/centos/user-data-scripts/provision_k3s_agent.sh.tpl (1 hunks)
- test_framework/terraform/aws/centos/user-data-scripts/provision_rke2_agent.sh.tpl (1 hunks)
- test_framework/terraform/aws/oracle/user-data-scripts/provision_k3s_agent.sh.tpl (1 hunks)
- test_framework/terraform/aws/oracle/user-data-scripts/provision_rke2_agent.sh.tpl (1 hunks)
- test_framework/terraform/aws/rhel/user-data-scripts/provision_k3s_agent.sh.tpl (1 hunks)
- test_framework/terraform/aws/rhel/user-data-scripts/provision_rke2_agent.sh.tpl (1 hunks)
- test_framework/terraform/aws/rockylinux/user-data-scripts/provision_k3s_agent.sh.tpl (1 hunks)
- test_framework/terraform/aws/rockylinux/user-data-scripts/provision_rke2_agent.sh.tpl (1 hunks)
- test_framework/terraform/aws/sles/user-data-scripts/provision_k3s_server.sh.tpl (2 hunks)
- test_framework/terraform/aws/sles/user-data-scripts/provision_rke2_server.sh.tpl (1 hunks)
- test_framework/terraform/aws/ubuntu/user-data-scripts/provision_k3s_agent.sh.tpl (1 hunks)
- test_framework/terraform/aws/ubuntu/user-data-scripts/provision_rke2_agent.sh.tpl (1 hunks)
- test_framework/terraform/equinix/ubuntu/user-data-scripts/provision_k3s_agent.sh.tpl (1 hunks)
- test_framework/terraform/equinix/ubuntu/user-data-scripts/provision_rke2_agent.sh.tpl (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (17)
- manager/integration/deploy/test.yaml
- pipelines/utilities/install_backupstores.sh
- pipelines/utilities/run_longhorn_test.sh
- test_framework/terraform/aws/centos/user-data-scripts/provision_k3s_agent.sh.tpl
- test_framework/terraform/aws/centos/user-data-scripts/provision_rke2_agent.sh.tpl
- test_framework/terraform/aws/oracle/user-data-scripts/provision_k3s_agent.sh.tpl
- test_framework/terraform/aws/oracle/user-data-scripts/provision_rke2_agent.sh.tpl
- test_framework/terraform/aws/rhel/user-data-scripts/provision_k3s_agent.sh.tpl
- test_framework/terraform/aws/rhel/user-data-scripts/provision_rke2_agent.sh.tpl
- test_framework/terraform/aws/rockylinux/user-data-scripts/provision_k3s_agent.sh.tpl
- test_framework/terraform/aws/rockylinux/user-data-scripts/provision_rke2_agent.sh.tpl
- test_framework/terraform/aws/sles/user-data-scripts/provision_k3s_server.sh.tpl
- test_framework/terraform/aws/sles/user-data-scripts/provision_rke2_server.sh.tpl
- test_framework/terraform/aws/ubuntu/user-data-scripts/provision_k3s_agent.sh.tpl
- test_framework/terraform/aws/ubuntu/user-data-scripts/provision_rke2_agent.sh.tpl
- test_framework/terraform/equinix/ubuntu/user-data-scripts/provision_k3s_agent.sh.tpl
- test_framework/terraform/equinix/ubuntu/user-data-scripts/provision_rke2_agent.sh.tpl
🧰 Additional context used
🪛 LanguageTool
e2e/README.md
[uncategorized] ~19-~19: A determiner appears to be missing. Consider inserting it.
Context: ...or Azurite, there are some manual steps need to be done after manifest deployed(http...
(AI_EN_LECTOR_MISSING_DETERMINER)
[uncategorized] ~19-~19: You might be missing the article “the” here.
Context: ...some manual steps need to be done after manifest deployed(https://confluence.suse.com/di...
(AI_EN_LECTOR_MISSING_DETERMINER_THE)
manager/integration/README.md
[uncategorized] ~21-~21: A determiner appears to be missing. Consider inserting it.
Context: ...or Azurite, there are some manual steps need to be done after manifest deployed(http...
(AI_EN_LECTOR_MISSING_DETERMINER)
[uncategorized] ~21-~21: You might be missing the article “the” here.
Context: ...some manual steps need to be done after manifest deployed(https://confluence.suse.com/di...
(AI_EN_LECTOR_MISSING_DETERMINER_THE)
🪛 Markdownlint
e2e/README.md
19-19: null
Bare URL used
(MD034, no-bare-urls)
20-20: null
Fenced code blocks should have a language specified
(MD040, fenced-code-language)
manager/integration/README.md
21-21: null
Bare URL used
(MD034, no-bare-urls)
22-22: null
Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🔇 Additional comments (13)
manager/integration/README.md (1)
24-26
:
Critical: Referenced manifest files are missing.
The kubectl command references manifest files that don't exist in the repository:
cifs-backupstore.yaml
azurite-backupstore.yaml
These files need to be added to the repository before merging this PR. Additionally, consider:
- Adding these files to the deploy/backupstores/ directory
- Documenting the configuration options for each backup store type
e2e/README.md (1)
22-24
: Verify the existence and content of the new backup store YAML files.
The deployment commands reference new YAML files for CIFS and Azurite backup stores.
Let's verify the existence and content of these files:
Additionally, since Azurite requires manual setup steps after deployment, consider adding a comment in the command block to remind users about this requirement:
kubectl create -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/minio-backupstore.yaml \
-f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/nfs-backupstore.yaml \
-f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/cifs-backupstore.yaml \
- -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/azurite-backupstore.yaml
+ -f https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/azurite-backupstore.yaml # Requires additional setup steps after deployment
✅ Verification successful
The YAML files exist and are correctly referenced
The verification shows that:
- Both YAML files (
cifs-backupstore.yaml
andazurite-backupstore.yaml
) exist in the master branch of the longhorn/longhorn repository (HTTP 200 responses) - The files are consistently referenced across multiple files in the codebase
- The deployment order is maintained across all references
- There's a
setup_azuitize_backup_store
function called after deployment, indicating proper handling of additional setup requirements
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check if the YAML files exist and contain the necessary configurations
# Test 1: Check if the YAML files exist in the repository
echo "Checking YAML files existence..."
for file in cifs-backupstore.yaml azurite-backupstore.yaml; do
rg -l "deploy/backupstores/$file"
done
# Test 2: Verify the content of these files
echo "Checking YAML files content..."
for file in cifs-backupstore.yaml azurite-backupstore.yaml; do
rg -A 10 "kind: (Deployment|Service)" "deploy/backupstores/$file"
done
Length of output: 1084
Script:
#!/bin/bash
# Let's check the content of the files that reference these YAML files to understand the context better
echo "Checking test_framework/scripts/longhorn-setup.sh..."
rg -A 5 "cifs-backupstore.yaml|azurite-backupstore.yaml" "test_framework/scripts/longhorn-setup.sh"
echo -e "\nChecking pipelines/utilities/install_backupstores.sh..."
rg -A 5 "cifs-backupstore.yaml|azurite-backupstore.yaml" "pipelines/utilities/install_backupstores.sh"
echo -e "\nChecking manager/integration/README.md..."
rg -A 5 "cifs-backupstore.yaml|azurite-backupstore.yaml" "manager/integration/README.md"
# Let's also check if these files exist in the master branch of longhorn/longhorn repository
echo -e "\nChecking if files exist in the master branch URLs..."
for url in "https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/cifs-backupstore.yaml" "https://raw.githubusercontent.com/longhorn/longhorn/master/deploy/backupstores/azurite-backupstore.yaml"; do
curl -s -I "$url" | head -n 1
done
Length of output: 2772
pipelines/utilities/run_longhorn_e2e_test.sh (3)
3-4
: LGTM! Backup store URLs follow standard patterns.
The new backup store URLs for CIFS and Azurite follow consistent patterns with proper service endpoints and secret references.
27-30
: Use name-based selection for environment variables.
The implementation follows the existing pattern but inherits the fragility of index-based selection. As noted in previous reviews, the environment variable should be selected by name for reliability.
84-87
: LGTM! Out-of-cluster test configuration is consistent.
The implementation correctly sets the LONGHORN_BACKUPSTORES variable for CIFS and Azurite, following the same pattern as existing backup store types.
manager/integration/tests/backupstore.py (3)
20-21
: LGTM: Import statements are correctly placed.
The new imports for CIFS and Azurite detection functions are appropriately added alongside other backup target imports.
69-72
: LGTM: Backup store type handling is properly extended.
The set_random_backupstore fixture correctly handles the new backup store types by calling their respective setup functions.
307-307
:
Missing implementations for CIFS and Azurite backup operations.
Several backup store operations are skipping CIFS and Azurite implementations:
- backupstore_get_backup_volume_prefix
- backupstore_get_backup_cfg_file_path
- backupstore_get_volume_cfg_file_path
- backupstore_get_backup_blocks_dir
- backupstore_create_file
- backupstore_write_backup_cfg_file
- backupstore_delete_file
- backupstore_delete_backup_cfg_file
- backupstore_delete_volume_cfg_file
- backupstore_delete_random_backup_block
- backupstore_count_backup_block_files
While the current implementation skips these operations for unsupported types, this limits the testing coverage for CIFS and Azurite backup stores.
Let's verify which test cases are affected by these missing implementations:
Also applies to: 344-344, 367-367, 390-390, 416-416, 466-466, 514-514, 554-554, 596-596, 665-665, 714-714
test_framework/scripts/longhorn-setup.sh (3)
486-491
: Use the same helper function for regular tests.
426-431
: 🛠️ Refactor suggestion
Consider extracting duplicated backup store handling logic.
The backup store type handling logic is duplicated between upgrade tests and regular tests. Consider extracting this into a helper function to improve maintainability.
Create a helper function:
+get_backup_store_for_test() {
+ local manifest_file="$1"
+ local store_type="$2"
+ local store_index=1
+
+ case "$store_type" in
+ "s3") store_index=1 ;;
+ "nfs") store_index=2 ;;
+ "cifs") store_index=3 ;;
+ "azurite") store_index=4 ;;
+ esac
+
+ yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' "$manifest_file" | \
+ awk -F ',' "{print \$$store_index}" | sed 's/ *//'
+}
Then use it in both functions:
- elif [[ $BACKUP_STORE_TYPE = "cifs" ]]; then
- BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $3}' | sed 's/ *//'`
- yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH}
- elif [[ $BACKUP_STORE_TYPE = "azurite" ]]; then
- BACKUP_STORE_FOR_TEST=`yq e 'select(.spec.containers[0] != null).spec.containers[0].env[1].value' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH} | awk -F ',' '{print $4}' | sed 's/ *//'`
- yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH}
+ elif [[ $BACKUP_STORE_TYPE = "cifs" ]] || [[ $BACKUP_STORE_TYPE = "azurite" ]]; then
+ BACKUP_STORE_FOR_TEST=$(get_backup_store_for_test "$LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH" "$BACKUP_STORE_TYPE")
+ yq e -i 'select(.spec.containers[0] != null).spec.containers[0].env[1].value="'${BACKUP_STORE_FOR_TEST}'"' ${LONGHORN_UPGRADE_TESTS_MANIFEST_FILE_PATH}
Likely invalid or redundant comment.
336-342
:
Fix typo in function name.
The function name setup_azuitize_backup_store contains a typo. It should be setup_azurite_backup_store to match the backup store type.
Apply this diff to fix the typo:
- setup_azuitize_backup_store
+ setup_azurite_backup_store
Likely invalid or redundant comment.
manager/integration/tests/common.py (2)
3898-3900
: LGTM!
The function follows the established pattern for backup target URL validation and is implemented correctly.
3902-3904
: LGTM!
The function follows the established pattern for backup target URL validation and is implemented correctly.
setup_azuitize_backup_store(){
  RETRY=0
  MAX_RETRY=60
  until (kubectl get pods | grep 'longhorn-test-azblob' | grep 'Running'); do
    echo 'Waiting azurite pod running'
    sleep 5
    if [ $RETRY -eq $MAX_RETRY ]; then
      break
    fi
    RETRY=$((RETRY+1))
  done

  AZBLOB_ENDPOINT=$(echo -n "http://$(kubectl get svc azblob-service -o jsonpath='{.spec.clusterIP}'):10000/" | base64)
  kubectl -n longhorn-system patch secret azblob-secret \
    --type=json \
    -p="[{'op': 'replace', 'path': '/data/AZBLOB_ENDPOINT', 'value': \"${AZBLOB_ENDPOINT}\"}]"

  CONTROL_PLANE_PUBLIC_IP=$(cat /tmp/controlplane_public_ip)
  # port forward and az container create need to be run on control node
  ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "nohup kubectl port-forward --address 0.0.0.0 service/azblob-service 20001:10000 > /dev/null 2>&1 &"
  ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "az storage container create -n longhorn-test-azurite --connection-string 'DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://0.0.0.0:20001/devstoreaccount1;'"
}
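The wait-for-pod loop above is a bounded poll; the same pattern, generically, in Python (a sketch of the retry logic only, not code from the script):

```python
import time

def wait_until(check, max_retry=60, interval=5, sleep=time.sleep):
    # Poll `check` up to max_retry times, pausing `interval` seconds
    # between failed attempts; stop early as soon as it succeeds.
    for _ in range(max_retry):
        if check():
            return True
        sleep(interval)
    return False

calls = []
def pod_running():
    calls.append(1)
    return len(calls) >= 3  # simulate the pod becoming Ready on the third poll

print(wait_until(pod_running, max_retry=10, interval=0))  # → True, after 3 polls
```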
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Critical improvements needed in Azurite setup function.
Several issues need to be addressed:
- Security: Hardcoded credentials in the connection string
- Process Management: Background port-forward process lacks cleanup
- Error Handling: Missing error checks for SSH commands
- Code Quality: Magic numbers in retry logic
Apply these improvements:
-setup_azuitize_backup_store(){
+setup_azurite_backup_store(){
+ # Maximum wait time for Azurite pod: 5 minutes
RETRY=0
- MAX_RETRY=60
+ MAX_RETRY=30
+ RETRY_INTERVAL=10
until (kubectl get pods | grep 'longhorn-test-azblob' | grep 'Running'); do
echo 'Waiting azurite pod running'
- sleep 5
+ sleep $RETRY_INTERVAL
if [ $RETRY -eq $MAX_RETRY ]; then
+ echo "Error: Timeout waiting for Azurite pod"
break
fi
RETRY=$((RETRY+1))
done
+ # Verify pod is actually running
+ if ! kubectl get pods | grep 'longhorn-test-azblob' | grep -q 'Running'; then
+ echo "Error: Azurite pod failed to start"
+ return 1
+ fi
AZBLOB_ENDPOINT=$(echo -n "http://$(kubectl get svc azblob-service -o jsonpath='{.spec.clusterIP}'):10000/" | base64)
kubectl -n longhorn-system patch secret azblob-secret \
--type=json \
-p="[{'op': 'replace', 'path': '/data/AZBLOB_ENDPOINT', 'value': \"${AZBLOB_ENDPOINT}\"}]"
CONTROL_PLANE_PUBLIC_IP=$(cat /tmp/controlplane_public_ip)
CONTROL_PLANE_PRIVATE_IP=$(kubectl get nodes -o wide | awk '/control-plane/ {print $6}')
+
+ # Store port-forward PID for cleanup
+ PORT_FORWARD_PID_FILE="/tmp/azurite_port_forward.pid"
# port forward and az container create need to be run on control node
- ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "nohup kubectl port-forward --address 0.0.0.0 service/azblob-service 20001:10000 > /dev/null 2>&1 &"
+ if ! ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "nohup kubectl port-forward --address 0.0.0.0 service/azblob-service 20001:10000 > /dev/null 2>&1 & echo \$! > ${PORT_FORWARD_PID_FILE}"; then
+ echo "Error: Failed to start port forwarding"
+ return 1
+ fi
+
+ # Load connection string from secret or config
+ STORAGE_CONN_STRING="DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=\${AZURITE_ACCOUNT_KEY};BlobEndpoint=http://${CONTROL_PLANE_PRIVATE_IP}:20001/devstoreaccount1;"
- ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "az storage container create -n longhorn-test-azurite --connection-string 'DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==;BlobEndpoint=http://${CONTROL_PLANE_PRIVATE_IP}:20001/devstoreaccount1;'"
+ if ! ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "az storage container create -n longhorn-test-azurite --connection-string '${STORAGE_CONN_STRING}'"; then
+ echo "Error: Failed to create storage container"
+ # Cleanup port-forward
+ ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "kill \$(cat ${PORT_FORWARD_PID_FILE})"
+ return 1
+ fi
}
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
setup_azurite_backup_store(){
  # Maximum wait time for Azurite pod: 5 minutes
  RETRY=0
  MAX_RETRY=30
  RETRY_INTERVAL=10
  until (kubectl get pods | grep 'longhorn-test-azblob' | grep 'Running'); do
    echo 'Waiting azurite pod running'
    sleep $RETRY_INTERVAL
    if [ $RETRY -eq $MAX_RETRY ]; then
      echo "Error: Timeout waiting for Azurite pod"
      break
    fi
    RETRY=$((RETRY+1))
  done

  # Verify pod is actually running
  if ! kubectl get pods | grep 'longhorn-test-azblob' | grep -q 'Running'; then
    echo "Error: Azurite pod failed to start"
    return 1
  fi

  AZBLOB_ENDPOINT=$(echo -n "http://$(kubectl get svc azblob-service -o jsonpath='{.spec.clusterIP}'):10000/" | base64)
  kubectl -n longhorn-system patch secret azblob-secret \
    --type=json \
    -p="[{'op': 'replace', 'path': '/data/AZBLOB_ENDPOINT', 'value': \"${AZBLOB_ENDPOINT}\"}]"

  CONTROL_PLANE_PUBLIC_IP=$(cat /tmp/controlplane_public_ip)
  CONTROL_PLANE_PRIVATE_IP=$(kubectl get nodes -o wide | awk '/control-plane/ {print $6}')

  # Store port-forward PID for cleanup
  PORT_FORWARD_PID_FILE="/tmp/azurite_port_forward.pid"

  # port forward and az container create need to be run on control node
  if ! ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "nohup kubectl port-forward --address 0.0.0.0 service/azblob-service 20001:10000 > /dev/null 2>&1 & echo \$! > ${PORT_FORWARD_PID_FILE}"; then
    echo "Error: Failed to start port forwarding"
    return 1
  fi

  # Load connection string from secret or config
  STORAGE_CONN_STRING="DefaultEndpointsProtocol=http;AccountName=devstoreaccount1;AccountKey=\${AZURITE_ACCOUNT_KEY};BlobEndpoint=http://${CONTROL_PLANE_PRIVATE_IP}:20001/devstoreaccount1;"
  if ! ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "az storage container create -n longhorn-test-azurite --connection-string '${STORAGE_CONN_STRING}'"; then
    echo "Error: Failed to create storage container"
    # Cleanup port-forward
    ssh ec2-user@${CONTROL_PLANE_PUBLIC_IP} "kill \$(cat ${PORT_FORWARD_PID_FILE})"
    return 1
  fi
}
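The retry/sleep loop used in this setup function recurs across these pipeline scripts, so it can be factored into a small reusable helper. A minimal sketch — the `wait_for` name and argument layout are illustrative only, not part of this PR:

```shell
# Illustrative helper (not in the PR): retry a command until it succeeds
# or the retry budget is exhausted.
# Usage: wait_for "<description>" <max_retries> <interval_seconds> -- <command...>
wait_for() {
  local desc="$1" max="$2" interval="$3"
  shift 3
  [ "$1" = "--" ] && shift
  local retry=0
  until "$@"; do
    if [ "$retry" -ge "$max" ]; then
      echo "Error: timeout waiting for ${desc}" >&2
      return 1
    fi
    retry=$((retry+1))
    sleep "$interval"
  done
}

# The pod-readiness wait above could then be expressed as:
# wait_for "azurite pod" 30 10 -- sh -c \
#   "kubectl get pods | grep longhorn-test-azblob | grep -q Running"
```

For pod readiness specifically, `kubectl wait --for=condition=Ready pod -l <selector> --timeout=300s` is another option that avoids hand-rolled polling entirely.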
Updated. |
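On the background port-forward called out in the review: recording the PID only helps if something actually reaps the process when the script exits. A generic sketch of the record-and-trap cleanup pattern — `start_bg`, `cleanup`, and the `sleep` stand-in are placeholders, not the actual kubectl port-forward call:

```shell
# Illustrative cleanup pattern: start a background process, record its PID,
# and kill it on script exit via an EXIT trap.
PID_FILE=$(mktemp)

start_bg() {
  "$@" &
  echo $! > "$PID_FILE"
}

cleanup() {
  if [ -f "$PID_FILE" ]; then
    kill "$(cat "$PID_FILE")" 2>/dev/null
    rm -f "$PID_FILE"
  fi
}
trap cleanup EXIT

start_bg sleep 60   # stands in for the long-running port-forward
kill -0 "$(cat "$PID_FILE")" && echo "background process running"
```

Because the port-forward here runs on the control node over SSH, the trap would have to live in a script on that node (or the kill has to be issued over SSH, as the suggestion's error path does).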
Which issue(s) this PR fixes:
Issue #9699
What this PR does / why we need it:
ci: support cifs and azurite on pytest and e2e; set BACKUP_STORE_TYPE on Jenkins to cifs or azurite.
Special notes for your reviewer:
Currently Azurite is only supported on the SLES pipeline; support for other distros will need more time to implement.
Verified on a local Jenkins instance.
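Selecting the backup store via BACKUP_STORE_TYPE implies a dispatch step somewhere in the pipeline scripts. A hedged sketch of what such a switch might look like — the function name and messages are placeholders, not the PR's actual code:

```shell
# Illustrative dispatch on BACKUP_STORE_TYPE (names are placeholders;
# the real setup functions live in the pipeline scripts).
dispatch_backup_store() {
  case "${1:-minio}" in
    nfs)     echo "setup nfs backup store" ;;
    cifs)    echo "setup cifs backup store" ;;
    azurite) echo "setup azurite backup store" ;;
    minio)   echo "setup minio backup store" ;;
    *)       echo "unknown backup store type: $1" >&2; return 1 ;;
  esac
}

dispatch_backup_store "${BACKUP_STORE_TYPE:-minio}"
```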
Additional documentation or context
N/A
Summary by CodeRabbit
Release Notes
New Features
Documentation
Chores
Added the samba package in various provisioning scripts to support CIFS functionality.
Updated provisioning scripts to include samba installation.