Upgrade from Chart 2.4.4 #1372
Labels
kind/bug
Categorizes issue or PR as related to a bug.
lifecycle/stale
Denotes an issue or PR has remained open with no activity and has become stale.
Comments
The Kubernetes project currently lacks enough contributors to adequately respond to all issues. This bot triages un-triaged issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
k8s-ci-robot added the lifecycle/stale label on Sep 10, 2024
/kind bug
What happened?
We recently upgraded our EKS cluster to 1.29. We are using Managed Nodes with the amazon-eks-node-1.29-v20240227 AMI, and the EFS CSI Driver 1.5.6 deployed by Helm (Chart 2.4.4).
Following an upgrade of the driver from Chart 2.4.4 to Chart 2.4.5 (or higher), deployments using the EFS StorageClass stopped working correctly: the 'df' command hung on both the Pods and the Nodes. Examining /var/log/messages on the node, we found the following error message:
If we move the Pods mounting EFS volumes to a new node, the Pods run as expected.
Comparing both charts, the significant change is the EFS state directory, as outlined in the CHANGELOG. This leads us to suspect that stunnel cannot resume its connections after the upgrade.
To avoid replacing the nodes, we have identified two workarounds. The first is to patch the DaemonSet to use the original path, which can be done with the following command:
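A minimal sketch of what such a patch could look like. The DaemonSet name (efs-csi-node), the namespace, the volume index, and the host path /var/run/efs are all assumptions for illustration, not taken from the report; verify them against your own rendered manifests before using anything like this.

```shell
# Illustrative only: DaemonSet name, namespace, volume index, and path are
# assumptions -- inspect `kubectl -n kube-system get ds efs-csi-node -o yaml`
# to find the real values for your chart version.
kubectl -n kube-system patch daemonset efs-csi-node --type='json' \
  -p='[{"op": "replace",
        "path": "/spec/template/spec/volumes/2/hostPath/path",
        "value": "/var/run/efs"}]'
```

Patching the DaemonSet triggers a rolling restart of the node pods, so the change takes effect without replacing the underlying nodes.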
The second approach is to create a symbolic link prior to the upgrade:
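A sketch of the symlink idea, demonstrated on scratch directories. On a real node you would link the new state-directory path to the old one (as root) so stunnel still finds its existing state after the upgrade; the concrete paths below are stand-ins, not the chart's actual directories.

```shell
# Demonstrated with temporary directories; on a node, substitute the old
# and new state-directory paths from the chart CHANGELOG and run as root.
old_dir=$(mktemp -d)               # stands in for the pre-upgrade state dir
new_dir="${old_dir}.new"           # stands in for the post-upgrade state dir
touch "${old_dir}/stunnel-config"  # pretend existing stunnel state lives here
ln -s "${old_dir}" "${new_dir}"    # the new path now resolves to the old data
test -e "${new_dir}/stunnel-config" && echo "state visible via symlink"
```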
Is this the expected behavior? Thanks in advance!
What you expected to happen?
Container volumes should remain operational after the upgrade.
How to reproduce it (as minimally and precisely as possible)?
Upgrade from Chart 2.4.4 to Chart 2.4.5 or higher using Helmfile.
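For readers without Helmfile, the same upgrade can be sketched with plain Helm. The repo URL and chart name are the official aws-efs-csi-driver ones; the release name and namespace are assumptions.

```shell
# Release name and namespace are assumptions; adjust for your cluster.
helm repo add aws-efs-csi-driver https://kubernetes-sigs.github.io/aws-efs-csi-driver/
helm repo update
# Install the old chart, then upgrade across the state-directory change:
helm upgrade --install aws-efs-csi-driver aws-efs-csi-driver/aws-efs-csi-driver \
  --namespace kube-system --version 2.4.4
helm upgrade aws-efs-csi-driver aws-efs-csi-driver/aws-efs-csi-driver \
  --namespace kube-system --version 2.4.5
```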