No vars are available in case of conflicting var #16
Comments
> Removing this

This does not account for when a conflict is between two different PodPresets that both apply (with my suggestion above, the ENV value from the first preset evaluated would be set), but I'm not sure how you would best handle this. For a quick solution, the documentation could state clearly that an ENV var set by a pod/deployment/etc. takes precedence over any presets, but that if the conflict arises between two presets, the behavior is unpredictable. This would help us move along quickly.
BTW, in the legacy podpresets.settings.k8s.io implementation, the original ENV from container.spec takes precedence over the definition from the podpreset (at least as of k8s 1.10).
Happened to us as well.
Fell into this pit, found this issue, forgot, fell into it again, re-found this issue... There seem to be three approaches to merging the container's environment with the PodPreset environment, with respect to variables existing in both:

1. the container's original value takes precedence; presets only add new variables,
2. the preset's value takes precedence and overrides the container's value,
3. the overlap is treated as a conflict, i.e. an error.
In case 3, two different reactions seem plausible: (a) fail the complete webhook operation, so the pod is rejected, or (b) skip applying the conflicting preset. To me it seems that "3a" is what the developers had in mind - but instead of failing the complete webhook operation, the merging error isn't propagated. Hence the resulting problem: the pod starts, but without any environment variables.

I do second @wpmoore's approach: if the variable is already set in the container, do not override it (variant 1) - only new variables are added to the container. I've created a version that, as noted in @wpmoore's comment, has that "if" block removed, and started testing it (successful so far, but not all cases are covered yet).

While looking at the code, I noticed https://github.com/redhat-cop/podpreset-webhook/blob/master/pkg/handler/handler.go#L186 - can someone explain why the original env is updated, too? I'd have thought it would be sufficient to append the new var to "mergedEnv", which is returned at the end of the function.
Added PR #31, which may be extended to allow configuring the conflict resolution strategy via a PodPreset element and to add messages indicating the results.
Found my answer: the update to the "original env" is done so that subsequent loops over further PodPresets already see the updated value and have no need to re-apply a change to mergedEnv.
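For anyone following along, here is a minimal, self-contained sketch of the merge strategy discussed above ("variant 1": the container value wins, and later presets see earlier additions). It is illustrative only - the `EnvVar` struct mirrors the Name/Value pair of corev1.EnvVar, and all names are made up, not taken from pkg/handler/handler.go:

```go
package main

import "fmt"

// EnvVar mirrors the Name/Value pair of corev1.EnvVar for this sketch.
type EnvVar struct {
	Name  string
	Value string
}

// mergeEnv applies several presets' env vars to a container's env using
// the "variant 1" strategy: a variable already set on the container (or
// by an earlier preset) is never overridden; only new variables are added.
func mergeEnv(containerEnv []EnvVar, presets ...[]EnvVar) []EnvVar {
	merged := append([]EnvVar(nil), containerEnv...)
	seen := make(map[string]bool, len(merged))
	for _, v := range merged {
		seen[v.Name] = true
	}
	for _, presetEnv := range presets {
		for _, v := range presetEnv {
			if seen[v.Name] {
				continue // conflict: keep the existing value
			}
			// Appending here plays the role of "updating the original env"
			// in handler.go: the next preset in the loop already sees
			// variables added by earlier presets and won't re-apply them.
			merged = append(merged, v)
			seen[v.Name] = true
		}
	}
	return merged
}

func main() {
	container := []EnvVar{{Name: "VAR1", Value: "from-container"}}
	preset := []EnvVar{
		{Name: "VAR1", Value: "from-preset"}, // conflicting: ignored
		{Name: "VAR2", Value: "from-preset"}, // new: added
	}
	fmt.Println(mergeEnv(container, preset))
	// Output: [{VAR1 from-container} {VAR2 from-preset}]
}
```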
@jmozd are additional actions needed, or did this resolve the issue you experienced?
@sabre1041 The code does seem to work (the issues I originally experienced, i.e. no env vars at all for containers with conflicting variables set, were resolved), but I'd love to see two improvements where I could profit from a helping hand:

1. making the conflict resolution strategy configurable via the PodPreset resource, and
2. adding messages to indicate the results of the merge.
Issue: When there is a conflicting env var between the container and a PodPreset, no vars are available to the pod.
Environment: OpenShift 4.6 (managed cluster on IBM Cloud)
Reproduce: create a Deployment whose container sets an env var (VAR1) and a PodPreset that matches the deployment's pods and sets the same VAR1; a minimal sketch follows.
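A minimal pair of manifests along these lines should reproduce the conflict. This is illustrative only, based on the scenario described in this issue rather than the reporter's exact manifests; the `redhatcop.redhat.io/v1alpha1` CRD group, names, and image are assumptions - adjust to your installed CRD:

```yaml
# Assumed PodPreset CRD group for this project; verify against your install.
apiVersion: redhatcop.redhat.io/v1alpha1
kind: PodPreset
metadata:
  name: podpreset-sample-app
spec:
  selector:
    matchLabels:
      app: sample-app
  env:
    - name: VAR1               # also set on the container below
      value: from-podpreset
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sample-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sample-app
  template:
    metadata:
      labels:
        app: sample-app
    spec:
      containers:
        - name: app
          image: registry.access.redhat.com/ubi8/ubi-minimal   # any image works
          command: ["sleep", "infinity"]
          env:
            - name: VAR1       # conflicts with the PodPreset definition
              value: from-container
```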
Noticed behavior: When pods are created from the above deployment, no env vars are loaded. When there is no conflict I don't see this issue, but when there is a conflict, as with `VAR1` above, no vars (either from the container or from the PodPreset) are available to the pod.

Question: Is there a bug, or is this expected?
Additional notes: According to the legacy k8s docs - https://v1-19.docs.kubernetes.io/docs/tasks/inject-data-application/podpreset/#conflict-example - when there is a conflict, the PodPreset is not applied to the pods; but I see that it is applied here (the annotation `podpreset.admission.kubernetes.io/podpreset-sample-app=<resource-version>` is added to the pods).