
No vars are available in case of conflicting var #16

Open
pbikki opened this issue Jun 7, 2021 · 8 comments


pbikki commented Jun 7, 2021

Issue: When there is a conflicting env var between the container and the PodPreset, no env vars are available to the pod.

Environment: OpenShift 4.6 (managed cluster on IBM Cloud)

Reproduce:

  • create a deployment
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      labels:
        app: sample-app
      name: sample-app
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: sample-app
      strategy: {}
      template:
        metadata:
          labels:
            app: sample-app
        spec:
          containers:
          - image: alpine:3.8
            command: ["/bin/sh"]
            args: ["-c", "while true; do date ; sleep 10; done"]
            name: sample-app
            env:
              - name: VAR1
                value: var1value
              - name: VAR2
                value: var2value
      
    
  • create a podpreset as below
    apiVersion: redhatcop.redhat.io/v1alpha1
    kind: PodPreset
    metadata:
      name: sample-app
    spec:
      env:
      - name: VAR3
        value: var3value
      - name: VAR1
        value: changed-var1-value
      selector:
        matchLabels:
          app: sample-app 
    

Observed behavior: When pods are created from the above deployment, no env vars are loaded. When there is no conflict I don't see this issue, but when there is a conflict (VAR1 in the example above), no vars (from either the container or the PodPreset) are available to the pod.

Question: Is this a bug, or is it expected?

Additional notes: According to the docs for the legacy in-tree PodPreset in Kubernetes v1.19 (https://v1-19.docs.kubernetes.io/docs/tasks/inject-data-application/podpreset/#conflict-example), when there is a conflict the PodPreset is not applied to the pod, but I see that it is applied here (the annotation podpreset.admission.kubernetes.io/podpreset-sample-app=<resource-version> is added to the pods).


wpmoore commented Jun 8, 2021

Removing this if block solved the issue for us. To us, it makes sense to give the original ENV variable precedence.
https://github.com/redhat-cop/podpreset-webhook/blob/master/pkg/handler/handler.go#L193

This does not account for a conflict between two different PodPresets that both apply (with my suggestion above, the ENV value from the first preset evaluated would win), and I'm not sure how you would want to handle that. As a quick solution, the documentation could state that an ENV set by a pod/deployment/etc. takes precedence over any preset, while a conflict between two presets is currently unpredictable. That would be enough to unblock us.
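For illustration, a minimal sketch of the container-precedence merge described above (this is not the actual handler.go code; the package, function name, and comments are made up for this sketch):

    package handler

    import corev1 "k8s.io/api/core/v1"

    // mergeEnv gives the container's existing variables precedence over the
    // PodPreset's: only names the container does not already define are added.
    // Illustrative sketch only, not the actual handler.go implementation.
    func mergeEnv(containerEnv, presetEnv []corev1.EnvVar) []corev1.EnvVar {
        merged := append([]corev1.EnvVar{}, containerEnv...)
        seen := map[string]bool{}
        for _, v := range containerEnv {
            seen[v.Name] = true
        }
        for _, v := range presetEnv {
            if seen[v.Name] {
                // Conflict: the container already defines this variable,
                // so the original value wins and the preset value is skipped.
                continue
            }
            merged = append(merged, v)
            seen[v.Name] = true
        }
        return merged
    }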


splichy commented Jul 16, 2021

BTW, in the legacy podpresets.settings.k8s.io the original ENV from container.spec takes precedence over the definition from the PodPreset (at least when compared with k8s 1.10).
Thus this if block actually changes behavior, so in some cases you can't simply switch from the legacy PodPresets to podpresets.redhatcop.redhat.io.
It would make sense to have the precedence configurable in the PodPreset object itself, while the defaults should respect the original behavior, e.g.:

    conflictHandling: override / fail  [default: override]  # how to handle conflicts; both should generate a k8s event for the affected container/pod, e.g. override -> Info, fail -> Warning
    precedence: spec / podpreset       [default: spec]      # which definition takes precedence
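For illustration, a rough Go sketch of how such hypothetical fields could drive conflict resolution during the merge; the struct, field names, and function below are made up for this sketch and are not part of the current CRD or handler code:

    package handler

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // conflictPolicy mirrors the proposed (hypothetical) PodPreset fields.
    type conflictPolicy struct {
        ConflictHandling string // "override" or "fail"
        Precedence       string // "spec" or "podpreset"
    }

    // resolveConflict decides which value to keep for a variable defined in
    // both the container spec and a PodPreset. Illustrative sketch only.
    func resolveConflict(fromSpec, fromPreset corev1.EnvVar, p conflictPolicy) (corev1.EnvVar, error) {
        if p.ConflictHandling == "fail" {
            // Reject the mutation (and emit a Warning event) instead of guessing.
            return corev1.EnvVar{}, fmt.Errorf("conflicting env var %q", fromSpec.Name)
        }
        if p.Precedence == "podpreset" {
            return fromPreset, nil // the PodPreset value wins
        }
        return fromSpec, nil // default: the container spec wins
    }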

shahaf600 commented

Happened to us as well; looking forward to a solution.


jmozd commented Mar 12, 2023

fell into this pit, found this issue, forgot, fell into it again, re-found this issue...

There seem to be three approaches to merging the container's environment with the PodPreset environment, with respect to variables existing in both:

  1. The container's env is prioritized, so the var keeps its value
  2. The PodPreset is prioritized, so the container var's value is overwritten
  3. The controller doesn't want to decide

In case 3, two different reactions seem plausible:
3a. The whole operation fails
3b. Only the modification of the environment is aborted, so the container keeps its original env unmodified.

To me it seems that "3a" is what the developers had in mind, but instead of failing the complete webhook operation, the merging error isn't propagated. Hence the resulting problem: the pod starts, but without any environment variables.

I second @wpmoore's approach: if the variable is already set in the container, do not override it (variant 1) - only new variables are added to the container.

I've created a version with that "if" block removed, as noted in @wpmoore's comment, and started testing it (successful so far, but not all cases are covered yet).

While looking at the code, I noticed https://github.com/redhat-cop/podpreset-webhook/blob/master/pkg/handler/handler.go#L186 - can someone explain why the original env is updated, too? I'd have thought it'd be sufficient to append the new var to "mergedEnv", which is returned at the end of the function.


jmozd commented Mar 12, 2023

Added PR #31, which may be extended to allow configuring the conflict-resolution strategy via a PodPreset field and to add messages indicating the results.


jmozd commented Mar 12, 2023

Regarding my earlier question about why the original env is updated at https://github.com/redhat-cop/podpreset-webhook/blob/master/pkg/handler/handler.go#L186:

Found my answer: the update to the "original env" is done so that further loops over further PodPresets already see the updated value and have no need to re-apply a change to mergedEnv.
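A simplified sketch of the pattern described above, reusing the made-up mergeEnv from the earlier sketch in this thread (illustrative only, not the actual handler.go code):

    package handler

    import corev1 "k8s.io/api/core/v1"

    // applyPresets feeds the result of each merge back in as the input for the
    // next PodPreset, so later presets already see the values injected by
    // earlier ones. Illustrative only.
    func applyPresets(container *corev1.Container, presetEnvs [][]corev1.EnvVar) {
        for _, presetEnv := range presetEnvs {
            container.Env = mergeEnv(container.Env, presetEnv)
        }
    }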

sabre1041 (Collaborator) commented
@jmozd are additional actions needed, or did this resolve the issue you experienced?


jmozd commented Mar 14, 2023

@sabre1041 The code does seem to work (the issues I originally experienced, i.e. no env vars at all for containers with conflicting variables set, were resolved), but I'd love to see two improvements where I could profit from a helping hand:

  1. As already noted above by @splichy, it would be favorable to "generate k8s event for affected container/pod e.g. override -> Info, fail -> Warning", but I have no clue how to implement that. Any pointers to the required function calls, anyone? (See the rough sketch after this list.)
  2. Also already mentioned by @splichy, it'd be nice to be able to set/override the behavior via config items in the PodPreset. I assume this would require a lot of changes (to add the extra field), which is beyond what I'm currently able to contribute. Also, the currently hard-coded behavior (the container's original values win) might be sufficient, making this a low-priority item.
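For item 1, a rough sketch using client-go's record.EventRecorder, the usual way to emit Kubernetes events; if the project is wired up with controller-runtime, a recorder can typically be obtained via mgr.GetEventRecorderFor("podpreset-webhook"). The function, reasons, and messages below are made up for this sketch, not existing project code:

    package handler

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/tools/record"
    )

    // reportConflict emits a Kubernetes event describing how an env-var
    // conflict was handled. Illustrative sketch only.
    func reportConflict(recorder record.EventRecorder, pod *corev1.Pod, preset, name string, failed bool) {
        if failed {
            recorder.Eventf(pod, corev1.EventTypeWarning, "EnvVarConflict",
                "PodPreset %s: conflicting definition for %s, mutation rejected", preset, name)
            return
        }
        recorder.Eventf(pod, corev1.EventTypeNormal, "EnvVarConflict",
            "PodPreset %s: keeping the container's value for %s", preset, name)
    }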
