
Tigera-operator helm chart unable to set csiNodeDriverDaemonSet resource memory/cpu requests & limits #3356

Closed
mashiofu-cafex opened this issue May 22, 2024 · 5 comments


@mashiofu-cafex

I am unable to add resource limits/requests in my K8s cluster for csi-node-driver pods

Please see my config in my values.yaml file:

    csiNodeDriverDaemonSet: #Currently an issue and PR open to allow these values to be supported - https://github.com/projectcalico/calico/issues/8400
        spec:
            template:
                spec:
                    containers:
                    - name: csi-node-driver
                      resources:
                        limits:
                            memory: 100Mi
                        requests:
                            cpu: 75m

I am using projectcalico/tigera-operator v3.28.0 helm chart.

Expected Behavior

After a helm install/upgrade that sets csiNodeDriverDaemonSet resource limits & requests, the DaemonSet and its pods should be configured with those values.

Current Behavior

Helm detects the change when running helm upgrade, but the change is never actually applied:

Comparing release=calico, chart=projectcalico/tigera-operator
tigera-operator, default, Installation (operator.tigera.io) has changed:
...
+   csiNodeDriverDaemonSet:
+     spec:
+       template:
+         spec:
+           containers:
+           - name: csi-node-driver
+             resources:
+               limits:
+                 memory: 100Mi
+               requests:
+                 cpu: 75m
    imagePullSecrets: []
...

Affected releases are:
  calico (projectcalico/tigera-operator) UPDATED

Do you really want to apply?
  Helmfile will apply all your changes, as shown above.

 [y/n]: y
Upgrading release=calico, chart=projectcalico/tigera-operator
Release "calico" has been upgraded. Happy Helming!
NAME: calico
LAST DEPLOYED: Wed May 22 18:08:40 2024
NAMESPACE: tigera-operator
STATUS: deployed
REVISION: 13
TEST SUITE: None

Listing releases matching ^calico$
calico  tigera-operator 13              2024-05-22 18:08:40.510076 -0500 CDT    deployed        tigera-operator-v3.28.0 v3.28.0    

UPDATED RELEASES:
NAME     CHART                           VERSION   DURATION
calico   projectcalico/tigera-operator   v3.28.0        50s

Yet after this is applied, the resource values on the csi-node-driver DaemonSet & pods are still not configured. Unfortunately, there are no errors in the logs indicating where the issue might be occurring.

There have been a couple of similar issues/PRs that claim this issue has been resolved; here are a couple:

@tmjd
Member

tmjd commented May 23, 2024

You've not used a correct container name; see the docs for the allowed container names (calico-csi or csi-node-driver-registrar).

I'm not sure why we still allowed csi-node-driver as an allowed value. I'm guessing it was so we didn't break anything for someone who had used that as a container name: it was allowed before, and if we removed it from the allowed values then a previously functioning config would have started producing errors.

@devops-cafex

Tried both suggested container names with helm, but both give an error suggesting the only supported value is "csi-node-driver":

Error: UPGRADE FAILED: release calico failed, and has been rolled back due to atomic being set: cannot patch "default" with kind Installation: Installation.operator.tigera.io "default" is invalid: spec.csiNodeDriverDaemonSet.spec.template.spec.containers[0].name: Unsupported value: "calico-csi": supported values: "csi-node-driver"

Error: UPGRADE FAILED: release calico failed, and has been rolled back due to atomic being set: cannot patch "default" with kind Installation: Installation.operator.tigera.io "default" is invalid: spec.csiNodeDriverDaemonSet.spec.template.spec.containers[0].name: Unsupported value: "csi-node-driver-registrar": supported values: "csi-node-driver"

@mashiofu-cafex
Author

> You've not used a correct container name, see the docs for the allowed container names (calico-csi or csi-node-driver-registrar).
>
> I'm not sure why we still allowed csi-node-driver as an allowed value, I'm guessing it was so we didn't break anything for someone that has used that as a container name, it was allowed before and if we removed it from the allowed values then a previously functioning config would have resulted in errors.

Perhaps the reason this cannot be set is this line in the chart? https://github.com/projectcalico/calico/blob/master/charts/tigera-operator/crds/operator.tigera.io_installations_crd.yaml#L6869

@tmjd
Member

tmjd commented Jun 5, 2024

Yeah, it seems the source of the helm chart has not pulled in the proper updates from this repo. You can see the expected allowed values here: https://github.com/tigera/operator/blob/master/api/v1/csi_node_driver.go#L28
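Assuming the chart's CRD is updated to match the allowed container names in the operator repo linked above, the values.yaml fragment from the original report would be expected to look something like this sketch, with calico-csi as the container name:

    csiNodeDriverDaemonSet:
        spec:
            template:
                spec:
                    containers:
                    - name: calico-csi  # allowed per the operator API; csi-node-driver-registrar can be configured the same way
                      resources:
                        limits:
                            memory: 100Mi
                        requests:
                            cpu: 75m

(The resource values here are just the ones from the original report, not recommendations.)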

@tmjd
Member

tmjd commented Jun 5, 2024

I've created projectcalico/calico#8883 in the repo where the helm chart comes from because this is a helm chart issue, not an operator functionality issue. I'm going to close this issue. If you believe there is an operator issue here please comment and I can re-open this.

@tmjd tmjd closed this as completed Jun 5, 2024