
Cannot create a ServiceMeshControlPlane with Prometheus enabled in a project with resource limit quotas #166

Closed
regnauch opened this issue Aug 12, 2020 · 3 comments

Comments

@regnauch

Bug description

Red Hat OpenShift Service Mesh v1.1.7 is used.
The target project has resource quotas defined: limits.cpu and limits.memory.
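For context, the quota in the project has roughly the following shape (the object name, namespace and values below are illustrative, not the exact quota from my cluster):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota          # hypothetical name, for illustration only
  namespace: my-mesh-project   # hypothetical project name
spec:
  hard:
    limits.cpu: "4"            # counted against the CPU limits declared by containers
    limits.memory: 4Gi         # counted against the memory limits declared by containers

With such a quota active, OpenShift rejects the creation of any pod whose containers do not declare limits.cpu and limits.memory.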

I am trying to deploy a ServiceMeshControlPlane in my project with the following YAML CR:

  istio:
    security:
      resources:
        limits:
          cpu: 500m
          memory: 256Mi
        requests:
          cpu: 200m
          memory: 128Mi
    kiali:
      enabled: true
    tracing:
      enabled: true
      jaeger:
        resources:
          limits:
            cpu: 500m
            memory: 256Mi
          requests:
            cpu: 200m
            memory: 128Mi
        template: all-in-one
    grafana:
      enabled: true
    mixer:
      policy:
        autoscaleEnabled: false
      telemetry:
        autoscaleEnabled: false
        resources:
          limits:
            cpu: 500m
            memory: 256Mi
          requests:
            cpu: 200m
            memory: 128Mi
    prometheus:
      resources:
        limits:
          cpu: 500m
          memory: 256Mi
        requests:
          cpu: 200m
          memory: 128Mi
    galley:
      resources:
        limits:
          cpu: 500m
          memory: 256Mi
        requests:
          cpu: 200m
          memory: 128Mi
    gateways:
      istio-egressgateway:
        autoscaleEnabled: false
        enabled: false
      istio-ingressgateway:
        autoscaleEnabled: false
        enabled: true
        ior_enabled: false
    pilot:
      autoscaleEnabled: false
      resources:
        limits:
          cpu: 500m
          memory: 256Mi
        requests:
          cpu: 200m
          memory: 128Mi
      traceSampling: 100

The prometheus pod cannot start because its resource limits are not fully specified.
After looking carefully at the deployed prometheus ReplicaSet, I observed that the prometheus pod is made of two containers (prometheus and prometheus-proxy).
The prometheus container is correctly patched with the resources section defined in the prometheus part of the ServiceMeshControlPlane, while the prometheus-proxy container is not. As a consequence, the prometheus-proxy container has no resource limits defined and cannot be started in my project.
The result: of the 60 Kubernetes objects deployed by a healthy control plane, only 14 are deployed, and the installation blocks on the prometheus install.
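For completeness, a possible workaround (not a fix for the operator itself) is to add a LimitRange to the project, so that containers that declare no resources, such as prometheus-proxy, receive default limits and requests and can pass the quota check. A rough sketch, with illustrative names and values:

apiVersion: v1
kind: LimitRange
metadata:
  name: container-defaults     # hypothetical name, for illustration only
spec:
  limits:
    - type: Container
      default:                 # applied as limits to containers that declare none
        cpu: 500m
        memory: 256Mi
      defaultRequest:          # applied as requests to containers that declare none
        cpu: 100m
        memory: 128Mi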

Affected product area (please put an X in all that apply)

[x] Configuration Infrastructure
[ ] Docs
[ ] Installation
[ ] Networking
[ ] Performance and Scalability
[ ] Policies and Telemetry
[ ] Security
[ ] Test and Release
[ ] User Experience
[ ] Developer Infrastructure

Affected features (please put an X in all that apply)

[ ] Multi Cluster
[ ] Virtual Machine
[x] Multi Control Plane

Expected behavior

Steps to reproduce the bug

Version (include the output of istioctl version --remote and kubectl version and helm version if you used Helm)
OpenShift 4.4
Red Hat OpenShift Service Mesh v1.1.7

How was Istio installed?
OpenShift 4.4
Red Hat OpenShift Service Mesh v1.1.7
Environment where bug was observed (cloud vendor, OS, etc)

Additionally, please consider attaching a cluster state archive by adding the dump file to this issue.

@nicop311

I think I had the same sort of issue with Jaeger and spec.resources.limits.

See more details in this Maistra issue.

@dgn
Contributor

dgn commented Jan 19, 2021

Closing this, as we don't use GitHub issues. If you encounter a bug, please file an issue on our Red Hat JIRA.

@dgn dgn closed this as completed Jan 19, 2021