Significant CPU usage and possibly etcd usage when deploying this #447
I see this happening every 3 seconds. Does this service act like a watcher, watching for changes in the cluster? Can we add labels so it only looks at specific ConfigMaps instead of looking at all of them?
@drewwells reflector opens a watcher with a default timeout (in k8s) of around 40 minutes. The fact that the connection closes every 3 seconds is extremely odd. I would need to know more about the setup. Also, are you sure you didn't set the timeout to something like 3 seconds in the configuration?
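For reference, a minimal client-go sketch of such a watch with a server-side timeout in the ~40-minute range (illustrative only, not reflector's actual code, which is a .NET project). A healthy client only re-opens the watch when that timeout elapses, so reconnects every few seconds would be unusual:

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Server-side watch timeout of roughly 40 minutes; when it elapses the
	// API server closes the watch and the client opens a new one.
	timeout := int64(40 * 60)

	for {
		w, err := client.CoreV1().Secrets(metav1.NamespaceAll).Watch(context.TODO(), metav1.ListOptions{
			TimeoutSeconds: &timeout,
		})
		if err != nil {
			panic(err)
		}
		start := time.Now()
		for ev := range w.ResultChan() {
			fmt.Println("event:", ev.Type)
		}
		// With the timeout above this should print roughly every 40 minutes;
		// seeing it every few seconds means the watch is being closed early.
		fmt.Println("watch closed after", time.Since(start))
	}
}
```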
Nothing special about the cluster; it's running Kubernetes v1.25.10.
@drewwells Is this standard k8s or some other flavor (like k3s)? Also, are you self-hosting or using a cloud provider?
It's deployed with kOps and the nodes are hosted on AWS. Hmm, usage is vastly different across clusters. The only thing that is consistent is significant etcd storage usage, roughly 2x what it was before deploying the service.
Same issue here:
What I found is that CPU usage is high when there are too many
Both clusters do one and only one thing: copy a given namespace's TLS secret to other namespaces.

annotations:
  cert-manager.io/alt-names: "*.example.io,example.io"
  cert-manager.io/certificate-name: wildcard-example-io
  cert-manager.io/common-name: example.io
  cert-manager.io/ip-sans: ""
  cert-manager.io/issuer-group: ""
  cert-manager.io/issuer-kind: ClusterIssuer
  cert-manager.io/issuer-name: cluster-issuer-example
  cert-manager.io/uri-sans: ""
  reflector.v1.k8s.emberstack.com/reflection-allowed: "true"
  reflector.v1.k8s.emberstack.com/reflection-allowed-namespaces: \w+-system,\w+-frontend,ns-[\-a-z0-9]*
  reflector.v1.k8s.emberstack.com/reflection-auto-enabled: "true"
  reflector.v1.k8s.emberstack.com/reflection-auto-namespaces: \w+-system,\w+-frontend,ns-[\-a-z0-9]*
labels:
  controller.cert-manager.io/fao: "true"

IMO, the reflector controller should only need to monitor the one source namespace's secret and copy it to the others when changes happen. Kubernetes is standard, deployed on GCP VMs.
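As an aside, here is a small Go sketch of how a comma-separated regex list like the reflection-allowed-namespaces value above could be matched against a namespace name. Anchoring each pattern to the full name is an assumption for illustration, not something confirmed by reflector's docs:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// allowedNamespaces is the comma-separated regex list from the
// reflection-allowed-namespaces annotation quoted above.
const allowedNamespaces = `\w+-system,\w+-frontend,ns-[\-a-z0-9]*`

// namespaceAllowed reports whether ns matches any pattern in the list.
// Assumption: each pattern must match the full namespace name.
func namespaceAllowed(ns string) bool {
	for _, p := range strings.Split(allowedNamespaces, ",") {
		re, err := regexp.Compile("^" + p + "$")
		if err != nil {
			continue // skip invalid patterns
		}
		if re.MatchString(ns) {
			return true
		}
	}
	return false
}

func main() {
	for _, ns := range []string{"kube-system", "shop-frontend", "ns-team-1", "default"} {
		fmt.Println(ns, namespaceAllowed(ns))
	}
}
```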
An easy way to limit the watchers is to use labels. Also, usage goes up after it creates ConfigMaps or Secrets; I don't think it needs to watch the generated resources. If people change them, let it be until the next sync wipes out those changes.
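To make the label idea concrete, a hedged client-go sketch of an informer restricted by a label selector, so only labelled source objects are watched; the label key reflector.example/watch is hypothetical and not part of reflector's annotation scheme:

```go
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Only objects carrying the (hypothetical) source label are listed and
	// watched, so generated copies never reach the informer.
	factory := informers.NewSharedInformerFactoryWithOptions(
		client, 10*time.Minute,
		informers.WithTweakListOptions(func(o *metav1.ListOptions) {
			o.LabelSelector = "reflector.example/watch=true"
		}),
	)

	factory.Core().V1().Secrets().Informer().AddEventHandler(cache.ResourceEventHandlerFuncs{
		UpdateFunc: func(_, newObj interface{}) {
			s := newObj.(*corev1.Secret)
			fmt.Println("source secret changed:", s.Namespace+"/"+s.Name)
		},
	})

	stop := make(chan struct{})
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	select {} // block forever; a real controller would handle shutdown
}
```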
In my case, when the same secret name in two different namespaces tries to sync to a common namespace, that's when I see reflector closing the connection every 4 seconds. Example: "secretA" from "nsA" and "nsB" both trying to sync to "nsC".
We noticed our etcd storage usage doubled after a production release that included deploying reflector. Is there an architecture document describing how this service watches for object changes and decides which API calls to make to the kube-apiserver?
We have one ConfigMap that rarely changes. These are the labels and annotations on it.
Here's the CPU and memory usage of reflector