Reflector ListAndWatch trace error #858
Comments
Hey @bygui86, I couldn't reproduce this behavior on a few long-running Botkube 0.15 installations (k3d, colima, and Oracle Cloud clusters). I'd suspect an issue with your cluster (the K8s API server specifically), or some inability to set up a watch connection from the Botkube pod. Do you have proper RBAC configured for the Botkube pod, like in the default configuration?

And yes, we could suppress such logs (like here: https://github.com/werf/werf/pull/1754/files), but I'm not sure that's a good idea 🤔 These traces can sometimes help with debugging. One thing we could do is link these traces with …

But here, in this case, I think there's an error which results in constantly listing all resources. Do the Botkube notifications work for you as they should?
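For context, suppressing them would mean redirecting klog output (client-go logs through klog). A minimal sketch of that approach, assuming a plain Go setup rather than Botkube's actual wiring:

```go
package main

import (
	"flag"
	"io"

	"k8s.io/klog/v2"
)

func main() {
	// Register klog's flags on a private flag set so we can tweak them
	// without touching the application's own flags.
	klogFlags := flag.NewFlagSet("klog", flag.ExitOnError)
	klog.InitFlags(klogFlags)

	// Stop klog from writing to stderr, then discard everything it emits,
	// which silences the client-go reflector traces entirely.
	_ = klogFlags.Set("logtostderr", "false")
	_ = klogFlags.Set("alsologtostderr", "false")
	klog.SetOutput(io.Discard)
}
```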
@pkosiec thanks for your quick answer.
I run Botkube in 2 GKE clusters: one public, one private.
I cannot give Botkube the default amount of permissions; it's too broad, especially regarding Secrets. Here is the ClusterRole we use instead (see the verification sketch after the manifest):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: botkube
rules:
  # K8s
  - apiGroups:
      - ""
    resources:
      - events
      - configmaps
      - endpoints
      - limitranges
      - namespaces
      - nodes
      - persistentvolumeclaims
      - persistentvolumes
      - pods
      - resourcequotas
      - serviceaccounts
      - services
    verbs:
      - get
      - watch
      - list
  - apiGroups:
      - apps
    resources:
      - deployments
      - statefulsets
      - daemonsets
    verbs:
      - get
      - watch
      - list
  - apiGroups:
      - batch
    resources:
      - cronjobs
      - jobs
    verbs:
      - get
      - watch
      - list
  - apiGroups:
      - policy
    resources:
      - poddisruptionbudgets
      - podsecuritypolicies
    verbs:
      - get
      - watch
      - list
  - apiGroups:
      - apiextensions.k8s.io
    resources:
      - customresourcedefinitions
    verbs:
      - get
      - watch
      - list
  - apiGroups:
      - autoscaling
    resources:
      - horizontalpodautoscalers
    verbs:
      - get
      - watch
      - list
  - apiGroups:
      - rbac.authorization.k8s.io
    resources:
      - clusterrolebindings
      - clusterroles
      - rolebindings
      - roles
    verbs:
      - get
      - watch
      - list
  - apiGroups:
      - admissionregistration.k8s.io
    resources:
      - mutatingwebhookconfigurations
      - validatingwebhookconfigurations
    verbs:
      - get
      - watch
      - list
  - apiGroups:
      - networking.k8s.io
    resources:
      - networkpolicies
      - ingresses
    verbs:
      - get
      - watch
      - list
  - apiGroups:
      - scheduling.k8s.io
    resources:
      - priorityclasses
    verbs:
      - get
      - watch
      - list
  - apiGroups:
      - storage.k8s.io
    resources:
      - storageclasses
    verbs:
      - get
      - watch
      - list
  - apiGroups:
      - snapshot.storage.k8s.io
    resources:
      - volumesnapshotclasses
      - volumesnapshotcontents
      - volumesnapshots
    verbs:
      - get
      - watch
      - list
  # CRDs
  - apiGroups:
      - cloud.google.com
      - internal.autoscaling.gke.io
      - networking.gke.io
      - hub.gke.io
      - nodemanagement.gke.io
      - crd.projectcalico.org
      - rbacmanager.reactiveops.io
      - cassandraoperator.instaclustr.com
      - kafka.strimzi.io
      - monitoring.coreos.com
      - logging.banzaicloud.io
      - logging-extensions.banzaicloud.io
      - traefik.containo.us
    resources:
      - "*"
    verbs:
      - get
      - watch
      - list
```
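As a side note, one way to double-check from inside the pod that this ClusterRole actually grants what the reflector needs is a SelfSubjectAccessReview. A minimal sketch with client-go (the `canWatch` helper is illustrative, not part of Botkube):

```go
package main

import (
	"context"
	"fmt"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// canWatch asks the API server whether the current service account may
// watch the given resource. Hypothetical debugging helper, not Botkube code.
func canWatch(clientset kubernetes.Interface, group, resource string) (bool, error) {
	review := &authv1.SelfSubjectAccessReview{
		Spec: authv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authv1.ResourceAttributes{
				Group:    group,
				Resource: resource,
				Verb:     "watch",
			},
		},
	}
	resp, err := clientset.AuthorizationV1().SelfSubjectAccessReviews().Create(
		context.Background(), review, metav1.CreateOptions{})
	if err != nil {
		return false, err
	}
	return resp.Status.Allowed, nil
}

func main() {
	// Use the mounted service account credentials of the running pod.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	ok, err := canWatch(clientset, "", "pods")
	if err != nil {
		panic(err)
	}
	fmt.Println("can watch pods:", ok)
}
```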
I see your point, but this should be a debug-level log, not info. I suggest not suppressing it, but lowering its level so we can enable it when deeper debugging is required.
Notifications seem to work fine, but we have a huge configuration, so I cannot tell whether we are getting the same number of notifications as before.
Oh, okay, so that means it's not a big issue. Alright, I think we can reconfigure klog based on the log configuration 👍 The only hesitation from my side is that the traces you see are actually in the …
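A minimal sketch of what that klog reconfiguration could look like, assuming logrus is the application logger (the wiring below is illustrative, not Botkube's actual code):

```go
package main

import (
	"flag"

	"github.com/sirupsen/logrus"
	"k8s.io/klog/v2"
)

func main() {
	// Assumed stand-in for Botkube's configured logger.
	log := logrus.New()
	log.SetLevel(logrus.InfoLevel)

	// Send klog output (including client-go reflector traces) to logrus
	// at debug level, so the traces only surface when debug logging is on.
	klogFlags := flag.NewFlagSet("klog", flag.ExitOnError)
	klog.InitFlags(klogFlags)
	_ = klogFlags.Set("logtostderr", "false")
	klog.SetOutput(log.WriterLevel(logrus.DebugLevel))
}
```

With the logger level set to info, the traces are dropped; raising it to debug brings them back for troubleshooting.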
@pkosiec for now it's not a big issue, but I don't know about the long term.
Description
In the Botkube logs I see the following trace error multiple times:
What is it related to?
How can I fix it?
It seems there is no impact on Botkube functionality, but it is triggering our alerting...
Expected behavior
No trace error logs should appear.
Steps to reproduce
Additional information
Botkube version: 0.15.0
Kubernetes version (GKE): 1.24.4-gke.800