This repository has been archived by the owner on Aug 31, 2022. It is now read-only.

Kubernetes events not getting exported to ES #192

Open
ankit-arora-369 opened this issue Apr 6, 2022 · 5 comments

Comments

@ankit-arora-369

In the logs we are getting a lot of these errors:

I0406 08:03:24.643857 1 request.go:665] Waited for 9.597002153s due to client-side throttling, not priority and fairness, request: GET:https://172.20.0.1:443/apis/vpcresources.k8s.aws/v1beta1?timeout=32s
I0406 08:03:34.643985 1 request.go:665] Waited for 1.991593338s due to client-side throttling, not priority and fairness, request: GET:https://172.20.0.1:443/apis/argoproj.io/v1alpha1?timeout=32s
I0406 08:03:44.644158 1 request.go:665] Waited for 11.991616865s due to client-side throttling, not priority and fairness, request: GET:https://172.20.0.1:443/apis/scheduling.k8s.io/v1?timeout=32s
I0406 08:03:54.843817 1 request.go:665] Waited for 4.597627979s due to client-side throttling, not priority and fairness, request: GET:https://172.20.0.1:443/apis/admissionregistration.k8s.io/v1beta1?timeout=32s
I0406 08:04:04.844467 1 request.go:665] Waited for 14.598115749s due to client-side throttling, not priority and fairness, request: GET:https://172.20.0.1:443/apis/vpcresources.k8s.aws/v1beta1?timeout=32s

kube-events-exporter version: 0.11

Can you please help us resolve these errors, or is something wrong on the exporter's end?

@ankit-arora-369
Author

For now, I have reverted to version 0.10 and will monitor whether this happens on 0.10 as well.

Can anybody please explain why this is happening?

@ankit-arora-369
Author

Can someone explain whether something is wrong on the exporter's end, or whether our API server is reaching its limits?

@ankit-arora-369
Author

ankit-arora-369 commented Apr 6, 2022

Reverting to versions 0.10 and 0.9 also didn't help.

I noticed in the debug logs that the gap between the number of events coming in and the number of events being sinked is very large:
hundreds of events arrive per second, and kube-events-exporter shows them in its logs, but very few "sink" events are logged.

Can anybody please help here?

We are using Kubernetes version 1.21 on EKS.

The config we are using:

config.yaml: |
    logFormat: pretty
    logLevel: debug
    receivers:
    - elasticsearch:
        hosts:
        - http://xxxxxxxxxxxxxxx.com:9200
        index: kube-events
        indexFormat: kube-events-{2006-01-02}
        layout:
          cluster: '{{ .InvolvedObject.Namespace }}'
          count: '{{ .Count }}'
          creationTimestamp: '{{ .CreationTimestamp | date "2006-01-02T15:04:05Z" }}'
          firstTimestamp: '{{ .FirstTimestamp | date "2006-01-02T15:04:05Z" }}'
          involvedObjectData:
            annotations: '{{ toJson .InvolvedObject.Annotations }}'
            apiVersion: '{{ .InvolvedObject.APIVersion }}'
            component: '{{ .Source.Component }}'
            host: '{{ .Source.Host }}'
            kind: '{{ .InvolvedObject.Kind }}'
            labels: '{{ toJson .InvolvedObject.Labels }}'
            name: '{{ .InvolvedObject.Name }}'
            uid: '{{ .InvolvedObject.UID }}'
          lastTimestamp: '{{ .LastTimestamp | date "2006-01-02T15:04:05Z" }}'
          managedFields: '{{ toJson .ManagedFields }}'
          message: '{{ .Message }}'
          name: '{{ .Name }}'
          reason: '{{ .Reason }}'
          selfLink: '{{ .SelfLink }}'
          source: '{{ toJson .Source }}'
          type: '{{ .Type }}'
          uid: '{{ .UID }}'
          zone: "string"
        useEventID: true
      name: es-dump
    route:
      routes:
      - match:
        - receiver: es-dump
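
As an aside on the volume mismatch described above: if routine Normal events don't need to be indexed, the route section can drop them before they ever reach the receiver. A minimal sketch based on the drop rules described in the exporter's README (the receiver name comes from the config above; the drop criteria are illustrative):

    route:
      routes:
      - drop:
        - type: "Normal"
        match:
        - receiver: es-dump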

Can you please help here? @mustafaakin @jun06t @xmcqueen

@xmcqueen
Contributor

xmcqueen commented Apr 6, 2022

This really does not sound like a problem with kubernetes-event-exporter. Are you able to test out the exporter in a low-traffic testing environment? If your target servers are overloaded, you can expect throttling, and you should expect to scale up a bit to handle the increased load. Normally, under throttling conditions, clients will try to locally cache and subsequently retry requests, but all such caches have a maximum size, and if you are streaming at a high rate to a throttled endpoint, you probably hit the cap on any local retry cache. In short, the scenario you describe is a sign of an overloaded event-receiving server, not a sign of a problem in kubernetes-event-exporter.

@lobshunter

Kubernetes client-go has a client-side throttling setting. #171 should be able to solve this issue.
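
For reference, the "client-side throttling" messages in the logs come from client-go's default rate limiter (QPS 5, burst 10), which queues API discovery requests. A minimal sketch of how those limits can be raised on a rest.Config, assuming an in-cluster client; the actual change in #171 may be wired differently:

    package k8sclient

    import (
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    // newClientset builds an in-cluster client with higher client-side
    // rate limits than client-go's defaults (QPS 5, burst 10), which are
    // what produce the "Waited for ... due to client-side throttling"
    // messages when many API groups are discovered at once.
    func newClientset() (*kubernetes.Clientset, error) {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            return nil, err
        }
        cfg.QPS = 50    // illustrative values, not the exporter's actual settings
        cfg.Burst = 100
        return kubernetes.NewForConfig(cfg)
    }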
