Karpenter Replica Issue on Kubernetes #7141

Closed
dextercrypt opened this issue Oct 1, 2024 · 2 comments
Comments

@dextercrypt

Description

Observed Behavior:
The Karpenter Helm chart deploys 2 replicas of Karpenter by default. One of the replicas only ever logs the following:

{"level":"INFO","time":"2024-10-01T20:25:27.649Z","logger":"controller","message":"attempting to acquire leader lease kube-system/karpenter-leader-election...","commit":"62a726c"}

while the other one does the actual work.

Expected Behavior:
Both replicas should run fine, since 2 is the default value. Why does only one of them do the work? Can we just run 1 pod instead?
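For context, the log line refers to a standard Kubernetes leader-election Lease. A minimal sketch of checking which replica currently holds leadership, using the lease name shown in the log (kube-system/karpenter-leader-election):

```sh
# Print the pod identity that currently holds the Karpenter leader-election lease
kubectl -n kube-system get lease karpenter-leader-election \
  -o jsonpath='{.spec.holderIdentity}{"\n"}'
```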

Reproduction Steps (Please include YAML):

Versions:

  • Chart Version: latest (the one referenced in the documentation)
  • Kubernetes Version (kubectl version): 1.28 EKS
  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
dextercrypt added the bug and needs-triage labels on Oct 1, 2024
jmdeal (Contributor) commented Oct 1, 2024

By its nature Karpenter doesn't horizontally scale (hence the leader election), but multiple replicas help ensure high availability. For example, if there is an AZ outage and the replicas are spread across different AZs, an unaffected replica can take over and continue operating. You're free to reduce the replica count and Karpenter will still function, but we recommend running multiple replicas spread across AZs.
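If you do decide to run a single replica, a minimal sketch of overriding the chart's replicas value (the OCI chart location and kube-system namespace are assumed to match the standard Karpenter installation; adjust for your setup):

```sh
# Reduce Karpenter to one replica; this drops the hot standby used for AZ failover
helm upgrade --install karpenter oci://public.ecr.aws/karpenter/karpenter \
  --namespace kube-system \
  --reuse-values \
  --set replicas=1
```

The default of 2 exists purely for availability, so this only trades away the failover behavior described above; it does not change how Karpenter provisions nodes.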

jmdeal added the triage/solved label and removed the bug and needs-triage labels on Oct 1, 2024
jmdeal self-assigned this on Oct 7, 2024
github-actions bot commented:

This issue has been inactive for 14 days. StaleBot will close this stale issue after 14 more days of inactivity.

github-actions bot closed this as not planned (won't fix, can't repro, duplicate, stale) on Nov 6, 2024