Add ability to configure a pod topology spread constraint #37

Merged · 1 commit into master · Aug 16, 2023

Conversation

domgoodwin (Contributor)

This adds the ability to specify a pod topology spread constraint. This is the newer way of making sure a deployment's pods won't all run on the same node or in the same AZ, using a maxSkew instead of just an anti-affinity rule.

This helps newer scaling tools, like Karpenter, add nodes where they're needed based on the maxSkew and the constraints.

It's made configurable via environment variables on the egress operator deployment; ideally, in the future, we'd expand the CRD to specify this on the external service (or set sensible defaults).
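For illustration, here's a minimal sketch of how such a constraint could be built from env vars. The env var names (POD_TOPOLOGY_ZONE_KEY, POD_TOPOLOGY_MAX_SKEW), the package and helper names, and the label selector are assumptions for illustration, not necessarily this PR's actual identifiers.

package controller

import (
	"os"
	"strconv"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// zoneSpreadConstraint builds a soft zone-spread constraint from operator
// env vars, falling back to sensible defaults when they're unset.
func zoneSpreadConstraint(appLabels map[string]string) corev1.TopologySpreadConstraint {
	zoneKey, found := os.LookupEnv("POD_TOPOLOGY_ZONE_KEY")
	if !found {
		zoneKey = "topology.kubernetes.io/zone" // well-known zone label
	}
	maxSkew := int32(1) // allow at most a 1-pod imbalance between zones
	if raw := os.Getenv("POD_TOPOLOGY_MAX_SKEW"); raw != "" {
		if parsed, err := strconv.Atoi(raw); err == nil {
			maxSkew = int32(parsed)
		}
	}
	return corev1.TopologySpreadConstraint{
		MaxSkew:           maxSkew,
		TopologyKey:       zoneKey,
		WhenUnsatisfiable: corev1.ScheduleAnyway, // soft: prefer, don't require
		LabelSelector:     &metav1.LabelSelector{MatchLabels: appLabels},
	}
}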

domgoodwin merged commit 4cf63b4 into master on Aug 16, 2023
1 check passed
domgoodwin deleted the egress-operator-enabled-pod-topology branch on August 16, 2023 at 14:43
Comment on lines +107 to +109
if !zoneKeyFound {
	zoneKey = "topology.kubernetes.io/zone"
}


When would we want to specify a different zone key?

domgoodwin (Contributor, author)

There's an older zone key that Kubernetes used to use, or the user's cluster might have a different zone topology.
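For reference, a small sketch of the two labels presumably in play here: the current well-known one (the default above) and the legacy one deprecated around Kubernetes 1.17, assuming that's the "old zone key" meant. The constant names are illustrative.

const (
	stableZoneKey = "topology.kubernetes.io/zone"            // current well-known label
	legacyZoneKey = "failure-domain.beta.kubernetes.io/zone" // deprecated legacy label
)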

}
podTopologySpread = append(podTopologySpread, corev1.TopologySpreadConstraint{
	TopologyKey:       zoneKey,
	WhenUnsatisfiable: corev1.ScheduleAnyway,


Is this field so that we still schedule a pod even if none of the rules match?

domgoodwin (Contributor, author)

Yes, this stops pods from going unscheduled when an AZ goes down or there's a small number of nodes (usually in a nonprod environment).
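To make that trade-off concrete, here's a hedged sketch contrasting the two WhenUnsatisfiable modes from the core/v1 API; the surrounding variables are illustrative.

// ScheduleAnyway treats the spread as best-effort: the scheduler prefers
// balanced zones but still places pods when the skew can't be met.
soft := corev1.TopologySpreadConstraint{
	MaxSkew:           1,
	TopologyKey:       "topology.kubernetes.io/zone",
	WhenUnsatisfiable: corev1.ScheduleAnyway,
}

// DoNotSchedule would make it a hard constraint: pods stay Pending if, say,
// an AZ is down or a small nonprod cluster can't satisfy the skew.
hard := soft
hard.WhenUnsatisfiable = corev1.DoNotSchedule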
