ClusterThrottles support PodGroup aka Gang #12
Conversation
Signed-off-by: utam0k <[email protected]>
Force-pushed f3f5a5f to 03a069e
Signed-off-by: utam0k <[email protected]>
Force-pushed 03a069e to a5e2678
ns, err := c.namespaceInformer.Lister().Get(groupPod.Namespace)
if err != nil {
	return nil, nil, nil, nil, nil, err
}
All pods in a group have the same namespace, so these lines can be moved out of the for-loop
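For illustration, a minimal sketch of the suggested hoisting (not the actual diff; the helper name is hypothetical and the lister types are assumed from the snippet above):

```go
package throttle

import (
	corev1 "k8s.io/api/core/v1"
	corelisters "k8s.io/client-go/listers/core/v1"
)

// resolveGroupNamespace is a hypothetical helper showing the suggestion:
// because every pod in the group shares a namespace, the lookup can happen
// once before the per-pod loop instead of inside it.
func resolveGroupNamespace(nsLister corelisters.NamespaceLister, groupPods []*corev1.Pod) (*corev1.Namespace, error) {
	if len(groupPods) == 0 {
		return nil, nil
	}
	// Single lookup for the whole group.
	return nsLister.Get(groupPods[0].Namespace)
}
```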
Good point. I don't assume that all pods in a group are in the same namespace. That assumption wouldn't let us improve the implementation or its performance much beyond the current implementation in this PR.
That assumption wouldn't let us improve the implementation or its performance much beyond the current implementation in this PR.
Understood.
But the current implementation does assume that all pods in a group are in the same namespace, IIUC:
candidatePods, err := c.podInformer.Lister().Pods(pod.Namespace).List(labels.Everything())
Can we modify the implementation in the future to fetch the namespace for each pod in the group, once we remove this assumption?
You are right, I had missed that. Dropping the assumption would mean scanning the pods of every namespace, which would have a huge impact on performance. So your point stands; let's assume they are in the same namespace.
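A rough sketch of the trade-off discussed here (hypothetical helper; only the lister calls from the quoted line are real API):

```go
package throttle

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/labels"
	corelisters "k8s.io/client-go/listers/core/v1"
)

// listGroupCandidates illustrates why the same-namespace assumption matters:
// with it, candidate pods come from a single namespaced list (as the quoted
// line does); without it, pods in every namespace would have to be scanned.
func listGroupCandidates(podLister corelisters.PodLister, schedulingPod *corev1.Pod, assumeSameNamespace bool) ([]*corev1.Pod, error) {
	if assumeSameNamespace {
		// Cheap: one namespaced list.
		return podLister.Pods(schedulingPod.Namespace).List(labels.Everything())
	}
	// Expensive: a cluster-wide list touching all namespaces.
	return podLister.List(labels.Everything())
}
```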
Co-authored-by: Hidehito Yabuuchi <[email protected]> Signed-off-by: utam0k <[email protected]>
…o group-pods-cluster
Signed-off-by: utam0k <[email protected]>
Signed-off-by: utam0k <[email protected]>
Signed-off-by: utam0k <[email protected]>
Co-authored-by: Hidehito Yabuuchi <[email protected]>
LGTM except for a small comment.
Thank you for the nice work!
for _, pod := range podPassedGroup {
	Eventually(PodIsScheduled(ctx, DefaultNs, pod.Name)).Should(Succeed())
}
This duplicates L236-240; either one can be deleted.
Scheduling methods like PodGroup (aka Gang) require resources not only for the pod currently being scheduled but for the entire pod group. kube-throttler needs to take this into account in PreFilter in order to reduce wasted scheduling cycles and preemptions.
This pull request allows kube-throttler to reject a scheduling pod that belongs to a pod group when the remaining resources are insufficient to schedule the entire PodGroup.
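As a rough illustration of the idea (not kube-throttler's actual code; the function name and the ResourceList-based accounting are assumptions):

```go
package throttle

import (
	corev1 "k8s.io/api/core/v1"
)

// groupFitsRemaining sketches the PreFilter-time check described above: sum the
// requests of every pod in the group and reject the scheduling pod early when
// the remaining capacity cannot hold the whole group.
func groupFitsRemaining(groupPods []*corev1.Pod, remaining corev1.ResourceList) bool {
	required := corev1.ResourceList{}
	for _, p := range groupPods {
		for _, c := range p.Spec.Containers {
			for name, qty := range c.Resources.Requests {
				sum := required[name]
				sum.Add(qty)
				required[name] = sum
			}
		}
	}
	for name, qty := range required {
		rem, ok := remaining[name]
		if !ok || rem.Cmp(qty) < 0 {
			return false // the whole group would not fit; reject in PreFilter
		}
	}
	return true
}
```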