This repository has been archived by the owner on Nov 17, 2022. It is now read-only.
We're using kops 1.10.0 and k8s 1.10.11, with two separate instance groups (IGs): `nodes` (on-demand) and `spots` (spot). As mentioned in #54, the rescheduler thinks it can move all pods, fails due to a PDB, and leaves the node underutilized and tainted. Here's where my issue comes in: it keeps trying to drain that same node over and over again. If it can't drain a node for whatever reason (e.g. PDBs, availability zone conflicts as in #53), it should move on to the next on-demand node and try that one.
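To illustrate the behavior I'm suggesting, here's a minimal sketch (not the rescheduler's actual code; the function and parameter names are made up for illustration): after a failed drain, put the node on a cooldown list and pick the next eligible on-demand node instead of retrying the same one.

```python
# Hypothetical sketch of "move on after a failed drain".
# A node whose drain failed (PDB, AZ conflict, ...) is skipped
# until a cooldown expires, instead of being retried immediately.

DRAIN_RETRY_COOLDOWN = 15 * 60  # seconds; tunable assumption


def pick_drain_candidate(on_demand_nodes, failed_at, now, drainable):
    """Return the first on-demand node that is drainable and not cooling down.

    on_demand_nodes: node names, in preferred drain order
    failed_at: dict of node name -> timestamp of the last failed drain
    now: current timestamp
    drainable: predicate checking the node can actually be drained
    """
    for node in on_demand_nodes:
        last_failure = failed_at.get(node)
        if last_failure is not None and now - last_failure < DRAIN_RETRY_COOLDOWN:
            # Recently failed (e.g. PDB would be violated): skip for now.
            continue
        if drainable(node):
            return node
    return None  # nothing eligible this cycle


# Example: node-a failed recently, so node-b is chosen instead.
failed = {"node-a": 100.0}
candidate = pick_drain_candidate(
    ["node-a", "node-b"], failed, now=200.0, drainable=lambda n: True
)
print(candidate)  # node-b
```

The key point is the `continue` on cooldown: one blocked node no longer starves the whole loop, and the node is retried again once the cooldown expires.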