
Image swap issue during node draining #170

Open
dracut5 opened this issue Sep 4, 2024 · 0 comments

Hi,

We have run into an issue when draining a Kubernetes node group on which k8s-image-swapper pods run alongside a bunch of other pods: the swapper is not able to rewrite the image paths for all pods in time.
As a result, I can see that some images are left unswapped:

```shell
kubectl get pods -A -o jsonpath="{.items[*].spec['initContainers', 'containers'][*].image}" \
  | tr -s '[[:space:]]' '\n' | sort | uniq -c | egrep -v '(dkr.ecr|public.ecr)'
```

Of course, a PodDisruptionBudget has been enabled and set to 3, podAntiAffinity has been configured in preferred mode, and even priorityClassName: system-cluster-critical has been added (though I did not expect that to help much).

The total number of running pods is 4. I suppose we could try increasing the replica count and the PDB accordingly, and that might help, but I see no sense in such ineffective scaling.

I am fairly sure the cause lies in the livenessProbe and readinessProbe: they report success too fast, while the service is not yet able to handle requests.
https://github.com/estahn/charts/blob/main/charts/k8s-image-swapper/templates/deployment.yaml#L76

Could you please check these probes? Do they reflect the service's health correctly?
Making this block configurable in the Helm chart might be worth considering too: I guess increasing successThreshold from the default of 1 to 2 could have some impact, since it would give the service more time to initialize.
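For illustration, a configurable probe block in the chart's values could look something like the sketch below. The health endpoint path and port name are assumptions, not the chart's actual values, and note that Kubernetes requires successThreshold to be 1 for liveness probes, so raising it would only apply to the readiness probe:

```yaml
# Hypothetical values.yaml fragment exposing the probes for tuning.
readinessProbe:
  httpGet:
    path: /healthz      # assumed health endpoint
    port: http          # assumed container port name
  initialDelaySeconds: 5
  periodSeconds: 10
  successThreshold: 2   # require two consecutive successes before serving traffic
livenessProbe:
  httpGet:
    path: /healthz
    port: http
  initialDelaySeconds: 10
  periodSeconds: 20     # successThreshold must stay 1 for liveness probes
```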

I almost forgot to mention: we were able to mitigate the issue by running kubectl cordon and kubectl rollout restart deployment k8s-image-swapper, and only after that running kubectl drain. This buys the service some time to start, but we would be very happy to drop this additional logic.
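For reference, the workaround sequence looks roughly like this (the node name and deployment namespace are placeholders for our environment):

```shell
# Hypothetical node name; adjust to the node being drained.
NODE=worker-node-1

# Mark the node unschedulable so replacement pods land elsewhere.
kubectl cordon "$NODE"

# Restart the swapper so fresh replicas come up on other nodes,
# and wait until they are ready before evicting anything.
kubectl rollout restart deployment k8s-image-swapper
kubectl rollout status deployment k8s-image-swapper --timeout=120s

# Only then evict the remaining workloads from the node.
kubectl drain "$NODE" --ignore-daemonsets --delete-emptydir-data
```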

Many thanks for your work!
