cgroups inheritance when using k0s in docker #4234
Your observations are indeed correct. The current way the "k0s in Docker" docs are written is not optimized for running multiple workers on the same Docker host. In particular, the steps for cgroups v2 weaken the isolation between the host and the k0s container quite a bit. The culprit here is that certain things related to cgroups need to be in place for kubelet and the container runtime to be happy, such as a writable cgroup root filesystem with all the necessary controllers enabled. While this can be achieved with some shenanigans like a clever Docker container entrypoint script, k0s doesn't have that support right now.

You can try to work around this by giving each k0s worker Docker container different values for the various cgroup-related kubelet configuration options. Try adding these args to each of your k0s workers' kubelet extraArgs and experiment with the outcome:
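As a rough, hypothetical illustration only (the specific flags and values were not preserved in this comment, so everything below is an assumption), the cgroup-related kubelet flags in question are typically `--cgroup-root`, `--kubelet-cgroups`, and `--runtime-cgroups`, which could be handed to each worker via `--kubelet-extra-args`:

```sh
# Hypothetical sketch: give each k0s worker container its own cgroup paths so
# the workers don't all end up sharing the same hierarchy. The paths, the
# "worker-1" name, and the token file are placeholders; vary them per container.
k0s worker --token-file /etc/k0s/join-token \
  --kubelet-extra-args="--cgroup-root=/k0s-worker-1 --kubelet-cgroups=/k0s-worker-1/kubelet --runtime-cgroups=/k0s-worker-1/runtime"
```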
I took a stab at the Docker entrypoint script a few months ago, but haven't polished it up for a PR yet. That might provide some additional insight.
Thank you for this answer @twz123, this is pretty much what I managed to implement, however it still feels a bit hacky. I start by running the container waiting for its configuration file. During this operation, Docker will create the container's cgroup.
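A minimal sketch of that pattern, assuming a hypothetical container name, mount point, and config path (none of them taken from the original comment): the container's command just polls for the configuration file, so Docker creates the container, and therefore its cgroup, before k0s actually starts.

```sh
# Start the container with a command that blocks until the k0s configuration
# appears; the container (and its cgroup) exists while we prepare it.
docker run -d --name k0s-worker-1 --hostname k0s-worker-1 --privileged \
  -v "$PWD/worker-1:/etc/k0s" \
  -v /var/lib/k0s \
  k0sproject/k0s:v1.29.2-k0s.0 \
  sh -c 'until [ -f /etc/k0s/k0s.yaml ]; do sleep 1; done; exec k0s controller --enable-worker --config /etc/k0s/k0s.yaml'
```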
While the container is waiting, I then set the limits on its cgroup (doing it with Docker allows me to do it without sudo):
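For example, the limits can be applied from the host with `docker update` while the container is still waiting; the values below are made up for illustration, not taken from the comment:

```sh
# Adjust the container's cgroup limits through the Docker API instead of
# writing to /sys/fs/cgroup directly (hence no sudo needed).
docker update --memory=4g --memory-swap=4g --cpus=2 k0s-worker-1
```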
I then construct the configuration file so the cluster can start:
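Continuing the hypothetical sketch above, unblocking the waiting container is then just a matter of dropping the finished configuration into the mounted directory (the file names are the same placeholders as before):

```sh
# The waiting loop in the container picks this file up and starts k0s with the
# limits already in place on the container's cgroup.
cp k0s-worker-1.yaml worker-1/k0s.yaml
```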
Processes are now in the correct cgroup (instead of kubelet's): memory and CPU limits work, and I can see that if I constrain the cluster too tightly it won't start, and it doesn't swap, as I was expecting.
Thanks for experimenting and sharing the results @turdusmerula! For historic reasons, k0s will disregard the
Kubelet tells me the
That's why I used the
Yes, the flags are deprecated, but k0s will currently ignore the
Have you already managed to use the
Everything is working now, I had been quite unlucky. I chose to override the kubelet configuration by passing it in a file called However I confirm that passing parameters through
I think there is probably room for improvement; the way I have to do this feels way too hacky for now.
The issue is marked as stale since no activity has been recorded in 30 days.
The issues with cgroups in the Docker docs and entrypoint have been addressed in #5263.
Before creating an issue, make sure you've checked the following:
Platform
Version
v1.29.2+k0s.0
Sysinfo
`k0s sysinfo`
What happened?
I use the `k0sproject/k0s:v1.29.2-k0s.0` Docker image to run k0s with the following command. The goal is to be able to launch several instances in parallel, and this works fine.
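A hypothetical sketch of that kind of command, roughly along the lines of the "k0s in Docker" docs for a cgroups v2 host; the container name, published port, and the 4 GB memory limit are assumptions rather than the reporter's exact values:

```sh
# Run a self-contained k0s instance in Docker with a 4 GB memory limit.
# The cgroups v2 related flags below follow the docs' guidance at the time
# of this issue (assumed), which is what weakens host/container isolation.
docker run -d --name k0s-1 --hostname k0s-1 --privileged \
  -m 4g \
  -v /var/lib/k0s \
  -p 6443:6443 \
  --cgroupns=host -v /sys/fs/cgroup:/sys/fs/cgroup:rw \
  docker.io/k0sproject/k0s:v1.29.2-k0s.0
```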
The problem I'm facing is with the cgroups. k0s runs correctly inside the container's cgroup scope, so the 4 GB memory barrier works correctly. But if I look at the processes spawned by `containerd-shim`, they are launched in `/kubepods`, so they are not constrained. Is there a way to have the `/kubepods` cgroup created inside my container's cgroup?
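A generic way (not taken from the report) to confirm where those processes land is to inspect `/proc/<pid>/cgroup` inside the container:

```sh
# Pick one containerd-shim process and print the cgroup it was placed in.
# A /kubepods/... path here means the pods escape the container's own limits.
pid=$(pgrep -f containerd-shim | head -n 1)
cat "/proc/${pid}/cgroup"
```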
I don't quite know whether this is a bug, a lack of configuration on my side, or a feature request; any help would be greatly appreciated :)
Steps to reproduce
Expected behavior
No response
Actual behavior
No response
Screenshots and logs
No response
Additional context
No response