fix network cleanup flake in play kube
When using service containers and play kube we create a complicated set of dependencies.

First, in a pod all conmon/container cgroups are part of one slice. That slice is removed when the entire pod is stopped, which results in systemd killing all processes that were part of it.

Now the issue here lies in how stopPodIfNeeded() and stopIfOnlyInfraRemains() work: once a container is cleaned up, they check whether the pod should be stopped depending on the pod ExitPolicy. If so, they stop all containers in that pod. However, in our flaky test we call podman pod kill, which already killed all containers. The cleanup logic therefore thinks it must stop the pod and calls into pod.stopWithTimeout(). There we try to stop the containers, but because they are all already stopped it only throws errors and never gets to the point where it would call Cleanup(). So the code does not do the cleanup and eventually calls removePodCgroup(), which causes all conmon and other podman cleanup processes of this pod to be killed. Thus the podman container cleanup process was likely killed while it was actually trying to do the proper cleanup, which leaves us in a bad state.

Following commands such as podman pod rm will try the cleanup again because they see it was not completed, but then fail because they are unable to recover from the partial cleanup state.

Long term, network cleanup needs to be more robust and ideally should be idempotent to handle cases where cleanup was killed in the middle (see the sketch below for what that could look like).

Fixes #21569

Signed-off-by: Paul Holzinger <[email protected]>
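The following is a minimal sketch, not podman's actual code, of what an idempotent cleanup step could look like. The names (cleanupNetwork, netState, teardown) and the state tracking are hypothetical; the point is only that every step tolerates "already done" states, so a retry after a killed cleanup process can still finish instead of failing on the partial state.

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

// netState is a hypothetical record of how far cleanup got.
type netState struct {
	teardownDone bool
	nsPath       string
}

// cleanupNetwork is safe to call more than once: each step either does the
// work or recognizes that a previous (possibly killed) run already did it.
func cleanupNetwork(s *netState) error {
	// Tear down the network only if it is still up; a second run must not
	// fail just because the first run already got this far.
	if !s.teardownDone {
		if err := teardown(s.nsPath); err != nil {
			return fmt.Errorf("network teardown: %w", err)
		}
		s.teardownDone = true
	}

	// Remove the netns path; treat "does not exist" as success so a
	// partially finished previous cleanup does not leave us stuck.
	if err := os.Remove(s.nsPath); err != nil && !errors.Is(err, os.ErrNotExist) {
		return fmt.Errorf("removing netns %s: %w", s.nsPath, err)
	}
	return nil
}

// teardown stands in for the real network backend teardown call.
func teardown(nsPath string) error {
	fmt.Println("tearing down network for", nsPath)
	return nil
}

func main() {
	s := &netState{nsPath: "/run/netns/example"}
	// Running cleanup twice should succeed both times.
	_ = cleanupNetwork(s)
	if err := cleanupNetwork(s); err != nil {
		fmt.Println("second cleanup failed:", err)
	}
}
```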