Builder pods not removed after deploy #487
Comments
This behavior can easily be inspected with:
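One way to do this, assuming the build pods live in the deis namespace and an older kubectl where finished pods only show up with --show-all:

```sh
# List build pods, including finished ones
kubectl get pods --namespace=deis --show-all | grep -E 'slugbuild|dockerbuild'
```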
The number of completed pods will increase by one for each build.
Related: #57. It seems that in recent versions of k8s they stopped cleaning up pods in the "Succeeded" state. Some research probably needs to be done on how to turn this functionality back on.
I'm running K8s 1.4.x, if that matters. Regarding the suggestion in #57 to use Jobs: neither Jobs nor Pods are removed automatically. From the K8s Job docs:
Interestingly the docs on Pod Lifecycle say:
This seems to be in contrast to what I'm actually seeing…
I have opened kubernetes/kubernetes#41787 for clarification of the above statement from the docs.
I just got feedback on the kubernetes issue: it looks like, by default, completed or failed pods are only garbage collected once there are more than 12,500 terminated pods. Obviously that is not very helpful in this case, so an automatic cleanup by the builder should be implemented.
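For context, that threshold corresponds to the kube-controller-manager flag --terminated-pod-gc-threshold, whose default is 12500. Lowering it is one possible cluster-side workaround, independent of any builder-side cleanup; a sketch, assuming the controller manager runs as a static pod (the manifest path is an assumption):

```sh
# Check whether the flag is already set (path varies per cluster setup)
grep terminated-pod-gc-threshold /etc/kubernetes/manifests/kube-controller-manager.yaml

# Then add or adjust the flag in that manifest so finished pods are
# garbage collected much sooner than at 12,500:
#   - --terminated-pod-gc-threshold=100
```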
Any progress on this? Sounds like a waste of resources and space for everyone.
Same here; it may be linked to an issue I opened last week.
I'm using this tiny git pre-push hook for deletion: https://gist.github.com/pfeodrippe/116c8b570ee2ffcdce8aa15bbae5a22b. It deletes the last slugbuild created for the app when you push.
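Roughly, such a hook might look like the sketch below (not the gist itself; the deis namespace, the APP name, and deleting every finished slugbuild pod for the app rather than only the latest one are all assumptions/simplifications):

```sh
#!/bin/sh
# .git/hooks/pre-push -- delete finished slugbuild pods for this app
# before pushing a new build, so they don't pile up.
APP=example   # hypothetical app name

kubectl --namespace=deis get pods --show-all --no-headers \
  | awk -v prefix="slugbuild-${APP}-" \
      'index($1, prefix) == 1 && ($3 == "Completed" || $3 == "Error") { print $1 }' \
  | xargs -r kubectl --namespace=deis delete pod
```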
+1 This bit me after a couple of weeks of deploying applications to my deis cluster.
This issue was moved to teamhephy/builder#17
Currently (as of deis-builder v2.7.1) the slugbuild and dockerbuild pods are not deleted after a successful or failed build.
This means that the pod (e.g. slugbuild-example-e24fafeb-b31237bb) will continue to exist in the "Completed" or "Error" state, and the Docker container associated with the pod can never be garbage collected by Kubernetes, causing the node to quickly run out of disk space.
Example:
On a k8s node with an uptime of 43 days and 95 GB of disk storage for Docker, there were 249 completed (or errored) slugbuild and dockerbuild pods whose Docker images accounted for 80 GB of disk storage, while the deployed apps and Deis services only required 15 GB.
Expected Behavior:
The expected behavior for the builder would be that it automatically deletes the build pod after it has completed or errored, so that the Kubernetes garbage collection can remove the Docker containers and free the disk space allocated to them.
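Until the builder does this itself, a periodic cleanup along these lines (same assumptions as the hook sketch above: deis namespace, older kubectl that needs --show-all to list finished pods) can serve as an interim workaround:

```sh
# Delete every finished build pod in the deis namespace so Kubernetes can
# garbage collect the associated containers and reclaim the disk space.
kubectl --namespace=deis get pods --show-all --no-headers \
  | awk '/^(slugbuild|dockerbuild)-/ && ($3 == "Completed" || $3 == "Error") { print $1 }' \
  | xargs -r kubectl --namespace=deis delete pod
```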