Describe the bug
For the past few weeks I’ve been getting this error message in the logs for every container that Watchtower has attempted to update:
“level=error msg=Error response from daemon: Cannot kill container:****: No such container: ****”
The log then reports the updates as failed; however, the containers do appear to be updated successfully and keep working fine. Afterwards a new, up-to-date container is running and there are no leftover containers.
My Docker config is standard and unchanged, running on the latest Unraid on a bridge network. The only thing I can think of that has changed is that I briefly ran a duplicate Watchtower container (which stopped my current instance), but I have since deleted it. I’ve also tried recreating the container, but this doesn’t help.
Debug logs from before the error message suggest the new image is pulled before the current container is stopped, and I’m wondering whether the original container is being removed before the kill/stop command is sent?
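One way to check that ordering (a minimal sketch; 'myapp' is a placeholder name for any affected container and the --since window is arbitrary) is to watch the Docker event stream around a scheduled run and compare the timestamps of the kill/die/destroy events against the create/start of the replacement container:

# Stream container lifecycle events (create, start, kill, die, destroy)
# for one container so the order of operations is visible.
docker events \
  --filter container=myapp \
  --filter type=container \
  --since 30m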
Any other ideas?
### EDIT ###
Update: yesterday’s Watchtower run successfully updated my Immich stack (which is managed by Unraid’s Docker Compose plugin) without an error message. However, it still produced the error when updating the containers managed by Unraid’s default Docker manager, dockerman (docker CLI). See log example.
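As a quick way to confirm which management layer created each container (a hedged sketch; 'some_container' is a placeholder name), the net.unraid.docker.managed label that dockerman sets, visible in the config below, can be inspected:

# Print the Unraid management label; dockerman-created containers carry
# net.unraid.docker.managed=dockerman, while compose-managed ones may carry
# a different value or none at all.
docker inspect \
  --format '{{ index .Config.Labels "net.unraid.docker.managed" }}' \
  some_container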
Steps to reproduce
Expected behavior
No error message
Screenshots
No response
Environment
Config:
docker run \
  -d \
  --name='watchtower' \
  --net='bridge' \
  --pids-limit 2048 \
  -e TZ="TZ***" \
  -e HOST_OS="Unraid" \
  -e HOST_HOSTNAME="NAME***" \
  -e HOST_CONTAINERNAME="watchtower" \
  -e 'WATCHTOWER_CLEANUP'='true' \
  -e 'WATCHTOWER_INCLUDE_STOPPED'='true' \
  -e 'WATCHTOWER_SCHEDULE'='0 0 5 * * *' \
  -e 'WATCHTOWER_REVIVE_STOPPPED'='false' \
  -e 'WATCHTOWER_NOTIFICATION_REPORT'='true' \
  -e 'WATCHTOWER_NOTIFICATION_URL'='URL**' \
  -e 'WATCHTOWER_DEBUG'='true' \
  -l net.unraid.docker.managed=dockerman \
  -l net.unraid.docker.icon='https://containrrr.dev/watchtower/images/logo-450px.png' \
  -v '/var/run/docker.sock':'/var/run/docker.sock':'rw' 'containrrr/watchtower'
803ad909ef71c1a48f33a6387603567b258aad9a94254472e69e0617a3e58829
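To reproduce the failure on demand with full debug output rather than waiting for the 05:00 schedule, a one-off pass against a single dockerman-managed container can be run (a sketch; 'affected_container' is a placeholder name, and --run-once / --debug are Watchtower's documented flags):

# Run a single update cycle for one container, then exit.
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  containrrr/watchtower \
  --debug --run-once \
  affected_container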
Your logs