Help: Docker container shows as up but unhealthy. #843
Comments
I have the same issue as well. I thought it was just me. I'm using Portainer with Cloudflare.
Seeing this as well in Dockge using Cloudflare, DuckDNS, & NoIP.
Maybe you need to set the Cloudflare `proxied` option.
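For reference, a minimal sketch of where `proxied` sits in the ddns-updater `config.json` for a Cloudflare record — the zone ID, token, and domain below are placeholders, and field names can vary between ddns-updater versions, so check the README for your release:

```json
{
  "settings": [
    {
      "provider": "cloudflare",
      "zone_identifier": "your-zone-id",
      "domain": "example.com",
      "host": "@",
      "ttl": 600,
      "token": "your-api-token",
      "ip_version": "ipv4",
      "proxied": true
    }
  ]
}
```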
Otherwise, please share the failing healthcheck error message, which you can find with `docker inspect`.
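For example (the container name `ddns-updater` is an assumption; substitute whatever your compose service is called):

```shell
# Show the health status and the outputs of the last few healthchecks:
docker inspect --format '{{json .State.Health}}' ddns-updater

# The container logs usually contain the underlying error as well:
docker logs --tail 50 ddns-updater
```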
I have one with `proxied` true and one without; same issue.
I keep seeing things like this, but I removed that website and it still shows another one. The funny thing is that it still works. I changed the IPs and MACs.
I have the same type of message as [tegralens]. It is due to split-brain DNS: ddns-updater was resolving widgets.domain.com locally to 172.16.x.x, but was configured to update the A record for widgets.domain.com externally (obviously to a different address).
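A quick way to check for split-brain DNS is to compare what your local resolver returns with what a public resolver returns (the hostname here is the example from the comment above; use your own):

```shell
# Local resolver — with split-brain DNS this may return the internal
# 172.16.x.x address:
dig +short widgets.domain.com

# Ask a public resolver directly to see the external A record:
dig +short widgets.domain.com @1.1.1.1
```

If the two answers differ, ddns-updater's healthcheck can fail locally even though the public record is fine.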
Hey, sorry it took a week. OK, I tried `proxied: true` and it showed me a different error in the web UI. Here are the docker inspect results after adding `proxied: true`: https://pastebin.com/Tk6LPYEk So I'm guessing that when a DNS record fails to update, ddns-updater for some reason reports the Docker container as unhealthy, which UptimeKuma then sees as down. Is there a way to unlink the container's health from whether the DNS records get updated? And to reiterate, this only started happening several updates ago. OK, thank you!
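One blunt workaround, if you'd rather Docker not track this at all, is disabling the image's built-in healthcheck in your compose file. Note this hides real failures too, and the service definition below is a sketch — adjust it to your own setup:

```yaml
services:
  ddns-updater:
    image: qmcgaw/ddns-updater
    healthcheck:
      disable: true   # the container will no longer be marked unhealthy
```

With the healthcheck disabled, UptimeKuma's Docker monitor only sees whether the container is running, not whether DNS updates succeed.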
TL;DR: Docker container shows as up but unhealthy.
This actually started happening months ago. I've just been too lazy to report it.
It was working fine, then one day it stopped.
Probably after an update or something.
The container will show as up but unhealthy.
For example, right now it shows "Up 16 hours (unhealthy)".
It seems to be working fine; the only reason I noticed is because now UptimeKuma reports it as down even though it's running.
I posted over at the UptimeKuma GitHub, and they suggested that if the container shows as up but unhealthy, the container itself is reporting an issue (whereas if it showed healthy, then UptimeKuma could have been the problem).
Logs:
Configuration file (remove your credentials!):
And lastly, my container settings for ddns-updater: