Hi!
I switched a couple of my projects to Kamal. In some of them, especially in staging environments, I use multiple containers on one machine.
Usually, we work with separate repositories for the frontend (FE) and backend (BE). On both, I use Kamal to deploy containers. Most of these applications are behind a load balancer, which also handles SSL termination.
Load Balancer -> App server 1 (BE+FE containers)
              -> App server 2 (BE+FE containers)
              -> App server 3 (BE+FE containers)
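For context, a minimal sketch of what one app's Kamal config might look like in this setup, with two services sharing the same hosts behind the external load balancer (service names, hosts, and ports here are placeholders, not my real values):

```yaml
# config/deploy.staging.yml — hypothetical backend app; all names are illustrative
service: my-backend
image: registry.example.com/my-backend

servers:
  web:
    - app1.example.com
    - app2.example.com
    - app3.example.com

proxy:
  ssl: false            # SSL is terminated at the external load balancer
  host: api.example.com
  app_port: 3000
  healthcheck:
    path: /up
```

The frontend app's config is the same shape but with a distinct `service` name and `proxy.host`, so that the single kamal-proxy instance on each machine can route both apps.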
The issue is that I've been seeing odd behavior during deployments. I postponed opening this issue because I couldn't find a common cause. Occasionally a deployment fails, reporting that the health check didn't pass. When I stop the container and rerun the deployment in my CI/CD pipeline, it works again.
The error looks like this:
ERROR (SSHKit::Command::Failed): Exception while executing on host <domain of random App Server>: docker exit status: 1
docker stdout: Nothing written
docker stderr: Error: target failed to become healthy
Additionally, I ran the deployment in verbose mode, and none of the containers was ever reported as "unhealthy."
In my humble opinion, deploying multiple containers on one machine is a common use case. Having investigated this for a while, I can say the load-balancer layer works fine and the containers themselves are healthy, so I assume the issue lies somewhere in kamal-proxy and the way Kamal handles health checks.
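If the proxy is the suspect, the knobs controlling kamal-proxy's health checking live under `proxy.healthcheck` in the deploy config. A sketch of how one might loosen them (the values are illustrative, and I'm assuming Kamal 2's option names here):

```yaml
# Fragment of config/deploy.staging.yml — values are illustrative assumptions
proxy:
  app_port: 3000
  healthcheck:
    path: /up        # endpoint kamal-proxy polls during deploys
    interval: 3      # seconds between polls
    timeout: 30      # how long a single health-check request may take

deploy_timeout: 60   # how long kamal waits overall for the new target to become healthy
```

"Error: target failed to become healthy" would be consistent with the new container not answering on `healthcheck.path` within these windows, even if Docker's own health status shows healthy.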
I would love some hints or advice, or maybe there's something I’m doing wrong when defining health checks. Perhaps someone has successfully run such an architecture and can share the solution.
These are my configs:
Backend app:
Frontend app:
I deploy containers from CI/CD pipeline using:
- kamal deploy --skip-push --version=$CI_COMMIT_REF_SLUG -d staging -v
This is how I build containers in my GitLab CI; it may be relevant here:
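The actual build job was omitted above; a hedged sketch of what a matching GitLab CI pipeline could look like, assuming the image is tagged with `$CI_COMMIT_REF_SLUG` so it lines up with the `--version` passed to `kamal deploy` (registry path, stage names, and Docker image versions are assumptions):

```yaml
# .gitlab-ci.yml — illustrative only
build:
  stage: build
  image: docker:27
  services:
    - docker:27-dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG"

deploy_staging:
  stage: deploy
  script:
    - kamal deploy --skip-push --version=$CI_COMMIT_REF_SLUG -d staging -v
```

`--skip-push` only makes sense because the build job has already pushed the tag; if the tag the proxy tries to boot doesn't match what was pushed, the health check will fail in exactly this way.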