Container couchdb is going into restart loop right after deploy without any logs #121
Comments
We are also experiencing this same issue when trying to go to
That is not helping in my case. I get the same behaviour with different versions, even 2.X.X.
I have the same problem as described here.
My guess is that this is a permissions issue. If you can reproduce it in a test environment, I would see whether you can get the container running using a custom command, e.g. update the deployment to set an idle command (see the sketch below) and then exec into the container. The standard container entrypoint is defined at https://github.com/apache/couchdb-docker/blob/main/3.3.2/docker-entrypoint.sh, so you could try running that manually from a shell and see whether any commands fail.
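A minimal sketch of such an override, assuming the chart's generated StatefulSet is edited directly and that the container is named couchdb (names here are assumptions, adjust to your release):

spec:
  template:
    spec:
      containers:
        - name: couchdb
          # keep the container idle instead of crash-looping so it can be inspected
          command: ["sleep", "infinity"]

Once the pod is idle, exec in and run the entrypoint by hand; the image's default command should be /opt/couchdb/bin/couchdb:

kubectl exec -it couchdb-couchdb-0 -n couchdb -- bash
/docker-entrypoint.sh /opt/couchdb/bin/couchdb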
I am not sure how to change the entrypoint of a Docker image that is being deployed via a Helm chart. values.yaml doesn't give me such a possibility...
@lolszowy I would just
As the author of the issue I am sorry, but currently I don't have much time to invest in it. As soon as I can, I will proceed with further testing too. I tested with different storage classes (Amazon EBS and Longhorn) and it was working as expected.
The problem is definitely with mounting the PV.
k describe pod couchdb-statefulset-0
While using the Helm chart I had that error.
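For anyone else debugging this, mount failures usually surface in the PVC status and pod events rather than in the container logs; a sketch of the usual checks (the namespace and PVC names are placeholders, use whatever your release created):

kubectl get pvc -n <namespace>
kubectl describe pvc <pvc-name> -n <namespace>
kubectl get events -n <namespace> --sort-by=.lastTimestamp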
Describe the bug
Upon executing helm install couchdb -n couchdb couchdb/couchdb -f values.yaml, the main container enters a continuous restart loop without any explanatory logs. This issue surfaces only when persistence is enabled; without it, the container starts successfully. The PVC and PV are properly created, mounted, and writable (I tested this from another container; see the sketch below).
Experimenting with a custom Deployment resulted in the same behaviour. Consequently, the issue could originate from my storage configuration or permissions and how the Docker container or the software expects them. It's noteworthy that other applications (Prometheus, RabbitMQ) operate without issues on the same storage, cluster, and Helm setup.
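A sketch of the write test mentioned above: a throwaway pod that mounts the same PVC and writes a file to it (the busybox image and the claim name are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: pvc-write-test
spec:
  restartPolicy: Never
  containers:
    - name: test
      image: busybox
      # create a file on the mounted volume, list it, then exit
      command: ["sh", "-c", "touch /data/write-test && ls -l /data"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: <couchdb-pvc-name>   # placeholder for the claim created by the chart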
Any information or further steps will be appreciated. Thank you!
Version of Helm and Kubernetes:
Kubernetes:
Provider: Amazon EKS, Kubernetes version: v1.24.13-0a21954
Helm:
version.BuildInfo{Version:"v3.9.4", GitCommit:"dbc6d8e20fe1d58d50e6ed30f09a04a77e4c68db", GitTreeState:"clean", GoVersion:"go1.17.13"}
StorageClass:
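For comparison, a typical Amazon EFS CSI StorageClass with access-point provisioning looks roughly like the following; the file system ID and directory permissions below are placeholders, not the values from this cluster:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap              # dynamic provisioning via EFS access points
  fileSystemId: fs-0123456789abcdef0    # placeholder
  directoryPerms: "700"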
What happened:
The StatefulSet is unable to start with Amazon EFS persistent storage.
How to reproduce it (as minimally and precisely as possible):
Create EFS Storage on EKS and deploy following the guide in the README.
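A sketch of those steps, assuming the chart's published Helm repository and a values.yaml that enables persistence against the EFS StorageClass:

helm repo add couchdb https://apache.github.io/couchdb-helm
helm repo update
kubectl create namespace couchdb
helm install couchdb couchdb/couchdb -n couchdb -f values.yaml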
Anything else we need to know:
values.yaml
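The part of values.yaml relevant to this issue is the persistence block; a minimal sketch using the chart's persistentVolume values (field names may differ between chart versions, and the storage class name is a placeholder):

persistentVolume:
  enabled: true
  accessModes:
    - ReadWriteMany   # EFS supports ReadWriteMany
  size: 10Gi
  storageClass: efs-sc   # placeholder for the EFS-backed StorageClass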
kubectl describe pod couchdb-couchdb-0 -n couchdb-qa
kubectl logs couchdb-qa-couchdb-0 -n couchdb-qa
Defaulted container "couchdb" out of: couchdb, init-copy (init)
kubectl logs couchdb-qa-couchdb-0 --container init-copy -n couchdb-qa