Failure to start cluster with unmanaged: true on latest versions #1607
Comments
I attempted this on a new k8s cluster, with the 1.15.4 version of the operator, and it failed in the same way.
Attempting the following cluster also fails.
The following, based on https://github.com/percona/percona-server-mongodb-operator/blob/v1.16.2/deploy/cr-minimal.yaml, fails in exactly the same way as well.
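For context, a minimal unmanaged CR along the lines this comment describes might look like the sketch below. It is based on the linked cr-minimal.yaml; the metadata name, image tag, and storage size are illustrative assumptions, not values confirmed in the thread.

```yaml
# Hypothetical minimal PerconaServerMongoDB CR with unmanaged mode enabled.
# Name and image tag are assumptions for illustration.
apiVersion: psmdb.percona.com/v1
kind: PerconaServerMongoDB
metadata:
  name: minimal-cluster
spec:
  crVersion: 1.16.2
  image: percona/percona-server-mongodb:4.4   # illustrative tag
  unmanaged: true        # the setting this issue is about
  replsets:
    - name: rs0
      size: 3
      volumeSpec:
        persistentVolumeClaim:
          resources:
            requests:
              storage: 3Gi
```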
Yeah, at this point I am really confused. Either I am missing something I need to get an unmanaged cluster working, or unmanaged clusters are fully broken. I am assuming I am missing something.
Hey @jonathon2nd, to make it work, all your replica set nodes need to be exposed. That way you can form a full mesh connection across all of the nodes.
Make sure that
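A hedged sketch of what the exposure advice above could look like in the CR spec (field names follow the psmdb CRD; the service type chosen here is an assumption, not something stated in the thread):

```yaml
# Illustrative fragment of a PerconaServerMongoDB CR spec.
# exposeType value is an assumption; the point is that expose.enabled
# gives each replica set pod its own Service, so nodes can reach each other.
spec:
  unmanaged: true
  replsets:
    - name: rs0
      size: 3
      expose:
        enabled: true
        exposeType: ClusterIP
```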
That is not it, unfortunately. I had already tried that; I thought I had included it here, but I had not. minimal-cluster-rs0-0_mongod.log percona/percona-server-mongodb-operator:1.16.2
(yaml attached)
Can you show that? If I can't figure it out, then I might need to see the logs of the replica set Pod itself.
I have already provided a log from a replica pod.
My hypothesis is that you need to form a replica set so that the liveness probe passes. A standalone unmanaged node will fail the liveness probe.
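In an unmanaged cluster, initiating the replica set is the user's responsibility rather than the operator's. A sketch of building the rs.initiate() configuration document is below; the hostnames follow the usual StatefulSet DNS pattern (`<pod>.<service>.<namespace>.svc.cluster.local`) and are assumptions for illustration, not values taken from this thread.

```javascript
// Sketch: construct the replica set config one would pass to rs.initiate()
// in mongosh on the first pod. Cluster name, namespace, and port are
// illustrative assumptions.
const ns = "mongodb";
const cluster = "minimal-cluster";
const members = [0, 1, 2].map(i => ({
  _id: i,
  host: `${cluster}-rs0-${i}.${cluster}-rs0.${ns}.svc.cluster.local:27017`,
}));
const rsConfig = { _id: "rs0", members };
// In mongosh: rs.initiate(rsConfig), then rs.status() should eventually
// show one PRIMARY, which is what lets the liveness probe pass.
console.log(JSON.stringify(rsConfig, null, 2));
```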
I am getting auth errors.
What errors are you getting on the "managed" side?
From rs.status(), what else should I be looking at?
I am not able to connect to the new nodes with any auth from the pod's command line.
The operator is still waiting for the pods
I suspect that the operator has not added the users to the nodes yet.
Yes, that seems to be what is happening: when not unmanaged, it does not set up the users until later in the setup process.
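To illustrate the auth gap being discussed: until system users exist on the nodes, any authenticated connection will fail. Below is a sketch of the shape of a user-creation command one could run manually in the admin database; the user name, roles, and password placeholder are assumptions for illustration, not the operator's actual internals.

```javascript
// Sketch: the shape of a createUser command document for the admin DB.
// User name, roles, and password are illustrative assumptions.
const createUserCmd = {
  createUser: "clusterAdmin",
  pwd: "<password from the users Secret>",  // placeholder, not a real value
  roles: [{ role: "clusterAdmin", db: "admin" }],
};
// In mongosh: db.getSiblingDB("admin").runCommand(createUserCmd)
console.log(Object.keys(createUserCmd).join(","));
```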
Any ideas @spron-in?
I opened this issue when I attempted this with a previous version, but even the latest versions do not work, so I have updated the title.
Report
Creation of a cluster with unmanaged: true fails; the MongoDB nodes bootloop. The operator repeatedly logs the following as the mongo nodes bootloop:
INFO Replset is not exposed. Make sure each pod in the replset can reach each other. {"controller": "psmdb-controller", "object": {"name":"example-mongodb","namespace":"mongodb"}, "namespace": "mongodb", "name": "example-mongodb", "reconcileID": "4071c15b-9595-443f-bb20-5705204cbd3d", "replset": "rs0"}
More about the problem
I am attempting to follow this guide
to migrate one of our legacy MongoDB deployments without downtime.
SSL is off because the internal DB we are migrating from does not have it enabled; I will turn it on later, with a short downtime, after the migration is successful.
Yaml
Steps to reproduce
Versions
Anything else?
The DB I am attempting to migrate requires a target of 4.4, hence the cr version selection.
The deployment works without unmanaged: true.
Two pod logs from start up to bootloop
example-mongodb-rs0-0_mongod.log
example-mongodb-rs0-1_mongod.log
Operator logs: