WMAgent: install/run CouchDB from Dockerhub #11312
Logging it here, because this is important. Whenever I was trying to run this container, I was getting a silent
So the process was killed by the
Indeed, my VM's profile was of type
Meanwhile, the CouchDB documentation states that the minimum recommended resources for CouchDB should be:
Adding a swap file to the machine did fix the issue, for the time being.
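For reference, adding a swap file on a typical Linux VM can be sketched as follows (the 4G size and the `/swapfile` path are illustrative assumptions, not values from this thread):

```shell
# Create a 4 GB swap file (pick a size that fits the VM's disk)
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# Persist the swap file across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
# Verify it is active
swapon --show
```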
I just noticed that, because the PR resolving this issue is in the CMSKubernetes repository, it cannot automatically close the current issue in the DMWM repository. So here is the link: dmwm/CMSKubernetes#1409 @khurtado Please consider giving it a try and eventually providing some initial feedback. I also plan to apply a few things here from what we have discovered for the MariaDB container.
I just found an interesting piece of documentation on this which might be useful. I spent only a few minutes reading it, but it made me wonder if we should run the service with the standard image user and work on the node's puppet template to have that user added to the schedds and to the same group as the users we use to run WMAgent(?)
Hi @khurtado, I figured out I was not adding the
@amaltaro As we talked yesterday, this:
is the minor error I was talking about, the one I had solved 6 months ago without preserving the solution in the config files. I hope you have met that unauthorized error before. Same when I try to push the couchapps:
It seems I forgot to set the user password somewhere. Do you have the previous configuration off the top of your head?
@todor-ivanov I can confirm the setup is working as intended. Additionally, I went through that authorization issue and fixed it by using the
Then, CouchDB encrypts
With that said, should we require a new file
That is:
Hi @khurtado, p.s. I was hoping to merge this before our meeting today, but I could not provide the code on time... Anyway, we can still merge it later.
Hi @todor-ivanov I created the following files:
There, I have COUCH_ROOT and COUCH_ROOTPASS in CouchDB.secrets, and COUCH_USER and COUCH_PASS in the agent secrets. Then, I built and ran the container. I can verify those files were bind-mounted into /data/admin/{couchdb,wmagent} inside the container. The content of
Extra info: the Couch secrets look like this:
WMAgent like this:
The original local.ini, of course, looks like this:
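For context, CouchDB admin credentials live in the `[admins]` section of `local.ini`; on first startup, CouchDB replaces any plaintext value there with a PBKDF2 hash, which is why the file looks different after the server has run once. A minimal sketch (the account name and password are placeholders, not values from this thread):

```ini
; local.ini sketch -- names and values are placeholders
[admins]
; as written before first startup:
;couchadmin = some_plaintext_password
; as rewritten by CouchDB after first startup:
couchadmin = -pbkdf2-<derived_key>,<salt>,<iterations>
```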
Thanks @khurtado. Well, one thing is for sure: the
As for the duplicated cmst1 account, it is indeed interesting to investigate. Could you please send me the logs as well?
@todor-ivanov Ahh, thank you! I deleted the container, but once I changed
would be useful.
I'm confused here. Let's say we run as
Hi @khurtado,
I get it, and so am I. We need to invent a new one. So far (just as with MariaDB), we have never had a user with only database access rights. We were always boldly running with the server admin user, also dealing with access from the WMAgent itself to the database. That was not a big deal until now, as long as everything was running in the same installation. But now we are on the path of splitting the databases from the WMAgent service. So we had better have:
Here are some more details on how to give that new user the proper database-only access level.
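As a sketch of what such database-only access could look like: CouchDB grants per-database rights through each database's `_security` object, which lists member users (who can read/write documents) and database admins (who can also change design documents). The user and database names below are hypothetical, purely for illustration:

```python
import json


def build_security_doc(member_names, admin_names=()):
    """Build a CouchDB per-database _security object.

    Users under "members" may read and write regular documents in
    the database; users under "admins" may also modify design docs.
    """
    return {
        "admins": {"names": list(admin_names), "roles": []},
        "members": {"names": list(member_names), "roles": []},
    }


# Hypothetical database-only user for the agent:
security = build_security_doc(member_names=["wmagent_db_user"])
print(json.dumps(security, indent=2))

# The document would then be applied with an authenticated request, e.g.:
#   curl -u admin:pass -X PUT http://localhost:5984/<dbname>/_security \
#        -H 'Content-Type: application/json' -d @security.json
```

Once a database has a non-empty members list, it is no longer world-readable: only the listed users (and server admins) can access it.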
My experience with authz in CouchDB tells me that this setup isn't simple, as users need to be granted privileges per database (and, I think, per allowed operation as well). Is there any problem with sticking to the current setup (a single admin user for running the service and accessing the database)? If such changes are not required at the moment, then I would suggest coming back to this once it's no longer an urgent migration.
@todor-ivanov Thank you! I think I get it now. In that case, I do agree that is the ideal setup, but it should likely be separated from the current issue, which is trying to replicate exactly what we had, but in containers; that includes software versions and user account setup.
Ok then, please take a look at my latest commit to the PR in the CMSKubernetes repository: dmwm/CMSKubernetes@ca5ddbc . It fixes the double account issue, and it now uses only whatever password is provided in the WMAgent.secrets file.
With all the comments from the PR review addressed, I have pushed an image to the registry as well. Here is the result of running this from both the local repository and the CERN registry:
With this, I'd say we are ready to merge and close this issue as well.
Resolved by: dmwm/CMSKubernetes#1409
Impact of the new feature
WMAgent
Fixed by: dmwm/CMSKubernetes#1409
Is your feature request related to a problem? Please describe.
As part of the migration to PyPI and RPM-less deployment, we should start looking into running CouchDB as a container on the WMAgent nodes.
Describe the solution you'd like
Update and/or use the CouchDB Dockerfile already available in (latest stable tag is v5):
https://github.com/dmwm/CMSKubernetes/tree/master/docker/couchdb
Also provide relevant configuration changes and/or scripts (we might have to update the configuration with the replication section, which is not used in central services).
Changes to the WMAgent deployment script might be required as well, moving away from a localhost CouchDB deployment to a containerized model (through host networking or similar).
Note that all of the CouchDB data/logs need to be on persistent storage (resilient to container restarts/recreation).
Lastly, once the image is stable, we need to label it accordingly to avoid automated registry cleanup.
Valentin says that the policy is: tags need to have a -stable suffix.
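The deployment model described above can be sketched roughly as follows (the image name, registry path, tag, and host paths are illustrative assumptions, not the final values from the PR):

```shell
# Run CouchDB with host networking and persistent storage on the host
# (paths, image name, and tag are hypothetical)
docker run -d --name couchdb \
  --network host \
  -v /data/srv/couchdb/database:/opt/couchdb/data \
  -v /data/srv/couchdb/etc:/opt/couchdb/etc/local.d \
  registry.cern.ch/cmsweb/couchdb:v5

# Once the image is validated, label it so automated registry
# cleanup skips it (per the -stable suffix policy)
docker tag registry.cern.ch/cmsweb/couchdb:v5 \
           registry.cern.ch/cmsweb/couchdb:v5-stable
docker push registry.cern.ch/cmsweb/couchdb:v5-stable
```

With host networking, CouchDB listens on the node's own port 5984, so the agent can keep talking to it exactly as it did with the localhost deployment; the bind mounts keep data and configuration across container recreation.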
Describe alternatives you've considered
None
Additional context
None
Part of the following meta issue: #11314