set passwords for a set of accounts #101

Open

ibotty opened this issue Feb 27, 2016 · 22 comments
@ibotty

ibotty commented Feb 27, 2016

It would be great to create roles and set the password for many accounts when starting the pod.

Many databases use fine-grained permissions. It would be great to manage credentials in one place (one secret per database role), instead of having to set them once in the database and once in the application.

I propose that instead of hardcoding /etc/credentials/pgmaster, /etc/credentials/pguser, and /etc/credentials/pgadmin, we iterate over every directory within /etc/credentials, create the role if it does not exist yet, and set its password.
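
For illustration, a minimal sketch of what the start script could do, assuming each subdirectory of /etc/credentials is named after a role and contains a password file (the layout and the use of psql's \gexec are assumptions on my part, not existing behaviour):

# iterate over all mounted credential directories; the glob skips the hidden
# ..data entries that secret volumes contain
for dir in /etc/credentials/*/; do
  role=$(basename "$dir")
  password=$(cat "$dir/password")
  # create the role only if it is missing, then (re)set its password
  # (\gexec needs psql >= 9.6)
  psql -v ON_ERROR_STOP=1 -v role="$role" -v password="$password" <<'EOSQL'
SELECT format('CREATE ROLE %I LOGIN', :'role')
  WHERE NOT EXISTS (SELECT FROM pg_roles WHERE rolname = :'role') \gexec
ALTER ROLE :"role" PASSWORD :'password';
EOSQL
done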

If/when a solution to downward-api for secrets (kubernetes/kubernetes#18372) lands, we could/should set role passwords automatically. That still leaves the question of what to do with additions. That can only be fixed properly once new volumes (or at least secrets) can be mounted into running kubernetes pods, I suppose.

@ibotty
Author

ibotty commented Sep 26, 2016

Well, now that secrets get updated automatically, I'd like to take a run at this issue.

My plan of action is the following. I do welcome comments on the approach.

  • change the start script to also set credentials from mounted secrets, with secrets taking precedence over environment variables,
  • add a simple inotify-watcher script that (re)sets the credentials, running in a separate container in the same pod (rough sketch below).
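
Something like this for the watcher (a rough sketch only; inotify-tools in the image and the sync-credentials.sh helper are assumptions, not existing pieces):

#!/bin/bash
set -euo pipefail

while true; do
  # secret volumes are updated through an atomic symlink swap of the ..data
  # directory, so a create/moved_to event on the mount signals an update
  inotifywait -qq -e create -e moved_to /etc/credentials/ || true
  # re-apply all credentials serially; the handler only reads the current
  # file contents, so the database converges to the latest secret state
  sync-credentials.sh   # hypothetical helper re-running the role/password loop
done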

What do you think?

@bparees
Collaborator

bparees commented Sep 26, 2016

The additional container feels heavyweight; why can't you watch in the background of the existing container?

@ibotty
Author

ibotty commented Sep 26, 2016

Should be possible, but is harder to monitor. Fine with me either way.

@praiskup
Contributor

In the beginning, this issue seemed to be about creating additional roles at container start. Now it looks like we need some asynchronous actions for changing passwords, which looks like the wrong idea to me (this could be done by a client tool; it doesn't have to be done by the container itself).

I'm not sure I follow; can you please elaborate on what is really needed and how it should be done?

@ibotty
Author

ibotty commented Sep 26, 2016

I guess you can say that's a meta-issue. For my part, there are three things I'd like:

  • storing credentials in secrets (benefits: no credentials in the logs, and one place to change credentials for both the postgresql server and the client);
  • allowing many database users;
  • updating database credentials automatically (to keep them in sync).

I think the first two parts are the most important (for my use case), but the third is a very nice thing to have when updating credentials. The fact that secret mounts are updated automatically when the secret changes exists precisely to enable this kind of thing. Is there a technical reason for not wanting it?

I also don't see what part of an application deployment should be in charge of changing credentials if not the containers:
When rerolling postgresql user credentials, are you expected either to redeploy (with downtime), or to require the user to change the secret (to persist the change) and then run psql?
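
For context, that manual flow looks roughly like this today (illustrative only; the secret name is a placeholder):

oc edit secret pguser-credentials                                     # persist the new password
psql -U postgres -c "ALTER ROLE pguser WITH PASSWORD 'new-password'"  # apply it on the running server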

@praiskup
Contributor

Such a change sounds like an easy way to get race conditions. It sounds like there should be some container API through which the environment (openshift) can request a credential change, with clear error reporting.

@praiskup
Contributor

I'm still interested in the "no credentials in logs" part. How can it be guaranteed that the password will not appear in the logs? That still requires the maintainer to keep the scripts sane.

@ibotty
Author

ibotty commented Sep 26, 2016

Well, right now every credential set as an environment variable is visible in the host's docker logs as well as in openshift's node and master logs. With secrets, that's not the case.

Of course I assume the container scripts don't log the password by accident, and that can't be guaranteed. I frankly don't see the point of your remark.

@ibotty
Author

ibotty commented Sep 26, 2016

I missed your comment re race conditions above.

Volume mounts are eventually consistent. Can you clarify what race conditions you expect? I cannot think of a scenario in which it won't converge to the correct outcome.

@praiskup
Contributor

Well, right now every credential set as an environment variable is visible in the host's docker logs as well as in openshift's node and master logs. With secrets, that's not the case.

That wasn't a remark; a question mark was missing.. :) I'm rather curious.

Docker has something like docker run -e ENVVAR, which can be used instead of docker run -e ENVVAR=value. So, if the maintainer really pays attention (and it is not yet clear whether that is really needed), there is no real need to put the environment variable's content anywhere visible. Maybe that's something to be fixed anyway, regardless of the secrets feature.

The problem is that we need to read the password from the secrets somehow, and since we (by default) put everything onto stderr (via set -x), it is very likely the credentials will end up on stderr too... so something like a "policy" for working with credentials should be written?
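
To make that concrete, one possible shape for such a policy (a sketch only; the mount path and the reliance on PGPASSWORD are my assumptions): read the file straight into a variable, and never pass the value on a command line or echo it.

read_password() {
  # the value never shows up in `ps` output or as an argument of a traced command
  IFS= read -r PGPASSWORD < "$1" || true
  export PGPASSWORD
}

read_password /etc/credentials/pguser/password
psql -h 127.0.0.1 -U pguser -c 'SELECT 1'   # psql picks PGPASSWORD up from the environment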

@praiskup
Contributor

Volume mounts are eventually consistent. Can you clarify what race conditions you expect? I cannot think of a scenario in which it won't converge to the correct outcome.

What if another password-change request arrives within a very short period, while the previous change is still being handled by some infinite loop in the background?

@ibotty
Author

ibotty commented Sep 26, 2016

Oh, I get what you mean. I was worried not about the container's logs but about the cluster's logs. You are right that the other thing should be fixed as well, but are you sure tracing is enabled? I can only see set -x in the test script, not in the production logs of a few postgresql containers.

@ibotty
Author

ibotty commented Sep 26, 2016

Re password change race: do you mean that the application might have the wrong password for a small amount of time? I don't see how that can be prevented without stopping one of the two containers. I think it's important to converge as quickly as possible to a working state. Am I making sense?

@praiskup
Contributor

I can only see set -x in the test script, not in the production logs of a few postgresql containers.

Correct, sorry for the confusion. set -x is not enabled, and it never should be :)

So what exactly exposes the credentials? Would doing this:

export PASS=value
docker run -e PASS ...

.. instead of:

docker run -e PASS=value

help?

@ibotty
Author

ibotty commented Sep 26, 2016 via email

@ibotty
Author

ibotty commented Sep 27, 2016

Re password change race: do you mean that the application might have the wrong password for a small amount of time? I don't see how that can be prevented without stopping one of the two containers. I think it's important to converge as quickly as possible to a working state. Am I making sense?

@praiskup
Contributor

On Tuesday, September 27, 2016 8:20:09 AM CEST Tobias Florek wrote:

Re password change race: do you mean that the application might have the wrong password for a small amount of time?

Exactly. If the container got into such a situation, recovering from it would be a nontrivial task for the user.

I don't see how that can be prevented without stopping one of the two containers. I think it's important to converge as quickly as possible to a working state. Am I making sense?

IMO, there just needs to be a "protocol" between the environment (kubernetes) and the container. As I'm not familiar with kubernetes, I'm not sure whether this is a realistic dream, but the environment should be able to wait until the request is really done in the container cluster (the containers should be able to confirm it somehow).

Putting something into a file and blindly expecting that the request is going to be executed (immediately) sounds like really bad design that would definitely bite us soon. From within the container, after executing the request, can I write something back into the "secrets" to inform kubernetes about the "exit status"?

@ibotty
Author

ibotty commented Sep 29, 2016

I am not sure I am following.

ConfigMaps are eventually consistent (and, additionally, updates to them are atomic), so the latest inotify event will carry the latest change. If the inotify events are handled serially (that is, not concurrently), I still don't see how you could end up in a state that does not correct itself.

@ibotty
Author

ibotty commented Sep 29, 2016

And (sorry, I forgot): kube does not have any notion of confirmation for configmap changes.

@ibotty
Author

ibotty commented Sep 29, 2016

Care to comment on the wip pull request: #145?

@praiskup
Contributor

I still don't see what happens if the request isn't handled correctly but the app thinks it has been. That can happen because of a low-memory issue, a queue problem, or whatever other reason. When we are talking about a transactional database, this looks like a really weak best-effort approach.

@ibotty
Author

ibotty commented Sep 29, 2016

What request? The password change? If so, and it is not handled correctly, the app won't be able to connect until the request is handled. That's the same thing that happens when resetting passwords in traditional deployments, so I don't quite see what you are after.

I have no problem not doing that part btw, so we can also agree to disagree here ;).
