set passwords for a set of accounts #101
Comments
Well, now that secrets get updated automatically, I'd like to take a run at this issue. My plan of action is the following. I do welcome comments on the approach.
What do you think?
The additional container feels heavy-weight; why can't you watch in the background of the existing container?
Should be possible, but it is harder to monitor. Fine with me either way.
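A minimal sketch of that in-container variant, assuming a hypothetical `watch-credentials.sh` helper that performs the actual password updates:

```bash
#!/bin/bash
# Hypothetical entrypoint: run the credential watcher in the background of
# the same container, then start the server as usual.
/usr/local/bin/watch-credentials.sh &   # hypothetical helper script
exec postgres -D "$PGDATA"              # or however the image normally starts
```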
In the beginning, this issue seemed to be about additional roles set up during start of the container. Now it looks like we need some async actions for changing passwords, which all looks like a wrong idea to me (this might be done by a client tool; it doesn't have to be done by the container itself). I'm not sure; can you please elaborate on what is really needed and how it should be done?
I guess you can say that's a meta-issue. For my part, there are three things I'd like:

1. create the database roles when the container starts,
2. set the passwords for those roles from the mounted secrets, and
3. update the passwords when the mounted secrets change.
I think the first two parts are the most important (for my use case), but the third is a very nice thing when updating credentials. That secret mounts get updated automatically when the secret changes exists exactly to enable that kind of thing. Is there a technical reason for not wanting it? I also don't see what part of an application deployment should be in charge of changing credentials, if not the containers.
Such a change sounds like a way to get race conditions pretty easily. It sounds like there should be some container API through which the environment (OpenShift) would be able to request changing credentials, with clear error reporting.
I'm still interested in the …
Well, right now everything set as an env variable will be visible in the host's docker logs as well as in OpenShift's node and master logs. With secrets, that's not the case. Of course I assume the container scripts don't log the password by accident, and that can't be guaranteed. I frankly don't see the point of your remark.
I missed your comment re race conditions above. Volume mounts are eventually consistent. Can you clarify what race conditions you expect? I cannot think of a scenario in which it won't converge to the correct outcome.
That's not a remark; a question mark is missing.. :) I'm rather curious. Docker has something like … The problem is that we need to read the password from secrets somehow, and as we (by default) put everything into stderr (by …), it could end up in the container's log.
What if a request for a password change hits within a very short period, while the password change is done in some infinite loop in the background?
Oh, I get what you mean. I was worried not about the container's log but the cluster's log. You are right that the other thing should be fixed as well, but are you sure you enable tracing? I can only see …
Re the password change race: do you mean that the application might have the wrong password for a small amount of time? I don't see how that can be prevented without stopping one of the two containers. I think it's important to converge as quickly as possible to a working solution. Am I making sense?
Correct, sorry for the mystification. So what exactly exposes the credentials? Would doing this: … instead of: … help?
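Presumably the contrast was between reading the password from a mounted file and passing it directly in the environment; a hedged sketch of both invocations, with illustrative names (`POSTGRESQL_PASSWORD_FILE` in particular is not a confirmed option of this image):

```bash
# Point the container at a password file provided by a mounted secret
# (variable name and paths are assumptions for illustration):
docker run -v mysecret:/etc/credentials:ro \
    -e POSTGRESQL_PASSWORD_FILE=/etc/credentials/password postgresql

# ...instead of putting the password itself into the environment, where it
# is visible in process listings and the recorded container config:
docker run -e POSTGRESQL_PASSWORD=secret postgresql
```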
I am only concerned about OpenShift/Kubernetes usage, so I have not thought deeply about the `docker run` case.
In OpenShift, specifying environment vars directly logs them in the pod's annotations as well as in the master's and node's logs (I am not sure about env variables set through `valueFrom`).
Any use of environment variables (docker or openshift) will expose the env on the host. See `ps e`. I think using volume mounts for credentials has far less potential for getting exposed.
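To make the exposure concrete, a quick illustration from the host; the image and variable names are made up:

```bash
# Start a container with a password in its environment:
docker run -d --name db -e POSTGRESQL_PASSWORD=s3cret postgresql

# On the host, the variable shows up in the process listing...
ps axeww | grep -o 'POSTGRESQL_PASSWORD=[^ ]*'

# ...and in the container's recorded configuration:
docker inspect --format '{{.Config.Env}}' db
```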
On Tuesday, September 27, 2016 8:20:09 AM CEST Tobias Florek wrote:
> Re the password change race: do you mean that the application might have the wrong password for a small amount of time? I don't see how that can be prevented without stopping one of the two containers. I think it's important to converge as quickly as possible to a working solution. Am I making sense?
Exactly. If the container got into such a situation, recovering it would be …
IMO, there just needs to be a "protocol" between the environment (Kubernetes) and the container. Putting something into a file and blindly expecting that the request is going to be handled is not enough.
I am not sure I am following. ConfigMaps are eventually consistent (and additionally, updates to them are atomic), so the latest inotify event will carry the latest change. If the inotify events get handled serially (that is, not concurrently), I still don't see how you could end up in a state that does not correct itself.
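A minimal sketch of that serial, converge-to-latest handling, assuming one directory per role under `/etc/credentials` with a `password` file inside (paths and the `psql` invocation are illustrative, and passwords are assumed to contain no single quotes):

```bash
#!/bin/bash
# Re-apply the current on-disk state after every change event. Because
# ALTER ROLE is idempotent and the latest file contents are always read,
# handling events serially converges to the newest password.
while inotifywait -qq -e create,moved_to,delete /etc/credentials; do
  for role_dir in /etc/credentials/*/; do
    role=$(basename "$role_dir")
    password=$(<"${role_dir}password")
    psql -c "ALTER ROLE \"${role}\" WITH PASSWORD '${password}';"
  done
done
```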
And (sorry, forgot): kube does not have any notion of confirmation regarding ConfigMap changes.
Care to comment on the WIP pull request #145?
I still don't see what happens if the request isn't correctly handled but the app thinks it is handled. That can happen because of a low-memory issue, a queue problem, or whatever other reason. When we talk about a transactional database, this looks like a really weak, best-effort approach.
What request? The password change? If so: if it is not handled correctly, the app won't be able to connect until the request is handled. That's the same thing that happens when resetting passwords in traditional deployments, so I don't quite see what you are after. I have no problem not doing that part, btw, so we can also agree to disagree here ;).
It would be great to create roles and set passwords for many accounts when starting the pod.
For many databases you have fine-grained permissions. It would be great to manage credentials in one place (one secret per database role), instead of having to set them once in the database and once in the application.
I propose that instead of hardcoding `/etc/credentials/pgmaster`, `/etc/credentials/pguser`, and `/etc/credentials/pgadmin`, we use every directory within `/etc/credentials`, create the role if it does not exist yet, and set the password.
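A hedged sketch of that startup logic, assuming each directory under `/etc/credentials` is named after a role and holds a `password` file (the layout and `psql` calls are illustrative):

```bash
#!/bin/bash
# For every mounted credential directory, ensure the role exists and
# (re)set its password. Passwords are assumed to contain no single quotes.
for role_dir in /etc/credentials/*/; do
  role=$(basename "$role_dir")
  password=$(<"${role_dir}password")
  # CREATE ROLE fails if the role already exists; ignore that and
  # unconditionally (re)set the password afterwards.
  psql -c "CREATE ROLE \"${role}\" LOGIN;" 2>/dev/null || true
  psql -c "ALTER ROLE \"${role}\" WITH PASSWORD '${password}';"
done
```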
If/when a solution to the downward API for secrets (kubernetes/kubernetes#18372) lands, we could/should set role passwords automatically. That still leaves the question of what to do with additions: that can only be fixed properly once one can mount new volumes (or at least secrets) into running Kubernetes pods, I suppose.
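With that in place, rotating a password would reduce to updating the secret; a sketch with made-up names (the kubelet propagates the change into the running pod's volume):

```bash
# Create a per-role secret once:
kubectl create secret generic pguser --from-literal=password=initial

# Rotate it later; the file mounted under /etc/credentials/ is updated
# in the running pod without a restart:
kubectl create secret generic pguser --from-literal=password=rotated \
    --dry-run -o yaml | kubectl apply -f -
```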