
Pod annotation to require a routing secret to be present #11

Open
AdamMagaluk opened this issue Jan 26, 2017 · 9 comments

Comments

@AdamMagaluk
Contributor

Consider the case where you have a namespace deployed with a routing secret and a routable pod, and you then delete the namespace using kubectl. The order in which resources are deleted matters.

In every case I've seen, Kubernetes deletes the secrets before removing the pod, and there is usually a slight delay (10-15s on minikube) during which the pod remains routable without a secret. I assume there are other cases where Kubernetes and the kubelet will perform operations on the separate resources in different orders and with different timings.

This seems like a potential avenue for abuse.

Would it make sense to add an optional annotation on the pod that tells dispatcher to expect a routing secret in the pod's namespace? If dispatcher does not already have the secret, the pod would not be routable.
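
Roughly what I'm picturing, as a Go sketch. The annotation key and the secret lookup here are placeholders, not final names:

```go
package main

import "fmt"

// Hypothetical annotation key; the actual name is up for discussion.
const requiresSecretAnnotation = "dispatcher/requires-routing-secret"

// routable reports whether dispatcher should route to a pod, given the
// pod's annotations and whether dispatcher currently holds the routing
// secret for the pod's namespace.
func routable(annotations map[string]string, haveNamespaceSecret bool) bool {
	if annotations[requiresSecretAnnotation] == "true" && !haveNamespaceSecret {
		return false // pod opted in and the secret is missing: don't route
	}
	return true
}

func main() {
	opted := map[string]string{requiresSecretAnnotation: "true"}
	fmt.Println(routable(opted, false)) // false: secret required but absent
	fmt.Println(routable(opted, true))  // true: secret present
	fmt.Println(routable(nil, true))    // true: pod didn't opt in
}
```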

@whitlockjc @mpnally @jbowen93 @noahdietz ?

@jbowen93

Would it be a good idea to have a configuration variable on dispatcher itself that requires all the pods it routes to have a secret, so that pods without one aren't routed?

@AdamMagaluk
Contributor Author

The only downside to that is if there are pods that do not require secrets and others that do. I'm not sure whether that's the case with our shipyard deployment or not. I know it would cause issues with my local testing on minikube, though that's easy to work around.

@jbowen93

I agree it could be a pain for local testing. I'm curious if there is any legitimate production use case where only some pods in a namespace would be "secured". If there's such a case then your suggestion should be easy to implement. Let me know and I'll make sure enrober matches dispatcher's spec.

@AdamMagaluk
Contributor Author

Do enrober and kiln get public routing keys on their namespaces in production? I don't see anything about it in the deployment file, but then again I don't see anything about creating namespaces there either.

@AdamMagaluk
Contributor Author

We could also easily support both options: a flag on the dispatcher runtime requiring that all pods in a namespace have a secret in order to be routed, and an annotation that enables the same logic on specific pods when that flag is not set.
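
A rough sketch of how the two could compose; the flag's environment variable and the annotation key are placeholders:

```go
package main

import (
	"fmt"
	"os"
)

// Hypothetical annotation key, matching the earlier sketch.
const requiresSecretAnnotation = "dispatcher/requires-routing-secret"

// shouldRoute combines the two options: a cluster-wide flag that makes every
// pod require a routing secret, and a per-pod annotation that opts an
// individual pod in when the flag is not set.
func shouldRoute(requireAll bool, annotations map[string]string, haveNamespaceSecret bool) bool {
	required := requireAll || annotations[requiresSecretAnnotation] == "true"
	return !required || haveNamespaceSecret
}

func main() {
	// Hypothetical environment variable for the cluster-wide flag.
	requireAll := os.Getenv("REQUIRE_ROUTING_SECRET") == "true"

	opted := map[string]string{requiresSecretAnnotation: "true"}
	fmt.Println(shouldRoute(requireAll, nil, false)) // false only if the flag is set
	fmt.Println(shouldRoute(false, opted, false))    // false: annotation set, secret missing
	fmt.Println(shouldRoute(false, nil, false))      // true: nothing requires the secret
}
```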

@jbowen93

In production I don't believe kiln or enrober are "secured" behind a routing API key, though I may be wrong. My understanding was that their security is enforced through the use of 'authsdk' on all calls. Unfortunately I don't have my Apigee laptop with me, so I can't actually check their state on the cluster.

@whitlockjc
Contributor

But if dispatcher is watching secrets, won't it rebuild the configuration?

@whitlockjc
Contributor

@jbowen93: You're right, nothing about the key makes kiln or enrober any more or less secure. Since both use Edge authentication/authorization, it doesn't matter.

@AdamMagaluk
Contributor Author

@whitlockjc Yes, it is watching secrets and will rebuild the config once it has the secret, but there is no guarantee the secret will be available when the pod is available, potentially leaving the pod exposed for a short while.

I was able to see about a 15-second delay when deleting a namespace: the kubelet first deletes the secret but takes some time to spin down the pod.
