
fix #9: use secret for private key certs during bootstrap #21

Open
xunleii wants to merge 1 commit into master from xunleii:fix-9

Conversation

xunleii commented Jul 16, 2020

Because private root keys must be "securely" stored, it is more appropriate to store them in a Secret instead of a plain ConfigMap. Furthermore, this allows us to use features like Encrypting Secret Data at Rest, or secrets encryption on k3s, to protect them a bit more.
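For reference, encryption at rest only applies to the resource types listed in the API server's EncryptionConfiguration, which is why this only helps once the keys live in a Secret. A minimal sketch of that configuration (the key name and value below are placeholders, not part of this PR):

```yaml
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets            # only Secrets are encrypted; ConfigMaps are not listed
    providers:
      - aescbc:
          keys:
            - name: key1                                   # placeholder key name
              secret: PGJhc2U2NCAzMi1ieXRlIGtleT4=         # placeholder base64 key
      - identity: {}       # fallback for reading existing unencrypted data
```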

This PR should also fix issue #9 (I think).

⚠️ This PR can break things if someone else imports these private keys as ConfigMaps.


NB: certificates are not critical, but it is good practice to store them in a Secret too. If you want, I can modify my PR to do that. (I could also use TLS secrets to store the root CA and intermediate CA, like other certificates on Kubernetes, but that requires more changes and can break existing installs that import the certificates.)

maraino (Collaborator) commented Jul 16, 2020

Hi @xunleii, this PR makes total sense to me; I'm not worried about somebody else importing those keys as ConfigMaps. But I'm a bit more worried about helm upgrade breaking.

Do you know if it would be possible to support both cases? I don't see an easy way without running a script or requiring a --set or a --values flag.

Storing certs as TLS resources would also be cool; the main issue is, again, helm upgrade.

The root and intermediate keys are both encrypted using a password stored in a secret, so the risk of using a configmap or a secret is similar.

maraino self-requested a review July 16, 2020 22:33
xunleii (Author) commented Jul 17, 2020

Thanks @maraino for your quick response. I had forgotten that the private keys are encrypted.
I will see if I can find a solution for helm upgrade that doesn't break anything.

xunleii (Author) commented Jul 17, 2020

After some tries, the only way I found to handle this PR without breaking helm upgrade is a Chart Hooks pre-upgrade Job that copies the ConfigMaps into the new secrets... and to me it makes no sense to do that just because it is more convenient to store private keys inside a Secret (and, as you said, the private keys are encrypted using a password stored in a secret).

However, I don't use Helm every day, and other solutions probably exist.

EDIT: Another way to avoid a break is to mount the ConfigMaps and secrets as a projected volume, which aggregates them into a single volume, with the optional field enabled (so if the Secret or the ConfigMap does not exist, nothing blocks). I will try this later.
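A sketch of that idea, assuming the resource name used elsewhere in this thread (step-certificates-secrets, for both the ConfigMap and the Secret; the volume name is illustrative):

```yaml
volumes:
  - name: secrets
    projected:
      sources:
        - configMap:
            name: step-certificates-secrets   # present on pre-existing installs
            optional: true                    # don't block the pod if it is missing
        - secret:
            name: step-certificates-secrets   # populated on fresh installs
            optional: true
```

Both sources are merged into a single directory in the pod, so whichever resource actually holds the keys wins without any migration step.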

xunleii (Author) commented Jul 17, 2020

So projected volumes work well in this case:

  • if a new user installs this chart, the secret is populated with the private keys and mounted inside the pod;
  • if an existing user upgrades the chart, the existing ConfigMap keeps being used.

It can easily be tested with the following steps (requires git, kind, jq, step, and helm):

# --- Initialize testing environment
kind create cluster --name "smallstep.pr21"
alias kubectl="kubectl --context kind-smallstep.pr21"

cd $(mktemp -d)
git clone https://github.com/xunleii/helm-charts.git
cd helm-charts


# --- Try HELM upgrade
# install step-certificates
helm --kube-context kind-smallstep.pr21 install step-certificates ./step-certificates

# wait until all pods are running
kubectl get secret step-certificates-secrets # must fail (the secret does not exist yet)

# get root certificates for post-upgrade tests
kubectl port-forward svc/step-certificates 8443:443 > /dev/null &
export kpf_pid=$! && sleep 1

step ca root root_ca.crt -f --ca-url https://127.0.0.1:8443 --fingerprint $(kubectl get configmaps step-certificates-config -ojson | jq '.data["defaults.json"]' -r | jq '.fingerprint' -r)
step ca health --ca-url https://127.0.0.1:8443 --root root_ca.crt # must be ok
kill ${kpf_pid}

# checkout to this PR & upgrade step-certificates
git checkout fix-9
helm --kube-context kind-smallstep.pr21 upgrade step-certificates ./step-certificates

# wait until all pods are running
kubectl get secret step-certificates-secrets # must not fail because the secret is now created... but it must be empty

# check if something has changed
kubectl port-forward svc/step-certificates 8443:443 > /dev/null &
export kpf_pid=$! && sleep 1

step ca health --ca-url https://127.0.0.1:8443 --root root_ca.crt # must be ok
kill ${kpf_pid}

# clean environment
helm --kube-context kind-smallstep.pr21 uninstall step-certificates


# --- Try HELM install
helm --kube-context kind-smallstep.pr21 install step-certificates ./step-certificates

# wait until all pods are running
kubectl get secret step-certificates-secrets # must not fail, and must not be empty
kubectl get configmap step-certificates-secrets # must not fail, but must be empty

# get root certificates for post-upgrade tests
kubectl port-forward svc/step-certificates 8443:443 > /dev/null &
export kpf_pid=$! && sleep 1

step ca root root_ca.crt -f --ca-url https://127.0.0.1:8443 --fingerprint $(kubectl get configmaps step-certificates-config -ojson | jq '.data["defaults.json"]' -r | jq '.fingerprint' -r)
step ca health --ca-url https://127.0.0.1:8443 --root root_ca.crt # must be ok
kill ${kpf_pid}


# --- Clean test environment
kind delete cluster --name smallstep.pr21

If you want, I can try to do the same thing with TLS secrets instead (with the cert and key inside).

maraino (Collaborator) commented Jul 17, 2020

Wow @xunleii, I didn't know about that. It looks promising; I'll give it a try and let you know.

maraino (Collaborator) commented Jul 22, 2020

@xunleii I'll try to review it this week.

maraino (Collaborator) commented Jul 23, 2020

@xunleii I've just tested the upgrade and it didn't work, then I noticed that you reverted the changes:

xunleii force-pushed the xunleii:fix-9 branch from 28de33e to cd0fe76 5 days ago

maraino (Collaborator) commented Jul 23, 2020

If there's something weird going on with the projected volumes we can update the bootstrapper to take care of converting configmaps to secrets if the secrets are empty.

xunleii (Author) commented Jul 30, 2020

After some tests, I think it would be easier to use a job to take care of converting ConfigMaps to secrets. I also think an extra job would be better than using the bootstrapper: the bootstrapper is only called on installation, not during upgrade. What do you think about a job using the Chart Hooks pre-upgrade hook?
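A sketch of what such a hook Job could look like; the Job name, image, service account, and migration script path are all illustrative, not part of this PR:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: step-certificates-migrate          # illustrative name
  annotations:
    "helm.sh/hook": pre-upgrade            # runs before the upgraded resources are applied
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      serviceAccountName: step-certificates       # needs RBAC to read the ConfigMap and write the Secret
      containers:
        - name: migrate
          image: bitnami/kubectl                  # any image shipping kubectl; illustrative choice
          command: ["/bin/sh", "-c", "/scripts/copy-configmaps-to-secrets.sh"]  # hypothetical script
```

With hook-succeeded as the delete policy, Helm removes the Job after it completes, so it does not linger between upgrades.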

maraino (Collaborator) commented Jul 30, 2020

@xunleii Sure, it can be a separate job; it might be cleaner than doing it in the bootstrap.

maraino (Collaborator) commented Jul 30, 2020

Make sure to add a variable to disable it. Would it be possible to disable it by looking at the version you are upgrading from? Is that value available?
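A sketch of such a toggle (the values key is hypothetical). As far as I know, Helm templates do not expose the chart version being upgraded from (only the current `.Chart.Version` and `.Release.IsUpgrade`), so an explicit flag in values.yaml is the simplest option:

```yaml
# values.yaml (hypothetical key)
configmapMigration:
  enabled: true    # set to false to skip the pre-upgrade migration job

# templates/migrate-job.yaml: guard the whole Job manifest with the flag
{{- if .Values.configmapMigration.enabled }}
# ... Job manifest here ...
{{- end }}
```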

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.
You have signed the CLA already but the status is still pending? Let us recheck it.
