Enhance cluster-wide known hosts support #34
Sounds like an interesting approach that would also lower the chance of a DDoS of the external services, although I would leave the current approach as a possibility. Going with an all/group/host approach, similar to how other lists are split in DebOps, also sounds reasonable; I would keep the current scanning mechanism available as well.
Thanks for the feedback. I will look into this when I have time. For now this issue is open for discussion and so that the idea is not forgotten 😉
I am not quite getting it. Can you explain?
I totally agree. The current implementation can still be used for non-DebOps hosts.
Sounds good. No problem with that 😉
As for the DDoS: if you have a number of hosts and you want to add, say, the same external service to the known hosts on all of them, each host will scan that service on its own, multiplying the connections made to it.
Ah, now I am getting it :) Yes, that could also be optimized, maybe by letting the Ansible controller do the scanning once.
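A minimal sketch of that optimization, assuming a hypothetical `external_known_hosts` list variable: the controller scans each service once, and the registered results could then be distributed to the remote hosts.

```yaml
# Hedged sketch: scan each external service once, on the Ansible controller,
# instead of from every remote host. 'external_known_hosts' is illustrative.
- name: Scan external services once from the controller
  command: ssh-keyscan {{ item }}
  register: scanned_keys
  changed_when: False
  delegate_to: localhost
  run_once: True
  loop: '{{ external_known_hosts | d([]) }}'
```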
That might be an issue when the Ansible controller and the remote hosts use different sets of DNS nameservers, potentially seeing different sets of hosts. How likely is that? I'm not sure.
Something to think about. I guess the best way to make this reliable still needs some thought.
Seems that recent versions of Ansible (noticed in v2.4, but probably introduced earlier) expose the public host keys as facts (ansible_ssh_host_key_*_public). Something like this could be done easily. This way, you only have to ensure that you have an authentic connection from the Ansible controller to all nodes (which you will need to verify manually anyway), and then you can distribute the trust relationship to the nodes. Also, I updated my initial comment from 2016-05-15. I don't think anymore that a custom file below secret/ is needed.
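A minimal sketch of that idea, assuming the `ansible_ssh_host_key_ed25519_public` fact is gathered for every host in the play (the destination path and the restriction to ed25519 keys are illustrative):

```yaml
# Hedged sketch: build a system-wide known hosts file on every node from the
# host key facts Ansible gathers; no network scanning is involved.
- name: Distribute host keys collected from Ansible facts
  copy:
    dest: '/etc/ssh/ssh_known_hosts'
    content: |
      {% for host in ansible_play_hosts_all %}
      {{ hostvars[host].ansible_fqdn }} ssh-ed25519 {{ hostvars[host].ansible_ssh_host_key_ed25519_public }}
      {% endfor %}
```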
This would also allow us to easily rotate/regenerate host keys without manually rechecking them. I have this in my playbook for bootstrapping hosts based on a VM template: the template has a known fingerprint, which I check after copying the template; then I delete the host keys and regenerate them. Rotating host keys is also supported by ssh itself via the UpdateHostKeys client option.
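A minimal sketch of such a bootstrap step, under the assumption of Debian-style key paths and an `ssh` service name:

```yaml
# Hedged sketch: drop the host keys inherited from the VM template and let
# ssh-keygen -A generate a fresh set of all default key types.
- name: Remove host keys inherited from the VM template
  shell: rm -f /etc/ssh/ssh_host_*

- name: Regenerate all default host keys
  command: ssh-keygen -A

- name: Restart sshd so the new keys are served
  service:
    name: ssh
    state: restarted
```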
The template you linked to is interesting, especially that Ansible provides these facts as built-in, so they are always available during Ansible execution. However, instead of templating the whole file, I would go with managing individual entries, for example via the known_hosts module. This way we don't need to keep any state anywhere, and the known hosts list on each remote host can be expanded/updated during normal Ansible operation - just run the playbook against the hosts again.
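A minimal sketch of the per-entry variant, using the known_hosts module with the same facts (the path and key type are again illustrative):

```yaml
# Hedged sketch: manage one known hosts entry per play host instead of
# templating the whole file; re-running the play updates changed keys.
- name: Add each play host to the system-wide known hosts file
  known_hosts:
    path: '/etc/ssh/ssh_known_hosts'
    name: '{{ hostvars[item].ansible_fqdn }}'
    key: '{{ hostvars[item].ansible_fqdn }} ssh-ed25519 {{ hostvars[item].ansible_ssh_host_key_ed25519_public }}'
    state: present
  loop: '{{ ansible_play_hosts_all }}'
```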
I would not hash them, ref: http://blog.joeyhewitt.com/2013/12/openssh-hashknownhosts-a-bad-idea/. I disabled it myself, ref: https://github.com/ypid/dotfiles/blob/master/ssh/config_ypid_defaults#L19-L20. I would still argue to keep state on the Ansible controller; it needs/has this anyway, and then it could also be used to rotate host keys, see my comment about host key rotation above.
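For illustration, keeping hashing disabled could itself be enforced from Ansible; a minimal sketch against the system-wide client config (path assumed):

```yaml
# Hedged sketch: make sure HashKnownHosts stays off in the ssh client config,
# in line with the linked recommendation.
- name: Disable hashing of known hosts entries
  lineinfile:
    path: '/etc/ssh/ssh_config'
    regexp: '^\s*#?\s*HashKnownHosts\b'
    line: 'HashKnownHosts no'
```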
I suspect that we are now talking about slightly different things.
In any case, even if you keep the list of SSH fingerprints in a file on the Ansible Controller, you still need to connect to all cluster hosts at least once to update it. And if you need to refresh the list because one of the cluster nodes changed, you still need to connect to all of them to keep the list consistent. Therefore doing it via the facts gathered during normal playbook runs seems simpler. I'm suddenly reminded about keeping the SSH host fingerprints in DNS (SSHFP records), which would alleviate the above need to keep the list in a file at all.
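A minimal sketch of the DNS direction, assuming ssh-keygen -r for record generation; actually publishing the records in the zone (and clients enabling VerifyHostKeyDNS) is out of scope here:

```yaml
# Hedged sketch: generate SSHFP resource records for each host; the output
# still has to be published in the host's DNS zone.
- name: Generate SSHFP records for this host
  command: ssh-keygen -r {{ ansible_fqdn }}
  register: sshfp_records
  changed_when: False

- name: Show the records that need to be published in DNS
  debug:
    var: sshfp_records.stdout_lines
```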
All right, simplicity probably speaks for your solution, feel free to give it a try. I agree that a separate role would be good.
Not directly. I had the idea of a unified approach.
The idea is that you have your known hosts file on the Ansible controller anyway and it could be reused for this.
Currently, the role allows specifying a list of FQDNs to scan over the network for their public host keys. Each host does this on its own. This is nice, as the public host keys are saved once scanned.
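For reference, the current per-host scanning boils down to something like the following (the list variable name is illustrative, not the role's actual one):

```yaml
# Rough sketch of the existing behaviour: every remote host scans the listed
# FQDNs itself and appends the results; note the append is not idempotent.
- name: Scan listed hosts and record their public host keys
  shell: ssh-keyscan {{ item }} >> /etc/ssh/ssh_known_hosts
  loop: '{{ known_hosts_scan_list | d([]) }}'
```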
But we can/should do better, because scanning over the network is vulnerable to MITM attacks, and having every host scan on its own multiplies the connections to the scanned services.
I would propose that the public host key fingerprints get captured on the Ansible controller. They could be saved under secret/sshd/{{ ansible_domain }}/{{ ansible_fqdn }}/public_host_key_fingerprint, using the default OpenSSH host key file format. This would allow distributing all known (and validated) fingerprints to all hosts in the same domain by default, and keeping them up-to-date. An alternative could be to use the already set up PKI for sshd as well, but I have no idea how well supported that is. I think the approach with ssh public host keys would still be worth implementing.
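A minimal sketch of the capture step, assuming a `secret_root` variable pointing at the secret/ directory on the controller and an ed25519 key (both illustrative):

```yaml
# Hedged sketch: pull each host's public host key onto the Ansible controller
# into the proposed secret/ layout, where it can be validated and reused.
- name: Save the host's public key on the Ansible controller
  fetch:
    src: '/etc/ssh/ssh_host_ed25519_key.pub'
    dest: '{{ secret_root }}/sshd/{{ ansible_domain }}/{{ ansible_fqdn }}/public_host_key_fingerprint'
    flat: True
```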
Things to check out: