Fixes for Smart Columbus OS deployment #15

Open

kMaiSmith wants to merge 3 commits into master
Conversation

@kMaiSmith commented Jun 8, 2018

Two main changes needed to be made to this module to allow us to utilise these scripts in our architecture:

  • An AWS credentials profile needs to be able to be passed to the module (see the sketch after this list).
  • The DNS entry for the Kubernetes master needs to point to the private IP so that the slaves can connect, since our master isn't on a public address.
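
A minimal sketch of the first change, assuming the profile is exposed as a new `aws_profile` variable and threaded into the provider (the variable name, its default, and the `aws_region` variable are assumptions, not necessarily the PR's actual implementation):

```hcl
# Hypothetical sketch: expose a credentials profile variable and hand it to
# the AWS provider. With the default of "default", credentials supplied via
# environment variables (AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY) still
# take precedence over the shared-credentials-file profile, so env-var
# workflows keep working unchanged.
variable "aws_profile" {
  description = "AWS shared credentials profile used to deploy the cluster"
  default     = "default"
}

provider "aws" {
  region  = "${var.aws_region}"
  profile = "${var.aws_profile}"
}
```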

@kMaiSmith (Author) commented Jun 8, 2018

I didn't realise how extensive the changes made by the whitespace tool were; I can have the whitespace re-jiggering undone if desired.

@scholzj (Owner) commented Jun 8, 2018

I wouldn't mind the whitespace changes per se, but they make it quite hard to find the changes you have made to the code. Could you please revert them, or post a diff without them somewhere (as a gist or something like that)?

How does the default profile work for people who pass the credentials as environment variables (as I do)? Will it still work for me?

@kMaiSmith (Author) commented

Sure thing. You can also tack a ?w=1 onto the end of the URL to see only the non-whitespace changes (https://blog.github.com/2011-10-21-github-secrets/).
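
For example (OWNER/REPO are placeholders, not this repository's actual path): https://github.com/OWNER/REPO/pull/15/files?w=1 renders the diff with whitespace-only changes hidden.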

@scholzj (Owner) left a comment

Oh, I didn't know about this ?w=1 trick ... neat.

I do not like the change around the DNS record. I suggested another way, which should let others keep using the DNS record as before to talk to the master from outside over the public internet, while still working fine for bootstrapping the cluster through private IPs. Please have a look at the comments.

main.tf (Outdated)

```diff
@@ -392,6 +393,6 @@ resource "aws_route53_record" "master" {
   zone_id = "${data.aws_route53_zone.dns_zone.zone_id}"
   name = "${var.cluster_name}.${var.hosted_zone}"
   type = "A"
-  records = ["${aws_eip.master.public_ip}"]
+  records = ["${aws_instance.master.private_ip}"]
```
@scholzj (Owner):
This DNS record is actually supposed to be used for accessing the cluster from the outside (using kubectl from your PC, etc.), so this change breaks that. Moreover, it leaves the Elastic IP address unused.

What I think you should do is revert this change and either:

  • create a different hostname (e.g. something like internal.${var.cluster_name}.${var.hosted_zone})
  • use the new hostname to link with the nodes

or, maybe much easier and still fully sufficient:

  • use the internal DNS record / private IP in the init_node template

That should keep the DNS record working as it did before, while still making sure that the cluster links through the private IP addresses, which I agree should be much better ... even for people who have the public IP.
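
A minimal sketch of the suggested internal record, assuming a single master instance and the same zone as the existing `master` record (the resource name `master_internal` and the `ttl` value are illustrative assumptions):

```hcl
# Hypothetical sketch: leave the existing public record on the Elastic IP
# untouched and add a second, internal record that resolves to the master's
# private IP for in-VPC traffic.
resource "aws_route53_record" "master_internal" {
  zone_id = "${data.aws_route53_zone.dns_zone.zone_id}"
  name    = "internal.${var.cluster_name}.${var.hosted_zone}"
  type    = "A"
  ttl     = 300
  records = ["${aws_instance.master.private_ip}"]
}
```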

@kMaiSmith (Author):
That sounds a lot cleaner than this. Let me see what I can do.

```diff
+  zone_id = "${data.aws_route53_zone.dns_zone.zone_id}"
+  name = "internal.${var.cluster_name}.${var.hosted_zone}"
+  type = "A"
+  records = ["${aws_network_interface.master_private_interfaces.*.private_ips}"]
```
@scholzj (Owner):
Why do we need the network interface here? Why is it not sufficient to just use ${aws_instance.master.private_ip} for the internal hostname? AFAIK they should not need an additional network interface to see each other.
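
In other words, the record above would reduce to something like this (assuming a single master instance):

```hcl
# Hypothetical simplification: point the internal record straight at the
# master instance's private IP instead of enumerating interface IPs.
records = ["${aws_instance.master.private_ip}"]
```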

```diff
@@ -178,7 +179,7 @@ data "template_file" "init_master" {

   vars {
     kubeadm_token = "${module.kubeadm-token.token}"
-    dns_name = "${var.cluster_name}.${var.hosted_zone}"
+    dns_name = "internal.${var.cluster_name}.${var.hosted_zone}"
```
@scholzj (Owner):
I think we should pass both the internal and the external DNS names here. We should use both of them to generate the certificate SANs (in init-aws-kubernetes-master.sh, lines 89-91) and create a kubeconfig for each of them (same file, lines 107-114), so that you can pick the one with external or internal access. But I can do this while merging, if you want.
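
A rough sketch of what passing both names could look like (the `internal_dns_name` variable name and the template path are assumptions; the script would then add both values to the certificate SANs and render a kubeconfig per name):

```hcl
# Hypothetical sketch: hand both hostnames to the init_master template so
# init-aws-kubernetes-master.sh can put each of them into the certificate
# SANs and generate a kubeconfig for each.
data "template_file" "init_master" {
  template = "${file("${path.module}/scripts/init-aws-kubernetes-master.sh")}"

  vars {
    kubeadm_token     = "${module.kubeadm-token.token}"
    dns_name          = "${var.cluster_name}.${var.hosted_zone}"
    internal_dns_name = "internal.${var.cluster_name}.${var.hosted_zone}"
  }
}
```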

@Erokos commented May 18, 2021

Hi, is it possible to use this module without a DNS entry? I also want to have the master node in a private subnet...

@scholzj (Owner) commented May 18, 2021

@Erokos I think you can create a private DNS zone and use that. It does not need to be a public DNS zone with a public domain etc., but it does need DNS.
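
A minimal sketch of such a private hosted zone (the zone name and the `vpc_id` variable are assumptions; recent AWS provider versions associate the VPC through a `vpc` block):

```hcl
# Hypothetical sketch: a Route 53 private hosted zone, resolvable only from
# inside the associated VPC, so no public domain is needed.
resource "aws_route53_zone" "private" {
  name = "k8s.internal"

  vpc {
    vpc_id = "${var.vpc_id}"
  }
}
```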
