
# Exercising with "kubernetes the hard way"

This repository is an Outscale implementation of kubernetes the hard way.

It uses Outscale's Terraform provider combined with Ansible.

Its purpose is mainly to play with Kubernetes, Terraform and Ansible on the Outscale cloud.

Note that this project only follows the tutorial and has a number of limitations, such as:

- Service IPs are only reachable from worker nodes
- No Ingress Controller is installed
- Storage management is not available (no CSI)

## Architecture

The Kubernetes cluster is deployed inside a Net with two Subnets:

- One subnet for control-plane nodes (3 VMs by default)
- One subnet for worker nodes (2 VMs by default)

Additional services deployed:

- A load balancer distributes the Kubernetes API traffic across control-plane nodes.
- A NAT service is created to provide internet access to workers.
- Each control-plane node has a public IP and can be used as a bastion host to access worker nodes.
- The cloud controller manager (CCM) can be enabled in order to run Services of type LoadBalancer.

## Prerequisite

## Configuration

```bash
export TF_VAR_access_key_id="myaccesskey"
export TF_VAR_secret_key_id="mysecretkey"
export TF_VAR_region="eu-west-2"
```

By editing `terraform.tfvars`, you can adjust the number of nodes, the Kubernetes version, whether to enable the CCM, and so on. Depending on your operating system, you may have to adapt the `terraform_os` and `terraform_arch` variables.
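For reference, a minimal `terraform.tfvars` sketch is shown below. Apart from `with_cloud_provider`, `terraform_os` and `terraform_arch`, which appear elsewhere in this README, the variable names are illustrative assumptions; check `variables.tf` for the names actually used by this repository.

```hcl
# Illustrative values only; variable names other than with_cloud_provider,
# terraform_os and terraform_arch are assumptions.
control_plane_count = 3        # hypothetical name for the control-plane node count
worker_count        = 2        # hypothetical name for the worker node count
kubernetes_version  = "1.23.0" # hypothetical name and value, adjust as needed
with_cloud_provider = false    # set to true to enable the CCM (see note below)
terraform_os        = "linux"
terraform_arch      = "amd64"
```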

Note about the CCM: due to a metadata bug (to be fixed), you have to enable it in two passes:

1. Run `terraform apply`.
2. In `terraform.tfvars`, set `with_cloud_provider` to `true`.
3. In `workers.tf`, uncomment the `"OscK8sClusterID"` tag in the `outscale_vm` resource.
4. Run `terraform apply` again.
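Put together, the shell side of those two passes looks roughly like this (the edits to `terraform.tfvars` and `workers.tf` are done by hand in between):

```bash
# First pass: create the infrastructure without the CCM.
terraform apply

# ...edit terraform.tfvars (with_cloud_provider = true) and uncomment the
# "OscK8sClusterID" tag in workers.tf...

# Second pass: apply again so the CCM and the tag are taken into account.
terraform apply
```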

For macOS:

```hcl
terraform_os   = "darwin"
terraform_arch = "amd64"
```
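On Apple Silicon Macs you can try the arm64 build instead (this assumes the Terraform version used by the project publishes a `darwin_arm64` release):

```hcl
terraform_os   = "darwin"
terraform_arch = "arm64"
```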

## Deploy

Terraform deploys all IaaS components and then runs Ansible playbooks to set up the nodes.

```bash
terraform init
terraform apply
```

This should take a few minutes to complete, time for a break.
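In case you are curious how Terraform can drive Ansible, the usual pattern is a `local-exec` provisioner that calls `ansible-playbook` once the instances exist. The snippet below is a generic sketch of that pattern, not necessarily this repository's exact wiring; the inventory and playbook names are made up.

```hcl
# Generic sketch: re-run the Ansible playbook whenever the inventory changes.
# "inventory" and "site.yml" are hypothetical file names.
resource "null_resource" "configure_nodes" {
  triggers = {
    inventory = filemd5("${path.module}/inventory")
  }

  provisioner "local-exec" {
    command = "ansible-playbook -i ${path.module}/inventory ${path.module}/site.yml"
  }
}
```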

## Connect to nodes

To connect to a worker node:

```bash
ssh -F ssh_config worker-0
```

To connect to a control-plane node and list all worker nodes:

```bash
ssh -F ssh_config control-plane-0
kubectl get nodes
```

Note that worker nodes may take a few seconds to register with Kubernetes.
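If you prefer to wait for registration to finish rather than re-running `kubectl get nodes`, a standard `kubectl wait` does the job (the timeout value is arbitrary):

```bash
# Block until every node reports the Ready condition.
kubectl wait --for=condition=Ready node --all --timeout=180s
```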

## Smoke Test

Smoke testing our newly created Kubernetes cluster can be done very similarly to kubernetes-the-hard-way. Note that workers have no public IP, so you can test a NodePort service from one of the control-plane nodes, as sketched below.
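A minimal NodePort check, run from a control-plane node, could look like the following. The nginx image and the `worker-0` node name are assumptions taken from the examples above; any image and worker node will do.

```bash
# Deploy a throwaway nginx and expose it as a NodePort service.
kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port 80 --type NodePort

# Fetch the allocated node port and a worker's internal IP, then hit it.
NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
WORKER_IP=$(kubectl get node worker-0 -o jsonpath='{.status.addresses[?(@.type=="InternalIP")].address}')
curl -I "http://${WORKER_IP}:${NODE_PORT}"
```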

You can also deploy a Service of type LoadBalancer by setting `with_example_2048 = true` (this requires `with_cloud_provider` to be enabled as well). You can get the load-balancer URL through `kubectl get service -n 2048`.
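For example, assuming the 2048 example creates a single service in the `2048` namespace, the load-balancer hostname can be extracted with a jsonpath query:

```bash
# List the example's services, then pull out just the load-balancer hostname.
kubectl get service -n 2048
kubectl get service -n 2048 -o jsonpath='{.items[0].status.loadBalancer.ingress[0].hostname}'
```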

## Cleaning Up

Just run `terraform destroy`.

Alternatively, you can clean up your resources manually if something goes wrong:

- Connect to the Cockpit interface.
- Go to VPC->VPCs, select the created VPC, click the "Teardown" button and confirm.
- Go to Network/Security->Keypairs and delete the keypairs created for each node (3 control-planes and 2 workers by default).
- Go to Network/Security->External IPs and delete the EIPs created for each control-plane (3 by default).

## Contributing

Feel free to report an issue.