The Cluster API AWS Bootstrapper (or capa-bootstrap) is a Terraform-based tool for quickly spinning up a K3s-backed cluster on a single EC2 instance, with Cluster API and Cluster API Provider AWS (CAPA) installed and ready to use.
WARNING: This is NOT meant to be production-ready! The single EC2 instance is susceptible to downtime if the node fails, and data loss is expected if the node is recreated for any reason.
Prerequisites:

- Terraform (>= 1.0.0)
- kubectl installed on your local machine.
- AWS credentials via an access key and a secret key (optionally a session token for multi-factor auth), with permissions to create EC2 instances.
- An SSH key already present in the AWS account for CAPA to use for node access.
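If the AWS account doesn't have an SSH key registered yet, one can be imported with the AWS CLI. A minimal sketch, assuming a configured AWS CLI and using `capa-quickstart-key` as a placeholder key name:

```sh
# Generate a local key pair (skip if you already have one)
ssh-keygen -t rsa -b 4096 -f ./capa-quickstart-key -N ""

# Register the public key in the target region (key name and region are placeholders)
aws ec2 import-key-pair \
  --region us-east-1 \
  --key-name capa-quickstart-key \
  --public-key-material fileb://./capa-quickstart-key.pub
```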
Requirements:

| Name | Version |
|------|---------|
| terraform | >= 1.0.0 |
| aws | ~> 5.30 |
| local | ~> 2.4 |
| tls | ~> 4.0 |
Providers:

| Name | Version |
|------|---------|
| aws | 5.42.0 |
| local | 2.5.1 |
| tls | 4.0.5 |
Modules:

| Name | Source | Version |
|------|--------|---------|
| capa | ./modules/capa | n/a |
Resources:

| Name | Type |
|------|------|
| aws_instance.capa_server | resource |
| aws_key_pair.capa_bootstrap_key_pair | resource |
| aws_security_group.capa_bootstrap_sg_allowall | resource |
| local_file.ssh_public_key_openssh | resource |
| local_sensitive_file.ssh_private_key_pem | resource |
| tls_private_key.global_key | resource |
| aws_ami.latest_ubuntu | data source |
Inputs:

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| ami_id | AMI ID to use for the management EC2 instance | string | "" | no |
| aws_access_key | AWS access key used for account access | string | n/a | yes |
| aws_region | AWS region used for all resources | string | "us-east-1" | no |
| aws_secret_key | AWS secret key used for account access | string | n/a | yes |
| aws_session_token | AWS session token for account access (if using MFA) | string | "" | no |
| capa_version | Cluster API Provider AWS version (format: v0.0.0) | string | "v2.4.1" | no |
| capi_version | Cluster API version (format: v0.0.0) | string | "v1.6.3" | no |
| experimental_features | List of experimental CAPI features to enable, e.g. `["EXP_CLUSTER_RESOURCE_SET: true"]` | list(string) | [] | no |
| instance_type | Instance type used for all EC2 instances | string | "m5a.large" | no |
| k3s_kubernetes_version | Kubernetes version to use for k3s management cluster | string | "v1.29.2+k3s1" | no |
| prefix | Prefix added to names of all resources | string | "superorbital-quickstart" | no |
Outputs:

| Name | Description |
|------|-------------|
| capa_node_ip | Public IP address of the CAPA management node. Useful for SSH purposes. |
Usage:

- Provide the AWS authentication data for the configuration:
  - Via a tfvars file:
    - Create a copy of the example .tfvars file provided in the root directory of this repo, and name it `terraform.tfvars`.
    - Fill the `aws_access_key` and `aws_secret_key` variables with appropriate key data, and modify any other variables from their defaults as you see fit (see the sketch below).
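    A minimal `terraform.tfvars` sketch, with placeholder values only (the variable names come from the Inputs table above):

    ```sh
    cat > terraform.tfvars <<'EOF'
    aws_access_key = "<YOUR ACCESS KEY>"
    aws_secret_key = "<YOUR SECRET KEY>"
    aws_region     = "us-east-1"
    prefix         = "my-quickstart"
    EOF
    ```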
  - Via environment variables:
    - Export the following variables in your terminal:

      ```sh
      export TF_VAR_aws_access_key=<YOUR ACCESS KEY>
      export TF_VAR_aws_secret_key=<YOUR SECRET KEY>
      ```

    - Export any other variables that you'd like to modify, using the prefix `TF_VAR_` before the variable name, as in the sketch below.
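      For example, to override the region and enable an experimental CAPI feature gate (values shown are illustrative; list values are passed to Terraform as HCL-encoded strings):

      ```sh
      export TF_VAR_aws_region=us-west-2
      export TF_VAR_experimental_features='["EXP_CLUSTER_RESOURCE_SET: true"]'
      ```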
- Perform a `terraform init` followed by a `terraform apply`. If using a tfvars file, execute `terraform apply -var-file=<YOUR TFVARS FILE>` instead. When performing an apply, you should see the following output:

  ```
  $ terraform apply
  data.aws_ami.latest_ubuntu: Reading...
  data.aws_ami.latest_ubuntu: Read complete after 0s [id=ami-0557a15b87f6559cf]

  Terraform used the selected providers to generate the following execution plan.
  Resource actions are indicated with the following symbols:
    + create

  Terraform will perform the following actions:

    # aws_instance.capa_server will be created
    + resource "aws_instance" "capa_server" {
    ...
  ```

  At the prompt, type "yes" to allow Terraform to create your infrastructure.
- After Terraform has finished creating your infrastructure, you should see the following output:

  ```
  Apply complete! Resources: 10 added, 0 changed, 0 destroyed.

  Outputs:

  capa_node_ip = <IP ADDRESS>
  ```

  This IP address is the address of the EC2 instance where your management cluster is located. You can start interacting with it using the `capa-management.kubeconfig` file that Terraform has created in the directory where the apply was run:

  ```
  $ kubectl --kubeconfig capa-management.kubeconfig get nodes
  NAME               STATUS   ROLES                  AGE    VERSION
  ip-172-31-22-202   Ready    control-plane,master   5m5s   v1.25.5+k3s2

  $ kubectl --kubeconfig capa-management.kubeconfig get pods -A
  NAMESPACE                           NAME                                                            READY   STATUS      RESTARTS   AGE
  cert-manager                        cert-manager-cainjector-d9bc5979d-krs7g                         1/1     Running     0          4m55s
  kube-system                         local-path-provisioner-79f67d76f8-gn448                         1/1     Running     0          4m55s
  kube-system                         coredns-597584b69b-np8z2                                        1/1     Running     0          4m55s
  cert-manager                        cert-manager-74d949c895-hqdrt                                   1/1     Running     0          4m55s
  cert-manager                        cert-manager-webhook-84b7ddd796-pstcd                           1/1     Running     0          4m54s
  kube-system                         helm-install-traefik-crd-7j9fk                                  0/1     Completed   0          4m56s
  kube-system                         metrics-server-5f9f776df5-lt6vs                                 1/1     Running     0          4m55s
  kube-system                         svclb-traefik-005a4e24-m9cdb                                    2/2     Running     0          4m32s
  kube-system                         traefik-66c46d954f-hzsdz                                        1/1     Running     0          4m33s
  kube-system                         helm-install-traefik-n7k79                                      0/1     Completed   1          4m56s
  capi-system                         capi-controller-manager-7847d5c678-wkwf6                        1/1     Running     0          4m32s
  capi-kubeadm-bootstrap-system       capi-kubeadm-bootstrap-controller-manager-64fcd4ff4d-tsj4l      1/1     Running     0          4m21s
  capi-kubeadm-control-plane-system   capi-kubeadm-control-plane-controller-manager-6f86697dc-47lcb   1/1     Running     0          4m17s
  capa-system                         capa-controller-manager-57666b88f6-lr726                        1/1     Running     0          4m9s
  ```
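Two optional conveniences, using standard Terraform and kubectl behavior (nothing here is specific to this repo):

```sh
# Print the management node IP again later without re-running apply
terraform output capa_node_ip

# Point kubectl at the management cluster without repeating --kubeconfig
export KUBECONFIG=$PWD/capa-management.kubeconfig
kubectl get nodes
```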
To create CAPA-managed clusters, you can follow the quickstart instructions provided in the Cluster API documentation.
Alternatively, you can use the sample YAML files provided in the examples directory, modifying some of the values to create either a CAPA-managed cluster with complete visibility of the control plane, or a CAPA-managed EKS cluster where the control plane is managed by AWS. A sketch of that flow follows.
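The apply-and-watch flow with one of those examples might look like this; `examples/cluster.yaml` is a placeholder for whichever manifest you copied and edited:

```sh
# Create the workload cluster from the edited example manifest
kubectl --kubeconfig capa-management.kubeconfig apply -f examples/cluster.yaml

# Watch the Cluster resource until it reports Provisioned
kubectl --kubeconfig capa-management.kubeconfig get clusters -A -w
```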
WARNING: Clusters can be created by `kubectl apply`'ing the YAML files in the examples, but you should always clean up CAPA clusters by deleting the Cluster object from the management cluster:

```
kubectl delete cluster -n <CAPA CLUSTER NAMESPACE> <CAPA CLUSTER NAME>
```
If the cluster is instead deleted by pointing `kubectl delete` at the YAML file, the controllers won't be able to clean up resources properly and could leave lingering objects in the cluster and in the AWS account that will need to be removed manually.
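To confirm that teardown has actually finished before you tear down the management node, one option is to wait for the Cluster object to disappear, for example:

```sh
# Blocks until the CAPA controllers have removed the cluster and its AWS resources
kubectl --kubeconfig capa-management.kubeconfig wait --for=delete \
  cluster/<CAPA CLUSTER NAME> -n <CAPA CLUSTER NAMESPACE> --timeout=30m
```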
If needed, the node where the management cluster is installed is accessible via SSH with the private key `id_rsa` created for the EC2 instance:

```
ssh -i id_rsa ubuntu@<NODE IP ADDRESS>
```