EKS Cluster bootstrap with Terraform for Crossplane

This example deploys the following components:

  • Creates a new sample VPC with two private subnets and two public subnets
  • Creates an Internet Gateway for the public subnets and a NAT Gateway for the private subnets
  • Creates an EKS cluster control plane with one managed node group
  • Installs the Crossplane add-on on the EKS cluster
  • Installs the Upbound AWS Provider for Crossplane
  • Installs the AWS Provider for Crossplane
  • Installs the Kubernetes Provider for Crossplane
  • Installs the Helm Provider for Crossplane

Crossplane Deployment Design

graph TD;
    subgraph AWS Cloud
    id1(VPC)-->Private-Subnet1;
    id1(VPC)-->Private-Subnet2;
    id1(VPC)-->Public-Subnet1;
    id1(VPC)-->Public-Subnet2;
    Public-Subnet1-->InternetGateway
    Public-Subnet2-->InternetGateway
    Public-Subnet1-->Single-NATGateway
    Private-Subnet1-->EKS{{"EKS #9829;"}}
    Private-Subnet2-->EKS
    EKS==>ManagedNodeGroup;
    ManagedNodeGroup-->|enable_crossplane=true|id2([Crossplane]);
    subgraph Kubernetes Add-ons
    id2([Crossplane])-.->|crossplane_aws_provider.enable=true|id3([AWS-Provider]);
    id2([Crossplane])-.->|crossplane_upbound_aws_provider.enable=true|id4([Upbound-AWS-Provider]);
    id2([Crossplane])-.->|crossplane_kubernetes_provider.enable=true|id5([Kubernetes-Provider]);
    id2([Crossplane])-.->|crossplane_helm_provider.enable=true|id6([Helm-Provider]);
    end
    end

How to Deploy

Prerequisites:

Ensure that the following tools are installed on your machine before working with this module and running terraform plan and apply:

  1. AWS CLI
  2. kubectl
  3. Terraform >= v1.0.0
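A quick sanity check for these prerequisites can be scripted; the sketch below only confirms the binaries are on your PATH (it does not check versions):

```shell
# Check that the tools required by this module are available on PATH.
for tool in aws kubectl terraform; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: MISSING"
  fi
done
```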

Troubleshooting

  1. If terraform apply errors out after creating the cluster while trying to apply the Helm charts, run the command:
aws eks --region <enter-your-region> update-kubeconfig --name <cluster-name> --alias <cluster-name>

and then execute terraform apply again.

  2. Make sure you have upgraded to the latest version of the AWS CLI and that your AWS credentials are configured correctly.

Deployment Steps

Step 1: Clone the repo using the command below

git clone https://github.com/aws-samples/crossplane-aws-blueprints.git

Important

The examples in this repository make use of one of the Crossplane AWS providers. For that reason upbound_aws_provider.enable is set to true and aws_provider.enable is set to false. If you use the examples for aws_provider, adjust the terraform main.tf in order to install only the necessary CRDs on the Kubernetes cluster.
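Based on the variable names used in the clean-up step of this guide (enable_upbound_aws_provider and friends), the default toggles could be expressed in a terraform.tfvars along these lines; this is a sketch with assumed variable names, so verify them against the repo's variables.tf:

```hcl
# terraform.tfvars — provider toggles (variable names assumed from this guide's clean-up step)
enable_upbound_aws_provider = true  # Upbound AWS provider, used by the examples in this repo
enable_aws_provider         = false # classic AWS provider; flip these two to use aws_provider examples
enable_kubernetes_provider  = true
enable_helm_provider        = true
```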

Step 2: Run Terraform INIT

Initialize a working directory with configuration files

cd bootstrap/terraform/
terraform init

Step 3: Run Terraform PLAN

Review the resources that this execution will create

export TF_VAR_region=<ENTER YOUR REGION>   # Select your own region
terraform plan

Step 4: Finally, run Terraform APPLY

to create the resources

terraform apply

Enter yes at the prompt to apply

Configure kubectl and test cluster

The cluster name can be extracted from the terraform output or from the AWS Console. The following command updates the kubeconfig on your local machine, allowing you to run kubectl commands against your EKS cluster.

Step 5: Run the update-kubeconfig command

The command below updates your ~/.kube/config file with the cluster details and certificate

aws eks --region <enter-your-region> update-kubeconfig --name <cluster-name>

Step 6: List all the worker nodes by running the command below

kubectl get nodes

Step 7: Verify the pods running in the crossplane-system namespace

kubectl get pods -n crossplane-system

Step 8: Verify the installed providers and provider configs

Run the following command to get the list of providers:

kubectl get providers

The expected output looks like this:

NAME                  INSTALLED   HEALTHY   PACKAGE                                                         AGE
aws-provider          True        True      xpkg.upbound.io/crossplane-contrib/provider-aws:v0.36.0         36m
kubernetes-provider   True        True      xpkg.upbound.io/crossplane-contrib/provider-kubernetes:v0.6.0   36m
provider-helm         True        True      xpkg.upbound.io/crossplane-contrib/provider-helm:v0.13.0        36m
upbound-aws-provider  True        True      xpkg.upbound.io/upbound/provider-aws:v0.27.0                    36m

Run the following command to get the list of provider configs:

kubectl get providerconfigs

The expected output looks like this:

NAME                                                   AGE
providerconfig.aws.crossplane.io/aws-provider-config   36m

NAME                                        AGE
providerconfig.helm.crossplane.io/default   36m

NAME                                                                 AGE
providerconfig.kubernetes.crossplane.io/kubernetes-provider-config   36m

Step 9: Access the ArgoCD UI

Get the load balancer url:

kubectl -n argocd get service argo-cd-argocd-server -o jsonpath="{.status.loadBalancer.ingress[*].hostname}{'\n'}"

Copy and paste the result into your browser.
The initial username is admin. The password is autogenerated; you can get it by running the following command:

echo "$(kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d)"
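The secret's .data.password field is stored base64-encoded, which is why the command above pipes it through base64 -d. The decode step in isolation, shown with a stand-in value rather than a real password:

```shell
# Stand-in base64 value (not a real ArgoCD password) to illustrate the decode step
encoded="cGFzc3dvcmQxMjM="
echo "$encoded" | base64 -d   # prints: password123
```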

Clean up

  1. Delete the resources created by Crossplane: first the Claims, then the XRDs and Compositions.

  2. Remove the Crossplane providers by running:

terraform apply --var enable_upbound_aws_provider=false --var enable_aws_provider=false --var enable_kubernetes_provider=false --var enable_helm_provider=false

  3. Run kubectl get providers to validate that all providers were removed. If any remain, remove them with kubectl delete providers <provider>.

  4. Delete the EKS cluster and its resources with the following command:

./destroy.sh