diff --git a/README.md b/README.md index 92190b58b1..1c21b1c9a5 100644 --- a/README.md +++ b/README.md @@ -2,47 +2,123 @@ Welcome to Amazon EKS Blueprints for Terraform! -This project contains a collection of Amazon EKS cluster patterns implemented in Terraform that demonstrate how fast and easy it is for customers to adopt [Amazon EKS](https://aws.amazon.com/eks/). The patterns can be used by AWS customers, partners, and internal AWS teams to configure and manage complete EKS clusters that are fully bootstrapped with the operational software that is needed to deploy and operate workloads. +This project contains a collection of Amazon EKS cluster patterns implemented in Terraform that +demonstrate how fast and easy it is for customers to adopt [Amazon EKS](https://aws.amazon.com/eks/). +The patterns can be used by AWS customers, partners, and internal AWS teams to configure and manage +complete EKS clusters that are fully bootstrapped with the operational software that is needed to +deploy and operate workloads. ## Motivation -Kubernetes is a powerful and extensible container orchestration technology that allows you to deploy and manage containerized applications at scale. The extensible nature of Kubernetes also allows you to use a wide range of popular open-source tools, commonly referred to as add-ons, in Kubernetes clusters. With such a large number of tooling and design choices available however, building a tailored EKS cluster that meets your application’s specific needs can take a significant amount of time. It involves integrating a wide range of open-source tools and AWS services and requires deep expertise in AWS and Kubernetes. - -AWS customers have asked for examples that demonstrate how to integrate the landscape of Kubernetes tools and make it easy for them to provision complete, opinionated EKS clusters that meet specific application requirements. 
Customers can use EKS Blueprints to configure and deploy purpose built EKS clusters, and start onboarding workloads in days, rather than months. - -## Core Concepts - -This document provides a high level overview of the Core Concepts that are embedded in EKS Blueprints. For the purposes of this document, we will assume the reader is familiar with Git, Docker, Kubernetes and AWS. - -| Concept | Description | -| --------------------------- | --------------------------------------------------------------------------------------------- | -| [Cluster](#cluster) | An Amazon EKS Cluster and associated worker groups. | -| [Add-on](#add-on) | Operational software that provides key functionality to support your Kubernetes applications. | -| [Team](#team) | A logical grouping of IAM identities that have access to Kubernetes resources. | - -### Cluster - -A `cluster` is simply an EKS cluster. EKS Blueprints provides for customizing the compute options you leverage with your `clusters`. The framework currently supports `EC2`, `Fargate` and `BottleRocket` instances. It also supports managed and self-managed node groups. - -We rely on [`terraform-aws-modules/eks/aws`](https://registry.terraform.io/modules/terraform-aws-modules/eks/aws/latest) to configure `clusters`. See our [examples](getting-started.md) to see how `terraform-aws-modules/eks/aws` is configured for EKS Blueprints. - -### Add-on - -`Add-ons` allow you to configure the operational tools that you would like to deploy into your EKS cluster. When you configure `add-ons` for a `cluster`, the `add-ons` will be provisioned at deploy time by leveraging the Terraform Helm provider. Add-ons can deploy both Kubernetes specific resources and AWS resources needed to support add-on functionality. - -For example, the `metrics-server` add-on only deploys the Kubernetes manifests that are needed to run the Kubernetes Metrics Server. 
By contrast, the `aws-load-balancer-controller` add-on deploys both Kubernetes YAML, in addition to creating resources via AWS APIs that are needed to support the AWS Load Balancer Controller functionality. - -EKS Blueprints allows you to manage your add-ons directly via Terraform (by leveraging the Terraform Helm provider) or via GitOps with ArgoCD. See our [`Add-ons`](https://aws-ia.github.io/terraform-aws-eks-blueprints-addons/main/) documentation page for detailed information. - -### Team - -`Teams` allow you to configure the logical grouping of users that have access to your EKS clusters, in addition to the access permissions they are granted. - -See our [`Teams`](https://github.com/aws-ia/terraform-aws-eks-blueprints-teams) documentation page for detailed information. +Kubernetes is a powerful and extensible container orchestration technology that allows you to deploy +and manage containerized applications at scale. The extensible nature of Kubernetes also allows you +to use a wide range of popular open-source tools in Kubernetes clusters. However, with the wide array +of tooling and design choices available, configuring an EKS cluster that meets your organization’s +specific needs can take a significant amount of time. It involves integrating a wide range of +open-source tools and AWS services, and requires expertise in AWS and Kubernetes. + +AWS customers have asked for patterns that demonstrate how to integrate the landscape of Kubernetes +tools and make it easy for them to provision complete, opinionated EKS clusters that meet specific +application requirements. Customers can use EKS Blueprints to configure and deploy purpose-built +EKS clusters, and start onboarding workloads in days, rather than months. + +## Consumption + +EKS Blueprints for Terraform has been designed to be consumed in the following ways: + +1. Reference: Users can refer to the patterns and snippets provided to help guide them to their desired +solution.
Users will typically view how the pattern or snippet is configured to achieve the desired +end result and then replicate that in their environment. +2. Copy & Paste: Users can copy and paste the patterns and snippets into their own environment, using +EKS Blueprints as the starting point for their implementation. Users can then adapt the initial pattern +to customize it to their specific needs. + +EKS Blueprints for Terraform is not intended to be consumed as-is directly from this project. In +"Terraform speak" - the patterns and snippets provided in this repository are not designed to be consumed +as a Terraform module. Therefore, the patterns provided only contain `variables` when certain information +is required to deploy the pattern (e.g., a Route53 hosted zone ID or ACM certificate ARN) and generally +use local variables. If you wish to deploy the patterns into a different region or with other changes, it +is recommended that you make those modifications locally before applying the pattern. EKS Blueprints for +Terraform will not expose variables and outputs in the same manner that Terraform modules do, in +order to avoid confusion around the consumption model. + +However, we do have a number of Terraform modules that were created to support +EKS Blueprints in addition to the community-hosted modules. Please see the respective projects for more +details on the modules constructed to support EKS Blueprints for Terraform; those projects are listed +[below](https://aws-ia.github.io/terraform-aws-eks-blueprints/main/#related-projects). + +- [`terraform-aws-eks-blueprints-addon`](https://github.com/aws-ia/terraform-aws-eks-blueprints-addon) - +(Note the singular form) Terraform module which can provision an addon using the Terraform +`helm_release` resource in addition to an IAM role for service accounts (IRSA).
+- [`terraform-aws-eks-blueprints-addons`](https://github.com/aws-ia/terraform-aws-eks-blueprints-addons) - +(Note the plural form) Terraform module which can provision multiple addons: both EKS addons, +using the `aws_eks_addon` resource, and Helm chart-based addons using the +[`terraform-aws-eks-blueprints-addon`](https://github.com/aws-ia/terraform-aws-eks-blueprints-addon) module. +- [`terraform-aws-eks-blueprints-teams`](https://github.com/aws-ia/terraform-aws-eks-blueprints-teams) - +Terraform module that creates Kubernetes multi-tenancy resources and configurations, allowing both +administrators and application developers to access only the resources for which they are responsible. + +### Related Projects + +In addition to the supporting EKS Blueprints Terraform modules listed above, there are a number of +related projects that users should be aware of: + +1. GitOps + + - [`terraform-aws-eks-ack-addons`](https://github.com/aws-ia/terraform-aws-eks-ack-addons) - + Terraform module to deploy ACK controllers onto EKS clusters + - [`crossplane-on-eks`](https://github.com/awslabs/crossplane-on-eks) - Crossplane Blueprints + is an open source repo to bootstrap Amazon EKS clusters and provision AWS resources using a + library of Crossplane Compositions (XRs) with Composite Resource Definitions (XRDs). + +2. Data on EKS + + - [`data-on-eks`](https://github.com/awslabs/data-on-eks) - A collection of blueprints intended + for data workloads on Amazon EKS. + - [`terraform-aws-eks-data-addons`](https://github.com/aws-ia/terraform-aws-eks-data-addons) - + Terraform module to deploy multiple addons that are specific to data workloads on EKS clusters. + +3.
Observability Accelerator + + - [`terraform-aws-observability-accelerator`](https://github.com/aws-observability/terraform-aws-observability-accelerator) - + A set of opinionated modules to help you set up observability for your AWS environments with + AWS-managed observability services such as Amazon Managed Service for Prometheus, Amazon + Managed Grafana, AWS Distro for OpenTelemetry (ADOT), and Amazon CloudWatch. + +## Terraform Caveats + +EKS Blueprints for Terraform does not intend to teach users the recommended practices for Terraform, +nor does it offer guidance on how users should structure their Terraform projects. The patterns +provided are intended to show users how they can achieve a defined architecture or configuration +in a way that they can quickly and easily get up and running to start interacting with that pattern. +Therefore, there are a few caveats users should be aware of when using EKS Blueprints for Terraform: + +1. We recognize that most users will already have an existing VPC in a separate Terraform workspace. +However, the patterns provided come complete with a VPC to ensure a stable, deployable example that +has been tested and validated. + +2. HashiCorp [does not recommend providing computed values in provider blocks](https://github.com/hashicorp/terraform/issues/27785#issuecomment-780017326), +which means that the cluster configuration should be defined in a workspace separate from the resources +deployed onto the cluster (i.e., addons). However, to simplify the pattern experience, we have defined +everything in one workspace and provided instructions to provision the patterns using a targeted +apply approach. Users are encouraged to investigate a Terraform project structure that suits their needs; +EKS Blueprints for Terraform does not have an opinion in this matter and will defer to HashiCorp's guidance. + +3. Patterns are not intended to be consumed in-place in the same manner that one would consume a module.
+Therefore, we do not provide variables and outputs to expose various levels of configuration for the examples. +Users can modify the pattern locally after cloning to suit their requirements. + +4. Please see the [FAQ section](https://aws-ia.github.io/terraform-aws-eks-blueprints/main/faq/#provider-authentication) +on authenticating Kubernetes-based providers (`kubernetes`, `helm`, `kubectl`) to Amazon EKS clusters +regarding the use of static tokens versus dynamic tokens using the `awscli`. ## Support & Feedback -EKS Blueprints for Terraform is maintained by AWS Solution Architects. It is not part of an AWS service and support is provided best-effort by the EKS Blueprints community. To post feedback, submit feature ideas, or report bugs, please use the [Issues section](https://github.com/aws-ia/terraform-aws-eks-blueprints/issues) of this GitHub repo. If you are interested in contributing to EKS Blueprints, see the [Contribution guide](https://github.com/aws-ia/terraform-aws-eks-blueprints/blob/main/CONTRIBUTING.md). +EKS Blueprints for Terraform is maintained by AWS Solution Architects. It is not part of an AWS +service and support is provided on a best-effort basis by the EKS Blueprints community. To provide feedback, +please use the [issue templates](https://github.com/aws-ia/terraform-aws-eks-blueprints/issues) +provided. If you are interested in contributing to EKS Blueprints, see the +[Contribution guide](https://github.com/aws-ia/terraform-aws-eks-blueprints/blob/main/CONTRIBUTING.md). ## Security diff --git a/docs/faq.md b/docs/faq.md index 3af495ea26..a55950217e 100644 --- a/docs/faq.md +++ b/docs/faq.md @@ -132,25 +132,25 @@ For example, with namespaces: 1. Confirm the namespace is hanging in status `Terminating` -```sh -kubectl get namespaces -``` + ```sh + kubectl get namespaces + ``` 2.
Check for any orphaned resources in the namespace, make sure to replace `<namespace>` with your namespace: -```sh -kubectl api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl get \ ---show-kind --ignore-not-found -n <namespace> -``` + ```sh + kubectl api-resources --verbs=list --namespaced -o name | xargs -n 1 kubectl get \ + --show-kind --ignore-not-found -n <namespace> + ``` 3. For any of the above output, patch the resource finalizers: -```sh -kubectl patch RESOURCE NAME -p '{"metadata":{"finalizers":[]}}' --type=merge -``` + ```sh + kubectl patch RESOURCE NAME -p '{"metadata":{"finalizers":[]}}' --type=merge + ``` 4. Check the status of the namespace; if necessary, patch the namespace finalizers as well: -```sh -kubectl patch ns <namespace> -p '{"spec":{"finalizers":null}}' -``` + ```sh + kubectl patch ns <namespace> -p '{"spec":{"finalizers":null}}' + ``` diff --git a/docs/getting-started.md b/docs/getting-started.md index 5a3153f550..7ea8c1cb11 100644 --- a/docs/getting-started.md +++ b/docs/getting-started.md @@ -1,76 +1,73 @@ # Getting Started -This getting started guide will help you deploy your first EKS environment using EKS Blueprints. +This getting started guide will help you deploy your first pattern using EKS Blueprints. -## Prerequisites: +## Prerequisites -First, ensure that you have installed the following tools locally. +Ensure that you have installed the following tools locally: -1. [aws cli](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html) -2. [kubectl](https://Kubernetes.io/docs/tasks/tools/) -3.
[terraform](https://learn.hashicorp.com/tutorials/terraform/install-cli) +- [awscli](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html) +- [kubectl](https://kubernetes.io/docs/tasks/tools/) +- [terraform](https://learn.hashicorp.com/tutorials/terraform/install-cli) -## Examples +## Deploy -Select an example from the [`patterns/`](https://github.com/aws-ia/terraform-aws-eks-blueprints/tree/main/patterns) directory and follow the instructions in its respective README.md file. The deployment steps for examples generally follow the deploy, validate, and clean-up steps shown below. +1. For consuming EKS Blueprints, please see the [Consumption](https://aws-ia.github.io/terraform-aws-eks-blueprints/main/#consumption) section. For exploring and trying out the patterns provided, please +clone the project locally to quickly get up and running with a pattern. After cloning the project locally, `cd` into the pattern +directory of your choice. -### Deploy +2. To provision the pattern, the typical steps of execution are as follows: -To provision this example: + ```sh + terraform init + terraform apply -target="module.vpc" -auto-approve + terraform apply -target="module.eks" -auto-approve + terraform apply -auto-approve + ``` -```sh -terraform init -terraform apply -target module.vpc -terraform apply -target module.eks -terraform apply -``` - -Enter `yes` at command prompt to apply + For patterns that deviate from this general flow, see the pattern's respective `README.md` for more details. -### Validate + !!! info "Terraform targeted apply" + Please see the [Terraform Caveats](https://aws-ia.github.io/terraform-aws-eks-blueprints/main/#terraform-caveats) section for details on the use of targeted Terraform applies -The following command will update the `kubeconfig` on your local machine and allow you to interact with your EKS Cluster using `kubectl` to validate the CoreDNS deployment for Fargate. -1. Run `update-kubeconfig` command: +3. Once all of the resources have successfully been provisioned, the following command can be used to update the `kubeconfig` +on your local machine and allow you to interact with your EKS Cluster using `kubectl`. + ```sh + aws eks --region <REGION> update-kubeconfig --name <CLUSTER_NAME> + ``` -```sh -aws eks --region <REGION> update-kubeconfig --name <CLUSTER_NAME> -``` + !!! info "Pattern Terraform outputs" + Most examples will output the `aws eks update-kubeconfig ...` command as part of the Terraform apply output to simplify this process for users -3. View the pods that were created: + !!! warning "Private clusters" + Clusters that do not enable the cluster's public endpoint will require users to access the cluster from within the VPC. + For these patterns, a sample EC2 instance or other means are provided to demonstrate how to access those clusters privately + and without exposing the public endpoint. Please see the respective pattern's `README.md` for more details. -```sh -kubectl get pods -A - -# Output should show some pods running NAMESPACE NAME READY STATUS RESTARTS AGE kube-system coredns-66b965946d-gd59n 1/1 Running 0 92s kube-system coredns-66b965946d-tsjrm 1/1 Running 0 92s kube-system ebs-csi-controller-57cb869486-bcm9z 6/6 Running 0 90s kube-system ebs-csi-controller-57cb869486-xw4z4 6/6 Running 0 90s -``` +4. Once you have updated your `kubeconfig`, you can verify that you are able to interact with your cluster by running the following command: -3. View the nodes that were created: + ```sh + kubectl get nodes + ``` -```sh -kubectl get nodes - -# Output should show some nodes running NAME STATUS ROLES AGE VERSION fargate-ip-10-0-10-11.us-west-2.compute.internal Ready 8m7s v1.24.8-eks-a1bebd3 fargate-ip-10-0-10-210.us-west-2.compute.internal Ready 2m50s v1.24.8-eks-a1bebd3 fargate-ip-10-0-10-218.us-west-2.compute.internal Ready 8m6s v1.24.8-eks-a1bebd3 fargate-ip-10-0-10-227.us-west-2.compute.internal Ready 8m8s v1.24.8-eks-a1bebd3 fargate-ip-10-0-10-42.us-west-2.compute.internal Ready 8m6s v1.24.8-eks-a1bebd3 fargate-ip-10-0-10-71.us-west-2.compute.internal Ready 2m48s v1.24.8-eks-a1bebd3 -``` + This should return a list of the node(s) running in the cluster that was created. If any errors are encountered, please retrace the steps above + and consult the pattern's `README.md` for more details on any additional/specific steps that may be required. -### Destroy ## Destroy -To teardown and remove the resources created in this example: +To teardown and remove the resources created in the pattern, the typical steps of execution are as follows: ```sh terraform destroy -target="module.eks_blueprints_addons" -auto-approve terraform destroy -target="module.eks" -auto-approve terraform destroy -auto-approve ``` + +!!! danger "Resources created outside of Terraform" + Depending on the pattern, some resources may have been created that Terraform is not aware of that will cause issues + when attempting to clean up the pattern. For example, Karpenter is responsible for creating additional EC2 instances + to satisfy the pod scheduling requirements. These instances will not be cleaned up by Terraform and will need to be + de-provisioned *BEFORE* attempting to `terraform destroy`. This is why it is important that the addons, or any resources + provisioned onto the cluster, are cleaned up first. Please see the respective pattern's `README.md` for more + details.
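The provider-authentication caveat referenced in the Terraform Caveats and FAQ sections above can be sketched in HCL. This is a hypothetical illustration rather than a pattern from this repository: it assumes a `module "eks"` instance of the community `terraform-aws-modules/eks/aws` module, whose `cluster_endpoint`, `cluster_certificate_authority_data`, and `cluster_name` outputs are used below.

```hcl
# Hypothetical sketch: authenticate the Kubernetes and Helm providers to an
# EKS cluster with a dynamic token fetched via the awscli (`aws eks get-token`)
# exec plugin, rather than a static token that can expire mid-apply.
provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

  exec {
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    # A fresh token is requested on every Terraform invocation
    args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
  }
}

provider "helm" {
  kubernetes {
    host                   = module.eks.cluster_endpoint
    cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

    exec {
      api_version = "client.authentication.k8s.io/v1beta1"
      command     = "aws"
      args = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
    }
  }
}
```

Because these provider arguments are computed from `module.eks`, this wiring is exactly the "computed values in provider blocks" situation the caveats above warn about, which is why the patterns are provisioned with targeted applies (`-target="module.vpc"`, `-target="module.eks"`) before the final full `terraform apply`.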