Merge pull request #2 from vishalsaugat/add-policy-tags
Add policy tags
vishalsaugat authored Jun 7, 2023
2 parents 02131ca + 4888cf3 commit 3c8fa09
Showing 40 changed files with 1,503 additions and 31 deletions.
1 change: 1 addition & 0 deletions .github/workflows/plan-examples.yml
@@ -5,6 +5,7 @@ on:
pull_request_target:
branches:
      - main
+     - v4
workflow_dispatch:

concurrency:
1 change: 1 addition & 0 deletions .github/workflows/pre-commit.yml
@@ -4,6 +4,7 @@ on:
pull_request:
branches:
      - main
+     - v4
paths:
- '**.tf'
- '**.yml'
2 changes: 1 addition & 1 deletion .pre-commit-config.yaml
@@ -10,7 +10,7 @@ repos:
- id: detect-aws-credentials
args: ['--allow-missing-credentials']
- repo: https://github.com/antonbabenko/pre-commit-terraform
-    rev: v1.77.2
+    rev: v1.80.0
hooks:
- id: terraform_fmt
- id: terraform_docs
1 change: 1 addition & 0 deletions ADOPTERS.md
@@ -13,5 +13,6 @@ If you are using EKS Blueprints for Terraform please consider adding yourself as

| Organization | Description | Contacts | Link |
| --- | --- | --- | --- |
| PITS Global Data Recovery Services | PITS Global Data Recovery Services is a leading global recovery company based in the United States. With a team of highly skilled and experienced data recovery experts, we are dedicated to providing top-notch data recovery services to businesses and individuals across the country. Our team understands how critical data is and takes pride in recovering lost, corrupted, or deleted files quickly and effectively. Regardless of the data loss scenario, our team of experts has the tools and expertise to get your data back.| [email protected] | https://www.pitsdatarecovery.net/ |
| AlgoDx AB | At AlgoDx, we are dedicated to developing and implementing clinically validated ML-based clinical decision support software that save lives and reduce costs in healthcare, primarily targeting intensive and critical care. We achieve this through the integration of our state-of-the-art AI precision medicine platform with health systems’ electronic medical records. | [email protected], [email protected] | https://www.algodx.com/ |
| Swyft Logistics | Swyft is an award-winning, tech-enabled last-mile e-Parcel delivery service provider, primarily operating in the fashion, electronics, and personal care industries. Swyft aspires to reinvent logistics in Pakistan and connect people beyond possibilities. Our focus is on innovation and the development of technologies and processes. | [email protected], [email protected], [email protected] | https://www.swyftlogistics.com/ |
102 changes: 102 additions & 0 deletions DIRECTION_v5.md
@@ -0,0 +1,102 @@
# Direction for v5 of Terraform EKS Blueprints

## What Has Worked

- EKS Blueprints was started to [make it easier for customers to adopt Amazon Elastic Kubernetes Service (EKS)](https://aws.amazon.com/blogs/containers/bootstrapping-clusters-with-eks-blueprints/) in a shorter period of time. The project has been quite successful in this regard - customers have told us that EKS Blueprints helped them get from zero to one or more clusters running applications in less than 1-2 weeks.

- EKS Blueprints has also been successful in providing working examples to users that demonstrate common architectural patterns and workload solutions. Some popular examples include:
- Spark on EKS
- Karpenter on EKS Fargate
- Transparent encryption with Wireguard and Cilium
- Fully serverless cluster with EKS Fargate

## What Has Not

- Scaling and managing addons that are created through EKS Blueprints. With almost [1,200 projects on the CNCF landscape](https://landscape.cncf.io/), the number of ways a single project can be deployed onto a cluster (i.e. - Datadog offers 5 different Helm charts for its service, Prometheus hosts over 30 Helm charts for its services), as well as the number of different tools used to provision addons (i.e. - Terraform, ArgoCD, FluxCD, etc.), has made supporting both the number of addons and their different forms extremely challenging for the team. On top of the sheer number of addons, supporting the different configurations that users wish to have exposed - along with testing and validating those configurations - is compounded by the number of addons and their methods of creation.

- Managing resources provisioned on the cluster using Terraform. Terraform is a fantastic tool for provisioning infrastructure and it is the tool of choice for many customers when it comes to creating resources in AWS. However, there are a number of downsides with Terraform when it comes to provisioning resources on a Kubernetes cluster. These include:

- Ordering of dependencies when relationships live outside of Terraform's HCL syntax. Terraform evaluates the current state of what it controls and plans a series of actions to align the current state with the desired state *in one action*. It does this once for each `terraform plan` or `terraform apply`, and if any issues are encountered, it simply fails and halts execution. When Terraform cannot infer the ordering of dependencies across resources (i.e. - through passing outputs of parent resources to arguments of child resources using the Terraform `<resource>.<name>.<attribute>` syntax; see the sketch after this list), it treats the resources as unrelated and attempts to provision them in parallel and asynchronously. Any resources left waiting for a dependency will eventually time out and fail, causing Terraform itself to time out and fail the apply. This is where the reconciliation loop of a Kubernetes controller or operator on the cluster is better suited - continuously trying to reconcile the state as dependencies are eventually resolved. (To be clear - the issue of dependency ordering still exists, but the controller/operator keeps retrying; on each retry some resources succeed, moving execution along with each cycle until everything is fully deployed. Terraform could do this if it kept re-trying, but it does not today.)

- Publicly exposing access to the EKS endpoints in order to provision resources defined outside of the VPC onto the cluster. When using Terraform, the resource provisioning operation is a "push" model where Terraform will send requests to the EKS API Server to create resources. Coupled with the fact that the Terraform operation typically resides outside of the VPC where the cluster is running, this results in users enabling public access to the EKS endpoints to provision resources. However, the more widely accepted approach by the Kubernetes community has been the adoption of GitOps which uses a "pull" based model, where an operator or controller running on the cluster will pull the resource definitions from a Git repository and reconcile state from within the cluster itself. This approach is more secure as it does not require public access to the EKS endpoints and instead relies on the cluster's internal network to communicate with the EKS API Server.

- The nesting of multiple sub-modules, coupled with the fact that a wrapper module is required at all for an addon to be supported. When we compare the Terraform approach to addons with the GitOps approach, the Terraform approach has a glaring disadvantage - the need to create a module that wraps the addon's Helm chart in order to provision the addon via Terraform, whereas with the GitOps approach users simply consume the charts from where they are stored, as needed. This creates a bottleneck on the team to review, test, and validate each new addon, plus the overhead of maintaining and updating those addons going forward. It also opens up more areas where breaking changes can be encountered, compounded by the fact that Terraform addons are grouped under an "umbrella" module which obfuscates versioning.

- Being able to support a combination of various tools, modules, frameworks, etc., to meet the needs of customers. The [`terraform-aws-eks`](https://github.com/terraform-aws-modules/terraform-aws-eks) module was created long before EKS Blueprints, and many customers had already adopted it for creating their clusters. In addition, Amazon has since adopted [`eksctl`](https://github.com/weaveworks/eksctl) as the official CLI for Amazon EKS. When EKS Blueprints was first announced, many customers asked whether they needed to abandon clusters created through those other tools in order to adopt EKS Blueprints. The answer is no - users can and should be able to use their existing clusters, while EKS Blueprints can help augment that process through its supporting modules (addons, teams, etc.). This left the team with the question - why maintain a Terraform module for creating an EKS cluster when [`terraform-aws-eks`](https://github.com/terraform-aws-modules/terraform-aws-eks) already exists and the EKS Blueprints implementation already uses that module for creating the control plane and security groups?
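
To make the dependency-ordering point concrete, here is a minimal, illustrative sketch (hypothetical names and values, not from this repository): the cluster's `role_arn` reference creates an implicit edge in Terraform's graph, while the Helm release references no cluster attribute and therefore needs an explicit `depends_on` hint.

```hcl
variable "subnet_ids" {
  type = list(string)
}

resource "aws_iam_role" "cluster" {
  name = "demo-cluster"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "eks.amazonaws.com" }
    }]
  })
}

resource "aws_eks_cluster" "this" {
  name     = "demo"
  role_arn = aws_iam_role.cluster.arn # implicit dependency: attribute reference orders the role first

  vpc_config {
    subnet_ids = var.subnet_ids
  }
}

# Nothing here references aws_eks_cluster.this, so Terraform would otherwise
# provision this release in parallel with the cluster and eventually time out.
resource "helm_release" "example_addon" {
  name       = "example-addon"              # hypothetical chart, for illustration
  repository = "https://example.com/charts" # hypothetical repository
  chart      = "example"

  depends_on = [aws_eks_cluster.this]
}
```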

## What Is Changing

The direction for EKS Blueprints in v5 will shift away from providing an all-encompassing, monolithic "framework" and instead focus on how users can organize a set of modular components to create the desired solution on Amazon EKS. This will allow customers to use the components of their choosing in a way that is more familiar to them and their organization, instead of having to adopt and conform to a framework.

With this shift in direction, the cluster definition will be removed from the project, and examples will instead reference the [`terraform-aws-eks`](https://github.com/terraform-aws-modules/terraform-aws-eks) module for cluster creation. The remaining modules will be moved out to their own respective repositories as standalone projects. This leaves the EKS Blueprints project as the canonical place where users can receive guidance on how to configure their clusters to meet a desired architecture, how best to set up their clusters following well-architected practices, as well as references for the various ways that different workloads can be deployed on Amazon EKS.
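
As a minimal sketch of what that looks like in practice (placeholder VPC/subnet variables and an illustrative version pin; argument names follow the upstream module's v19.x documentation):

```hcl
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 19.0" # illustrative pin

  cluster_name    = "my-blueprint"
  cluster_version = "1.27"

  vpc_id     = var.vpc_id
  subnet_ids = var.private_subnet_ids

  # The upstream module can also create the KMS key used for cluster secret
  # encryption, replacing the old aws-kms sub-module (see the notable changes below).
  create_kms_key = true

  eks_managed_node_groups = {
    default = {
      instance_types = ["m5.large"]
      min_size       = 1
      max_size       = 3
      desired_size   = 2
    }
  }
}
```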

### Notable Changes

1. EKS Blueprints will remove its Amazon EKS cluster Terraform module components (control plane, EKS managed node group, self-managed node group, and Fargate profile modules) from the project. In its place, users are encouraged to utilize the [`terraform-aws-eks`](https://github.com/terraform-aws-modules/terraform-aws-eks) module which meets or exceeds nearly all of the functionality of the EKS Blueprints v4.x cluster module. This includes the Terraform code contained at the root of the project as well as the `aws-eks-fargate-profiles`, `aws-eks-managed-node-groups`, `aws-eks-self-managed-node-groups`, and `launch-templates` modules which will all be removed from the project.
2. The `aws-kms` module will be removed entirely. This was consumed in the root project module for cluster secret encryption. In its place, users can utilize the KMS key creation functionality of the [`terraform-aws-eks`](https://github.com/terraform-aws-modules/terraform-aws-eks) module or the [`terraform-aws-kms`](https://github.com/terraform-aws-modules/terraform-aws-kms) module if they wish to control the key separately from the cluster itself.
3. The `emr-on-eks` module will be removed entirely; its replacement can be found in the new external module [`terraform-aws-emr`](https://github.com/terraform-aws-modules/terraform-aws-emr/tree/master/modules/serverless).
4. The `irsa` and `helm-addon` modules will be removed entirely; we have released a new external module [`terraform-aws-eks-blueprints-addon`](https://github.com/aws-ia/terraform-aws-eks-blueprints-addon), available on the Terraform registry, that replicates/replaces the functionality of these two modules. This now allows users, as well as partners, to more easily create their own addons that are not natively supported by EKS Blueprints, following the same process as EKS Blueprints (a usage sketch follows this list).
5. The `aws-eks-teams` module will be removed entirely; its replacement is the new external module [`terraform-aws-eks-blueprints-teams`](https://github.com/aws-ia/terraform-aws-eks-blueprints-teams), which incorporates the changes customers have been asking for in https://github.com/aws-ia/terraform-aws-eks-blueprints/issues/842.
6. The integration between Terraform and ArgoCD has been removed in the initial release of v5. The team is currently investigating better patterns and solutions in conjunction with the ArgoCD and FluxCD teams that will provide a better, more integrated experience when using a GitOps-based approach for cluster management. This will be released in a future version of EKS Blueprints v5 and is tracked [here](https://github.com/aws-ia/terraform-aws-eks-blueprints-addons/issues/114).
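
A hedged sketch of consuming the new addon module (an IRSA role plus a Helm chart in one module): the chart, names, and the `module.eks` output are illustrative, and argument names follow the module's registry documentation at the time of writing, so they may differ between releases.

```hcl
module "example_addon" {
  source  = "aws-ia/eks-blueprints-addon/aws"
  version = "~> 1.0" # illustrative pin

  # Helm chart half of the addon (hypothetical chart)
  chart            = "example"
  chart_version    = "1.2.3"
  repository       = "https://example.com/charts"
  namespace        = "example"
  create_namespace = true

  # IRSA half of the addon
  create_role = true
  role_name   = "example-irsa"
  oidc_providers = {
    this = {
      provider_arn    = module.eks.oidc_provider_arn # from the cluster module above
      service_account = "example-sa"
    }
  }
}
```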

### Resulting Project Structure

Previously, under the v4.x structure, the EKS Blueprints project was composed of various repositories across multiple AWS GitHub organizations, roughly as follows:

#### v4.x Structure

```
├── aws-ia/
| ├── terraform-aws-eks-ack-addons/
| └── terraform-aws-eks-blueprints/
| ├── aws-auth-configmap.tf
| ├── data.tf
| ├── eks-worker.tf
| ├── locals.tf
| ├── main.tf
| ├── outputs.tf
| ├── variables.tf
| ├── versions.tf
| ├── examples/
| └── modules
| ├── aws-eks-fargate-profiles/
| ├── aws-eks-managed-node-groups/
| ├── aws-eks-self-managed-node-groups/
| ├── aws-eks-teams/
| ├── aws-kms/
| ├── emr-on-eks/
| ├── irsa/
| ├── kubernetes-addons/
| └── launch-templates/
├── awslabs/
| ├── crossplane-on-eks/
| └── data-on-eks/
└── aws-samples/
├── eks-blueprints-add-ons/ # Previously shared with the CDK based EKS Blueprints project
└── eks-blueprints-workloads/ # Previously shared with the CDK based EKS Blueprints project
```

Under the new v5.x structure, the Terraform based EKS Blueprints project will be composed of the following repositories:

#### v5.x Structure

```
├── aws-ia/
| ├── terraform-aws-eks-ack-addons/
| ├── terraform-aws-eks-blueprints/ # Will contain only example/blueprint implementations; no modules
| ├── terraform-aws-eks-blueprints-addon # Module for creating Terraform based addon (IRSA + Helm chart)
| ├── terraform-aws-eks-blueprints-addons # Will contain a select set of addons supported by the EKS Blueprints
| └── terraform-aws-eks-blueprints-teams # Was previously `aws-eks-teams/` EKS Blueprint sub-module; updated based on customer feedback
└── awslabs/
├── crossplane-on-eks/
└── data-on-eks/ # Data related patterns that used to be located in `terraform-aws-eks-blueprints/` are now located here
```

## What Can Users Expect

With these changes, the team intends to provide a better experience for users of the Terraform EKS Blueprints project as well as new and improved reference architectures. Following the v5 changes, users can expect:

1. Improved quality of the examples provided - more information on the intent of the example, why it might be useful for users, in what scenarios the pattern is applicable, etc. Where applicable, architectural diagrams and supporting material will be provided to highlight the intent of the example and how it's constructed.
2. A clearer distinction between a blueprint and a usage reference. For example - the Karpenter on EKS Fargate blueprint should demonstrate all of the various aspects that users should be aware of and consider in order to take full advantage of this pattern (recommended practices, observability, logging, monitoring, security, day-2 operations, etc.); this is what makes it a blueprint. In contrast, a usage reference would be an example that shows how users can pass configuration values to the Karpenter provisioner. Such an example is less focused on the holistic architecture and more focused on how one might configure Karpenter using the implementation. The EKS Blueprints repository will focus mostly on holistic architecture and patterns; usage references belong in the repository that contains the implementation definition (i.e. - the `terraform-aws-eks-blueprints-addons` repository, where the addon implementation is defined).
3. Faster, more responsive feedback. The first part of this is improved documentation on how to contribute, which should help contributors gauge whether a contribution is likely to be accepted before any effort is spent. More importantly, the goal of v5 is to focus on the value-added benefits that EKS Blueprints was created to provide, as opposed to simply mass-producing Helm chart wrappers (addons) and trying to keep up with that operationally intensive process.
4. Lastly, more examples and blueprints that demonstrate various architectures and workloads that run on top of Amazon EKS, as well as integrations with other AWS services.
2 changes: 1 addition & 1 deletion docs/add-ons/index.md
@@ -69,7 +69,7 @@ If you would like to use private repositories, you can download Docker images fo

To indicate that you would like to manage add-ons via ArgoCD, you must do the following:

-1. Enable the ArgoCD add-on by setting `argocd_enable` to `true`.
+1. Enable the ArgoCD add-on by setting `enable_argocd` to `true`.
2. Specify you would like ArgoCD to be responsible for deploying your add-ons by setting `argocd_manage_add_ons` to `true`. This will prevent the individual Terraform add-on modules from deploying Helm charts.
3. Pass Application configuration for your add-ons repository via the `argocd_applications` property.
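
Taken together, those three steps might look like the following sketch against the v4.x `kubernetes-addons` module (the repository URL and path are illustrative; argument shapes follow the v4.x documentation and may differ between releases):

```hcl
module "kubernetes_addons" {
  source = "github.com/aws-ia/terraform-aws-eks-blueprints//modules/kubernetes-addons"

  eks_cluster_id = module.eks.cluster_name

  # 1. Enable the ArgoCD add-on
  enable_argocd = true

  # 2. Hand add-on deployment over to ArgoCD instead of the Terraform modules
  argocd_manage_add_ons = true

  # 3. Point ArgoCD at the add-ons repository
  argocd_applications = {
    addons = {
      path               = "chart" # illustrative path within the repo
      repo_url           = "https://github.com/aws-samples/eks-blueprints-add-ons.git"
      add_on_application = true
    }
  }
}
```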

2 changes: 1 addition & 1 deletion docs/add-ons/kubecost.md
@@ -24,7 +24,7 @@ Deploy Kubecost with custom `values.yaml`
name = "kubecost" # (Required) Release name.
repository = "oci://public.ecr.aws/kubecost" # (Optional) Repository URL where to locate the requested chart.
chart = "cost-analyzer" # (Required) Chart name to be installed.
-  version = "1.96.0" # (Optional) Specify the exact chart version to install. If this is not specified, it defaults to the version set within default_helm_config: https://github.com/aws-ia/terraform-aws-eks-blueprints/blob/main/modules/kubernetes-addons/kubecost/locals.tf
+  version = "1.103.3" # (Optional) Specify the exact chart version to install. If this is not specified, it defaults to the version set within default_helm_config: https://github.com/aws-ia/terraform-aws-eks-blueprints/blob/main/modules/kubernetes-addons/kubecost/locals.tf
namespace = "kubecost" # (Optional) The namespace to install the release into.
values = [templatefile("${path.module}/kubecost-values.yaml", {})]
}
6 changes: 6 additions & 0 deletions examples/eks-efa/.gitignore
@@ -0,0 +1,6 @@
tfplan
*.tfstate
*.backup
TODO*.*
.terraform
*.hcl
