name: terraform-aws-eks-cluster
license: APACHE2
github_repo: cloudposse/terraform-aws-eks-cluster
badges:
  - name: Latest Release
    image: https://img.shields.io/github/release/cloudposse/terraform-aws-eks-cluster.svg?style=for-the-badge
    url: https://github.com/cloudposse/terraform-aws-eks-cluster/releases/latest
  - name: Last Updated
    image: https://img.shields.io/github/last-commit/cloudposse/terraform-aws-eks-cluster.svg?style=for-the-badge
    url: https://github.com/cloudposse/terraform-aws-eks-cluster/commits
  - name: Slack Community
    image: https://slack.cloudposse.com/for-the-badge.svg
    url: https://slack.cloudposse.com
# List any related terraform modules that this module may be used with or that this module depends on.
related:
  - name: terraform-aws-components eks/cluster
    description: Cloud Posse's component (root module) using this module to provision an EKS cluster
    url: https://github.com/cloudposse/terraform-aws-components/tree/main/modules/eks/cluster
  - name: terraform-aws-components eks/karpenter and eks/karpenter-provisioner
    description: Cloud Posse's components (root modules) deploying Karpenter to manage auto-scaling of EKS node groups
    url: https://github.com/cloudposse/terraform-aws-components/tree/main/modules/eks/karpenter
  - name: terraform-aws-eks-workers
    description: Terraform module to provision an AWS AutoScaling Group, IAM Role, and Security Group for EKS Workers
    url: https://github.com/cloudposse/terraform-aws-eks-workers
  - name: terraform-aws-ec2-autoscale-group
    description: Terraform module to provision Auto Scaling Group and Launch Template on AWS
    url: https://github.com/cloudposse/terraform-aws-ec2-autoscale-group
  - name: terraform-aws-ecs-container-definition
    description: Terraform module to generate well-formed JSON documents (container definitions) that are passed to the aws_ecs_task_definition Terraform resource
    url: https://github.com/cloudposse/terraform-aws-ecs-container-definition
  - name: terraform-aws-ecs-alb-service-task
    description: Terraform module which implements an ECS service which exposes a web service via ALB
    url: https://github.com/cloudposse/terraform-aws-ecs-alb-service-task
  - name: terraform-aws-ecs-web-app
    description: Terraform module that implements a web app on ECS and supports autoscaling, CI/CD, monitoring, ALB integration, and much more
    url: https://github.com/cloudposse/terraform-aws-ecs-web-app
  - name: terraform-aws-ecs-codepipeline
    description: Terraform module for CI/CD with AWS CodePipeline and CodeBuild for ECS
    url: https://github.com/cloudposse/terraform-aws-ecs-codepipeline
  - name: terraform-aws-ecs-cloudwatch-autoscaling
    description: Terraform module to autoscale ECS Service based on CloudWatch metrics
    url: https://github.com/cloudposse/terraform-aws-ecs-cloudwatch-autoscaling
  - name: terraform-aws-ecs-cloudwatch-sns-alarms
    description: Terraform module to create CloudWatch Alarms on ECS Service level metrics
    url: https://github.com/cloudposse/terraform-aws-ecs-cloudwatch-sns-alarms
  - name: terraform-aws-ec2-instance
    description: Terraform module for providing a general purpose EC2 instance
    url: https://github.com/cloudposse/terraform-aws-ec2-instance
  - name: terraform-aws-ec2-instance-group
    description: Terraform module for provisioning multiple general purpose EC2 hosts for stateful applications
    url: https://github.com/cloudposse/terraform-aws-ec2-instance-group
description: |-
  Terraform module to provision an [EKS](https://aws.amazon.com/eks/) cluster on AWS.
  <br/><br/>
  This Terraform module provisions a fully-configured AWS [EKS](https://aws.amazon.com/eks/) (Elastic Kubernetes Service) cluster.
  It's engineered to integrate smoothly with [Karpenter](https://karpenter.sh/) and [EKS addons](https://docs.aws.amazon.com/eks/latest/userguide/eks-add-ons.html),
  forming a critical part of [Cloud Posse's reference architecture](https://cloudposse.com/reference-architecture).
  Ideal for teams looking to deploy scalable and manageable Kubernetes clusters on AWS with minimal fuss.
introduction: |-
  The module provisions the following resources:

  - EKS cluster of master nodes that can be used together with the
    [terraform-aws-eks-node-group](https://github.com/cloudposse/terraform-aws-eks-node-group) and
    [terraform-aws-eks-fargate-profile](https://github.com/cloudposse/terraform-aws-eks-fargate-profile)
    modules to create a full-blown EKS/Kubernetes cluster (a minimal sketch follows this list). You can also use the
    [terraform-aws-eks-workers](https://github.com/cloudposse/terraform-aws-eks-workers) module to provision worker
    nodes for the cluster, but it is now rarely a better choice than `terraform-aws-eks-node-group`.
  - IAM Role to allow the cluster to access other AWS services
  - EKS access entries to allow IAM users to access and administer the cluster
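
  A minimal sketch of that pairing (the input values here are illustrative assumptions; see the
  complete, working example under Usage):

  ```hcl
  module "eks_cluster" {
    source = "cloudposse/eks-cluster/aws"
    # version = "x.x.x"

    subnet_ids         = var.private_subnet_ids # assumed to be provisioned elsewhere
    kubernetes_version = "1.29"                 # illustrative
  }

  module "eks_node_group" {
    source = "cloudposse/eks-node-group/aws"
    # version = "x.x.x"

    cluster_name   = module.eks_cluster.eks_cluster_id
    subnet_ids     = var.private_subnet_ids
    instance_types = ["t3.medium"]
    min_size       = 1
    max_size       = 3
  }
  ```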
usage: |-
  For a complete example, see [examples/complete](examples/complete).

  For automated tests of the complete example using [bats](https://github.com/bats-core/bats-core) and [Terratest](https://github.com/gruntwork-io/terratest) (which tests and deploys the example on AWS), see [test/src](test/src).

  Other examples:

  - [terraform-aws-components/eks/cluster](https://github.com/cloudposse/terraform-aws-components/tree/master/modules/eks/cluster) - Cloud Posse's service catalog of "root module" invocations for provisioning reference architectures

  ```hcl
  provider "aws" {
    region = var.region
  }

  # Note: This example creates an explicit access entry for the current user,
  # but in practice, you should use a static map of IAM users or roles that should have access to the cluster.
  # Granting access to the current user in this way is not recommended for production use.
  data "aws_caller_identity" "current" {}

  # IAM session context converts an assumed-role ARN into an IAM Role ARN.
  # Again, this is primarily to simplify the example; in practice, you should use a static map of IAM users or roles.
  data "aws_iam_session_context" "current" {
    arn = data.aws_caller_identity.current.arn
  }

  locals {
    # The specific kubernetes.io/cluster/* resource tags below are required
    # for EKS and Kubernetes to discover and manage networking resources.
    # https://aws.amazon.com/premiumsupport/knowledge-center/eks-vpc-subnet-discovery/
    # https://github.com/kubernetes-sigs/aws-load-balancer-controller/blob/main/docs/deploy/subnet_discovery.md
    tags = { "kubernetes.io/cluster/${module.label.id}" = "shared" }

    # Required tags to make ALB ingress work
    # https://docs.aws.amazon.com/eks/latest/userguide/alb-ingress.html
    public_subnets_additional_tags = {
      "kubernetes.io/role/elb" : 1
    }
    private_subnets_additional_tags = {
      "kubernetes.io/role/internal-elb" : 1
    }

    # Enable the IAM user creating the cluster to administer it,
    # without using the bootstrap_cluster_creator_admin_permissions option,
    # as an example of how to use the access_entry_map feature.
    # In practice, this should be replaced with a static map of IAM users or roles
    # that should have access to the cluster, but we use the current user
    # to simplify the example.
    access_entry_map = {
      (data.aws_iam_session_context.current.issuer_arn) = {
        access_policy_associations = {
          ClusterAdmin = {}
        }
      }
    }
  }

  module "label" {
    source = "cloudposse/label/null"
    # Cloud Posse recommends pinning every module to a specific version
    # version = "x.x.x"

    namespace = var.namespace
    name      = var.name
    stage     = var.stage
    delimiter = var.delimiter

    tags = var.tags
  }

  module "vpc" {
    source = "cloudposse/vpc/aws"
    # Cloud Posse recommends pinning every module to a specific version
    # version = "x.x.x"

    ipv4_primary_cidr_block = "172.16.0.0/16"

    tags    = local.tags
    context = module.label.context
  }

  module "subnets" {
    source = "cloudposse/dynamic-subnets/aws"
    # Cloud Posse recommends pinning every module to a specific version
    # version = "x.x.x"

    availability_zones   = var.availability_zones
    vpc_id               = module.vpc.vpc_id
    igw_id               = [module.vpc.igw_id]
    ipv4_cidr_block      = [module.vpc.vpc_cidr_block]
    nat_gateway_enabled  = true
    nat_instance_enabled = false

    public_subnets_additional_tags  = local.public_subnets_additional_tags
    private_subnets_additional_tags = local.private_subnets_additional_tags

    tags    = local.tags
    context = module.label.context
  }

  module "eks_node_group" {
    source = "cloudposse/eks-node-group/aws"
    # Cloud Posse recommends pinning every module to a specific version
    # version = "x.x.x"

    instance_types    = [var.instance_type]
    subnet_ids        = module.subnets.private_subnet_ids
    health_check_type = var.health_check_type
    min_size          = var.min_size
    max_size          = var.max_size
    cluster_name      = module.eks_cluster.eks_cluster_id

    # Enable the Kubernetes cluster auto-scaler to find the auto-scaling group
    cluster_autoscaler_enabled = var.autoscaling_policies_enabled

    context = module.label.context
  }

  module "eks_cluster" {
    source = "cloudposse/eks-cluster/aws"
    # Cloud Posse recommends pinning every module to a specific version
    # version = "x.x.x"

    subnet_ids            = concat(module.subnets.private_subnet_ids, module.subnets.public_subnet_ids)
    kubernetes_version    = var.kubernetes_version
    oidc_provider_enabled = true

    addons = [
      # https://docs.aws.amazon.com/eks/latest/userguide/managing-vpc-cni.html#vpc-cni-latest-available-version
      {
        addon_name                  = "vpc-cni"
        addon_version               = var.vpc_cni_version
        resolve_conflicts_on_create = "OVERWRITE"
        resolve_conflicts_on_update = "OVERWRITE"
        # Creating this role is outside the scope of this example
        service_account_role_arn = var.vpc_cni_service_account_role_arn
      },
      # https://docs.aws.amazon.com/eks/latest/userguide/managing-kube-proxy.html
      {
        addon_name                  = "kube-proxy"
        addon_version               = var.kube_proxy_version
        resolve_conflicts_on_create = "OVERWRITE"
        resolve_conflicts_on_update = "OVERWRITE"
        service_account_role_arn    = null
      },
      # https://docs.aws.amazon.com/eks/latest/userguide/managing-coredns.html
      {
        addon_name                  = "coredns"
        addon_version               = var.coredns_version
        resolve_conflicts_on_create = "OVERWRITE"
        resolve_conflicts_on_update = "OVERWRITE"
        service_account_role_arn    = null
      },
    ]
    addons_depends_on = [module.eks_node_group]

    context = module.label.context

    cluster_depends_on = [module.subnets]
  }
  ```

  Module usage with two unmanaged worker groups:

  ```hcl
  locals {
    # Unfortunately, the `aws_ami` data source attribute `most_recent` (https://github.com/cloudposse/terraform-aws-eks-workers/blob/34a43c25624a6efb3ba5d2770a601d7cb3c0d391/main.tf#L141)
    # does not work as you might expect. If you are not going to use a custom AMI, you should
    # use the `eks_worker_ami_name_filter` variable to select the right Kubernetes version for the EKS workers;
    # otherwise an AMI for the first version of Kubernetes that AWS supported for EKS workers (v1.11) will be selected,
    # but the EKS control plane will ignore it and use one that matches the version specified by the `kubernetes_version` variable.
    eks_worker_ami_name_filter = "amazon-eks-node-${var.kubernetes_version}*"
  }

  module "eks_workers" {
    source = "cloudposse/eks-workers/aws"
    # Cloud Posse recommends pinning every module to a specific version
    # version = "x.x.x"

    attributes                         = ["small"]
    instance_type                      = "t3.small"
    eks_worker_ami_name_filter         = local.eks_worker_ami_name_filter
    vpc_id                             = module.vpc.vpc_id
    subnet_ids                         = module.subnets.public_subnet_ids
    health_check_type                  = var.health_check_type
    min_size                           = var.min_size
    max_size                           = var.max_size
    wait_for_capacity_timeout          = var.wait_for_capacity_timeout
    cluster_name                       = module.label.id
    cluster_endpoint                   = module.eks_cluster.eks_cluster_endpoint
    cluster_certificate_authority_data = module.eks_cluster.eks_cluster_certificate_authority_data
    cluster_security_group_id          = module.eks_cluster.eks_cluster_managed_security_group_id

    # Auto-scaling policies and CloudWatch metric alarms
    autoscaling_policies_enabled           = var.autoscaling_policies_enabled
    cpu_utilization_high_threshold_percent = var.cpu_utilization_high_threshold_percent
    cpu_utilization_low_threshold_percent  = var.cpu_utilization_low_threshold_percent

    context = module.label.context
  }

  module "eks_workers_2" {
    source = "cloudposse/eks-workers/aws"
    # Cloud Posse recommends pinning every module to a specific version
    # version = "x.x.x"

    attributes                         = ["medium"]
    instance_type                      = "t3.medium"
    eks_worker_ami_name_filter         = local.eks_worker_ami_name_filter
    vpc_id                             = module.vpc.vpc_id
    subnet_ids                         = module.subnets.public_subnet_ids
    health_check_type                  = var.health_check_type
    min_size                           = var.min_size
    max_size                           = var.max_size
    wait_for_capacity_timeout          = var.wait_for_capacity_timeout
    cluster_name                       = module.label.id
    cluster_endpoint                   = module.eks_cluster.eks_cluster_endpoint
    cluster_certificate_authority_data = module.eks_cluster.eks_cluster_certificate_authority_data
    cluster_security_group_id          = module.eks_cluster.eks_cluster_managed_security_group_id

    # Auto-scaling policies and CloudWatch metric alarms
    autoscaling_policies_enabled           = var.autoscaling_policies_enabled
    cpu_utilization_high_threshold_percent = var.cpu_utilization_high_threshold_percent
    cpu_utilization_low_threshold_percent  = var.cpu_utilization_low_threshold_percent

    context = module.label.context
  }

  module "eks_cluster" {
    source = "cloudposse/eks-cluster/aws"
    # Cloud Posse recommends pinning every module to a specific version
    # version = "x.x.x"

    subnet_ids            = concat(module.subnets.private_subnet_ids, module.subnets.public_subnet_ids)
    kubernetes_version    = var.kubernetes_version
    oidc_provider_enabled = true # needed for VPC CNI

    access_entries_for_nodes = {
      EC2_LINUX = [module.eks_workers.workers_role_arn, module.eks_workers_2.workers_role_arn]
    }

    context = module.label.context
  }
  ```
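
  Once the cluster is up, other Terraform resources can connect to it. One common pattern, shown here
  as a hedged sketch rather than a feature of this module (it assumes the AWS CLI is available wherever
  Terraform runs, and that the caller has been granted access, e.g. via `access_entry_map`), is to
  configure the Kubernetes provider from this module's outputs:

  ```hcl
  provider "kubernetes" {
    host                   = module.eks_cluster.eks_cluster_endpoint
    cluster_ca_certificate = base64decode(module.eks_cluster.eks_cluster_certificate_authority_data)

    exec {
      api_version = "client.authentication.k8s.io/v1beta1"
      command     = "aws"
      # `eks_cluster_id` is the cluster name; `aws eks get-token` mints a short-lived auth token
      args = ["eks", "get-token", "--cluster-name", module.eks_cluster.eks_cluster_id]
    }
  }
  ```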

  > [!WARNING]
  > Release `4.0.0` contains major breaking changes that will require you to update your existing EKS cluster
  > and configuration to use this module. Please see the [v3 to v4 migration path](./docs/migration-v3-v4.md) for more information.
  >
  > Release `2.0.0` (previously released as version `0.45.0`) contains some changes that,
  > if applied to a cluster created with an earlier version of this module,
  > could result in your existing EKS cluster being replaced (destroyed and recreated).
  > To prevent this, follow the instructions in the [v1 to v2 migration path](./docs/migration-v1-v2.md).

  > [!NOTE]
  > Prior to v4 of this module, AWS did not provide an API to manage access to the EKS cluster,
  > causing numerous challenges. With v4 of this module, it exclusively uses the AWS API, resolving
  > many issues you may read about that had affected prior versions. See the version 2 README and release notes
  > for more information on the challenges and workarounds that were required prior to v4.
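
  As the example comments above advise, production configurations should grant cluster access via a
  static map of IAM principals rather than the identity running Terraform. A minimal sketch of such a
  map (the role ARN below is a hypothetical placeholder):

  ```hcl
  module "eks_cluster" {
    source = "cloudposse/eks-cluster/aws"
    # version = "x.x.x"

    # Grant cluster-admin to a known IAM role instead of the current caller
    access_entry_map = {
      "arn:aws:iam::111111111111:role/eks-admins" = { # hypothetical role ARN
        access_policy_associations = {
          ClusterAdmin = {}
        }
      }
    }

    # ... other inputs as in the examples above ...
  }
  ```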
include:
  - docs/targets.md
  - docs/terraform.md
contributors:
  - name: "Erik Osterman"
    github: "osterman"
  - name: "Andriy Knysh"
    github: "aknysh"
  - name: "Nuru"
    github: "Nuru"