feat: examples for terraform doc (#166)
* indicates how to retrieve the role arn

* align instances count with number of AZs

* add os fgac

* introduce multi roles

* implement resourceidentifier

* add directions for password strength
leiicamundi authored Oct 28, 2024
1 parent 51cf7f9 commit f4b64f4
Showing 92 changed files with 3,639 additions and 300 deletions.
70 changes: 54 additions & 16 deletions README.md
@@ -57,7 +57,6 @@ module "opensearch_domain" {
domain_name = "my-opensearch-domain"
subnet_ids = module.eks_cluster.private_subnet_ids
security_group_ids = module.eks_cluster.security_group_ids
vpc_id = module.eks_cluster.vpc_id
cidr_blocks = concat(module.eks_cluster.private_vpc_cidr_blocks, module.eks_cluster.public_vpc_cidr_blocks)
@@ -74,6 +73,34 @@ module "opensearch_domain" {
}
```

#### Known Issues During Deletion

When deleting the EKS cluster with `terraform destroy`, you may encounter an error related to the `kubernetes_storage_class` resource:

````
Error: Get "http://localhost/apis/storage.k8s.io/v1/storageclasses/ebs-sc": dial tcp [::1]:80: connect: connection refused
│ with module.eks_cluster.kubernetes_storage_class_v1.ebs_sc,
│ on .terraform/modules/eks_cluster/modules/eks-cluster/cluster.tf line 156, in resource "kubernetes_storage_class_v1" "ebs_sc":
│ 156: resource "kubernetes_storage_class_v1" "ebs_sc" {
````

To resolve this issue, you can set the variable `create_ebs_gp3_default_storage_class` to `false`, which skips the creation of the `kubernetes_storage_class` resource. This helps to avoid dependency issues during deletion. Run the following command:

```bash
terraform destroy -var="create_ebs_gp3_default_storage_class=false"
```

If you still encounter the issue, you may need to manually remove the state for the storage class:

```bash
terraform state rm module.eks_cluster.kubernetes_storage_class_v1.ebs_sc
```

After performing these steps, re-run `terraform destroy` to complete the deletion process without further interruptions.
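
Alternatively, you can persist this setting in your Terraform configuration so the storage class is never created in the first place. A minimal sketch, assuming the EKS cluster module accepts `create_ebs_gp3_default_storage_class` as an input (as the destroy command above implies):

```hcl
module "eks_cluster" {
  source = "git::https://github.com/camunda/camunda-tf-eks-module//modules/eks-cluster?ref=3.0.0"

  name   = "cluster-name-irsa"
  region = "eu-west-2"

  # Skip the kubernetes_storage_class resource so `terraform destroy`
  # does not depend on a reachable Kubernetes API.
  create_ebs_gp3_default_storage_class = false
}
```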

#### GitHub Actions

You can automate the deployment and deletion of the EKS cluster and Aurora database using GitHub Actions.
@@ -102,8 +129,7 @@ The Aurora module uses the following outputs from the EKS cluster module to defi
- `module.eks_cluster.oidc_provider_arn`: The ARN of the OIDC provider for the EKS cluster.
- `module.eks_cluster.oidc_provider_id`: The ID of the OIDC provider for the EKS cluster.
- `var.account_id`: Your AWS account id
- `var.aurora_cluster_name`: The name of the Aurora cluster to access
- `var.aurora_region`: Your Aurora AWS Region
- `var.aurora_irsa_username`: The username used to access AuroraDB. This username is different from the superuser. The user must also be created manually in the database to enable the IRSA connection, as described in [the steps below](#create-irsa-user-on-the-database).
- `var.aurora_namespace`: The Kubernetes namespace to allow access
- `var.aurora_service_account`: The Kubernetes ServiceAccount to allow access
@@ -113,7 +139,15 @@ You need to define the IAM role trust policy and access policy for Aurora. Here'
```hcl
module "postgresql" {
# ...
iam_aurora_access_policy = <<EOF
iam_roles_with_policies = [
{
role_name = "AuroraRole-your-cluster" # ensure uniqueness of this one
# Source: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.IAMPolicy.html
# Since the DbiResourceId may be unknown during the apply process, and because each instance of the RDS cluster contains its own DbiResourceId,
# we use the wildcard `dbuser:*` to apply to all database instances.
access_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
@@ -122,13 +156,13 @@ module "postgresql" {
"Action": [
"rds-db:connect"
],
"Resource": "arn:aws:rds-db:${module.eks_cluster.region}:${var.account_id}:dbuser:${var.aurora_cluster_name}/${var.aurora_irsa_username}"
"Resource": "arn:aws:rds-db:${var.aurora_region}:${var.account_id}:dbuser:*/${var.aurora_irsa_username}"
}
]
}
EOF
iam_role_trust_policy = <<EOF
trust_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
@@ -147,9 +181,9 @@ EOF
]
}
EOF
}
]
iam_aurora_role_name = "AuroraRole-your-cluster" # ensure uniqueness of this one
iam_create_aurora_role = true
iam_auth_enabled = true
# ...
}
@@ -164,7 +198,6 @@ echo "Creating IRSA DB user using admin user"
psql -h $AURORA_ENDPOINT -p $AURORA_PORT "sslmode=require dbname=$AURORA_DB_NAME user=$AURORA_USERNAME password=$AURORA_PASSWORD" \
-c "CREATE USER \"${AURORA_USERNAME_IRSA}\" WITH LOGIN;" \
-c "GRANT rds_iam TO \"${AURORA_USERNAME_IRSA}\";" \
-c "GRANT rds_superuser TO \"${AURORA_USERNAME_IRSA}\";" \
-c "GRANT ALL PRIVILEGES ON DATABASE \"${AURORA_DB_NAME}\" TO \"${AURORA_USERNAME_IRSA}\";" \
-c "SELECT aurora_version();" \
-c "SELECT version();" -c "\du"
@@ -181,16 +214,18 @@ The OpenSearch module uses the following outputs from the EKS cluster module to
- `module.eks_cluster.oidc_provider_arn`: The ARN of the OIDC provider for the EKS cluster.
- `module.eks_cluster.oidc_provider_id`: The ID of the OIDC provider for the EKS cluster.
- `var.account_id`: Your AWS account id
- `var.opensearch_region`: Your OpenSearch AWS Region
- `var.opensearch_domain_name`: The name of the OpenSearch domain to access
- `var.opensearch_namespace`: The Kubernetes namespace to allow access
- `var.opensearch_service_account`: The Kubernetes ServiceAccount to allow access

```hcl
module "opensearch_domain" {
# ...
iam_create_opensearch_role = true
iam_opensearch_role_name = "OpenSearchRole-your-cluster" # ensure uniqueness of this one
iam_opensearch_access_policy = <<EOF
iam_roles_with_policies = [
{
role_name = "OpenSearchRole-your-cluster" # ensure uniqueness of this one
access_policy =<<EOF
{
"Version": "2012-10-17",
"Statement": [
@@ -201,13 +236,13 @@ module "opensearch_domain" {
"es:ESHttpPut",
"es:ESHttpPost"
],
"Resource": "arn:aws:es:${module.eks_cluster.region}:${var.account_id}:domain/${var.opensearch_domain_name}/*"
"Resource": "arn:aws:es:${var.opensearch_region}:${var.account_id}:domain/${var.opensearch_domain_name}/*"
}
]
}
EOF
iam_role_trust_policy = <<EOF
trust_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
@@ -226,6 +261,9 @@ EOF
]
}
EOF
}
]
# ...
}
```
@@ -245,7 +283,7 @@ metadata:
annotations:
eks.amazonaws.com/role-arn: <arn:aws:iam::<YOUR-ACCOUNT-ID>:role/AuroraRole>
```
You can retrieve the role ARN from the module output: `aurora_role_arn`.
You can retrieve the role ARN from the module output: `aurora_iam_role_arns["AuroraRole-your-cluster"]`.
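
If you prefer not to copy the value by hand, you can expose the ARN from your own Terraform configuration. A minimal sketch, assuming `aurora_iam_role_arns` is a map output keyed by the role name defined earlier:

```hcl
output "aurora_irsa_role_arn" {
  description = "IAM role ARN to set in the Aurora Service Account annotation"
  value       = module.postgresql.aurora_iam_role_arns["AuroraRole-your-cluster"]
}
```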

**OpenSearch Service Account**

@@ -258,7 +296,7 @@ metadata:
annotations:
eks.amazonaws.com/role-arn: <arn:aws:iam::<YOUR-ACCOUNT-ID>:role/OpenSearchRole>
```
You can retrieve the role ARN from the module output: `opensearch_role_arn`.
You can retrieve the role ARN from the module output: `opensearch_iam_role_arns["OpenSearchRole-your-cluster"]`.
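
As with Aurora, the ARN can be exposed as a Terraform output instead of being copied manually. A minimal sketch, assuming `opensearch_iam_role_arns` is keyed by the role name defined earlier:

```hcl
output "opensearch_irsa_role_arn" {
  description = "IAM role ARN to set in the OpenSearch Service Account annotation"
  value       = module.opensearch_domain.opensearch_iam_role_arns["OpenSearchRole-your-cluster"]
}
```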

## Support

4 changes: 4 additions & 0 deletions examples/camunda-8.6-irsa/README.md
@@ -0,0 +1,4 @@
# Camunda 8.6 on AWS EKS with IRSA

This folder contains the infrastructure as code (IaC) for deploying Camunda 8.6 on AWS EKS with IRSA.
Instructions can be found in the official documentation: https://docs.camunda.io/docs/self-managed/setup/deploy/amazon/amazon-eks/eks-terraform/
29 changes: 29 additions & 0 deletions examples/camunda-8.6-irsa/cluster.tf
@@ -0,0 +1,29 @@
locals {
eks_cluster_name = "cluster-name-irsa" # Change this to a name of your choice
eks_cluster_region = "eu-west-2" # Change this to your desired AWS region
}

module "eks_cluster" {
source = "git::https://github.com/camunda/camunda-tf-eks-module//modules/eks-cluster?ref=3.0.0"

name = local.eks_cluster_name
region = local.eks_cluster_region

# Set CIDR ranges or use the defaults
cluster_service_ipv4_cidr = "10.190.0.0/16"
cluster_node_ipv4_cidr = "10.192.0.0/16"

# Default node type for the Kubernetes cluster
np_instance_types = ["m6i.xlarge"]
np_desired_node_count = 4
}

output "cert_manager_arn" {
value = module.eks_cluster.cert_manager_arn
description = "The Amazon Resource Name (ARN) of the AWS IAM Roles for Service Account mapping for the cert-manager"
}

output "external_dns_arn" {
value = module.eks_cluster.external_dns_arn
description = "The Amazon Resource Name (ARN) of the AWS IAM Roles for Service Account mapping for the external-dns"
}
17 changes: 17 additions & 0 deletions examples/camunda-8.6-irsa/config.tf
@@ -0,0 +1,17 @@
terraform {
required_version = ">= 1.0"

# You can override the backend configuration; this is given as an example.
backend "s3" {
encrypt = true
}
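# The bucket, key, and region for this partial backend configuration are not
# hardcoded here; supply them at init time, for example (placeholder values):
#   terraform init \
#     -backend-config="bucket=my-terraform-state-bucket" \
#     -backend-config="key=camunda/terraform.tfstate" \
#     -backend-config="region=eu-west-2"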

required_providers {
aws = {
source = "hashicorp/aws"
version = "~> 5.69"
}
}
}

provider "aws" {}