diff --git a/bootstrap/terraform-fully-private/README.md b/bootstrap/terraform-fully-private/README.md
new file mode 100644
index 00000000..1993443a
--- /dev/null
+++ b/bootstrap/terraform-fully-private/README.md
@@ -0,0 +1,198 @@
+# EKS Cluster bootstrap with Terraform for Crossplane
+
+This example deploys the following components:
+- Creates a new sample VPC with Three Private Subnets and the required VPC Endpoints
+- Creates an EKS Cluster Control plane with one managed node group
+- Crossplane Add-on to EKS Cluster
+- Upbound AWS Provider for Crossplane
+- AWS Provider for Crossplane
+- Kubernetes Provider for Crossplane
+- Helm Provider for Crossplane
+
+> [!IMPORTANT]
+> Some AWS services, such as IAM and WAFv2, do not have VPC endpoints. To ensure that these services work correctly with Crossplane providers, you need to add a proxy. See an example [here](https://github.com/awslabs/crossplane-on-eks/blob/main/bootstrap/terraform-fully-private/providers/upjet-aws/runtime-config.yaml).
+
+## Crossplane Deployment Design
+
+```mermaid
+graph TD;
+    subgraph AWS Cloud
+    id1(VPC)-->Private-Subnet1;
+    id1(VPC)-->Private-Subnet2;
+    id1(VPC)-->Private-Subnet3;
+    Private-Subnet1-->EKS{{"EKS #9829;"}}
+    Private-Subnet2-->EKS{{"EKS #9829;"}}
+    Private-Subnet3-->EKS{{"EKS #9829;"}}
+    EKS==>ManagedNodeGroup;
+    ManagedNodeGroup-->|enable_crossplane=true|id2([Crossplane]);
+    subgraph Kubernetes Add-ons
+    id2([Crossplane])-.->|enable_aws_provider=false|id3([AWS-Provider]);
+    id2([Crossplane])-.->|enable_upjet_aws_provider=true|id4([Upbound-AWS-Provider]);
+    id2([Crossplane])-.->|enable_kubernetes_provider=true|id5([Kubernetes-Provider]);
+    id2([Crossplane])-.->|enable_helm_provider=true|id6([Helm-Provider]);
+    end
+    end
+```
+
+## How to Deploy
+
+### Prerequisites:
+Ensure that you have installed the following tools on your machine before working with this module and running Terraform plan and apply:
+
+1. [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html)
+2. [Kubectl](https://kubernetes.io/docs/tasks/tools/)
+3. [Terraform >= v1.0.0](https://learn.hashicorp.com/tutorials/terraform/install-cli)
+4. [jq](https://stedolan.github.io/jq/download/) - Command-line JSON processor
+5. [crane](https://github.com/google/go-containerregistry/blob/main/cmd/crane/README.md) - Tool for interacting with container registries
+
+These tools are necessary to execute the scripts and manage your Crossplane images efficiently. Make sure you follow the installation instructions provided in the links for each tool.
+
+
+### Troubleshooting
+1. If `terraform apply` errors out after creating the cluster when trying to apply the Helm charts, run the following command:
+```shell
+aws eks --region <region> update-kubeconfig --name <cluster-name> --alias <cluster-alias>
+```
+and then execute `terraform apply` again.
+
+2. Make sure you have upgraded to the latest version of the AWS CLI and that your AWS credentials are properly configured.
+
+### Deployment Steps
+#### Step 0: Clone the repo using the command below
+
+```shell script
+git clone https://github.com/awslabs/crossplane-on-eks.git
+```
+
+> [!IMPORTANT]
+> The examples in this repository make use of one of the Crossplane AWS providers.
For that reason `upbound_aws_provider.enable` is set to `true` and `aws_provider.enable` is set to `false`. If you use the examples for `aws_provider`, adjust the terraform [main.tf](https://github.com/awslabs/crossplane-on-eks/blob/main/bootstrap/terraform/main.tf) in order to install only the necessary CRDs to the Kubernetes cluster.
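+
+Alternatively, you don't have to edit `main.tf` to switch providers: the toggles are exposed as Terraform variables (`enable_upjet_aws_provider`, `enable_aws_provider`, `enable_kubernetes_provider`, `enable_helm_provider`, all referenced in `main.tf`), so you can flip them at plan/apply time. A minimal sketch:
+
+```shell script
+# Sketch only: enable the community AWS provider and disable the Upbound family
+terraform plan \
+  -var="enable_aws_provider=true" \
+  -var="enable_upjet_aws_provider=false"
+```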
+
+#### Step 1: ECR settings
+
+> [!IMPORTANT]
+> You need to use the same org/repository for the Crossplane images. Crossplane only allows packages in the same OCI registry and org to be part of the same family. This prevents a malicious provider from declaring itself part of a family and thus being granted RBAC access to its types. [Source Code Reference.](https://github.com/crossplane/crossplane/blob/master/internal/controller/rbac/provider/roles/reconciler.go#L300-L307)
+
+Note: You can change the default `us-east-1` region in the following scripts before executing them.
+
+To create the Crossplane private ECR repos, run the following script:
+
+```
+./scripts/create-crossplane-ecr-repos.sh
+```
+
+> [!IMPORTANT]
+> There is currently a bug when using `docker pull`, `docker tag`, and `docker push` where the annotated layer information may be dropped. To avoid this issue, we need to use `crane` instead. For more details, you can refer to this issue: [crossplane/crossplane#5785](https://github.com/crossplane/crossplane/issues/5785).
+
+To copy the Crossplane images to a private ECR, run the following script:
+
+```shell script
+./scripts/copy-images-to-ecr.sh
+```
+
+#### Step 2: Run Terraform INIT
+Initialize a working directory with configuration files:
+
+```shell script
+cd bootstrap/terraform-fully-private/
+terraform init
+```
+
+#### Step 3: Run Terraform PLAN
+If your ECR repo is in a different account or region than the one Terraform is targeting, you can adjust the variables.tf file:
+
+```
+variable "ecr_account_id" {
+  type        = string
+  description = "ECR repository AWS Account ID"
+  default     = "" # defaults to the current account ID
+}
+
+variable "ecr_region" {
+  type        = string
+  description = "ECR repository AWS Region"
+  default     = "" # defaults to var.region
+}
+```
+
+Run Terraform plan:
+```shell script
+export TF_VAR_region=<region> # optional; if omitted, the default in variables.tf is used
+terraform plan
+```
+
+#### Step 4: Finally, Terraform APPLY
+To create resources:
+
+```shell script
+terraform apply -var='docker_secret={"username":"your-docker-username", "accessToken":"your-docker-password"}' --auto-approve
+```
+
+### Configure `kubectl` and test cluster
+EKS Cluster details, such as the cluster name, can be extracted from the Terraform output or from the AWS Console. The command in Step 5 below updates the `kubeconfig` on your local machine so you can run `kubectl` commands against your EKS Cluster.
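+
+Tip: this stack also defines a `configure_kubectl` Terraform output (see `outputs.tf`) that prints the exact command, pre-filled for this cluster:
+
+```shell script
+terraform output -raw configure_kubectl
+```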
+
+#### Step 5: Run `update-kubeconfig` command
+
+The following command updates your `~/.kube/config` file with the cluster details and certificate:
+```shell script
+aws eks --region <region> update-kubeconfig --name <cluster-name> --alias <cluster-alias>
+```
+#### Step 6: List all the worker nodes by running the command below
+```shell script
+kubectl get nodes
+```
+#### Step 7: Verify the pods running in `crossplane-system` namespace
+```shell script
+kubectl get pods -n crossplane-system
+```
+#### Step 8: Verify the providers and provider configs
+Run the following command to get the list of providers:
+```shell script
+kubectl get providers
+```
+The expected output looks like this (in this fully private setup, the `PACKAGE` column will reference your private ECR registry rather than `xpkg.upbound.io`):
+```
+NAME                   INSTALLED   HEALTHY   PACKAGE                                                          AGE
+aws-provider           True        True      xpkg.upbound.io/crossplane-contrib/provider-aws:v0.36.0         36m
+kubernetes-provider    True        True      xpkg.upbound.io/crossplane-contrib/provider-kubernetes:v0.6.0   36m
+provider-helm          True        True      xpkg.upbound.io/crossplane-contrib/provider-helm:v0.13.0        36m
+upbound-aws-provider   True        True      xpkg.upbound.io/upbound/provider-aws:v0.27.0                    36m
+```
+Run the following command to get the list of provider configs:
+```shell script
+kubectl get providerconfigs
+```
+The expected output looks like this:
+```
+NAME                                                   AGE
+providerconfig.aws.crossplane.io/aws-provider-config   36m
+
+NAME                                        AGE
+providerconfig.helm.crossplane.io/default   36m
+
+NAME                                                                  AGE
+providerconfig.kubernetes.crossplane.io/kubernetes-provider-config    36m
+```
+
+#### Step 9: Access the ArgoCD UI
+Get the load balancer URL:
+```
+kubectl -n argocd get service argo-cd-argocd-server -o jsonpath="{.status.loadBalancer.ingress[*].hostname}{'\n'}"
+```
+Copy and paste the result in your browser. Note that the ArgoCD service is provisioned with an internal load balancer scheme, so the URL is reachable only from within the VPC network.
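+
+If you have access to the Kubernetes API but no network path to the internal load balancer, port-forwarding is an alternative (a sketch, assuming the chart's default service port of 443):
+
+```shell script
+kubectl -n argocd port-forward service/argo-cd-argocd-server 8080:443
+```
+Then browse to `https://localhost:8080`.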
+The initial username is `admin`. The password is autogenerated; you can get it by running the following command:
+
+```
+echo "$(kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d)"
+```
+
+## Clean up
+1. Delete the resources created by Crossplane in order: first Claims, then XRDs and Compositions.
+
+2. See faq.md for more details on cleaning up the resources created.
+
+3. Delete the EKS cluster resources and ECR repositories with the following script:
+
+```bash
+./scripts/cleanup.sh
+```
diff --git a/bootstrap/terraform-fully-private/config/environmentconfig.yaml b/bootstrap/terraform-fully-private/config/environmentconfig.yaml
new file mode 100644
index 00000000..fa838164
--- /dev/null
+++ b/bootstrap/terraform-fully-private/config/environmentconfig.yaml
@@ -0,0 +1,8 @@
+apiVersion: apiextensions.crossplane.io/v1alpha1
+kind: EnvironmentConfig
+metadata:
+  name: cluster
+data:
+  awsAccountID: "${awsAccountID}"
+  eksOIDC: ${eksOIDC}
+  vpcID: ${vpcID}
diff --git a/bootstrap/terraform-fully-private/ecr.tf b/bootstrap/terraform-fully-private/ecr.tf
new file mode 100644
index 00000000..4b700f5f
--- /dev/null
+++ b/bootstrap/terraform-fully-private/ecr.tf
@@ -0,0 +1,53 @@
+locals {
+  ecr_account_id = var.ecr_account_id != "" ? var.ecr_account_id : data.aws_caller_identity.current.account_id
+  ecr_region     = var.ecr_region != "" ? var.ecr_region : local.region
+}
+
+module "secrets-manager" {
+  source  = "terraform-aws-modules/secrets-manager/aws"
+  version = "1.1.2"
+
+  name          = "ecr-pullthroughcache/docker"
+  secret_string = jsonencode(var.docker_secret)
+}
+
+module "ecr" {
+  source  = "terraform-aws-modules/ecr/aws"
+  version = "2.2.1"
+
+  create_repository = false
+
+  registry_pull_through_cache_rules = {
+    ecr = {
+      ecr_repository_prefix = "ecr"
+      upstream_registry_url = "public.ecr.aws"
+    }
+    k8s = {
+      ecr_repository_prefix = "k8s"
+      upstream_registry_url = "registry.k8s.io"
+    }
+    quay = {
+      ecr_repository_prefix = "quay"
+      upstream_registry_url = "quay.io"
+    }
+    dockerhub = {
+      ecr_repository_prefix = "docker-hub"
+      upstream_registry_url = "registry-1.docker.io"
+      credential_arn        = module.secrets-manager.secret_arn
+    }
+  }
+
+  manage_registry_scanning_configuration = true
+  registry_scan_type                     = "BASIC"
+  registry_scan_rules = [
+    {
+      scan_frequency = "SCAN_ON_PUSH"
+      filter = [
+        {
+          filter      = "*"
+          filter_type = "WILDCARD"
+        },
+      ]
+    }
+  ]
+}
\ No newline at end of file
diff --git a/bootstrap/terraform-fully-private/main.tf b/bootstrap/terraform-fully-private/main.tf
new file mode 100644
index 00000000..bfd069d5
--- /dev/null
+++ b/bootstrap/terraform-fully-private/main.tf
@@ -0,0 +1,728 @@
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
+# SPDX-License-Identifier: Apache-2.0
+
+provider "aws" {
+  region = local.region
+}
+
+provider "kubernetes" {
+  host                   = module.eks.cluster_endpoint
+  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
+  exec {
+    api_version = "client.authentication.k8s.io/v1beta1"
+    args        = ["eks", "get-token", "--cluster-name", local.name, "--region", local.region]
+    command     = "aws"
+  }
+}
+
+provider "helm" {
+  kubernetes {
+    host                   = module.eks.cluster_endpoint
+    cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
+    exec {
+      api_version = "client.authentication.k8s.io/v1beta1"
+      args        = ["eks", "get-token", "--cluster-name", local.name, "--region", local.region]
+      command     = "aws"
+    }
+  }
+}
+
+provider "kubectl" {
+  host                   = module.eks.cluster_endpoint
+  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)
+  exec {
+    api_version = "client.authentication.k8s.io/v1beta1"
+    args        = ["eks", "get-token", "--cluster-name", local.name, "--region", local.region]
+    command     = "aws"
+  }
+  load_config_file  = false
+  apply_retry_count = 15
+}
+
+data "aws_caller_identity" "current" {}
+data "aws_availability_zones" "available" {}
+
+locals {
+  name   = var.name
+  region = var.region
+
+  cluster_version = var.cluster_version
+  cluster_name    = local.name
+
+  vpc_name = local.name
+  vpc_cidr = "10.0.0.0/16"
+  azs      = slice(data.aws_availability_zones.available.names, 0, 3)
+
+  # Be sure to adapt these security group rules to your network configuration and security requirements
+  # for maintaining a fully private EKS Cluster environment.
+  eks_security_group_rules = [
+    {
+      description = "Fully private EKS Cluster - Allow port 443 to 443 from 10.0.0.0/8"
+      protocol    = "tcp"
+      from_port   = 443
+      to_port     = 443
+      type        = "ingress"
+      cidr_blocks = ["10.0.0.0/8"] # This CIDR range is defined in RFC 1918 and is used for private network communication.
+    },
+    {
+      description = "Fully private EKS Cluster - Allow port 443 to 443 from 172.16.0.0/12"
+      protocol    = "tcp"
+      from_port   = 443
+      to_port     = 443
+      type        = "ingress"
+      cidr_blocks = ["172.16.0.0/12"] # This CIDR range is defined in RFC 1918 and is used for private network communication.
+    },
+    {
+      description = "Fully private EKS Cluster - Allow port 443 to 443 from 192.168.0.0/16"
+      protocol    = "tcp"
+      from_port   = 443
+      to_port     = 443
+      type        = "ingress"
+      cidr_blocks = ["192.168.0.0/16"] # This CIDR range is defined in RFC 1918 and is used for private network communication.
+    }
+  ]
+
+  tags = {
+    Blueprint  = local.name
+    GithubRepo = "github.com/awslabs/crossplane-on-eks"
+  }
+}
+
+#---------------------------------------------------------------
+# EBS CSI Driver Role
+#---------------------------------------------------------------
+
+module "ebs_csi_driver_irsa" {
+  source  = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"
+  version = "~> 5.30"
+
+  role_name = "${local.name}-ebs-csi-driver"
+
+  attach_ebs_csi_policy = true
+
+  oidc_providers = {
+    main = {
+      provider_arn               = module.eks.oidc_provider_arn
+      namespace_service_accounts = ["kube-system:ebs-csi-controller-sa"]
+    }
+  }
+
+  tags = local.tags
+}
+
+#---------------------------------------------------------------
+# EKS Cluster
+#---------------------------------------------------------------
+
+module "eks" {
+  source  = "terraform-aws-modules/eks/aws"
+  version = "~> 20.0"
+
+  cluster_name                    = local.name
+  cluster_version                 = local.cluster_version
+  cluster_endpoint_public_access  = true
+  cluster_endpoint_private_access = true
+  kms_key_enable_default_policy   = true
+
+  # Give the Terraform identity admin access to the cluster
+  # which will allow resources to be deployed into the cluster
+  enable_cluster_creator_admin_permissions = true
+
+  vpc_id     = module.vpc.vpc_id
+  subnet_ids = module.vpc.private_subnets
+
+  cluster_security_group_additional_rules = {
+    for k, v in local.eks_security_group_rules :
+    k => {
+      protocol    = try(v.protocol)
+      from_port   = try(v.from_port)
+      to_port     = try(v.to_port)
+      type        = try(v.type)
+      cidr_blocks = try(v.cidr_blocks)
+      description = try(v.description)
+    }
+  }
+
+  cluster_addons = {
+    aws-ebs-csi-driver = {
+      most_recent              = true
+      service_account_role_arn = module.ebs_csi_driver_irsa.iam_role_arn
+    }
+    coredns = {
+      most_recent = true
+    }
+    kube-proxy = {
+      most_recent = true
+    }
+    vpc-cni = {
+      before_compute = true # Ensure the addon is configured before compute resources are created
+      most_recent    = true
+    }
+  }
+
+  # For a production cluster, add a separate node group for add-ons that should not be interrupted, such as CoreDNS
+  eks_managed_node_groups = {
+    initial = {
+      instance_types = ["m6i.large", "m5.large", "m5n.large", "m5zn.large"]
+      capacity_type  = var.capacity_type # defaults to SPOT
+      min_size       = 1
+      max_size       = 5
+      desired_size   = 3
+      subnet_ids     = module.vpc.private_subnets
+      iam_role_additional_policies = {
+        AmazonEC2ContainerRegistryReadOnly = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"
+        additional                         = aws_iam_policy.ecrpullthroughcache.arn
+      }
+    }
+  }
+
+  tags = local.tags
+
+  depends_on = [module.vpc]
+}
+
+resource "aws_iam_policy" "ecrpullthroughcache" {
+  name = "ECRPullThroughCache"
+
+  policy = jsonencode({
+    Version = "2012-10-17"
+    Statement = [
+      {
+        Action = [
+          "ecr:CreateRepository",
+          "ecr:BatchImportUpstreamImage",
+          "ecr:TagResource"
+        ]
+        Effect   = "Allow"
+        Resource = "*"
+      },
+    ]
+  })
+
+  tags = local.tags
+}
+
+#---------------------------------------------------------------
+# EKS Addons
+#---------------------------------------------------------------
+
+module "eks_blueprints_addons" {
+  source  = "aws-ia/eks-blueprints-addons/aws"
+  version = "~> 1.16"
+
+  cluster_name      = module.eks.cluster_name
+  cluster_endpoint  = module.eks.cluster_endpoint
+  cluster_version   = module.eks.cluster_version
+  oidc_provider_arn = module.eks.oidc_provider_arn
+
+  enable_argocd = true
+  argocd = {
+    namespace     = "argocd"
+    chart_version = "7.1.0" # ArgoCD v2.11.2
+    values = [
+      templatefile("${path.module}/values/argocd.yaml", {
+
crossplane_aws_provider_enable = local.aws_provider.enable + crossplane_upjet_aws_provider_enable = local.upjet_aws_provider.enable + crossplane_kubernetes_provider_enable = local.kubernetes_provider.enable + ecr_account_id = local.ecr_account_id + ecr_region = local.ecr_region + })] + } + + enable_metrics_server = true + metrics_server = { + values = [ + templatefile("${path.module}/values/metrics-server.yaml", { + ecr_account_id = local.ecr_account_id + ecr_region = local.ecr_region + })] + } + + enable_aws_load_balancer_controller = true + aws_load_balancer_controller = { + values = [ + templatefile("${path.module}/values/aws-load-balancer-controller.yaml", { + ecr_account_id = local.ecr_account_id + ecr_region = local.ecr_region + })] + } + + enable_kube_prometheus_stack = true + kube_prometheus_stack = { + values = [ + templatefile("${path.module}/values/prometheus.yaml", { + ecr_account_id = local.ecr_account_id + ecr_region = local.ecr_region + })] + } + + depends_on = [module.eks.cluster_addons] +} + +resource "time_sleep" "addons_wait_60_seconds" { + create_duration = "60s" + depends_on = [module.eks_blueprints_addons] +} + +#--------------------------------------------------------------- +# Gatekeeper +#--------------------------------------------------------------- +module "gatekeeper" { + source = "aws-ia/eks-blueprints-addon/aws" + version = "1.1.1" + + name = "gatekeeper" + description = "A Helm chart to deploy gatekeeper project" + namespace = "gatekeeper-system" + create_namespace = true + chart = "gatekeeper" + chart_version = "3.16.3" + repository = "https://open-policy-agent.github.io/gatekeeper/charts" + values = [ + templatefile("${path.module}/values/gatekeeper.yaml", { + ecr_account_id = local.ecr_account_id + ecr_region = local.ecr_region + })] + + depends_on = [time_sleep.addons_wait_60_seconds] +} + +#--------------------------------------------------------------- +# Crossplane +#--------------------------------------------------------------- +module "crossplane" { + source = "aws-ia/eks-blueprints-addon/aws" + version = "1.1.1" + + name = "crossplane" + description = "A Helm chart to deploy crossplane project" + namespace = "crossplane-system" + create_namespace = true + chart = "crossplane" + chart_version = "1.16.0" + repository = "https://charts.crossplane.io/stable/" + timeout = "600" + values = [ + templatefile("${path.module}/values/crossplane.yaml", { + ecr_account_id = local.ecr_account_id + ecr_region = local.ecr_region + })] + + depends_on = [time_sleep.addons_wait_60_seconds] +} + +resource "kubectl_manifest" "environmentconfig" { + yaml_body = templatefile("${path.module}/config/environmentconfig.yaml", { + awsAccountID = data.aws_caller_identity.current.account_id + eksOIDC = module.eks.oidc_provider + vpcID = module.vpc.vpc_id + }) + + depends_on = [module.crossplane] +} + +#--------------------------------------------------------------- +# Crossplane Providers Settings +#--------------------------------------------------------------- +locals { + crossplane_namespace = "crossplane-system" + + upjet_aws_provider = { + enable = var.enable_upjet_aws_provider # defaults to true + version = "v1.6.0" + runtime_config = "upjet-aws-runtime-config" + provider_config_name = "aws-provider-config" #this is the providerConfigName used in all the examples in this repo + families = [ + "dynamodb", + "ec2", + "elasticache", + "iam", + "kms", + "lambda", + "rds", + "s3", + "sns", + "sqs", + "vpc", + "apigateway", + "cloudwatch", + "cloudwatchlogs" + ] + } + + 
aws_provider = { + enable = var.enable_aws_provider # defaults to false + version = "v0.48.0" + name = "aws-provider" + runtime_config = "aws-runtime-config" + provider_config_name = "aws-provider-config" #this is the providerConfigName used in all the examples in this repo + } + + kubernetes_provider = { + enable = var.enable_kubernetes_provider # defaults to true + version = "v0.13.0" + service_account = "kubernetes-provider" + name = "kubernetes-provider" + runtime_config = "kubernetes-runtime-config" + provider_config_name = "default" + cluster_role = "cluster-admin" + } + + helm_provider = { + enable = var.enable_helm_provider # defaults to true + version = "v0.18.1" + service_account = "helm-provider" + name = "helm-provider" + runtime_config = "helm-runtime-config" + provider_config_name = "default" + cluster_role = "cluster-admin" + } + +} + +#--------------------------------------------------------------- +# Crossplane Upjet AWS Provider +#--------------------------------------------------------------- +module "upjet_irsa_aws" { + count = local.upjet_aws_provider.enable == true ? 1 : 0 + source = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks" + version = "~> 5.30" + + role_name_prefix = "${local.name}-upjet-aws-" + assume_role_condition_test = "StringLike" + + role_policy_arns = { + policy = "arn:aws:iam::aws:policy/AdministratorAccess" + } + + oidc_providers = { + main = { + provider_arn = module.eks.oidc_provider_arn + namespace_service_accounts = ["${local.crossplane_namespace}:provider-upjet-aws-*"] + } + } + + tags = local.tags + + depends_on = [module.crossplane] +} + +resource "kubectl_manifest" "upjet_aws_runtime_config" { + count = local.upjet_aws_provider.enable == true ? 1 : 0 + yaml_body = templatefile("${path.module}/providers/upjet-aws/runtime-config.yaml", { + iam-role-arn = module.upjet_irsa_aws[0].iam_role_arn + runtime-config = local.upjet_aws_provider.runtime_config + }) + + depends_on = [module.crossplane] +} + +resource "kubectl_manifest" "upjet_provider_family_aws" { + count = local.upjet_aws_provider.enable == true ? 1 : 0 + yaml_body = templatefile("${path.module}/providers/upjet-aws/provider-family-aws.yaml", { + version = local.upjet_aws_provider.version + runtime-config = local.upjet_aws_provider.runtime_config + ecr_account_id = local.ecr_account_id + ecr_region = local.ecr_region + }) + + depends_on = [kubectl_manifest.upjet_aws_runtime_config, module.crossplane] +} + +# Wait for the Upbound AWS Family Provider to be fully created. +resource "time_sleep" "upjet_family_wait_60_seconds" { + count = local.upjet_aws_provider.enable == true ? 1 : 0 + create_duration = "60s" + + depends_on = [kubectl_manifest.upjet_provider_family_aws, module.crossplane] +} + +resource "kubectl_manifest" "upjet_aws_provider" { + for_each = local.upjet_aws_provider.enable ? toset(local.upjet_aws_provider.families) : toset([]) + yaml_body = templatefile("${path.module}/providers/upjet-aws/provider.yaml", { + family = each.key + version = local.upjet_aws_provider.version + runtime-config = local.upjet_aws_provider.runtime_config + ecr_account_id = local.ecr_account_id + ecr_region = local.ecr_region + }) + + depends_on = [time_sleep.upjet_family_wait_60_seconds, module.crossplane] +} + +# Wait for the Upbound AWS Provider CRDs to be fully created before initiating upjet_aws_provider_config +resource "time_sleep" "upjet_wait_60_seconds" { + count = local.upjet_aws_provider.enable == true ? 
1 : 0 + create_duration = "60s" + + depends_on = [kubectl_manifest.upjet_aws_provider, module.crossplane] +} + +resource "kubectl_manifest" "upjet_aws_provider_config" { + count = local.upjet_aws_provider.enable == true ? 1 : 0 + yaml_body = templatefile("${path.module}/providers/upjet-aws/provider-config.yaml", { + provider-config-name = local.upjet_aws_provider.provider_config_name + }) + + depends_on = [kubectl_manifest.upjet_aws_provider, time_sleep.upjet_wait_60_seconds, module.crossplane] +} + +#--------------------------------------------------------------- +# Crossplane AWS Provider +#--------------------------------------------------------------- +module "irsa_aws_provider" { + count = local.aws_provider.enable == true ? 1 : 0 + source = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks" + version = "~> 5.30" + + role_name_prefix = "${local.name}-aws-provider-" + assume_role_condition_test = "StringLike" + + role_policy_arns = { + policy = "arn:aws:iam::aws:policy/AdministratorAccess" + } + + oidc_providers = { + main = { + provider_arn = module.eks.oidc_provider_arn + namespace_service_accounts = ["${local.crossplane_namespace}:aws-provider-*"] + } + } + + tags = local.tags + + depends_on = [module.crossplane] +} + +resource "kubectl_manifest" "aws_runtime_config" { + count = local.aws_provider.enable == true ? 1 : 0 + yaml_body = templatefile("${path.module}/providers/aws/runtime-config.yaml", { + iam-role-arn = module.irsa_aws_provider[0].iam_role_arn + runtime-config = local.aws_provider.runtime_config + }) + + depends_on = [module.crossplane] +} + +resource "kubectl_manifest" "aws_provider" { + count = local.aws_provider.enable == true ? 1 : 0 + yaml_body = templatefile("${path.module}/providers/aws/provider.yaml", { + aws-provider-name = local.aws_provider.name + version = local.aws_provider.version + runtime-config = local.aws_provider.runtime_config + ecr_account_id = local.ecr_account_id + ecr_region = local.ecr_region + }) + + depends_on = [kubectl_manifest.aws_runtime_config, module.crossplane] +} + +# Wait for the Upbound AWS Provider CRDs to be fully created before initiating aws_provider_config +resource "time_sleep" "aws_wait_60_seconds" { + count = local.aws_provider.enable == true ? 1 : 0 + create_duration = "60s" + + depends_on = [kubectl_manifest.aws_provider, module.crossplane] +} + +resource "kubectl_manifest" "aws_provider_config" { + count = local.aws_provider.enable == true ? 1 : 0 + yaml_body = templatefile("${path.module}/providers/aws/provider-config.yaml", { + provider-config-name = local.aws_provider.provider_config_name + }) + + depends_on = [kubectl_manifest.aws_provider, time_sleep.aws_wait_60_seconds, module.crossplane] +} + +#--------------------------------------------------------------- +# Crossplane Kubernetes Provider +#--------------------------------------------------------------- +resource "kubernetes_service_account_v1" "kubernetes_runtime" { + count = local.kubernetes_provider.enable == true ? 1 : 0 + metadata { + name = local.kubernetes_provider.service_account + namespace = local.crossplane_namespace + } + + depends_on = [module.crossplane] +} + +resource "kubectl_manifest" "kubernetes_provider_clusterolebinding" { + count = local.kubernetes_provider.enable == true ? 
1 : 0 + yaml_body = templatefile("${path.module}/providers/kubernetes/clusterrolebinding.yaml", { + namespace = local.crossplane_namespace + cluster-role = local.kubernetes_provider.cluster_role + sa-name = kubernetes_service_account_v1.kubernetes_runtime[0].metadata[0].name + }) + + depends_on = [kubernetes_service_account_v1.kubernetes_runtime, module.crossplane] +} + +resource "kubectl_manifest" "kubernetes_runtime_config" { + count = local.kubernetes_provider.enable == true ? 1 : 0 + yaml_body = templatefile("${path.module}/providers/kubernetes/runtime-config.yaml", { + sa-name = kubernetes_service_account_v1.kubernetes_runtime[0].metadata[0].name + runtime-config = local.kubernetes_provider.runtime_config + }) + + depends_on = [kubectl_manifest.kubernetes_provider_clusterolebinding, module.crossplane] +} + +resource "kubectl_manifest" "kubernetes_provider" { + count = local.kubernetes_provider.enable == true ? 1 : 0 + yaml_body = templatefile("${path.module}/providers/kubernetes/provider.yaml", { + version = local.kubernetes_provider.version + kubernetes-provider-name = local.kubernetes_provider.name + runtime-config = local.kubernetes_provider.runtime_config + ecr_account_id = local.ecr_account_id + ecr_region = local.ecr_region + }) + + depends_on = [module.crossplane, kubectl_manifest.kubernetes_runtime_config] +} + +# Wait for the AWS Provider CRDs to be fully created before initiating provider_config deployment +resource "time_sleep" "wait_60_seconds_kubernetes" { + create_duration = "60s" + + depends_on = [module.crossplane, kubectl_manifest.kubernetes_provider] +} + +resource "kubectl_manifest" "kubernetes_provider_config" { + count = local.kubernetes_provider.enable == true ? 1 : 0 + yaml_body = templatefile("${path.module}/providers/kubernetes/provider-config.yaml", { + provider-config-name = local.kubernetes_provider.provider_config_name + }) + + depends_on = [module.crossplane, kubectl_manifest.kubernetes_provider, time_sleep.wait_60_seconds_kubernetes] +} + +#--------------------------------------------------------------- +# Crossplane Helm Provider +#--------------------------------------------------------------- +resource "kubernetes_service_account_v1" "helm_runtime" { + count = local.helm_provider.enable == true ? 1 : 0 + metadata { + name = local.helm_provider.service_account + namespace = local.crossplane_namespace + } + + depends_on = [module.crossplane] +} + +resource "kubectl_manifest" "helm_runtime_clusterolebinding" { + count = local.helm_provider.enable == true ? 1 : 0 + yaml_body = templatefile("${path.module}/providers/helm/clusterrolebinding.yaml", { + namespace = local.crossplane_namespace + cluster-role = local.helm_provider.cluster_role + sa-name = kubernetes_service_account_v1.helm_runtime[0].metadata[0].name + }) + + depends_on = [kubernetes_service_account_v1.helm_runtime, module.crossplane] +} + +resource "kubectl_manifest" "helm_runtime_config" { + count = local.helm_provider.enable == true ? 1 : 0 + yaml_body = templatefile("${path.module}/providers/helm/runtime-config.yaml", { + sa-name = kubernetes_service_account_v1.helm_runtime[0].metadata[0].name + runtime-config = local.helm_provider.runtime_config + }) + + depends_on = [kubectl_manifest.helm_runtime_clusterolebinding, module.crossplane] +} + +resource "kubectl_manifest" "helm_provider" { + count = local.helm_provider.enable == true ? 
1 : 0 + yaml_body = templatefile("${path.module}/providers/helm/provider.yaml", { + version = local.helm_provider.version + helm-provider-name = local.helm_provider.name + runtime-config = local.helm_provider.runtime_config + ecr_account_id = local.ecr_account_id + ecr_region = local.ecr_region + }) + + depends_on = [kubectl_manifest.helm_runtime_config, module.crossplane] +} + +# Wait for the AWS Provider CRDs to be fully created before initiating provider_config deployment +resource "time_sleep" "wait_60_seconds_helm" { + create_duration = "60s" + + depends_on = [kubectl_manifest.helm_provider, module.crossplane] +} + +resource "kubectl_manifest" "helm_provider_config" { + count = local.helm_provider.enable == true ? 1 : 0 + yaml_body = templatefile("${path.module}/providers/helm/provider-config.yaml", { + provider-config-name = local.helm_provider.provider_config_name + }) + + depends_on = [kubectl_manifest.helm_provider, time_sleep.wait_60_seconds_helm, module.crossplane] +} + +#--------------------------------------------------------------- +# Supporting Resources +#--------------------------------------------------------------- + +module "vpc" { + source = "terraform-aws-modules/vpc/aws" + version = "~> 5.0" + + manage_default_vpc = true + + name = local.name + cidr = local.vpc_cidr + + azs = local.azs + private_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 4, k)] + + enable_nat_gateway = false + + private_subnet_tags = { + "kubernetes.io/role/internal-elb" = 1 + } + + tags = local.tags +} + +module "vpc_endpoints" { + source = "terraform-aws-modules/vpc/aws//modules/vpc-endpoints" + version = "~> 5.1" + + vpc_id = module.vpc.vpc_id + + # Security group + create_security_group = true + security_group_name_prefix = "${local.name}-vpc-endpoints-" + security_group_description = "VPC endpoint security group" + security_group_rules = { + ingress_https = { + description = "HTTPS from VPC" + cidr_blocks = [module.vpc.vpc_cidr_block] + } + } + + endpoints = merge({ + s3 = { + service = "s3" + service_type = "Gateway" + route_table_ids = module.vpc.private_route_table_ids + tags = { + Name = "${local.name}-s3" + } + } + }, + { for service in toset(["autoscaling", "ecr.api", "ecr.dkr", "ec2", "ec2messages", "elasticloadbalancing", "sts", "kms", "logs", "ssm", "ssmmessages"]) : + replace(service, ".", "_") => + { + service = service + subnet_ids = module.vpc.private_subnets + private_dns_enabled = true + tags = { Name = "${local.name}-${service}" } + } + }) + + tags = local.tags + + depends_on = [module.vpc] +} diff --git a/bootstrap/terraform-fully-private/outputs.tf b/bootstrap/terraform-fully-private/outputs.tf new file mode 100644 index 00000000..97dffc31 --- /dev/null +++ b/bootstrap/terraform-fully-private/outputs.tf @@ -0,0 +1,8 @@ +output "eks_cluster_id" { + description = "Kubernetes Cluster Name" + value = module.eks.cluster_id +} +output "configure_kubectl" { + description = "Configure kubectl: make sure you're logged in with the correct AWS profile and run the following command to update your kubeconfig" + value = "aws eks update-kubeconfig --name ${module.eks.cluster_name} --alias ${local.name} --region ${local.region}" +} diff --git a/bootstrap/terraform-fully-private/providers/aws/provider-config.yaml b/bootstrap/terraform-fully-private/providers/aws/provider-config.yaml new file mode 100644 index 00000000..fdb6c30e --- /dev/null +++ b/bootstrap/terraform-fully-private/providers/aws/provider-config.yaml @@ -0,0 +1,7 @@ +apiVersion: aws.crossplane.io/v1beta1 +kind: 
ProviderConfig +metadata: + name: ${provider-config-name} +spec: + credentials: + source: InjectedIdentity diff --git a/bootstrap/terraform-fully-private/providers/aws/provider.yaml b/bootstrap/terraform-fully-private/providers/aws/provider.yaml new file mode 100644 index 00000000..1fb30141 --- /dev/null +++ b/bootstrap/terraform-fully-private/providers/aws/provider.yaml @@ -0,0 +1,8 @@ +apiVersion: pkg.crossplane.io/v1 +kind: Provider +metadata: + name: ${aws-provider-name} +spec: + package: xpkg.upbound.io/crossplane-contrib/provider-aws:${version} + runtimeConfigRef: + name: ${runtime-config} diff --git a/bootstrap/terraform-fully-private/providers/aws/runtime-config.yaml b/bootstrap/terraform-fully-private/providers/aws/runtime-config.yaml new file mode 100644 index 00000000..e0a8482a --- /dev/null +++ b/bootstrap/terraform-fully-private/providers/aws/runtime-config.yaml @@ -0,0 +1,21 @@ +apiVersion: pkg.crossplane.io/v1beta1 +kind: DeploymentRuntimeConfig +metadata: + name: ${runtime-config} +spec: + deploymentTemplate: + spec: + replicas: 1 + selector: {} + template: + spec: + containers: + - name: package-runtime + args: + - --debug + securityContext: + fsGroup: 2000 + serviceAccountTemplate: + metadata: + annotations: + eks.amazonaws.com/role-arn: ${iam-role-arn} diff --git a/bootstrap/terraform-fully-private/providers/helm/clusterrolebinding.yaml b/bootstrap/terraform-fully-private/providers/helm/clusterrolebinding.yaml new file mode 100644 index 00000000..8fc135f9 --- /dev/null +++ b/bootstrap/terraform-fully-private/providers/helm/clusterrolebinding.yaml @@ -0,0 +1,12 @@ +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: ${sa-name} +subjects: + - kind: ServiceAccount + name: ${sa-name} + namespace: ${namespace} +roleRef: + kind: ClusterRole + name: ${cluster-role} + apiGroup: rbac.authorization.k8s.io diff --git a/bootstrap/terraform-fully-private/providers/helm/provider-config.yaml b/bootstrap/terraform-fully-private/providers/helm/provider-config.yaml new file mode 100644 index 00000000..bf38c00d --- /dev/null +++ b/bootstrap/terraform-fully-private/providers/helm/provider-config.yaml @@ -0,0 +1,9 @@ +# https://github.com/crossplane-contrib/provider-helm/blob/master/examples/provider-config/provider-config-incluster.yaml +--- +apiVersion: helm.crossplane.io/v1beta1 +kind: ProviderConfig +metadata: + name: ${provider-config-name} +spec: + credentials: + source: InjectedIdentity diff --git a/bootstrap/terraform-fully-private/providers/helm/provider.yaml b/bootstrap/terraform-fully-private/providers/helm/provider.yaml new file mode 100644 index 00000000..fb880125 --- /dev/null +++ b/bootstrap/terraform-fully-private/providers/helm/provider.yaml @@ -0,0 +1,8 @@ +apiVersion: pkg.crossplane.io/v1 +kind: Provider +metadata: + name: ${helm-provider-name} +spec: + package: ${ecr_account_id}.dkr.ecr.${ecr_region}.amazonaws.com/crossplane-contrib/provider-helm:${version} + runtimeConfigRef: + name: ${runtime-config} diff --git a/bootstrap/terraform-fully-private/providers/helm/runtime-config.yaml b/bootstrap/terraform-fully-private/providers/helm/runtime-config.yaml new file mode 100644 index 00000000..65f025a8 --- /dev/null +++ b/bootstrap/terraform-fully-private/providers/helm/runtime-config.yaml @@ -0,0 +1,13 @@ +apiVersion: pkg.crossplane.io/v1beta1 +kind: DeploymentRuntimeConfig +metadata: + name: ${runtime-config} +spec: + deploymentTemplate: + spec: + replicas: 1 + selector: {} + template: {} + serviceAccountTemplate: + metadata: + name: 
${sa-name} diff --git a/bootstrap/terraform-fully-private/providers/kubernetes/clusterrolebinding.yaml b/bootstrap/terraform-fully-private/providers/kubernetes/clusterrolebinding.yaml new file mode 100644 index 00000000..8fc135f9 --- /dev/null +++ b/bootstrap/terraform-fully-private/providers/kubernetes/clusterrolebinding.yaml @@ -0,0 +1,12 @@ +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: ${sa-name} +subjects: + - kind: ServiceAccount + name: ${sa-name} + namespace: ${namespace} +roleRef: + kind: ClusterRole + name: ${cluster-role} + apiGroup: rbac.authorization.k8s.io diff --git a/bootstrap/terraform-fully-private/providers/kubernetes/provider-config.yaml b/bootstrap/terraform-fully-private/providers/kubernetes/provider-config.yaml new file mode 100644 index 00000000..ff25439b --- /dev/null +++ b/bootstrap/terraform-fully-private/providers/kubernetes/provider-config.yaml @@ -0,0 +1,8 @@ +--- +apiVersion: kubernetes.crossplane.io/v1alpha1 +kind: ProviderConfig +metadata: + name: ${provider-config-name} +spec: + credentials: + source: InjectedIdentity diff --git a/bootstrap/terraform-fully-private/providers/kubernetes/provider.yaml b/bootstrap/terraform-fully-private/providers/kubernetes/provider.yaml new file mode 100644 index 00000000..4f4ae6f5 --- /dev/null +++ b/bootstrap/terraform-fully-private/providers/kubernetes/provider.yaml @@ -0,0 +1,8 @@ +apiVersion: pkg.crossplane.io/v1 +kind: Provider +metadata: + name: ${kubernetes-provider-name} +spec: + package: ${ecr_account_id}.dkr.ecr.${ecr_region}.amazonaws.com/crossplane-contrib/provider-kubernetes:${version} + runtimeConfigRef: + name: ${runtime-config} diff --git a/bootstrap/terraform-fully-private/providers/kubernetes/runtime-config.yaml b/bootstrap/terraform-fully-private/providers/kubernetes/runtime-config.yaml new file mode 100644 index 00000000..65f025a8 --- /dev/null +++ b/bootstrap/terraform-fully-private/providers/kubernetes/runtime-config.yaml @@ -0,0 +1,13 @@ +apiVersion: pkg.crossplane.io/v1beta1 +kind: DeploymentRuntimeConfig +metadata: + name: ${runtime-config} +spec: + deploymentTemplate: + spec: + replicas: 1 + selector: {} + template: {} + serviceAccountTemplate: + metadata: + name: ${sa-name} diff --git a/bootstrap/terraform-fully-private/providers/upjet-aws/provider-config.yaml b/bootstrap/terraform-fully-private/providers/upjet-aws/provider-config.yaml new file mode 100644 index 00000000..b1fbebd1 --- /dev/null +++ b/bootstrap/terraform-fully-private/providers/upjet-aws/provider-config.yaml @@ -0,0 +1,9 @@ +# https://github.com/upbound/provider-aws/blob/main/docs/Configuration.md#create-a-providerconfig +--- +apiVersion: aws.upbound.io/v1beta1 +kind: ProviderConfig +metadata: + name: ${provider-config-name} +spec: + credentials: + source: IRSA diff --git a/bootstrap/terraform-fully-private/providers/upjet-aws/provider-family-aws.yaml b/bootstrap/terraform-fully-private/providers/upjet-aws/provider-family-aws.yaml new file mode 100644 index 00000000..c19cace4 --- /dev/null +++ b/bootstrap/terraform-fully-private/providers/upjet-aws/provider-family-aws.yaml @@ -0,0 +1,12 @@ +--- +apiVersion: pkg.crossplane.io/v1 +kind: Provider +metadata: + name: upbound-provider-family-aws +spec: + # Using a fully qualified image path in the private registry as part of the workaround for private registry support. + # This includes a subpath acting as an "org" to aid the rbac-manager in creating clusterroles. 
+ # For more details, see the workaround description: https://github.com/crossplane/crossplane/issues/4299#issuecomment-1691379712 + package: ${ecr_account_id}.dkr.ecr.${ecr_region}.amazonaws.com/upbound/provider-family-aws:${version} + runtimeConfigRef: + name: ${runtime-config} diff --git a/bootstrap/terraform-fully-private/providers/upjet-aws/provider.yaml b/bootstrap/terraform-fully-private/providers/upjet-aws/provider.yaml new file mode 100644 index 00000000..77f58ddf --- /dev/null +++ b/bootstrap/terraform-fully-private/providers/upjet-aws/provider.yaml @@ -0,0 +1,13 @@ +# https://github.com/crossplane-contrib/provider-upjet-aws/blob/main/docs/family/Configuration.md#install-the-provider-family-aws-1 +--- +apiVersion: pkg.crossplane.io/v1 +kind: Provider +metadata: + name: provider-upjet-aws-${family} +spec: + package: ${ecr_account_id}.dkr.ecr.${ecr_region}.amazonaws.com/upbound/provider-aws-${family}:${version} + # Dependency resolution is skipped for family providers. + # For more details, see the workaround description: https://github.com/crossplane/crossplane/issues/4299#issuecomment-1691379712 + skipDependencyResolution: true + runtimeConfigRef: + name: ${runtime-config} diff --git a/bootstrap/terraform-fully-private/providers/upjet-aws/runtime-config.yaml b/bootstrap/terraform-fully-private/providers/upjet-aws/runtime-config.yaml new file mode 100644 index 00000000..678b12c0 --- /dev/null +++ b/bootstrap/terraform-fully-private/providers/upjet-aws/runtime-config.yaml @@ -0,0 +1,37 @@ +# https://github.com/crossplane-contrib/provider-upjet-aws/blob/main/docs/family/Configuration.md +--- +apiVersion: pkg.crossplane.io/v1beta1 +kind: DeploymentRuntimeConfig +metadata: + name: ${runtime-config} +spec: + deploymentTemplate: + spec: + replicas: 1 + selector: {} + template: + spec: + containers: + - name: package-runtime + args: + - --debug + # Uncomment the following lines to configure a proxy for Crossplane providers + # env: + # - name: http_proxy + # value: "http://proxy-server:8080" + # - name: https_proxy + # value: "http://proxy-server:8080" + # - name: HTTP_PROXY + # value: "http://proxy-server:8080" + # - name: HTTPS_PROXY + # value: "http://proxy-server:8080" + # - name: no_proxy + # value: "10.0.0.0/8, .svc, .cluster.local" + # - name: NO_PROXY + # value: "10.0.0.0/8, .svc, .cluster.local" + securityContext: + fsGroup: 2000 + serviceAccountTemplate: + metadata: + annotations: + eks.amazonaws.com/role-arn: ${iam-role-arn} diff --git a/bootstrap/terraform-fully-private/scripts/cleanup.sh b/bootstrap/terraform-fully-private/scripts/cleanup.sh new file mode 100755 index 00000000..9cb7009c --- /dev/null +++ b/bootstrap/terraform-fully-private/scripts/cleanup.sh @@ -0,0 +1,29 @@ +#!/bin/bash + +AWS_REGION='us-east-1' +ACCOUNT_ID=$(aws sts get-caller-identity --output json | jq -r ".Account" | tr -d '[:space:]') +ECR_URL="${ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com" + +PREFIXES=( + "crossplane-contrib/" + "docker-hub/" + "ecr/" + "k8s/" + "quay/" + "upbound/" + "crossplane" +) + +echo "Running Terraform destroy..." 
+terraform destroy --auto-approve + +REPOS=$(aws ecr describe-repositories --query 'repositories[*].repositoryName' --output text --region ${AWS_REGION}) + +for repo in ${REPOS}; do + for prefix in "${PREFIXES[@]}"; do + if [[ $repo == $prefix* ]]; then + echo "Deleting repository: ${repo}" + aws ecr delete-repository --repository-name ${repo} --region ${AWS_REGION} --force + fi + done +done diff --git a/bootstrap/terraform-fully-private/scripts/copy-images-to-ecr.sh b/bootstrap/terraform-fully-private/scripts/copy-images-to-ecr.sh new file mode 100755 index 00000000..0ce385a0 --- /dev/null +++ b/bootstrap/terraform-fully-private/scripts/copy-images-to-ecr.sh @@ -0,0 +1,41 @@ +#!/bin/bash + +AWS_REGION='us-east-1' +ACCOUNT_ID=$(aws sts get-caller-identity --output json | jq -r ".Account" | tr -d '[:space:]') +ECR_URL="${ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com" + +PROGRAM=crane + +PACKAGES=( + "crossplane/crossplane:v1.16.0" + "crossplane-contrib/provider-helm:v0.18.1" + "crossplane-contrib/provider-kubernetes:v0.13.0" + "upbound/provider-aws-apigateway:v1.6.0" + "upbound/provider-aws-cloudwatch:v1.6.0" + "upbound/provider-aws-cloudwatchlogs:v1.6.0" + "upbound/provider-aws-dynamodb:v1.6.0" + "upbound/provider-aws-ec2:v1.6.0" + "upbound/provider-aws-eks:v1.6.0" + "upbound/provider-aws-elasticache:v1.6.0" + "upbound/provider-aws-iam:v1.6.0" + "upbound/provider-aws-kms:v1.6.0" + "upbound/provider-aws-lambda:v1.6.0" + "upbound/provider-aws-rds:v1.6.0" + "upbound/provider-aws-s3:v1.6.0" + "upbound/provider-aws-sns:v1.6.0" + "upbound/provider-aws-sqs:v1.6.0" + "upbound/provider-aws-vpc:v1.6.0" + "upbound/provider-aws-cloudfront:v1.6.0" + "upbound/provider-family-aws:v1.6.0" +) + +aws ecr get-login-password --region $AWS_REGION | crane auth login --username AWS --password-stdin $ECR_URL + +for pkg in "${PACKAGES[@]}"; do + if [ "$PROGRAM" == "crane" ]; then + crane copy xpkg.upbound.io/${pkg} ${ECR_URL}/${pkg} + else + echo "Unsupported program: $PROGRAM" + exit 1 + fi +done diff --git a/bootstrap/terraform-fully-private/scripts/create-crossplane-ecr-repos.sh b/bootstrap/terraform-fully-private/scripts/create-crossplane-ecr-repos.sh new file mode 100755 index 00000000..0e6464b1 --- /dev/null +++ b/bootstrap/terraform-fully-private/scripts/create-crossplane-ecr-repos.sh @@ -0,0 +1,34 @@ +#!/bin/bash + +AWS_REGION='us-east-1' +ACCOUNT_ID=$(aws sts get-caller-identity --output json | jq -r ".Account" | tr -d '[:space:]') +ECR_URL="${ACCOUNT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com" + +aws ecr get-login-password | docker login --username AWS --password-stdin $ECR_URL + +REPOSITORIES=( + "crossplane-contrib/provider-helm" + "crossplane-contrib/provider-kubernetes" + "crossplane/crossplane" + "upbound/provider-aws-apigateway" + "upbound/provider-aws-cloudwatch" + "upbound/provider-aws-cloudwatchlogs" + "upbound/provider-aws-dynamodb" + "upbound/provider-aws-ec2" + "upbound/provider-aws-eks" + "upbound/provider-aws-elasticache" + "upbound/provider-aws-iam" + "upbound/provider-aws-kms" + "upbound/provider-aws-lambda" + "upbound/provider-aws-rds" + "upbound/provider-aws-cloudfront" + "upbound/provider-aws-s3" + "upbound/provider-aws-sns" + "upbound/provider-aws-sqs" + "upbound/provider-aws-vpc" + "upbound/provider-family-aws" +) + +for repo in "${REPOSITORIES[@]}"; do + aws ecr create-repository --repository-name ${repo} --region ${AWS_REGION} +done diff --git a/bootstrap/terraform-fully-private/values/argocd.yaml b/bootstrap/terraform-fully-private/values/argocd.yaml new file mode 100644 index 
00000000..ef8a35f8 --- /dev/null +++ b/bootstrap/terraform-fully-private/values/argocd.yaml @@ -0,0 +1,421 @@ +global: + image: + repository: "${ecr_account_id}.dkr.ecr.${ecr_region}.amazonaws.com/quay/argoproj/argocd" + pullPolicy: IfNotPresent + # Uncomment and set the values below if you need to configure proxy settings + # env: + # - name: http_proxy + # value: "" + # - name: https_proxy + # value: "" + # - name: no_proxy + # value: "" + # - name: HTTP_PROXY + # value: "" + # - name: HTTPS_PROXY + # value: "" + # - name: NO_PROXY + # value: "" + +dex: + enabled: false # Disable dex since we are not using + image: + repository: "${ecr_account_id}.dkr.ecr.${ecr_region}.amazonaws.com/docker-hub/dexidp/dex" + pullPolicy: IfNotPresent + +redis-ha: + enabled: true + image: + repository: "${ecr_account_id}.dkr.ecr.${ecr_region}.amazonaws.com/docker-hub/library/redis" + pullPolicy: IfNotPresent + haproxy: + image: + repository: "${ecr_account_id}.dkr.ecr.${ecr_region}.amazonaws.com/docker-hub/library/haproxy" + pullPolicy: IfNotPresent + +controller: + replicas: 1 # Additional replicas will cause sharding of managed clusters across number of replicas. + metrics: + enabled: true + service: + annotations: + prometheus.io/scrape: true + env: + - name: ARGOCD_K8S_CLIENT_QPS #required for Crossplane too many CRDs https://github.com/argoproj/argo-cd/pull/448 + value: "300" + +repoServer: + autoscaling: + enabled: true + minReplicas: 1 + resources: # Adjust based on your specific use case (required for HPA) + requests: + cpu: "100m" + memory: "256Mi" + limits: + cpu: "200m" + memory: "512Mi" + metrics: + enabled: true + service: + annotations: + prometheus.io/scrape: true + +applicationSet: + replicaCount: 1 # The controller doesn't scale horizontally, is active-standby replicas + metrics: + enabled: true + service: + annotations: + prometheus.io/scrape: true + +server: + autoscaling: + enabled: true + minReplicas: 1 + resources: # Adjust based on your specific use case (required for HPA) + requests: + cpu: "100m" + memory: "256Mi" + limits: + cpu: "200m" + memory: "512Mi" + metrics: + enabled: true + service: + annotations: + prometheus.io/scrape: true + service: + type: "LoadBalancer" + annotations: + service.beta.kubernetes.io/aws-load-balancer-scheme: "internal" + +configs: + params: + application.namespaces: "cluster-*" + cm: + application.resourceTrackingMethod: "annotation" #use annotation for tracking required for Crossplane + resource.exclusions: | + - kinds: + - ProviderConfigUsage + apiGroups: + - "*" + resource.customizations: | + "awsblueprints.io/*": + health.lua: | + health_status = { + status = "Progressing", + message = "Provisioning ..." + } + + if obj.status == nil or obj.status.conditions == nil then + return health_status + end + + for i, condition in ipairs(obj.status.conditions) do + if condition.type == "LastAsyncOperation" then + if condition.status == "False" then + health_status.status = "Degraded" + health_status.message = condition.message + return health_status + end + end + + if condition.type == "Synced" then + if condition.status == "False" then + health_status.status = "Degraded" + health_status.message = condition.message + return health_status + end + end + + if condition.type == "Ready" then + if condition.status == "True" then + health_status.status = "Healthy" + health_status.message = "Resource is up-to-date." 
+ return health_status + end + end + end + + return health_status + "*.aws.upbound.io/*": + health.lua: | + health_status = { + status = "Progressing", + message = "Provisioning ..." + } + + local function contains (table, val) + for i, v in ipairs(table) do + if v == val then + return true + end + end + return false + end + + local has_no_status = { + "ProviderConfig", + "ProviderConfigUsage" + } + + if obj.status == nil or next(obj.status) == nil and contains(has_no_status, obj.kind) then + health_status.status = "Healthy" + health_status.message = "Resource is up-to-date." + return health_status + end + + if obj.status == nil or next(obj.status) == nil or obj.status.conditions == nil then + if obj.kind == "ProviderConfig" and obj.status.users ~= nil then + health_status.status = "Healthy" + health_status.message = "Resource is in use." + return health_status + end + return health_status + end + + if obj.status == nil or obj.status.conditions == nil then + return health_status + end + + for i, condition in ipairs(obj.status.conditions) do + if condition.type == "LastAsyncOperation" then + if condition.status == "False" then + health_status.status = "Degraded" + health_status.message = condition.message + return health_status + end + end + + if condition.type == "Synced" then + if condition.status == "False" then + health_status.status = "Degraded" + health_status.message = condition.message + return health_status + end + end + + if condition.type == "Ready" then + if condition.status == "True" then + health_status.status = "Healthy" + health_status.message = "Resource is up-to-date." + return health_status + end + end + end + + return health_status + "*.aws.crossplane.io/*": + health.lua: | + health_status = { + status = "Progressing", + message = "Provisioning ..." + } + + local function contains (table, val) + for i, v in ipairs(table) do + if v == val then + return true + end + end + return false + end + + local has_no_status = { + "Composition", + "CompositionRevision", + "DeploymentRuntimeConfig", + "ControllerConfig", + "ProviderConfig", + "ProviderConfigUsage" + } + + if obj.status == nil or next(obj.status) == nil and contains(has_no_status, obj.kind) then + health_status.status = "Healthy" + health_status.message = "Resource is up-to-date." + return health_status + end + + if obj.status == nil or next(obj.status) == nil or obj.status.conditions == nil then + if obj.kind == "ProviderConfig" and obj.status.users ~= nil then + health_status.status = "Healthy" + health_status.message = "Resource is in use." + return health_status + end + return health_status + end + + if obj.status == nil or obj.status.conditions == nil then + return health_status + end + + for i, condition in ipairs(obj.status.conditions) do + if condition.type == "LastAsyncOperation" then + if condition.status == "False" then + health_status.status = "Degraded" + health_status.message = condition.message + return health_status + end + end + + if condition.type == "Synced" then + if condition.status == "False" then + health_status.status = "Degraded" + health_status.message = condition.message + return health_status + end + end + + if condition.type == "Ready" then + if condition.status == "True" then + health_status.status = "Healthy" + health_status.message = "Resource is up-to-date." + return health_status + end + end + end + + return health_status + "kubernetes.crossplane.io/*": + health.lua: | + health_status = { + status = "Progressing", + message = "Provisioning ..." 
+ } + + local function contains (table, val) + for i, v in ipairs(table) do + if v == val then + return true + end + end + return false + end + + local has_no_status = { + "Composition", + "CompositionRevision", + "DeploymentRuntimeConfig", + "ControllerConfig", + "ProviderConfig", + "ProviderConfigUsage" + } + + if obj.status == nil or next(obj.status) == nil and contains(has_no_status, obj.kind) then + health_status.status = "Healthy" + health_status.message = "Resource is up-to-date." + return health_status + end + + if obj.status == nil or next(obj.status) == nil or obj.status.conditions == nil then + if obj.kind == "ProviderConfig" and obj.status.users ~= nil then + health_status.status = "Healthy" + health_status.message = "Resource is in use." + return health_status + end + return health_status + end + + if obj.status == nil or obj.status.conditions == nil then + return health_status + end + + for i, condition in ipairs(obj.status.conditions) do + if condition.type == "LastAsyncOperation" then + if condition.status == "False" then + health_status.status = "Degraded" + health_status.message = condition.message + return health_status + end + end + + if condition.type == "Synced" then + if condition.status == "False" then + health_status.status = "Degraded" + health_status.message = condition.message + return health_status + end + end + + if condition.type == "Ready" then + if condition.status == "True" then + health_status.status = "Healthy" + health_status.message = "Resource is up-to-date." + return health_status + end + end + end + + return health_status + "helm.crossplane.io/*": + health.lua: | + health_status = { + status = "Progressing", + message = "Provisioning ..." + } + + local function contains (table, val) + for i, v in ipairs(table) do + if v == val then + return true + end + end + return false + end + + local has_no_status = { + "Composition", + "CompositionRevision", + "DeploymentRuntimeConfig", + "ControllerConfig", + "ProviderConfig", + "ProviderConfigUsage" + } + + if obj.status == nil or next(obj.status) == nil and contains(has_no_status, obj.kind) then + health_status.status = "Healthy" + health_status.message = "Resource is up-to-date." + return health_status + end + + if obj.status == nil or next(obj.status) == nil or obj.status.conditions == nil then + if obj.kind == "ProviderConfig" and obj.status.users ~= nil then + health_status.status = "Healthy" + health_status.message = "Resource is in use." + return health_status + end + return health_status + end + + if obj.status == nil or obj.status.conditions == nil then + return health_status + end + + for i, condition in ipairs(obj.status.conditions) do + if condition.type == "LastAsyncOperation" then + if condition.status == "False" then + health_status.status = "Degraded" + health_status.message = condition.message + return health_status + end + end + + if condition.type == "Synced" then + if condition.status == "False" then + health_status.status = "Degraded" + health_status.message = condition.message + return health_status + end + end + + if condition.type == "Ready" then + if condition.status == "True" then + health_status.status = "Healthy" + health_status.message = "Resource is up-to-date." 
diff --git a/bootstrap/terraform-fully-private/values/aws-load-balancer-controller.yaml b/bootstrap/terraform-fully-private/values/aws-load-balancer-controller.yaml
new file mode 100644
index 00000000..c5e5ce1b
--- /dev/null
+++ b/bootstrap/terraform-fully-private/values/aws-load-balancer-controller.yaml
@@ -0,0 +1,3 @@
+image:
+  repository: "${ecr_account_id}.dkr.ecr.${ecr_region}.amazonaws.com/ecr/eks/aws-load-balancer-controller"
+  pullPolicy: IfNotPresent
diff --git a/bootstrap/terraform-fully-private/values/crossplane.yaml b/bootstrap/terraform-fully-private/values/crossplane.yaml
new file mode 100644
index 00000000..c01c07a5
--- /dev/null
+++ b/bootstrap/terraform-fully-private/values/crossplane.yaml
@@ -0,0 +1,21 @@
+image:
+  repository: "${ecr_account_id}.dkr.ecr.${ecr_region}.amazonaws.com/crossplane/crossplane"
+
+args:
+  - "--enable-environment-configs"
+metrics:
+  enabled: true
+resourcesCrossplane:
+  limits:
+    cpu: "1"
+    memory: "2Gi"
+  requests:
+    cpu: "100m"
+    memory: "1Gi"
+resourcesRBACManager:
+  limits:
+    cpu: "500m"
+    memory: "1Gi"
+  requests:
+    cpu: "100m"
+    memory: "512Mi"
diff --git a/bootstrap/terraform-fully-private/values/gatekeeper.yaml b/bootstrap/terraform-fully-private/values/gatekeeper.yaml
new file mode 100644
index 00000000..c4a879b3
--- /dev/null
+++ b/bootstrap/terraform-fully-private/values/gatekeeper.yaml
@@ -0,0 +1,26 @@
+image:
+  repository: "${ecr_account_id}.dkr.ecr.${ecr_region}.amazonaws.com/docker-hub/openpolicyagent/gatekeeper"
+  crdRepository: "${ecr_account_id}.dkr.ecr.${ecr_region}.amazonaws.com/docker-hub/openpolicyagent/gatekeeper-crds"
+
+preInstall:
+  crdRepository:
+    image:
+      repository: null
+
+postUpgrade:
+  labelNamespace:
+    image:
+      repository: "${ecr_account_id}.dkr.ecr.${ecr_region}.amazonaws.com/docker-hub/openpolicyagent/gatekeeper-crds"
+
+postInstall:
+  labelNamespace:
+    image:
+      repository: "${ecr_account_id}.dkr.ecr.${ecr_region}.amazonaws.com/docker-hub/openpolicyagent/gatekeeper-crds"
+  probeWebhook:
+    image:
+      repository: "${ecr_account_id}.dkr.ecr.${ecr_region}.amazonaws.com/docker-hub/curlimages/curl"
+
+preUninstall:
+  deleteWebhookConfigurations:
+    image:
+      repository: "${ecr_account_id}.dkr.ecr.${ecr_region}.amazonaws.com/docker-hub/openpolicyagent/gatekeeper-crds"
diff --git a/bootstrap/terraform-fully-private/values/metrics-server.yaml b/bootstrap/terraform-fully-private/values/metrics-server.yaml
new file mode 100644
index 00000000..a8c93f80
--- /dev/null
+++ b/bootstrap/terraform-fully-private/values/metrics-server.yaml
@@ -0,0 +1,3 @@
+image:
+  repository: "${ecr_account_id}.dkr.ecr.${ecr_region}.amazonaws.com/k8s/metrics-server/metrics-server"
+  pullPolicy: IfNotPresent
diff --git a/bootstrap/terraform-fully-private/values/prometheus.yaml b/bootstrap/terraform-fully-private/values/prometheus.yaml
new file mode 100644
index 00000000..a4b41d62
--- /dev/null
+++ b/bootstrap/terraform-fully-private/values/prometheus.yaml
@@ -0,0 +1,116 @@
+global:
+  imageRegistry: "${ecr_account_id}.dkr.ecr.${ecr_region}.amazonaws.com"
+prometheusOperator:
+  image:
+    repository: quay/prometheus-operator/prometheus-operator
+  admissionWebhooks:
+    deployment:
+      image:
+        repository: quay/prometheus-operator/admission-webhook
+    patch:
+      image:
+        repository: k8s/ingress-nginx/kube-webhook-certgen
+  prometheusConfigReloader:
+    image:
+      repository: quay/prometheus-operator/prometheus-config-reloader
+alertmanager:
+  alertmanagerSpec:
+    image:
+      repository: quay/prometheus/alertmanager
+prometheus:
+  prometheusSpec:
+    image:
+      repository: quay/prometheus/prometheus
+  service:
+    type: "LoadBalancer"
+    annotations:
+      service.beta.kubernetes.io/aws-load-balancer-scheme: "internal"
+  additionalPodMonitors:
+    - name: "crossplane"
+      namespaceSelector:
+        matchNames:
+          - "crossplane-system"
+      podMetricsEndpoints:
+        - port: "metrics"
+      selector: {}
+  additionalServiceMonitors:
+    - name: "argocd"
+      namespaceSelector:
+        matchNames:
+          - "argocd"
+      endpoints:
+        - port: "metrics"
+      selector:
+        matchLabels:
+          prometheus.io/scrape: "true"
+prometheus-node-exporter:
+  image:
+    repository: quay/prometheus/node-exporter
+kube-state-metrics:
+  image:
+    repository: k8s/kube-state-metrics/kube-state-metrics
+grafana:
+  global:
+    imageRegistry: "${ecr_account_id}.dkr.ecr.${ecr_region}.amazonaws.com"
+  downloadDashboardsImage:
+    repository: ${ecr_account_id}.dkr.ecr.${ecr_region}.amazonaws.com/docker-hub/curlimages/curl
+  image:
+    repository: ${ecr_account_id}.dkr.ecr.${ecr_region}.amazonaws.com/docker-hub/grafana/grafana
+  imageRenderer:
+    image:
+      repository: ${ecr_account_id}.dkr.ecr.${ecr_region}.amazonaws.com/docker-hub/grafana/grafana-image-renderer
+  sidecar:
+    image:
+      repository: ${ecr_account_id}.dkr.ecr.${ecr_region}.amazonaws.com/quay/kiwigrid/k8s-sidecar
+  service:
+    type: "LoadBalancer"
+    annotations:
+      service.beta.kubernetes.io/aws-load-balancer-scheme: "internal"
+  resources:
+    requests:
+      cpu: "100m"
+      memory: "1Gi"
+    limits:
+      cpu: "1"
+      memory: "2Gi"
+  datasources:
+    datasources.yaml:
+      apiVersion: 1
+      datasources:
+        - name: Prometheus
+          type: prometheus
+          access: proxy
+          url: http://kube-prometheus-stack-prometheus.kube-prometheus-stack:9090/
+          isDefault: false
+          uid: prometheusdatasource
+      deleteDatasources:
+        - name: Prometheus
+  dashboardProviders:
+    dashboardproviders.yaml:
+      apiVersion: 1
+      providers:
+        - name: "default"
+          orgId: 1
+          type: file
+          disableDeletion: false
+          editable: true
+          options:
+            path: /var/lib/grafana/dashboards/default
+  dashboards:
+    default:
+      crossplane:
+        gnetId: 21169
+        revision: 1
+        datasource: prometheusdatasource
+      argocd:
+        gnetId: 14584
+        revision: 1
+        datasource: prometheusdatasource
+      eks:
+        gnetId: 14623
+        revision: 1
+        datasource: prometheusdatasource
+      ekscontrolplane:
+        gnetId: 21192
+        revision: 1
+        datasource: prometheusdatasource
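All of these values files point at the private ECR registry; `${ecr_account_id}` and `${ecr_region}` are rendered by Terraform before the charts are installed, and `global.imageRegistry` is prefixed onto the shorter repository paths. Before applying, it may be worth confirming that the mirrored images actually exist in ECR, for example with `crane` (the account ID, region, and repository below are placeholders that follow the naming used by the copy script):

```shell
# Log in to the private ECR, then list tags for one of the mirrored images
crane auth login 111122223333.dkr.ecr.us-east-1.amazonaws.com \
  -u AWS -p "$(aws ecr get-login-password --region us-east-1)"
crane ls 111122223333.dkr.ecr.us-east-1.amazonaws.com/quay/prometheus/prometheus
```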
diff --git a/bootstrap/terraform-fully-private/variables.tf b/bootstrap/terraform-fully-private/variables.tf
new file mode 100644
index 00000000..735c5dc8
--- /dev/null
+++ b/bootstrap/terraform-fully-private/variables.tf
@@ -0,0 +1,82 @@
+# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
+# SPDX-License-Identifier: Apache-2.0
+
+variable "region" {
+  description = "AWS region"
+  type        = string
+  default     = "us-east-1"
+}
+
+variable "name" {
+  description = "EKS Cluster Name and the VPC name"
+  type        = string
+  default     = "crossplane-blueprints"
+}
+
+variable "cluster_version" {
+  type        = string
+  description = "Kubernetes Version"
+  default     = "1.29"
+}
+
+variable "capacity_type" {
+  type        = string
+  description = "Capacity SPOT or ON_DEMAND"
+  default     = "SPOT"
+}
+
+variable "enable_upjet_aws_provider" {
+  type        = bool
+  description = "Installs the upjet aws provider"
+  default     = true
+}
+
+variable "enable_aws_provider" {
+  type        = bool
+  description = "Installs the contrib aws provider"
+  default     = false
+}
+
+variable "enable_kubernetes_provider" {
+  type        = bool
+  description = "Installs the kubernetes provider"
+  default     = true
+}
+
+variable "enable_helm_provider" {
+  type        = bool
+  description = "Installs the helm provider"
+  default     = true
+}
+
+variable "ecr_account_id" {
+  type        = string
+  description = "ECR repository AWS Account ID"
+  default     = ""
+}
+
+variable "ecr_region" {
+  type        = string
+  description = "ECR repository AWS Region"
+  default     = ""
+}
+
+variable "docker_secret" {
+  type = object({
+    username    = string
+    accessToken = string
+  })
+  default = {
+    username    = ""
+    accessToken = ""
+  }
+  sensitive = true
+  validation {
+    condition     = !(var.docker_secret.username == "" || var.docker_secret.accessToken == "")
+    error_message = <<EOF
+A Docker username and access token are required, for example:
+terraform apply -var='docker_secret={"username":"your-docker-username", "accessToken":"your-docker-password"}'
+EOF
+  }
+}
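Any of these variables can be overridden at plan or apply time without editing the file. For example, to point the cluster at ECR repositories in a different account and region (the values below are placeholders):

```shell
# TF_VAR_* environment variables override the defaults in variables.tf
export TF_VAR_ecr_account_id=111122223333
export TF_VAR_ecr_region=us-west-2
terraform plan
```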
diff --git a/bootstrap/terraform/providers/kubernetes/runtime-config.yaml b/bootstrap/terraform/providers/kubernetes/runtime-config.yaml
index 65f025a8..4ab4d633 100644
--- a/bootstrap/terraform/providers/kubernetes/runtime-config.yaml
+++ b/bootstrap/terraform/providers/kubernetes/runtime-config.yaml
@@ -10,4 +10,4 @@ spec:
     template: {}
   serviceAccountTemplate:
     metadata:
-      name: ${sa-name}
+      name: ${sa-name}
\ No newline at end of file
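The `${sa-name}` placeholder in this runtime config is filled in by Terraform when the provider is installed, so the provider pods run under the service account Terraform creates (the one its IAM permissions are bound to). A rough post-apply check, assuming the default `crossplane-system` namespace:

```shell
# Confirm the rendered service account name made it into the runtime config
kubectl get deploymentruntimeconfigs.pkg.crossplane.io -o yaml | grep -A 2 serviceAccountTemplate
kubectl -n crossplane-system get serviceaccounts
```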