Add GitLab runner example with managed node-group
bobdoah committed Mar 31, 2023
1 parent 0989506 commit f1a3e4f
Showing 7 changed files with 302 additions and 0 deletions.
Binary file not shown.
101 changes: 101 additions & 0 deletions examples/ci-cd/gitlab-runner-with-managed-node-group/README.md
@@ -0,0 +1,101 @@
# GitLab Runner with Managed Node-groups

This example deploys a new EKS cluster with the GitLab Runner installed. It also creates a GitLab project under the user's namespace, imported from a GitLab-provided sample: https://gitlab.com/gitlab-org/ci-sample-projects/cicd-templates/android.latest.gitlab-ci.yml-test-project. The runner spawns pods on the managed node group when a pipeline is triggered.

## How to Deploy

### Prerequisites:

Ensure that the following tools are installed on your local machine before working with this module and running `terraform plan` and `terraform apply`; a quick verification snippet follows the list.

1. [AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/install-cliv2.html)
2. [Kubectl](https://Kubernetes.io/docs/tasks/tools/)
3. [Terraform](https://learn.hashicorp.com/tutorials/terraform/install-cli)
4. [GitLab account](https://gitlab.com)
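
A quick way to confirm the CLI tools are available on your `PATH` (a minimal sanity check; the exact version output will vary):

```sh
aws --version
kubectl version --client
terraform version
```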

### Deployment Steps

#### Step 1: Clone the repo using the command below

```sh
git clone https://github.com/aws-ia/terraform-aws-eks-blueprints.git
```

#### Step 2: Create a GitLab personal access token

From your GitLab profile page, select "Access Tokens". Create a token with the `api` scope. Export that token as the environment variable `GITLAB_TOKEN`:

```sh
export GITLAB_TOKEN=<token>
```
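
As an optional sanity check (assuming the token was created on gitlab.com), the GitLab API returns your user profile when the token is valid:

```sh
# Returns your GitLab user profile as JSON if the token is valid
curl --header "PRIVATE-TOKEN: $GITLAB_TOKEN" "https://gitlab.com/api/v4/user"
```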

#### Step 3: Run Terraform INIT

Initialize a working directory with configuration files

```sh
cd examples/ci-cd/gitlab-runner-with-managed-node-group
terraform init
```

#### Step 4: Run Terraform PLAN

Review the resources that will be created by this execution

```sh
export AWS_REGION=<ENTER YOUR REGION> # Select your own region
terraform plan
```

#### Step 5: Finally, Terraform APPLY

**Deploy the pattern**

```sh
terraform apply
```

Enter `yes` to apply.

### Configure `kubectl` and test the cluster

The EKS cluster name can be retrieved from the Terraform output or from the AWS Console. The following command updates the `kubeconfig` on your local machine so that you can run `kubectl` commands against your EKS cluster.

#### Step 6: Run the `update-kubeconfig` command

The command below updates `~/.kube/config` with the cluster details and certificate:

```sh
aws eks --region <enter-your-region> update-kubeconfig --name <cluster-name>
```
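
If you would rather not look up the cluster name manually, the same command can be driven from the Terraform output; the output name `eks_cluster_id` is an assumption here, so check `terraform output` for the exact name this example exposes:

```sh
# Assumes the cluster name is exposed as the "eks_cluster_id" output
aws eks --region $AWS_REGION update-kubeconfig --name "$(terraform output -raw eks_cluster_id)"
```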

#### Step 7: List all the worker nodes by running the command below

```sh
kubectl get nodes
```


#### Step 8: Trigger a pipeline build

In the GitLab UI, find the project created under your profile. Trigger a pipeline with the "Run pipeline" button on the project's "CI/CD > Pipelines" page.
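
Alternatively, a pipeline can be triggered from the command line with the GitLab API; you need the numeric project ID shown on the project's overview page and the sample project's default branch (assumed here to be `main`):

```sh
# Replace <project-id> with the numeric ID of the imported project
curl --request POST \
  --header "PRIVATE-TOKEN: $GITLAB_TOKEN" \
  "https://gitlab.com/api/v4/projects/<project-id>/pipeline?ref=main"
```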


#### Step 9: List all the pods running in the `gitlab-runner` namespace

```sh
kubectl get pods -n gitlab-runner
```
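
While the pipeline runs, the runner spawns short-lived job pods in the same namespace; watching the namespace (optional) shows them come and go:

```sh
# Watch job pods being created and terminated as the pipeline executes
kubectl get pods -n gitlab-runner --watch
```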

## Cleanup

To clean up your environment, destroy the Terraform modules in reverse order.

Destroy the Kubernetes add-ons, then the EKS cluster with its node groups, and finally the VPC:

```sh
terraform destroy -target="module.eks_blueprints_kubernetes_addons" -auto-approve
terraform destroy -target="module.eks_blueprints" -auto-approve
terraform destroy -target="module.vpc" -auto-approve
```

Finally, destroy any additional resources that are not in the above modules

```sh
terraform destroy -auto-approve
```
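
As a final check, the Terraform state should be empty once the destroy completes:

```sh
# Prints nothing when no resources remain in the state
terraform state list
```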
172 changes: 172 additions & 0 deletions examples/ci-cd/gitlab-runner-with-managed-node-group/main.tf
@@ -0,0 +1,172 @@
provider "aws" {
region = local.region
}

provider "kubernetes" {
host = module.eks_blueprints.eks_cluster_endpoint
cluster_ca_certificate = base64decode(module.eks_blueprints.eks_cluster_certificate_authority_data)
token = data.aws_eks_cluster_auth.this.token
}

provider "helm" {
kubernetes {
host = module.eks_blueprints.eks_cluster_endpoint
cluster_ca_certificate = base64decode(module.eks_blueprints.eks_cluster_certificate_authority_data)
token = data.aws_eks_cluster_auth.this.token
}
}

data "aws_eks_cluster_auth" "this" {
name = module.eks_blueprints.eks_cluster_id
}

data "aws_availability_zones" "available" {}

locals {
name = basename(path.cwd)
region = "us-west-2"

vpc_cidr = "10.0.0.0/16"
azs = slice(data.aws_availability_zones.available.names, 0, 3)

tags = {
Blueprint = local.name
GithubRepo = "github.com/aws-ia/terraform-aws-eks-blueprints"
}
}

module "eks_blueprints" {
source = "../../.."

cluster_name = local.name
cluster_version = "1.24"

vpc_id = module.vpc.vpc_id
private_subnet_ids = module.vpc.private_subnets

#----------------------------------------------------------------------------------------------------------#
# The security groups used in this module are created by the upstream module terraform-aws-eks (https://github.com/terraform-aws-modules/terraform-aws-eks).
# The upstream module implements security groups based on the best practices doc https://docs.aws.amazon.com/eks/latest/userguide/sec-group-reqs.html.
# By default the security groups are therefore restrictive. Users need to enable rules for the specific ports required by their applications or add-ons.
# See the notes below for each rule used in these examples
#----------------------------------------------------------------------------------------------------------#
node_security_group_additional_rules = {
# Extend node-to-node security group rules. Recommended and required for the Add-ons
ingress_self_all = {
description = "Node to node all ports/protocols"
protocol = "-1"
from_port = 0
to_port = 0
type = "ingress"
self = true
}
# Recommended outbound traffic for Node groups
egress_all = {
description = "Node all egress"
protocol = "-1"
from_port = 0
to_port = 0
type = "egress"
cidr_blocks = ["0.0.0.0/0"]
ipv6_cidr_blocks = ["::/0"]
}
# Allows the control plane to talk to worker nodes on all ports. This is added to simplify the example and to avoid issues with add-on communication with the control plane.
# This can be restricted further to specific port based on the requirement for each Add-on e.g., metrics-server 4443, spark-operator 8080, karpenter 8443 etc.
# Change this according to your security requirements if needed
ingress_cluster_to_node_all_traffic = {
description = "Cluster API to Nodegroup all traffic"
protocol = "-1"
from_port = 0
to_port = 0
type = "ingress"
source_cluster_security_group = true
}
}

managed_node_groups = {
mg_5 = {
node_group_name = "managed-ondemand"
instance_types = ["m5.large"]
subnet_ids = module.vpc.private_subnets
force_update_version = true
}
}

tags = local.tags
}

data "gitlab_current_user" "this" {}

resource "gitlab_project" "android_test_project" {
name = "${local.name}-android-test"
description = "${local.name} Android Test Project"

namespace_id = data.gitlab_current_user.this.namespace_id

# A sample GitLab project
import_url = "https://gitlab.com/gitlab-org/ci-sample-projects/cicd-templates/android.latest.gitlab-ci.yml-test-project.git"

visibility_level = "public"

# Only use EKS runners
shared_runners_enabled = false
}


module "eks_blueprints_kubernetes_addons" {
source = "../../../modules/kubernetes-addons"

eks_cluster_id = module.eks_blueprints.eks_cluster_id
eks_cluster_endpoint = module.eks_blueprints.eks_cluster_endpoint
eks_oidc_provider = module.eks_blueprints.oidc_provider
eks_cluster_version = module.eks_blueprints.eks_cluster_version
eks_worker_security_group_id = module.eks_blueprints.worker_node_security_group_id
auto_scaling_group_names = module.eks_blueprints.self_managed_node_group_autoscaling_groups

# GitLab Runner
enable_gitlab_runner = true
gitlab_runner_helm_config = {
set_sensitive = [
{ name = "runnerRegistrationToken", value = gitlab_project.android_test_project.runners_token },
]
}

tags = local.tags

}

module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "~> 3.0"

name = local.name
cidr = local.vpc_cidr

azs = local.azs
public_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k)]
private_subnets = [for k, v in local.azs : cidrsubnet(local.vpc_cidr, 8, k + 10)]

enable_nat_gateway = true
single_nat_gateway = true
enable_dns_hostnames = true

# Manage so we can name
manage_default_network_acl = true
default_network_acl_tags = { Name = "${local.name}-default" }
manage_default_route_table = true
default_route_table_tags = { Name = "${local.name}-default" }
manage_default_security_group = true
default_security_group_tags = { Name = "${local.name}-default" }

public_subnet_tags = {
"kubernetes.io/cluster/${local.name}" = "shared"
"kubernetes.io/role/elb" = 1
}

private_subnet_tags = {
"kubernetes.io/cluster/${local.name}" = "shared"
"kubernetes.io/role/internal-elb" = 1
}

tags = local.tags
}
Empty file.
Binary file not shown.
Empty file.
29 changes: 29 additions & 0 deletions examples/ci-cd/gitlab-runner-with-managed-node-group/versions.tf
@@ -0,0 +1,29 @@
terraform {
required_version = ">= 1.0.0"

required_providers {
aws = {
source = "hashicorp/aws"
version = ">= 4.9"
}
kubernetes = {
source = "hashicorp/kubernetes"
version = ">= 2.10"
}
helm = {
source = "hashicorp/helm"
version = ">= 2.4.1"
}
gitlab = {
source = "gitlabhq/gitlab"
version = ">= 15.7.1"
}
}

# ## Used for end-to-end testing on project; update to suit your needs
# backend "s3" {
# bucket = "terraform-ssp-github-actions-state"
# region = "us-west-2"
# key = "e2e/complete-kubernetes-addons/terraform.tfstate"
# }
}
