This project explores bootstrapping a greenfield cloud software project in the quickest possible way. The goal was to find a setup that gets you into a feedback loop with minimum hassle, offers many DevOps features out of the box, and requires little to no self-managed infrastructure. I also wanted to keep as many resources as possible (i.e. all of them) within AWS to avoid the need for external repos, Terraform Cloud accounts, etc.
This project comes with
- a Terraform stack that builds a dev and prod compute environment in AWS with automated CI/CD and blue/green deployment
- a sample application with an HTTP endpoint so we have something to play around with
The following technologies and services are used:
- Hashicorp Terraform (for creating and managing all AWS resources)
- An 'as-simple-as-possible' Kotlin-powered Spring Boot project (so that we have something that we can deploy and play around with)
- Docker (for packaging the application)
- AWS CodeCommit/CodeBuild/CodePipeline/CodeDeploy (for hosting, building, and deploying our application)
- AWS ECR (for storing our container images)
- AWS ECS/Fargate (for running the application with minimum management overhead and to simplify scaling)
Terraform remote state information and locking is maintained in S3/DynamoDB.
This sample was built with the help of Amazon's excellent labs and workshops on ECS and CI/CD.
In order to run this sample, you'll need an AWS account as well as working installations of the AWS CLI, Terraform, and git.
Detailed steps for setting up the AWS CLI and Terraform can be found below. Assuming you have everything set up and ready to go, running this sample involves the following steps:
- check out this project
- cd into the `terraform/remote-state` directory, run `terraform init`
- run `terraform apply`
You now have all initial resources to maintain the state of your Terraform stack within AWS (S3 & DynamoDB).
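If you want to sanity-check what was just created, the AWS CLI can list the new resources (the exact bucket and table names are defined in the `terraform/remote-state` stack and may differ in your setup):

```
# look for the state bucket and lock table created by the remote-state stack
aws s3 ls
aws dynamodb list-tables
```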
- cd into the `terraform/infrastructure` directory, run `terraform init`
- run `terraform apply`
The infrastructure for maintaining, building, deploying, and running your application is now ready. Pay attention to the stack's outputs, mainly `source_repo_clone_url_http` and `ecs_task_execution_role_arn_prod`; you will need these in the next steps.
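You can print these outputs again at any time without re-applying; `-raw` (available in Terraform 0.14+) strips the surrounding quotes, which is handy for scripting:

```
# run inside terraform/infrastructure
terraform output source_repo_clone_url_http
terraform output -raw ecs_task_execution_role_arn_prod
```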
- Use the previous stack's output (`source_repo_clone_url_http`) and run `git clone <url>` to clone the repo to a location of your choice
- copy all contents of the `cloud-bootstap-app` directory into the empty repo directory, then `cd` into it
- In order for CodeDeploy to be able to create ECS tasks (during blue/green deployment), create your own ECS task definition from the prepared template by filling in your execution role's ARN:
```
# make sure to perform this step in your cloned application repo from step 1 & 2
export TASK_EXEC_ROLE_ARN=<your-prod-exec-role-arn-from-terraform-output>
envsubst < taskdef-prod.json.template > taskdef-prod.json
rm taskdef-prod.json.template
```
- commit and push the changes
- check the CI/CD execution in the CodePipeline console; optionally, have a look at the service events in the ECS console to observe the deployment process
- Test the DEV stage: hit the load balancer's endpoint URL (see the `alb_address_dev` stack output) - the service should be online (a good idea would be to hit the service's Swagger UI at `/swagger-ui.html`).
- Change the application's code on your machine (maybe add a mountain in `MountainsController.kt`?), commit and push
- Check the CodePipeline console again. Upon successful deployment to DEV, there is a manual approval step that you'll need to confirm in order to trigger the PROD deployment
- Upon approval, the blue/green deployment to PROD is triggered. Observe it in the CodeDeploy console and the ECS console. Deployment should take a few minutes.
- Verify that the changes have actually been deployed to production by `curl`ing the application's PROD endpoint (see the `alb_address_prod` stack output); a concrete sketch follows right after this list.
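As a sketch of that last verification step (assuming the `alb_address_prod` output is a plain hostname and Terraform 0.14+ for `-raw`):

```
# run inside terraform/infrastructure
ALB_PROD=$(terraform output -raw alb_address_prod)
curl "http://$ALB_PROD/mountains"
```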
That's it.
Should you run into any errors along the way, please have a look at the initial setup steps below. Also, please don't forget to tear down everything when you're done to avoid unnecessary cost.
The following section dives deeper into the steps required to get started.
Configure the AWS CLI to match the desired region:
```
aws configure
AWS Access Key ID [None]:
AWS Secret Access Key [None]:
Default region name [None]: eu-central-1
Default output format [None]:
```
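To double-check that the CLI is talking to the right account before creating any resources:

```
# prints the account id and ARN of the configured identity
aws sts get-caller-identity
```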
```
cd terraform/infrastructure
```
Edit `terraform.tfvars`, leave the `aws_profile` as `"default"`, and set `aws_region` to match your needs.
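If you prefer to generate the file from scratch, a minimal version might look like the following (this assumes `aws_profile` and `aws_region` are the only variables you need to touch; the values are examples):

```
# write a minimal terraform.tfvars - adjust the region to your needs
cat > terraform.tfvars <<'EOF'
aws_profile = "default"
aws_region  = "eu-central-1"
EOF
```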
The following resources will be created by terraform:
- S3 buckets for terraform state and build artifacts - view them in the S3 console.
- DynamoDB table for terraform state locking - view it in the DynamoDB console.
- ALB - view it in the EC2 console.
- ECS cluster - view it in the ECS console.
- ECR container registry - view it in the ECR console.
- CodeCommit git repo - view it in the CodeCommit console.
- CodeBuild project - view it in the CodeBuild console.
- CodePipeline build pipeline - view it in the CodePipeline console.
- CodeDeploy blue/green deployment - view it in the CodeDeploy console.
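Instead of clicking through each console, you can also let Terraform enumerate everything it manages:

```
# run inside terraform/infrastructure (and terraform/remote-state, respectively)
terraform state list
```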
In order to be able to interact with the CodeCommit repo created by this Terraform stack, please make sure to set up your git installation appropriately. You will need to set the CodeCommit `credential-helper` for things to run smoothly.
```
git config --global user.name "John Doe"              # you might have set this up already
git config --global user.email john.doe@example.com   # same here
git config --global credential.helper '!aws codecommit credential-helper $@'
git config --global credential.UseHttpPath true
```
You should now be able to clone the CodeCommit repo to a local directory of your choice. The repo URL can be found in the Terraform outputs of the stack (see `source_repo_clone_url_http`), or run `terraform output source_repo_clone_url_http`.
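Combining the two into a one-liner (assuming Terraform 0.14+ for `-raw`; the target directory is up to you):

```
# run inside terraform/infrastructure; clones the app repo into ~/my-app
git clone "$(terraform output -raw source_repo_clone_url_http)" ~/my-app
```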
macOS users: In case you encounter weird HTTP 403 errors when cloning, please look at any previously stored CodeCommit credentials in your Keychain Access app, and delete them.
From the output of the Terraform build, note the outputs `alb_address_dev` (dev stage) and `alb_address_prod` (prod stage), or run `terraform output alb_address_<stage>`. With these, you should be able to access the application:
- Perform a GET request against the `<your-alb-address-here>/mountains` resource
- Check out the Swagger UI by GETting the `<your-alb-address-here>/swagger-ui.html` resource
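For example, with `curl` (replace the host with your `alb_address_dev` or `alb_address_prod` value; this assumes the output is a plain hostname):

```
curl "http://<your-alb-address-here>/mountains"
curl -I "http://<your-alb-address-here>/swagger-ui.html"
```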
The deployment process is best tested by changing the application and observing how the changes end up in the respective stages. You can try this out by e.g. adding a mountain in the `MountainsController` class and committing/pushing the change (see the sketch after the list below). This will trigger the following:
- Automated deployment to the ECS dev cluster stage
- Automated blue/green deployment to production. This requires a manual approval step in CodePipeline after the deployment to the dev stage has completed successfully
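The trigger itself is just an ordinary push; for example, after adding your mountain:

```
# inside your cloned application repo
git add -A
git commit -m "add another mountain"
git push

# optionally, watch the pipeline from the CLI
# (the pipeline name can be found in the CodePipeline console)
aws codepipeline get-pipeline-state --name <your-pipeline-name>
```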
In order to tear down the cluster, execute the following commands:
```
cd terraform/infrastructure
terraform destroy
```

```
# back in the project root
cd terraform/initial-setup/remote-state
terraform destroy
```
The created S3 buckets might fail to delete if they are not empty; in that case, they need to be deleted manually, e.g. as sketched below.
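If `terraform destroy` complains about non-empty buckets, something along these lines empties them first (double-check the bucket name; this irreversibly deletes all objects in it):

```
# CAUTION: removes every object in the given bucket
aws s3 rm "s3://<your-artifact-or-state-bucket>" --recursive
```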
Because variables aren't supported in Terraform's `backend` section (there's an open issue on that), this project will cause a bucket name collision when deployed more than once. You can fix this by supplying an alternative backend state bucket name in `remote-state/main.tf` and adjusting `infrastructure/main.tf` to reflect the change.
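One low-tech way to do that rename, assuming the bucket name appears as a plain string literal in both files (verify before running; on macOS, use `sed -i ''`):

```
# replace the placeholders with the current and your new, globally unique bucket name
sed -i -e 's/<old-bucket-name>/<new-bucket-name>/g' \
  terraform/remote-state/main.tf terraform/infrastructure/main.tf
```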