Set up AWS infrastructure with CloudFormation templates.
This repository helps you set up networking resources such as a VPC (Virtual Private Cloud), Internet Gateway, Route Tables, and Routes.
We will use AWS CloudFormation for infrastructure setup and tear-down.
NOTE: Provide unique names to the resources (wherever supported). You should be able to create multiple networks in the same account.
Use the following instructions to set up `dev`, `prod`, and `root` profiles for resource creation using AWS CloudFormation.
- Sign in to your AWS `root` account console.
- Navigate into the IAM console.
- Create a user group named `csye6225-ta` with `ReadOnlyAccess` privileges.
- Follow the above two steps for the `dev` and `prod` accounts.
- Sign in to your AWS `root` account console.
- Navigate into the IAM console.
- Create a user by providing the username.
- Add the user to the `csye6225-ta` user group created above.
- Do not configure credentials for the users. Leave the default "Autogenerated password" setting checked and copy the generated password. AWS does not email autogenerated passwords, so you need to send the password to each user manually.
- Provide appropriate tag(s); they're highly recommended.
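The console steps above can also be scripted with the AWS CLI. A sketch, assuming a root-account profile is active; `jdoe` is a placeholder username:

```shell
# Create the read-only TA group and attach the AWS-managed ReadOnlyAccess policy
aws iam create-group --group-name csye6225-ta
aws iam attach-group-policy \
  --group-name csye6225-ta \
  --policy-arn arn:aws:iam::aws:policy/ReadOnlyAccess

# Create the user and add them to the group
aws iam create-user --user-name jdoe
aws iam add-user-to-group --group-name csye6225-ta --user-name jdoe
```

Note that `aws iam create-user` does not create console credentials; a login profile would still need to be created separately.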
- Install and configure the AWS Command Line Interface (CLI) on your development machine (laptop). See Install the AWS Command Line Interface for detailed instructions on using the AWS CLI with Windows, macOS, or Linux.
- Below are the steps to download and use the AWS CLI on macOS:
- Download the file using the `curl` command:
# On macOS only
curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg"
- Run the macOS installer to install the AWS CLI:
# On macOS only
sudo installer -pkg ./AWSCLIV2.pkg -target /
- Verify that `zsh` can find and run the `aws` command using the following commands:
which aws
#/usr/local/bin/aws
aws --version
#aws-cli/2.8.2 Python/3.9.11 Darwin/21.6.0 exe/x86_64 prompt/off
NOTE: Alternatively, you can use Homebrew to install AWS CLI v2 on your Mac. See detailed instructions here.
- Create a `CLI` group in your `dev` and `prod` root accounts on the AWS Console.
- Attach the `AdministratorAccess` policy to this group.
- Add the `dev-cli` and `prod-cli` users to their respective user groups.
- In the terminal, create a `dev` user profile for your dev AWS account and a `prod` user profile for your production AWS account. Do not set up a `default` profile.
- Both the `dev` and `prod` AWS CLI profiles should be set to use the `us-east-1` region or the region closest to you.
- To create a profile, use the following command:
aws configure --profile <profile-name>
- The above command will ask you to fill out the following:
  - AWS Access Key ID
  - AWS Secret Access Key
  - Region
  - Output format
- To change the region on any profile, use the following command:
# change the region
aws configure set region <region-name> --profile dev
# you can omit --profile dev if you have env variables set (see below)
aws configure set region <region-name>
- To use a particular profile, use the command:
# For prod profile
export AWS_PROFILE=prod
# For dev profile
export AWS_PROFILE=dev
- To stop using a profile, use the following command:
# To stop using a profile
export AWS_PROFILE=
Configure the networking infrastructure setup using AWS CloudFormation:
- Create a CloudFormation template `csye6225-infra.json` or `csye6225-infra.yml` that can be used to set up the required networking resources.
- Do not hardcode values for your VPCs and their networking resources.
- Create a Virtual Private Cloud (VPC).
- Create subnets in your VPC. You must create `3` subnets, each in a different availability zone in the same region, in the same VPC.
- Create an Internet Gateway resource and attach the Internet Gateway to the VPC.
- Create a public route table. Attach all created subnets to the route table.
- Create a public route in the public route table created above with the destination CIDR block `0.0.0.0/0` and the Internet Gateway created above as the target.
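As one possible shape for the template, the networking pieces above might look like the sketch below. Logical IDs and parameter names (`VpcCidr`, `Subnet1Cidr`, etc.) are illustrative, not prescribed; repeat the subnet pattern for subnets 2 and 3 in the other availability zones:

```yaml
Resources:
  VPC:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: !Ref VpcCidr          # e.g. 10.0.0.0/16, passed in as a parameter
      EnableDnsSupport: true
      EnableDnsHostnames: true
  InternetGateway:
    Type: AWS::EC2::InternetGateway
  AttachGateway:
    Type: AWS::EC2::VPCGatewayAttachment
    Properties:
      VpcId: !Ref VPC
      InternetGatewayId: !Ref InternetGateway
  PublicRouteTable:
    Type: AWS::EC2::RouteTable
    Properties:
      VpcId: !Ref VPC
  PublicRoute:
    Type: AWS::EC2::Route
    DependsOn: AttachGateway
    Properties:
      RouteTableId: !Ref PublicRouteTable
      DestinationCidrBlock: 0.0.0.0/0
      GatewayId: !Ref InternetGateway
  Subnet1:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref VPC
      CidrBlock: !Ref Subnet1Cidr
      AvailabilityZone: !Select [0, !GetAZs ""]   # first AZ of the stack's region
  Subnet1RouteTableAssociation:
    Type: AWS::EC2::SubnetRouteTableAssociation
    Properties:
      SubnetId: !Ref Subnet1
      RouteTableId: !Ref PublicRouteTable
```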
To create a default VPC in case you deleted the default VPC in your AWS account, use the following command:
aws ec2 create-default-vpc
To create a stack with a custom AMI, replace the default value of the `AMI` parameter with the custom AMI id created using Packer:
Parameters:
  AMI:
    Type: String
    Default: "<your-ami-id>"
    Description: "The custom AMI built using Packer"
NOTE: For more details on how we'll be using HCP Packer, refer here.
To launch the EC2 AMI at CloudFormation stack creation, we need to have a few configurations in place.
We need to create a custom security group for our application with the following ingress rules to allow TCP traffic on our VPC:
- `SSH` protocol on port `22`.
- `HTTP` protocol on port `80`.
- `HTTPS` protocol on port `443`.
- Port `1337` for our webapp to be hosted on (this can vary according to developer needs).
- These ports should be accessible from anywhere in the world.
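A sketch of such a security group (the logical IDs are illustrative; `!Ref VPC` assumes the VPC's logical ID in your template):

```yaml
ApplicationSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: Allow SSH, HTTP, HTTPS, and webapp traffic from anywhere
    VpcId: !Ref VPC
    SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 22
        ToPort: 22
        CidrIp: 0.0.0.0/0
      - IpProtocol: tcp
        FromPort: 80
        ToPort: 80
        CidrIp: 0.0.0.0/0
      - IpProtocol: tcp
        FromPort: 443
        ToPort: 443
        CidrIp: 0.0.0.0/0
      - IpProtocol: tcp
        FromPort: 1337        # webapp port; vary per developer needs
        ToPort: 1337
        CidrIp: 0.0.0.0/0
```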
To launch the custom EC2 AMI using the CloudFormation stack, we need to configure the EC2 instance with the custom security group we created above, and then define the EBS volumes with the following properties:
- Custom AMI ID (created using Packer)
- Instance type: `t2.micro`
- Protected against accidental termination: no
- Root volume size: 50
- Root volume type: General Purpose SSD (GP2)
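The instance and EBS properties above could be expressed as follows. This is a sketch: the `AMI` parameter and `ApplicationSecurityGroup` logical ID are assumptions about your template's naming, and `/dev/xvda` assumes an Amazon Linux-style root device:

```yaml
EC2Instance:
  Type: AWS::EC2::Instance
  Properties:
    ImageId: !Ref AMI                  # custom AMI built with Packer
    InstanceType: t2.micro
    DisableApiTermination: false       # not protected against accidental termination
    SecurityGroupIds:
      - !Ref ApplicationSecurityGroup
    BlockDeviceMappings:
      - DeviceName: /dev/xvda          # root device name depends on the AMI
        Ebs:
          VolumeSize: 50
          VolumeType: gp2
          DeleteOnTermination: true
```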
To use RDS and S3 on AWS, we need to configure the following:
- `AWS::S3::Bucket`
  - Default encryption for the bucket.
  - Lifecycle policy to change the storage class from `STANDARD` to `STANDARD_IA` after 30 days.
- `AWS::RDS::DBParameterGroup`
  - DB engine config.
- `AWS::RDS::DBSubnetGroup`
- `AWS::EC2::SecurityGroup`
  - Ingress rule on port `5432` for Postgres, with the application security group as the source for traffic.
- `AWS::IAM::Role`
- `AWS::IAM::InstanceProfile`
- `AWS::IAM::Policy`

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": ["s3:*"],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::YOUR_BUCKET_NAME",
        "arn:aws:s3:::YOUR_BUCKET_NAME/*"
      ]
    }
  ]
}

NOTE: Replace `*` with appropriate permissions for the S3 bucket to create security policies.
- `AWS::RDS::DBInstance`
  - Configure the following:
    - Database engine: MySQL/PostgreSQL
    - DB instance class: db.t3.micro
    - Multi-AZ deployment: No
    - DB instance identifier: csye6225
    - Master username: csye6225
    - Master password: pick a strong password
    - Subnet group: private subnet for RDS instances
    - Public accessibility: No
    - Database name: csye6225

NOTE: To run the application on a custom bucket, we need to update the `UserData` field in the `AWS::EC2::Instance`.
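The bucket and database resources above could be sketched like this. Logical IDs (`DBPassword`, `DBSubnetGroup`, `DatabaseSecurityGroup`) and the 20 GiB allocated storage are illustrative assumptions, not values prescribed by this guide:

```yaml
S3Bucket:
  Type: AWS::S3::Bucket
  Properties:
    BucketEncryption:
      ServerSideEncryptionConfiguration:
        - ServerSideEncryptionByDefault:
            SSEAlgorithm: AES256
    LifecycleConfiguration:
      Rules:
        - Id: MoveToStandardIA
          Status: Enabled
          Transitions:
            - StorageClass: STANDARD_IA
              TransitionInDays: 30
RDSInstance:
  Type: AWS::RDS::DBInstance
  Properties:
    Engine: postgres
    DBInstanceClass: db.t3.micro
    MultiAZ: false
    DBInstanceIdentifier: csye6225
    DBName: csye6225
    MasterUsername: csye6225
    MasterUserPassword: !Ref DBPassword     # pass in as a NoEcho parameter
    PubliclyAccessible: false
    DBSubnetGroupName: !Ref DBSubnetGroup   # private-subnet group
    VPCSecurityGroups:
      - !Ref DatabaseSecurityGroup          # allows 5432 from the app SG only
    AllocatedStorage: "20"
```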
- To hard delete a bucket, you can use the following command:
aws s3 rm s3://<bucket-name> --recursive
To configure the Domain Name System (DNS), we need to do the following from the AWS Console:
- Register a domain with a domain registrar (Namecheap). Namecheap offers a free domain for a year with the GitHub Student Developer Pack.
- Configure AWS Route53 for DNS service:
  - Create a `HostedZone` for the root AWS account, where we create a public hosted zone for the domain `yourdomainname.tld`.
  - Configure Namecheap with the custom name servers provided by AWS Route53 to use Route53 name servers.
  - Create a public hosted zone in the dev AWS account with the subdomain `dev.yourdomainname.tld`.
  - Create a public hosted zone in the prod AWS account with the subdomain `prod.yourdomainname.tld`.
  - Configure the name servers and subdomain in the root AWS account (for both dev and prod).
- AWS Route53 is updated from the CloudFormation template. We need to add an `A` record to the Route53 zone so that your domain points to your EC2 instance and your web application is accessible through `http://your-domain-name.tld/`.
- The application must be accessible using the root context, i.e. `http://your-domain-name.tld/` and not `http://your-domain-name.tld/app-0.1/`.
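The `A` record added from the template might look like the sketch below, assuming an instance with logical ID `EC2Instance` and the dev subdomain (the TTL value is an assumption):

```yaml
DNSRecord:
  Type: AWS::Route53::RecordSet
  Properties:
    HostedZoneName: dev.yourdomainname.tld.   # note the trailing dot
    Name: dev.yourdomainname.tld.
    Type: A
    TTL: "60"
    ResourceRecords:
      - !GetAtt EC2Instance.PublicIp
```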
The following steps are done manually and only for the subdomain in the prod AWS account:
- Verify Domain in Amazon SES.
- Authenticate Email with DKIM in Amazon SES.
- Move Out of the Amazon SES Sandbox by Requesting Production Access.
Once you have production access, you can send out more than 50,000 emails per day. We need to create a custom `MAIL FROM` domain in the AWS account where we have Amazon SES production access. We need to publish the `MX` and `TXT` records to Route53 so that our DNS has access to our mail servers and, in turn, can send out emails.
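A sketch of those records for a MAIL FROM subdomain of `mail.prod.yourdomainname.tld` in `us-east-1` (the subdomain name and TTL are assumptions; the SMTP endpoint and SPF value follow SES's documented MAIL FROM setup and vary by region):

```yaml
MailFromMX:
  Type: AWS::Route53::RecordSet
  Properties:
    HostedZoneName: prod.yourdomainname.tld.
    Name: mail.prod.yourdomainname.tld.
    Type: MX
    TTL: "600"
    ResourceRecords:
      - 10 feedback-smtp.us-east-1.amazonses.com
MailFromTXT:
  Type: AWS::Route53::RecordSet
  Properties:
    HostedZoneName: prod.yourdomainname.tld.
    Name: mail.prod.yourdomainname.tld.
    Type: TXT
    TTL: "600"
    ResourceRecords:
      - '"v=spf1 include:amazonses.com ~all"'
```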
Add the following resources with appropriate properties and rules to the CloudFormation template to set up Amazon DynamoDB and Amazon SNS:
- `AWS::DynamoDB::Table`
  - ReadCapacity: 1
  - WriteCapacity: 1
  - TimeToLive: 5 minutes
- `AWS::Lambda::Function`
- `AWS::Lambda::Permission`
- `AWS::IAM::Role` for the Lambda function
  - ManagedPolicyArns:
    - `arn:aws:iam::aws:policy/AmazonSESFullAccess`
    - `arn:aws:iam::aws:policy/CloudWatchLogsFullAccess`
    - `arn:aws:iam::aws:policy/AmazonS3FullAccess`
    - `arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole`
    - `arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess`
- `AWS::SNS::Topic`
- `AWS::SNS::TopicPolicy`
Once the user hits the `/v1/account/` endpoint to create an account, DynamoDB stores a unique token for that username, and the SNS topic triggers the AWS Lambda function that sends out a mail to the user (via AWS SES) asking them to verify their account by clicking on a `verifyUserEmail` route in the REST API.
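The token table could be sketched as below. The table and attribute names are illustrative; note that DynamoDB TTL works off an epoch-seconds attribute on each item, so the "5 minutes" expiry is a timestamp the application writes (current time + 300 seconds), not a table-level setting:

```yaml
TokenTable:
  Type: AWS::DynamoDB::Table
  Properties:
    AttributeDefinitions:
      - AttributeName: Email
        AttributeType: S
    KeySchema:
      - AttributeName: Email
        KeyType: HASH
    ProvisionedThroughput:
      ReadCapacityUnits: 1
      WriteCapacityUnits: 1
    TimeToLiveSpecification:
      AttributeName: TimeToLive   # app writes epoch time of now + 5 minutes
      Enabled: true
```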
Once we have changes to be updated in our `webapp`, we will refresh/replace the instance(s) currently running in the auto-scaling group on the previous AMI. When the new AMI is ready, we will refresh the current instance(s) with new ones created from the latest AMI. This workflow is to be executed using CI/CD pipelines in GitHub Actions.
For reference, we'll be using the `start-instance-refresh` command.
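A sketch of the refresh step the pipeline would run; the auto-scaling group name and preference values below are placeholders:

```shell
# Replace running instances with new ones from the latest launch template version
aws autoscaling start-instance-refresh \
  --auto-scaling-group-name csye6225-asg \
  --preferences '{"MinHealthyPercentage": 90, "InstanceWarmup": 60}'
```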
Add the following resources with appropriate properties and rules to the CloudFormation template to set up auto scaling and load balancing:
- `AWS::EC2::LaunchTemplate`
- `AWS::AutoScaling::AutoScalingGroup`
- `AWS::AutoScaling::ScalingPolicy`
- `AWS::CloudWatch::Alarm`
- `AWS::EC2::SecurityGroup` for the load balancer
- `AWS::ElasticLoadBalancingV2::TargetGroup`
- `AWS::ElasticLoadBalancingV2::LoadBalancer`
- `AWS::ElasticLoadBalancingV2::Listener`
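The launch template and auto-scaling group could be sketched as below. The size limits and the referenced logical IDs (`AMI`, `ApplicationSecurityGroup`, `Subnet1`, `TargetGroup`) are assumptions about your template's naming:

```yaml
LaunchTemplate:
  Type: AWS::EC2::LaunchTemplate
  Properties:
    LaunchTemplateData:
      ImageId: !Ref AMI
      InstanceType: t2.micro
      SecurityGroupIds:
        - !Ref ApplicationSecurityGroup
AutoScalingGroup:
  Type: AWS::AutoScaling::AutoScalingGroup
  Properties:
    MinSize: "1"
    MaxSize: "3"
    DesiredCapacity: "1"
    LaunchTemplate:
      LaunchTemplateId: !Ref LaunchTemplate
      Version: !GetAtt LaunchTemplate.LatestVersionNumber
    VPCZoneIdentifier:
      - !Ref Subnet1               # list all three subnets here
    TargetGroupARNs:
      - !Ref TargetGroup           # registers instances with the load balancer
```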
To secure our EBS volumes and RDS instance, we will use Amazon KMS (Key Management Service) encryption keys.
Add the following resources with appropriate properties and rules to the CloudFormation template to set up the EBS and RDS keys:
- `AWS::KMS::Key`
- `AWS::KMS::Alias`
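A minimal sketch of a customer-managed key and its alias (the alias name and the root-admin-only key policy are illustrative; a real policy would also grant the service principals that use the key):

```yaml
EbsKmsKey:
  Type: AWS::KMS::Key
  Properties:
    Description: Customer-managed key for EBS volume encryption
    KeyPolicy:
      Version: "2012-10-17"
      Statement:
        - Sid: AllowRootAccountAdmin
          Effect: Allow
          Principal:
            AWS: !Sub arn:aws:iam::${AWS::AccountId}:root
          Action: kms:*
          Resource: "*"
EbsKmsKeyAlias:
  Type: AWS::KMS::Alias
  Properties:
    AliasName: alias/ebs-key
    TargetKeyId: !Ref EbsKmsKey
```

A similar key/alias pair would be created for the RDS instance.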
To get an SSL certificate for your domain, visit ZeroSSL. Follow the instructions to set up SSL for Amazon Web Services.
You may need to add the `CNAME` record to Amazon Route 53 to get SSL working.
To import the SSL certificate and private key that you download from ZeroSSL, use the following command:
aws acm import-certificate --certificate fileb://certificate.crt --certificate-chain fileb://ca_bundle.crt --private-key fileb://private.key
Validate the CloudFormation template using the following command:
aws cloudformation validate-template --template-body file://<path-to-template-file>.yaml
To create the stack, run the following command:
aws cloudformation create-stack --stack-name <stack-name> --template-body file://<path-to-template-file>.yaml
To create a stack with custom parameters:
aws cloudformation create-stack --stack-name app-stack \
--template-body file://templates/<your-template>.yaml \
--parameters ParameterKey=Environment,ParameterValue=prod \
ParameterKey=AMI,ParameterValue=<ami-id> \
ParameterKey=SSLCertificateId,ParameterValue=<your-certificate-id> \
--capabilities CAPABILITY_NAMED_IAM
If you want to use a separate file that stores these parameters, you'll need to specify the path to this parameter file when creating (or updating) the stack.
This parameter file should have the extension `.json` or `.yaml`. However, support for YAML parameter files in the AWS CLI is not yet implemented; refer to this issue for more details. Native support for a JSON parameters file is present and easy to use:
aws cloudformation create-stack --stack-name <your-stack-name> \
--template-body file://templates/<your-template>.yaml \
--parameters file://./<params-file>.json \
--capabilities CAPABILITY_NAMED_IAM
However, as a best practice, it's better not to have mixed markups for AWS CloudFormation configurations. Since our base template is in YAML, we'll use a parameter file written in YAML. The only hack here is that we need to install a separate package called yq, which will help us parse our `.yaml` file into a valid parameter file for the AWS CLI CloudFormation command.
To install `yq`:
# mac install only
# Refer https://github.com/mikefarah/yq for installation options on other OS platforms
brew install yq
To use the `.yaml` parameter file:
aws cloudformation create-stack --stack-name <your-stack-name> \
--template-body file://templates/<your-template>.yaml \
--parameters $(yq eval -o=j ./<params-file>.yaml) \
--capabilities CAPABILITY_NAMED_IAM
Refer to this issue for more details on how to use YAML for the AWS CLI CloudFormation parameters option.
To update the stack, run the following command:
aws cloudformation update-stack --stack-name <stack-name> --template-body file://<path-to-template-file>.yml
To delete the stack, run the following command:
aws cloudformation delete-stack --stack-name <stack-name>
- To list all the stacks in the current `AWS_PROFILE`, use the following command:
aws cloudformation list-stacks --output table
- To view the details of the stack created, run the following command:
# displays the result in a table format
aws cloudformation describe-stacks --stack-name <stack-name> --output table
- To view the details of VPCs created, run the following command:
# displays the result in a table format
aws ec2 describe-vpcs --output table
- To get the AZs (Availability Zones) of a region, use the following command:
aws ec2 describe-availability-zones [--region <region-name>] --output table
To connect to the instance built using the custom AMI and CloudFormation stack, use the following command:
ssh <username>@<ip-address> -v -i ~/.ssh/<key-name>
Ex: ssh <username>@<public-ip> -v -i ~/.ssh/ec2-user
To access your database on the EC2 instance, use the following command:
psql --host=<your-rds.amazonaws.com-host> --port=5432 --username=<your-username> --password --dbname=<your-db-name>
Running the above command will prompt you to enter the password for your database.