
Commit

Merge pull request #2 from abdullahkhawer/code-v1.1
feat: Update code to fix bugs in it and to refactor it.
abdullahkhawer authored Jan 23, 2024
2 parents 1ad32f0 + babe2c2 commit 5d3cf26
Showing 22 changed files with 695 additions and 634 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -6,3 +6,4 @@ terraform.tfstate.d
.env
.idea
tf-plan*
mongodb.key
12 changes: 12 additions & 0 deletions CHANGELOG.md
@@ -3,6 +3,18 @@
All notable changes to this project will be documented in this file.


## [1.1.0] - 2024-01-23

[1.1.0]: https://github.com/abdullahkhawer/mongodb-cluster-on-aws-ecs/releases/tag/v1.1.0

### Features

- Update code to:
  - Set the threshold for CPU, memory, and disk space utilization to 85%.
  - Create locals to define the AWS VPC private subnets along with their length.
  - Select the correct AWS VPC private subnet ID even if there are fewer subnets than the number of AWS EC2 instances.
  - Select the correct private AWS Route 53 hosted zone if both a public and a private zone exist with the same name/domain.
  - Set the correct AWS ECS cluster name under the dimensions of the AWS CloudWatch metric alarms.
  - Fix the Terraform code with respect to the AWS Terraform provider v4.65.0.
  - Update the backups AWS S3 bucket's lifecycle policy rules to set a rule for INTELLIGENT_TIERING.
  - Add code to wait for the first AWS EC2 instance to be running and to complete its status checks.
  - Refactor the whole Terraform code and update the README accordingly.

### Miscellaneous Tasks

- Add mongodb.key in .gitignore file.

## [1.0.0] - 2024-01-15

[1.0.0]: https://github.com/abdullahkhawer/mongodb-cluster-on-aws-ecs/releases/tag/v1.0.0
42 changes: 22 additions & 20 deletions README.md
@@ -1,20 +1,21 @@
# MongoDB cluster on AWS ECS
# MongoDB Cluster on AWS ECS - Terraform Module

- Founder: Abdullah Khawer (LinkedIn: https://www.linkedin.com/in/abdullah-khawer/)

## Introduction

A Terraform module developed to quickly deploy a secure, persistent, highly available, self healing, efficient and cost effective single-node or multi-node MongoDB NoSQL document database cluster on AWS ECS cluster as there is no managed service available for MongoDB on AWS with such features.
A Terraform module developed to quickly deploy a secure, persistent, highly available, self-healing, efficient, and cost-effective single-node or multi-node MongoDB NoSQL document database cluster on an AWS ECS cluster with monitoring and alerting enabled, as there is no managed service for MongoDB on AWS with such features.

## Key Highlights

- A single-node or multi-node MongoDB cluster under AWS Auto Scaling group to launch multiple MongoDB nodes as replicas to make it highly available, efficient and self healing with a help of bootstrapping script with some customizations.
- Using AWS ECS service registry with awsvpc as network mode instead of AWS ELB to save cost on networking side and make it more secure. AWS ECS task IPs are updated by the bootstrapping script on an AWS Route 53 hosted zone.
- A single-node (1 node) or multi-node (2 or 3 nodes) MongoDB cluster under an AWS Auto Scaling group that launches MongoDB nodes as replicas to make it highly available, efficient, and self-healing, with the help of a bootstrapping script with some customizations.
- Using an AWS Route 53 private hosted zone for AWS ECS services with `awsvpc` as the network mode instead of an AWS ELB to save cost on the networking side and make it more secure. AWS ECS services' task IPs are updated on the AWS Route 53 private hosted zone by the bootstrapping script that runs on each AWS EC2 instance node as user data.
- Persistent and encrypted AWS EBS volumes of type gp3 using the rexray/ebs Docker plugin so the data stays secure and reliable.
- AWS S3 bucket for backups storage for disaster recovery along with lifecycle rules for data archival and deletion.
- Custom backup and restore scripts for data migration and disaster recovery capabilities available on each AWS EC2 instance due to a bootstrapping script.
- Each AWS EC2 instance is configured with various customizations like pre-installed wget, unzip, awscli, Docker, ECS agent, MongoDB, Mongosh, MongoDB database tools, key file for MongoDB Cluster, custom agent for AWS EBS volumes disk usage monitoring and cronjobs to take a backup at 03:00 AM daily and to send disk usage metrics to AWS CloudWatch at every minute.
- Each AWS EC2 instance is configured with soft rlimits and ulimits defined and transparent huge pages disabled to make MongoDB database more efficient.
- An AWS S3 bucket for backup storage for disaster recovery, along with a lifecycle rule that uses Intelligent-Tiering as the storage class for objects to save on data storage cost.
- Custom backup and restore scripts for data migration and disaster recovery are made available on each AWS EC2 instance node by the bootstrapping script running as user data.
- Each AWS EC2 instance node is configured with various customizations such as pre-installed wget, unzip, awscli, Docker, the ECS agent, MongoDB, Mongosh, the MongoDB database tools, the key file for the MongoDB cluster, a custom agent for AWS EBS volume disk usage monitoring, and cron jobs to take a backup daily at 03:00 AM UTC and to send disk usage metrics to AWS CloudWatch every minute.
- Each AWS EC2 instance node is configured with soft rlimits and ulimits defined and transparent huge pages disabled to make the MongoDB database more efficient.
- AWS CloudWatch alarms to send alerts when CPU, memory, or disk space utilization goes beyond 85%.

## Usage Notes

@@ -28,49 +29,50 @@ Following are the resources that should exist already before starting the deploy
- `openssl rand -base64 756 > mongodb.key`
- `chmod 400 mongodb.key`
- 1 key pair named `[PROJECT]-[ENVIRONMENT_NAME]-mongodb` under **AWS EC2 Key Pairs**.
- 1 private hosted zone under **AWS Route53** with any working domain.
- 1 vpc under **AWS VPC** having at least 1 private subnet or ideally, 3 private and 3 public subnets with name tags (e.g., Private-1-Subnet, Private-2-Subnet, etc).
- 1 private hosted zone under **AWS Route53** with a working domain.
- 1 VPC under **AWS VPC** having 1, 2, or 3 private subnets, each with a name tag (e.g., Private-1-Subnet, Private-2-Subnet, etc.).
- 1 topic under **AWS SNS** to send notifications via AWS CloudWatch alarms (a CLI sketch for creating these prerequisites follows this list).
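
The following is a minimal sketch of how these prerequisites could be created with the AWS CLI; it is not part of the module, and it assumes AWS CLI v2 with working credentials plus placeholder names (`project-dev-mongodb`, `project.net`, `vpc-0123456789abcdef0`, `mongodb-alerts`) that should be replaced with your own:

```
# Generate the MongoDB key file (same commands as listed above).
openssl rand -base64 756 > mongodb.key
chmod 400 mongodb.key

# Create the AWS EC2 key pair and keep the private key locally.
aws ec2 create-key-pair \
  --key-name project-dev-mongodb \
  --query 'KeyMaterial' --output text > project-dev-mongodb.pem

# Create a private AWS Route 53 hosted zone associated with the VPC.
aws route53 create-hosted-zone \
  --name project.net \
  --vpc VPCRegion=eu-west-1,VPCId=vpc-0123456789abcdef0 \
  --caller-reference "$(date +%s)"

# Create the AWS SNS topic that the AWS CloudWatch alarms will notify.
aws sns create-topic --name mongodb-alerts
```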

## Deployment Instructions

Simply deploy it from the terraform directory directly or as a Terraform module by specifying the desired values for the variables. You can check the `terraform-usage-example.tf` file as an example.
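
For a direct deployment from the `terraform` directory, a minimal sketch could look like the following, assuming Terraform and AWS credentials are already configured and using a plan file name that matches the `tf-plan*` pattern already ignored in `.gitignore`:

```
cd terraform
terraform init
terraform plan -out=tf-plan
terraform apply tf-plan
```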

## Post Deployment Replica Set Configuration

Once the deployment is done, log into the MongoDB cluster via its 1st AWS EC2 instance node using AWS SSM Session Manager using the following command: `mongosh "mongodb://[USERNAME]:[PASSWORD]@mongodb1.[ENVIRONMENT_NAME]-local:27017/admin?&retryWrites=false"`
Once the deployment is done, log into the MongoDB cluster via its 1st AWS EC2 instance node through AWS SSM Session Manager by running the following command after replacing `[USERNAME]`, `[PASSWORD]`, `[ENVIRONMENT_NAME]`, and `[AWS_ROUTE_53_PRIVATE_HOSTED_ZONE_NAME]` in it: `mongosh "mongodb://[USERNAME]:[PASSWORD]@[ENVIRONMENT_NAME]-mongodb1.[AWS_ROUTE_53_PRIVATE_HOSTED_ZONE_NAME]:27017/admin?&retryWrites=false"`

Then initiate the replica set using the following command:
Then initiate the replica set using the following command after replacing `[ENVIRONMENT_NAME]` and `[AWS_ROUTE_53_PRIVATE_HOSTED_ZONE_NAME]` in it:

```
rs.initiate({
_id: "rs0",
members: [
{ _id: 0, host: "mongodb1.[ENVIRONMENT_NAME]-local:27017" },
{ _id: 1, host: "mongodb2.[ENVIRONMENT_NAME]-local:27017" },
{ _id: 2, host: "mongodb3.[ENVIRONMENT_NAME]-local:27017" }
{ _id: 0, host: "[ENVIRONMENT_NAME]-mongodb1.[AWS_ROUTE_53_PRIVATE_HOSTED_ZONE_NAME]:27017" },
{ _id: 1, host: "[ENVIRONMENT_NAME]-mongodb2.[AWS_ROUTE_53_PRIVATE_HOSTED_ZONE_NAME]:27017" },
{ _id: 2, host: "[ENVIRONMENT_NAME]-mongodb3.[AWS_ROUTE_53_PRIVATE_HOSTED_ZONE_NAME]:27017" }
]
})
```
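
As an optional sanity check (not something the module does for you), you could confirm that the replica set has a `PRIMARY` and the expected members, either by running `rs.status()` inside the same mongosh session or from a shell after replacing the same placeholders:

```
mongosh "mongodb://[USERNAME]:[PASSWORD]@[ENVIRONMENT_NAME]-mongodb1.[AWS_ROUTE_53_PRIVATE_HOSTED_ZONE_NAME]:27017/admin?&retryWrites=false" \
  --eval 'rs.status().members.forEach(m => print(m.name, m.stateStr))'
```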

You can now connect to the replica set using the following command: `mongosh "mongodb://[USERNAME]:[PASSWORD]@mongodb1.[ENVIRONMENT_NAME]-local:27017,mongodb2.[ENVIRONMENT_NAME]-local:27017,mongodb3.[ENVIRONMENT_NAME]-local:27017/admin?replicaSet=rs0&readPreference=secondaryPreferred&retryWrites=true"`
You can now connect to the replica set using the following command after replacing `[USERNAME]`, `[PASSWORD]`, `[ENVIRONMENT_NAME]` and `[AWS_ROUTE_53_PRIVATE_HOSTED_ZONE_NAME]` in it: `mongosh "mongodb://[USERNAME]:[PASSWORD]@[ENVIRONMENT_NAME]-mongodb1.[AWS_ROUTE_53_PRIVATE_HOSTED_ZONE_NAME]:27017,[ENVIRONMENT_NAME]-mongodb2.[AWS_ROUTE_53_PRIVATE_HOSTED_ZONE_NAME]:27017,[ENVIRONMENT_NAME]-mongodb3.[AWS_ROUTE_53_PRIVATE_HOSTED_ZONE_NAME]:27017/admin?replicaSet=rs0&readPreference=secondaryPreferred&retryWrites=true"`

*Note: The sample commands in the above example assume that the cluster has 3 nodes.*

## Replica Set Recovery

If you lost the replica set, you can reconfigure it using the following commands:
If the replica set is lost, you can reconfigure it using the following commands after replacing `[ENVIRONMENT_NAME]` and `[AWS_ROUTE_53_PRIVATE_HOSTED_ZONE_NAME]` in them:

```
rs.reconfig({
_id: "rs0",
members: [
{ _id: 0, host: "mongodb1.stage-local:27017" }
{ _id: 0, host: "[ENVIRONMENT_NAME]-mongodb1.[AWS_ROUTE_53_PRIVATE_HOSTED_ZONE_NAME]:27017" }
]
}, {"force":true})
rs.add({ _id: 1, host: "mongodb2.stage-local:27017" })
rs.add({ _id: 1, host: "[ENVIRONMENT_NAME]-mongodb2.[AWS_ROUTE_53_PRIVATE_HOSTED_ZONE_NAME]:27017" })
rs.add({ _id: 2, host: "mongodb3.stage-local:27017" })
rs.add({ _id: 2, host: "[ENVIRONMENT_NAME]-mongodb3.[AWS_ROUTE_53_PRIVATE_HOSTED_ZONE_NAME]:27017" })
```

*Note: The sample commands in the above example assume that the cluster has 3 nodes.*
2 changes: 1 addition & 1 deletion VERSION
@@ -1 +1 @@
v1.0.0
v1.1.0
6 changes: 3 additions & 3 deletions terraform-usage-example.tf
@@ -20,12 +20,12 @@ module "mongodb-cluster-on-aws-ecs" {
image = "docker.io/mongo:5.0.6"
hosted_zone_name = "project.net" # dummy value
ec2_key_pair_name = "project-dev-mongodb" # dummy value
number_of_instances = 3
private_subnet_tag_name = "Private-1*" # dummy value
number_of_instances = 3 # Minimum 1, Maximum 3
private_subnet_tag_name = "Private-1*" # dummy value

# If you want to enable disk usage monitoring
monitoring_enabled = true
alarm_treat_missing_data = "missing"
alarm_treat_missing_data = "ignore"
aws_sns_topic = "arn:aws:sns:eu-west-1:012345678910:AWS_SNS_TOPIC_NAME"

# If you want to enable backups
43 changes: 0 additions & 43 deletions terraform/cloudwatch.tf

This file was deleted.

34 changes: 0 additions & 34 deletions terraform/data.tf

This file was deleted.

83 changes: 0 additions & 83 deletions terraform/ec2_asg.tf

This file was deleted.

8 changes: 0 additions & 8 deletions terraform/ecs_cluster.tf

This file was deleted.

12 changes: 0 additions & 12 deletions terraform/ecs_service.tf

This file was deleted.

