Commit

Feature/scaffolding_tools (#57)
samidbb authored Jan 4, 2024
1 parent ac7e8d9 commit 95cfe95
Showing 19 changed files with 313 additions and 23 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/housekeeping.yml
@@ -12,4 +12,4 @@ jobs:
delete_head_branch: true
squash_merge: true
branch_protection: true
status_checks: true
status_checks: false
1 change: 1 addition & 0 deletions .github/workflows/qa.yml
@@ -107,6 +107,7 @@ jobs:
release:
runs-on: ubuntu-latest
needs: qa-test
if: github.ref == 'refs/heads/main'
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
steps:
3 changes: 3 additions & 0 deletions .gitignore
@@ -196,3 +196,6 @@ cython_debug/

.trunk/
*.pem

# Auto-generated files
auto-generated
8 changes: 3 additions & 5 deletions README.md
@@ -88,17 +88,16 @@ Terraform module for AWS RDS instances
| <a name="input_environment"></a> [environment](#input\_environment) | Specify the staging environment.<br> Valid Values: "dev", "test", "staging", "uat", "training", "prod".<br> Notes: The value will set configuration defaults according to DFDS policies. | `string` | n/a | yes |
| <a name="input_final_snapshot_identifier_prefix"></a> [final\_snapshot\_identifier\_prefix](#input\_final\_snapshot\_identifier\_prefix) | Specifies the name which is prefixed to the final snapshot on cluster destroy.<br> Valid Values: .<br> Notes: . | `string` | `"final"` | no |
| <a name="input_iam_database_authentication_enabled"></a> [iam\_database\_authentication\_enabled](#input\_iam\_database\_authentication\_enabled) | Set this to true to enable authentication using IAM.<br> Valid Values: .<br> Notes: This requires creating mappings between IAM users/roles and database accounts in the RDS instance for this to work properly. | `bool` | `false` | no |
| <a name="input_identifier"></a> [identifier](#input\_identifier) | Specify the name of the RDS instance to create.<br> Valid Values: .<br> Notes: . | `string` | n/a | yes |
| <a name="input_identifier"></a> [identifier](#input\_identifier) | Specify the name of the RDS instance to create.<br> Valid Values: .<br> Notes: This | `string` | n/a | yes |
| <a name="input_instance_class"></a> [instance\_class](#input\_instance\_class) | Specify instance type of the RDS instance.<br> Valid Values:<br> "db.t3.micro",<br> "db.t3.small",<br> "db.t3.medium",<br> "db.t3.large",<br> "db.t3.xlarge",<br> "db.t3.2xlarge",<br> "db.r6g.xlarge",<br> "db.m6g.large",<br> "db.m6g.xlarge",<br> "db.t2.micro",<br> "db.t2.small",<br> "db.t2.medium",<br> "db.m4.large",<br> "db.m5d.large",<br> "db.m6i.large",<br> "db.m5.xlarge",<br> "db.t4g.micro",<br> "db.t4g.small",<br> "db.t4g.large",<br> "db.t4g.xlarge"<br> Notes: If omitted, the instance type will be set to db.t3.micro. | `string` | `null` | no |
| <a name="input_instance_is_multi_az"></a> [instance\_is\_multi\_az](#input\_instance\_is\_multi\_az) | Specify if the RDS instance is multi-AZ.<br> Valid Values: .<br> Notes:<br> - This creates a primary DB instance and a standby DB instance in a different AZ for high availability and data redundancy.<br> - Standby DB instance doesn't support connections for read workloads.<br> - If this variable is omitted:<br> - This value is set to true by default for production environments.<br> - This value is set to false by default for non-production environments. | `bool` | `null` | no |
| <a name="input_instance_parameters"></a> [instance\_parameters](#input\_instance\_parameters) | Specify a list of DB parameters (map) to modify.<br> Valid Values: Example:<br> instance\_parameters = [{<br> name = "rds.force\_ssl"<br> value = 1<br> apply\_method = "pending-reboot",<br> ... # Other parameters<br> }]<br> Notes: See [documentation](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.PostgreSQL.CommonDBATasks.html#Appendix.PostgreSQL.CommonDBATasks.Parameters) for more information. | `list(map(string))` | `[]` | no |
| <a name="input_instance_terraform_timeouts"></a> [instance\_terraform\_timeouts](#input\_instance\_terraform\_timeouts) | Specify Terraform resource management timeouts.<br> Valid Values: .<br> Notes: Applies to `aws_db_instance` in particular to permit resource management times. See [documentation](https://www.terraform.io/docs/configuration/resources.html#operation-timeouts) for more information. | `map(string)` | `{}` | no |
| <a name="input_iops"></a> [iops](#input\_iops) | Specify the amount of provisioned IOPS.<br> Valid Values: .<br> Notes: Setting this implies a storage\_type of `io1` or `gp3`. See `notes` for limitations regarding this variable for `gp3`. | `number` | `null` | no |
| <a name="input_is_cluster"></a> [is\_cluster](#input\_is\_cluster) | n/a | `bool` | `false` | no |
| <a name="input_is_instance"></a> [is\_instance](#input\_is\_instance) | n/a | `bool` | `true` | no |
| <a name="input_is_cluster"></a> [is\_cluster](#input\_is\_cluster) | [Experimental Feature] Specify whether or not to deploy the instance as a multi-AZ database cluster.<br> Valid Values: .<br> Notes:<br> - This feature is currently in beta and is subject to change.<br> - It creates a DB cluster with a primary DB instance and two readable standby DB instances.<br> - Each DB instance is in a different Availability Zone (AZ).<br> - Provides high availability and data redundancy, and increases capacity to serve read workloads.<br> - Proxy is not supported for cluster instances.<br> - For smaller workloads, we recommend considering a single instance instead of a cluster. | `bool` | `false` | no |
| <a name="input_is_kubernetes_app_enabled"></a> [is\_kubernetes\_app\_enabled](#input\_is\_kubernetes\_app\_enabled) | Specify whether or not to enable access from Kubernetes pods.<br> Valid Values: .<br> Notes: Enabling this will create the following resources:<br> - IAM role for service account (IRSA)<br> - IAM policy for service account (IRSA) | `bool` | `false` | no |
| <a name="input_is_proxy_included"></a> [is\_proxy\_included](#input\_is\_proxy\_included) | Specify whether or not to include proxy.<br> Valid Values: .<br> Notes: Proxy helps managing database connections. See [documentation](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/rds-proxy-planning.html) for more information. | `bool` | `false` | no |
| <a name="input_is_serverless"></a> [is\_serverless](#input\_is\_serverless) | n/a | `bool` | `false` | no |
| <a name="input_is_publicly_accessible"></a> [is\_publicly\_accessible](#input\_is\_publicly\_accessible) | Specify whether or not this instance is publicly accessible.<br> Valid Values: .<br> Notes:<br> - Setting this to true will do the following:<br> - Assign a public IP address, and the host name of the DB instance will resolve to the public IP address.<br> - Access from within the VPC can be achieved by using the private IP address of the assigned Network Interface. | `bool` | `false` | no |
| <a name="input_maintenance_window"></a> [maintenance\_window](#input\_maintenance\_window) | Specify the window to perform maintenance in.<br> Valid Values: Syntax: `ddd:hh24:mi-ddd:hh24:mi`. Eg: `"Mon:00:00-Mon:03:00"`.<br> Notes: Default value is set to `"Sat:18:00-Sat:20:00"`. This is adjusted in accordance with AWS Backup schedule, see info [here](https://wiki.dfds.cloud/en/playbooks/aws-backup/aws-backup-getting-started). | `string` | `"Sat:18:00-Sat:20:00"` | no |
| <a name="input_manage_master_user_password"></a> [manage\_master\_user\_password](#input\_manage\_master\_user\_password) | Set to true to allow RDS to manage the master user password in Secrets Manager<br> Valid Values: .<br> Notes:<br> - Default value is set to true. It is recommended to use this feature.<br> - If set to true, the `password` variable will be ignored. | `bool` | `true` | no |
| <a name="input_max_allocated_storage"></a> [max\_allocated\_storage](#input\_max\_allocated\_storage) | Set the value to enable Storage Autoscaling and to set the max allocated storage.<br> Valid Values: .<br> Notes:<br> - If this variable is omitted:<br> - This value is set to 50 by default for production environments.<br> - This value is set to 0 by default for non-production environments. | `number` | `null` | no |
@@ -117,7 +116,6 @@ Terraform module for AWS RDS instances
| <a name="input_proxy_idle_client_timeout"></a> [proxy\_idle\_client\_timeout](#input\_proxy\_idle\_client\_timeout) | Specify idle client timeout of the RDS proxy (keep connection alive).<br> Valid Values: .<br> Notes: . | `number` | `1800` | no |
| <a name="input_proxy_require_tls"></a> [proxy\_require\_tls](#input\_proxy\_require\_tls) | Specify whether or not to require TLS for the proxy.<br> Valid Values: .<br> Notes: Default value is set to true. | `bool` | `true` | no |
| <a name="input_proxy_security_group_rules"></a> [proxy\_security\_group\_rules](#input\_proxy\_security\_group\_rules) | Specify additional security group rules for the RDS proxy.<br> Valid Values: .<br> Notes:<br> - Only ingress (inbound) rules are supported.<br> - Ingress rules are set to "Allow outbound traffic to PostgreSQL instance"<br> - Ingress rules are set to "Allow inbound traffic from same security group on specified database port" | <pre>object({<br> ingress_rules = list(any)<br> ingress_with_self = optional(list(any), [])<br> })</pre> | <pre>{<br> "ingress_rules": []<br>}</pre> | no |
| <a name="input_publicly_accessible"></a> [publicly\_accessible](#input\_publicly\_accessible) | Specify whether or not this instance is publicly accessible.<br> Valid Values: .<br> Notes:<br> - Setting this to true will do the followings:<br> - Assign a public IP address and the host name of the DB instance will resolve to the public IP address.<br> - Access from within the VPC can be achived by using the private IP address of the assigned Network Interface. | `bool` | `false` | no |
| <a name="input_rds_security_group_rules"></a> [rds\_security\_group\_rules](#input\_rds\_security\_group\_rules) | Specify additional security group rules for the RDS instance.<br> Valid Values: .<br> Notes: . | <pre>object({<br> ingress_rules = list(any)<br> ingress_with_self = optional(list(any), [])<br> egress_rules = optional(list(any), [])<br> })</pre> | n/a | yes |
| <a name="input_replicate_source_db"></a> [replicate\_source\_db](#input\_replicate\_source\_db) | Indicate that this resource is a replica database, and use this value as the source database.<br> Valid Values: The identifier of another Amazon RDS database to replicate in the same region.<br> Notes: In case of cross-region replication, specify the ARN of the source DB instance. | `string` | `null` | no |
| <a name="input_resource_owner_contact_email"></a> [resource\_owner\_contact\_email](#input\_resource\_owner\_contact\_email) | Provide an email address for the resource owner (e.g. team or individual).<br> Valid Values: .<br> Notes: This set the dfds.owner tag. See recommendations [here](https://wiki.dfds.cloud/en/playbooks/standards/tagging_policy). | `string` | `null` | no |
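The `maintenance_window` row above documents the `ddd:hh24:mi-ddd:hh24:mi` syntax. As a minimal sketch of how a caller might sanity-check that format before passing it to the module (this validator is hypothetical and not part of the module itself):

```python
import re

# Hypothetical validator for the maintenance_window syntax documented above:
# ddd:hh24:mi-ddd:hh24:mi, e.g. "Sat:18:00-Sat:20:00". Not part of the module.
_WINDOW_RE = re.compile(
    r"^(Mon|Tue|Wed|Thu|Fri|Sat|Sun):([01]\d|2[0-3]):[0-5]\d"
    r"-(Mon|Tue|Wed|Thu|Fri|Sat|Sun):([01]\d|2[0-3]):[0-5]\d$"
)

def is_valid_maintenance_window(window: str) -> bool:
    """Return True if the string matches the ddd:hh24:mi-ddd:hh24:mi syntax."""
    return _WINDOW_RE.match(window) is not None

print(is_valid_maintenance_window("Sat:18:00-Sat:20:00"))  # the module default
print(is_valid_maintenance_window("Mon:00:00-Mon:03:00"))  # example from the table
print(is_valid_maintenance_window("Saturday:18:00"))       # wrong syntax
```

Note that this only checks syntax; AWS additionally requires the window to be at least 30 minutes long, which a regex alone cannot enforce.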
2 changes: 1 addition & 1 deletion locals.tf
@@ -72,7 +72,7 @@ locals {
########################################################################

iops = var.iops == null && var.storage_type == "io1" ? 1000 : var.iops # The minimum value is 1,000 IOPS and the maximum value is 256,000 IOPS. The IOPS to GiB ratio must be between 0.5 and 50
is_serverless = var.is_serverless # temporary controlled by variable. TODO: Replace by calculation
is_serverless = false # temporarily hardcoded to false. TODO: Replace by calculation
final_snapshot_identifier = var.skip_final_snapshot ? null : "${var.final_snapshot_identifier_prefix}-${var.identifier}-${try(random_id.snapshot_identifier[0].hex, "")}"

engine = "postgres"
2 changes: 1 addition & 1 deletion main.tf
@@ -95,7 +95,7 @@ module "db_instance" {
multi_az = local.instance_is_multi_az
iops = var.iops
storage_throughput = var.storage_throughput
publicly_accessible = var.publicly_accessible
publicly_accessible = var.is_publicly_accessible
ca_cert_identifier = var.ca_cert_identifier
allow_major_version_upgrade = var.allow_major_version_upgrade
auto_minor_version_upgrade = var.auto_minor_version_upgrade
2 changes: 1 addition & 1 deletion tests/instance/main.tf
@@ -26,7 +26,7 @@ module "rds_instance_test" {
username = "instance_user"

apply_immediately = true
publicly_accessible = true
is_publicly_accessible = true
subnet_ids = concat(module.vpc.public_subnets)
enabled_cloudwatch_logs_exports = ["upgrade", "postgresql"]
cloudwatch_log_group_retention_in_days = 1
2 changes: 1 addition & 1 deletion tests/qa/main.tf
@@ -29,7 +29,7 @@ module "rds_instance_test" { # TODO: change to only use defaults and required va
iam_database_authentication_enabled = true
ca_cert_identifier = "rds-ca-ecc384-g1"
apply_immediately = true
publicly_accessible = true
is_publicly_accessible = true
subnet_ids = ["subnet-04d5d42ac21fd8e8f", "subnet-0e50a82dec5fc0272", "subnet-0a49d384ff2e8a580"]
enabled_cloudwatch_logs_exports = ["upgrade", "postgresql"]
cloudwatch_log_group_retention_in_days = 1
40 changes: 40 additions & 0 deletions tools/Dockerfile
@@ -0,0 +1,40 @@
FROM python:slim

# openssh-client is needed for the GitHub known-hosts step below
RUN apt-get update && \
    apt-get install -y --no-install-recommends curl libpq-dev less jq tar unzip openssh-client && \
    rm -rf /var/lib/apt/lists/*


# Adding GitHub public SSH key to known hosts
RUN ssh -T -o "StrictHostKeyChecking no" -o "PubkeyAuthentication no" [email protected] || true

# ========================================
# TERRAFORM DOCS
# ========================================
ENV TERRAFORM_DOCS_VERSION=0.17.0
RUN export BUILD_ARCHITECTURE=$(uname -m); \
if [ "$BUILD_ARCHITECTURE" = "x86_64" ]; then export BUILD_ARCHITECTURE_ARCH=amd64; fi; \
if [ "$BUILD_ARCHITECTURE" = "aarch64" ]; then export BUILD_ARCHITECTURE_ARCH=arm64; fi; \
curl -sSLo ./terraform-docs.tar.gz https://terraform-docs.io/dl/v${TERRAFORM_DOCS_VERSION}/terraform-docs-v${TERRAFORM_DOCS_VERSION}-linux-${BUILD_ARCHITECTURE_ARCH}.tar.gz && \
tar -xzf terraform-docs.tar.gz && \
chmod +x terraform-docs && \
mv terraform-docs /usr/local/bin/ && \
rm terraform-docs.tar.gz

# ========================================
# TERRAFORM
# ========================================

ENV TERRAFORM_VERSION=1.4.6

RUN export BUILD_ARCHITECTURE=$(uname -m); \
if [ "$BUILD_ARCHITECTURE" = "x86_64" ]; then export BUILD_ARCHITECTURE_ARCH=amd64; fi; \
if [ "$BUILD_ARCHITECTURE" = "aarch64" ]; then export BUILD_ARCHITECTURE_ARCH=arm64; fi; \
curl -Os https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_${BUILD_ARCHITECTURE_ARCH}.zip \
&& unzip terraform_${TERRAFORM_VERSION}_linux_${BUILD_ARCHITECTURE_ARCH}.zip \
&& mv terraform /usr/local/bin/ \
&& terraform -install-autocomplete


COPY scaffolding/scripts /scripts
COPY scaffolding/templates /templates

ENTRYPOINT [ "bash", "/scripts/entrypoint.sh"]
13 changes: 13 additions & 0 deletions tools/README.md
@@ -0,0 +1,13 @@
Build the Docker image:

```bash
cd /dfds/aws-modules-rds/tools
docker build -t scaffold .
```

Create the output folder and run the container:

```bash
mkdir auto-generated
docker run -v <absolute-path>/aws-modules-rds/:/input -v <absolute-path>/aws-modules-rds/tools/auto-generated:/output scaffold:latest
```
51 changes: 51 additions & 0 deletions tools/scaffolding/scripts/entrypoint.sh
@@ -0,0 +1,51 @@
#!/usr/bin/env bash
set -euo pipefail

scripts_path="/scripts"
source_module_path="/input"

if [ ! -d "/output" ]; then
echo "output folder does not exist"
exit 1
fi

# TERRAFORM DOCS
output_json_file="/tmp/doc.json"

# TERRAFORM
source_json_doc=$output_json_file
generated_tf_module_data="/tmp/tf_module.json"
tf_module_template="/templates/main.tf.template"
tf_module_output="/output/terraform/module.tf"
tf_output_folders="/output/terraform"
mkdir -p $tf_output_folders

# DOCKER
docker_compose_template="/templates/compose.yml.template"
docker_compose_output="/output/docker/compose.yml"
docker_env_template="/templates/.env.template"
docker_env_output="/output/docker/.env"
docker_script_template="/templates/restore.sh.template"
docker_script_output="/output/docker/restore.sh"
docker_output_folders="/output/docker"

mkdir -p $docker_output_folders

if [ -z "$(ls -A $source_module_path)" ]; then # ls -A excludes . and .., so the check actually detects an empty mount
echo "empty $source_module_path"
exit 1
fi


# TODO: CHECK FOR output folder mount

# 1) Generate docs for all modules in a repo
terraform-docs json --show "all" $source_module_path --output-file $output_json_file

# 2) Generate TF files
python3 $scripts_path/generate_tf_module.py --source-tf-doc $source_json_doc --temp-work-folder $generated_tf_module_data --tf-module-template $tf_module_template --tf-output-path $tf_module_output

# 3) Format TF files
terraform fmt $tf_output_folders

# 4) Generate Docker files
python3 $scripts_path/generate_docker.py --docker-compose-template $docker_compose_template --docker-compose-output $docker_compose_output --env-template $docker_env_template --env-output $docker_env_output --docker-script-template $docker_script_template --docker-script-output $docker_script_output

# 5) Generate pipeline files
# TODO: generate pipeline
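Step 2 above feeds the terraform-docs JSON into `generate_tf_module.py`, which is not shown in this diff. The sketch below is a hypothetical reimplementation of that idea; the JSON layout (a top-level `inputs` list with `name`, `default`, and `required` keys) and the `render_module_block` helper are assumptions for illustration, not the actual script.

```python
import json

# Made-up sample standing in for terraform-docs JSON output (schema assumed).
sample_doc = json.loads("""
{
  "inputs": [
    {"name": "identifier", "default": null, "required": true},
    {"name": "environment", "default": null, "required": true},
    {"name": "is_proxy_included", "default": false, "required": false}
  ]
}
""")

def render_module_block(doc: dict, module_source: str = "../..") -> str:
    """Render a minimal module call that lists every required input as a placeholder."""
    lines = [f'module "rds" {{', f'  source = "{module_source}"']
    for inp in doc["inputs"]:
        if inp.get("required"):  # optional inputs keep their defaults
            lines.append(f'  {inp["name"]} = "" # TODO: fill in')
    lines.append("}")
    return "\n".join(lines)

print(render_module_block(sample_doc))
```

The real script then writes the rendered block to `/output/terraform/module.tf`, and `terraform fmt` in step 3 normalizes its formatting.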
42 changes: 42 additions & 0 deletions tools/scaffolding/scripts/generate_docker.py
@@ -0,0 +1,42 @@
"""This script generates boilerplate Docker Compose files. Input: template files. Output: Docker Compose files."""
from string import Template
import shutil
import argparse

parser = argparse.ArgumentParser(
prog='Docker Compose Generator',
description='This script generates boilerplate Docker Compose files from template files.',
epilog='.')
parser.add_argument('--docker-compose-template', type=str, required=True, help='The template file for the docker compose.')
parser.add_argument('--docker-compose-output', type=str, required=True, help='The output path for the docker compose.')
parser.add_argument('--env-template', type=str, required=True, help='The template file for the env file.')
parser.add_argument('--env-output', type=str, required=True, help='The output path for the env file.')
parser.add_argument('--docker-script-template', type=str, required=True, help='The template file for the script that is used by the generated docker compose file.')
parser.add_argument('--docker-script-output', type=str, required=True, help='The output path for the script that is used by the generated docker compose file.')
args = parser.parse_args()

docker_template = args.docker_compose_template
output_docker = args.docker_compose_output
env_template = args.env_template
output_env = args.env_output
docker_script_template = args.docker_script_template
output_docker_script = args.docker_script_output

vars_sub = {
'pgpassword': 'example',
'pgdatabase': 'example',
'pghost': 'example',
'pgport': 'example',
'pguser': 'example'
}

with open(env_template, 'r', encoding='UTF-8') as f:
src = Template(f.read())
result = src.substitute(vars_sub)

with open(output_env, "w", encoding='UTF-8') as f:
f.write(result)

shutil.copy(docker_template, output_docker)

shutil.copy(docker_script_template, output_docker_script)
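The `.env` rendering above relies on `string.Template` substitution. A minimal standalone demonstration of that mechanism (the template text here is made up for illustration, not the repo's actual `.env.template`):

```python
from string import Template

# Illustrative template using the same $placeholder syntax as .env.template.
env_template_text = "PGHOST=$pghost\nPGPORT=$pgport\nPGUSER=$pguser\n"

vars_sub = {"pghost": "localhost", "pgport": "5432", "pguser": "postgres"}

# substitute() raises KeyError if any placeholder is missing from the mapping;
# safe_substitute() would leave unmatched placeholders in place instead.
rendered = Template(env_template_text).substitute(vars_sub)
print(rendered)
```

Because `generate_docker.py` uses `substitute()`, a placeholder in the template with no matching key fails fast rather than producing a half-rendered `.env` file.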