
AWS tfstate Terraform module


SquareOps Technologies: Your DevOps partner for accelerating your cloud journey.


Terraform module to create the remote state storage backend for workload deployment on AWS Cloud: an S3 bucket to store the Terraform state file and a DynamoDB table for state locking.

Usage Example

module "backend" {
  source                       = "squareops/tfstate/aws"
  logging                      = true
  bucket_name                  = "production-tfstate-bucket" # globally unique S3 bucket name
  environment                  = "prod"
  force_destroy                = true
  versioning_enabled           = true
  cloudwatch_logging_enabled   = true
  log_retention_in_days        = 90
  log_bucket_lifecycle_enabled = true
  s3_ia_retention_in_days      = 90
  s3_galcier_retention_in_days = 180
}

Refer to the examples directory for more details.

IAM Permissions

The IAM permissions required to create resources from this module can be found here.
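
The exact policy is maintained in the linked document. As a rough, hedged sketch (the lock table name below is a placeholder, not a value from this module), routine terraform init/plan/apply runs against the resulting backend typically need only a narrow set of S3 and DynamoDB permissions:

# Hypothetical runtime policy for *using* an S3 + DynamoDB backend.
# Creating this module's resources (S3, DynamoDB, KMS, CloudTrail, IAM)
# requires broader permissions; see the link above.
data "aws_iam_policy_document" "tfstate_backend_usage" {
  statement {
    actions   = ["s3:ListBucket"]
    resources = ["arn:aws:s3:::production-tfstate-bucket"] # state bucket
  }

  statement {
    actions   = ["s3:GetObject", "s3:PutObject"]
    resources = ["arn:aws:s3:::production-tfstate-bucket/*"]
  }

  statement {
    actions = [
      "dynamodb:DescribeTable",
      "dynamodb:GetItem",
      "dynamodb:PutItem",
      "dynamodb:DeleteItem",
    ]
    resources = ["arn:aws:dynamodb:*:*:table/your-state-lock-table"] # placeholder lock table name
  }
}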

Important Note

Terraform state locking is a mechanism used to prevent multiple users from simultaneously making changes to the same Terraform state, which could result in conflicts and data loss. A state lock is acquired and maintained by Terraform while it is making changes to the state, and other instances of Terraform are unable to make changes until the lock is released.

An Amazon S3 bucket and a DynamoDB table can be used as a remote backend to store and manage the Terraform state file, and also to implement state locking. The S3 bucket is used to store the state file, while the DynamoDB table is used to store the lock information, such as who acquired the lock and when. Terraform will check the lock state in the DynamoDB table before making changes to the state file in the S3 bucket, and will wait or retry if the lock is already acquired by another instance. This provides a centralized and durable mechanism for managing the Terraform state and ensuring that changes are made in a controlled and safe manner.
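
For reference, a consuming workload points its backend at these resources with a standard S3 backend block. The values below are illustrative only; the bucket name must match what you passed to this module, and the lock table name comes from the module's dynamodb_table_name output:

terraform {
  backend "s3" {
    bucket         = "production-tfstate-bucket" # state bucket created by this module
    key            = "prod/terraform.tfstate"    # path of the state file inside the bucket
    region         = "us-east-1"                 # illustrative region
    dynamodb_table = "your-state-lock-table"     # placeholder; use the module's dynamodb_table_name output
    encrypt        = true
  }
}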

Additionally, you may have a log bucket configured to store CloudTrail and CloudWatch logs. This log bucket can have a lifecycle policy in place to automatically manage log data: for example, log data can be transitioned to Amazon S3 Infrequent Access storage after a certain period and eventually to Amazon S3 Glacier for long-term storage. This helps optimize storage costs and ensures that log data is retained according to your organization's compliance requirements.
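
As an illustration only (a standalone sketch, not this module's internal code), such a lifecycle rule with transitions matching the defaults of s3_ia_retention_in_days = 90 and s3_galcier_retention_in_days = 180 could be written as:

# Standalone sketch; the log bucket name is a placeholder.
resource "aws_s3_bucket_lifecycle_configuration" "log_bucket" {
  bucket = "production-tfstate-bucket-logs" # placeholder log bucket name

  rule {
    id     = "log-archival"
    status = "Enabled"
    filter {} # apply to all objects in the bucket

    transition {
      days          = 90
      storage_class = "STANDARD_IA" # Infrequent Access after 90 days
    }

    transition {
      days          = 180
      storage_class = "GLACIER" # Glacier after 180 days
    }
  }
}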

Security & Compliance

Security scanning is graciously provided by Prowler. Prowler is the leading fully hosted, cloud-native solution providing continuous cloud security and compliance.

In this module, we have implemented the following CIS Compliance checks for S3:

Benchmark | Status
Ensure S3 bucket access logging is enabled on the CloudTrail S3 bucket | Enabled for S3 buckets created using this module
Ensure the S3 bucket that CloudTrail logs to is not publicly accessible | Enabled for S3 buckets created using this module
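
As an illustration only (a hedged sketch, not this module's internal implementation), these two checks are commonly satisfied with an access-logging configuration and a public access block on the bucket:

# Standalone sketch; bucket names are placeholders.
resource "aws_s3_bucket_public_access_block" "state" {
  bucket                  = "production-tfstate-bucket"
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}

resource "aws_s3_bucket_logging" "state" {
  bucket        = "production-tfstate-bucket"
  target_bucket = "production-tfstate-bucket-logs" # placeholder log bucket
  target_prefix = "s3-access-logs/"
}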

Requirements

Name Version
terraform >= 1.0
aws >= 4.9

Providers

Name Version
aws >= 4.9

Modules

Name Source Version
kms_key clouddrove/kms/aws 1.3.1
log_bucket terraform-aws-modules/s3-bucket/aws 4.1.2
s3_bucket terraform-aws-modules/s3-bucket/aws 4.1.2

Resources

Name Type
aws_cloudtrail.s3_cloudtrail resource
aws_cloudwatch_log_group.s3_cloudwatch resource
aws_dynamodb_table.dynamodb_table resource
aws_iam_policy.s3_cloudtrail_cloudwatch_policy resource
aws_iam_role.s3_cloudtrail_cloudwatch_role resource
aws_iam_role.this resource
aws_iam_role_policy_attachment.s3_cloudtrail_policy_attachment resource
aws_kms_key.mykey resource
aws_caller_identity.current data source
aws_iam_policy_document.bucket_policy data source
aws_iam_policy_document.cloudtrail_assume_role data source
aws_iam_policy_document.default data source
aws_region.region data source

Inputs

Name | Description | Type | Default | Required
bucket_name | Name of the S3 bucket to be created. | string | "" | no
cloudwatch_logging_enabled | Enable or disable CloudWatch log group logging. | bool | true | no
environment | Type of environment (dev, demo, prod) in which the S3 bucket will be created. | string | "" | no
force_destroy | Whether to delete all objects from the bucket so that the bucket can be destroyed without error. | bool | false | no
log_bucket_lifecycle_enabled | Enable or disable the S3 bucket's lifecycle rule for log data. | bool | true | no
log_retention_in_days | Retention period (in days) for CloudWatch log groups. | number | 90 | no
logging | Enable or disable S3 bucket access logging. | bool | true | no
s3_galcier_retention_in_days | Number of days after which S3 log data is moved to Glacier storage. | number | 180 | no
s3_ia_retention_in_days | Number of days after which S3 log data is moved to Infrequent Access storage. | number | 90 | no
versioning_enabled | Whether to enable versioning for the S3 bucket, allowing multiple versions of an object to be stored in the same bucket. | bool | false | no

Outputs

Name | Description
dynamodb_table_name | Name of the DynamoDB table used to manage locking and unlocking of the Terraform state file.
log_bucket_name | Name of the S3 bucket used to store logs.
region | Name of the region in which CloudTrail is created.
state_bucket_name | Name of the S3 bucket used to store the Terraform state file.
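
For example, these outputs can be surfaced from the root module that calls this module (using the module label from the usage example above):

output "state_bucket_name" {
  value = module.backend.state_bucket_name
}

output "dynamodb_table_name" {
  value = module.backend.dynamodb_table_name
}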

Contribution & Issue Reporting

To report an issue with a project:

  1. Check the repository's issue tracker on GitHub
  2. Search to see if the issue has already been reported
  3. If you can't find an answer to your question in the documentation or the issue tracker, ask it by creating a new issue. Make sure to provide enough context and details.

License

Apache License, Version 2.0, January 2004 (http://www.apache.org/licenses/).

Support Us

To support a GitHub project by starring it, you can follow these steps:

  1. Visit the repository: Navigate to the GitHub repository

  2. Click the "Star" button: On the repository page, you'll see a "Star" button in the upper right corner. Clicking on it will star the repository, indicating your support for the project.

  3. Optionally, you can also leave a comment on the repository or open an issue to give feedback or suggest changes.

Starring a repository on GitHub is a simple way to show your support and appreciation for the project. It also helps to increase the visibility of the project and make it more discoverable to others.

Who we are

We believe that the key to success in the digital age is the ability to deliver value quickly and reliably. That’s why we offer a comprehensive range of DevOps and Cloud services designed to help your organization optimize its systems and processes for speed and agility.

  1. We are an AWS Advanced Consulting Partner, which reflects our deep expertise in AWS Cloud and our work with 100+ clients over the last 4 years.
  2. Our expertise in Kubernetes and container solutions helps companies expedite their journey by 10X.
  3. Infrastructure automation is a key component of our clients' success, and our expertise helps deliver it in the shortest time.
  4. DevSecOps as a service to implement security within the overall DevOps process and help companies deploy securely and at speed.
  5. Platform engineering that supports scalable, cost-efficient infrastructure for rapid development, testing, and deployment.
  6. 24x7 SRE services to help you monitor the state of your infrastructure and resolve any issue within the SLA.

We provide support on all of our projects, no matter how small or large they may be.

You can find more information about our company at squareops.com, follow us on LinkedIn, or fill out a job application. If you have any questions or would like assistance with your cloud strategy and implementation, please don't hesitate to contact us.