These `.tf` files provide a starter set that can be used to deploy a developer edition IBM MQ Queue Manager container onto AWS ECS using Terraform.
We have tested this configuration with an ID that has the AWS `AdministratorAccess` permission.
We have yet to test these configurations with OpenTofu. Our expectation is that they should work with little to no amendment, as the OpenTofu project states: "Initially, OpenTofu will be a drop-in replacement for Terraform, as it will be compatible with Terraform versions 1.5.x. You won't need to make any changes to your code to ensure compatibility."
Although you won't need to use it directly, the `terraform` CLI makes use of your AWS CLI configuration.
If you have configured the AWS CLI with your AWS account access key (that is, if you have run `aws configure` from the command line), then the `terraform` commands can access AWS using that configuration, and you do not need to provide AWS access key details to the `terraform` CLI.
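For reference, `aws configure` stores the access key in `~/.aws/credentials` (and the default region in `~/.aws/config`) in roughly this form; the values shown here are placeholders:

```ini
# ~/.aws/credentials
[default]
aws_access_key_id     = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```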
These files have been tested with both Terraform and OpenTofu. You can use either.
Install the Terraform CLI following this Terraform guide.
Install OpenTofu following this OpenTofu guide.
If you are using OpenTofu, substitute `tofu` for `terraform` in the `init`, `apply`, `output` and `destroy` steps.
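For example, the four steps under OpenTofu are a direct substitution:

```shell
tofu init
tofu apply
tofu output
tofu destroy
```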
There are two sets of configuration in this repository.

- One Step (no external persistent storage)
  - This is a single set of configuration `.tf` files that can be applied to create:
    - A VPC with 2 private and 2 public subnets
    - A load balancer / firewall in a public subnet
    - An ECS service / task combination with a single container running a queue manager in a private subnet
    - Security groups and network configuration to allow external 443 and 1414 traffic to flow into ports 9443 and 1414 respectively on the container
    - CloudWatch logs
- Multi Step (with EFS as external persistent storage)
  - This is four sets of configuration `.tf` files that can be applied in sequence to create:
    - A VPC with 2 private and 2 public subnets
    - An EFS file system with an access point
    - A run of `runmqserver -i` to initialise the EFS storage. Once completed, the terraform resource should be destroyed, otherwise terraform will keep restarting the process. See below for an explanation and details.
    - A queue manager as:
      - An ECS service / task combination with a single container running a queue manager in a private subnet
      - A load balancer running in a public subnet
      - Security groups and network configuration to allow external 443 and 1414 traffic to flow into ports 9443 and 1414 respectively on the container
      - CloudWatch logs
Run `terraform init` to download and configure the requisite providers and modules: `aws` and `vpc`.
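A typical provider requirements block that `terraform init` resolves looks like the following sketch; the version constraint here is illustrative, not the repository's actual pin:

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = ">= 4.0" # illustrative constraint
    }
  }
}
```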
This file exposes the customisable parameters. Most come with defaults, but all can be overridden using either command line arguments or environment variables.
The default region is set in this file to `eu-west-2`.
The variables are used as parameters to ensure consistency when the same value is used multiple times in the configuration. E.g.
- The container name used in the task definition needs to be consistent so that the ECS service can correctly configure the load balancer.
- The log group is parameterised because its usage needs to be consistent throughout the script; otherwise the queue manager image will fail on start.
We use `variable` in lieu of `locals` so that the values can be customised when the configuration is applied.
Start-up environment parameters for the MQ container image are also specified in the `variables.tf` file.
There are no defaults for `mq_app_password` or `mq_admin_password`, so they must be set on `terraform apply`, either interactively or as `-var` command line parameters, e.g.

```shell
terraform apply -var mq_app_password="AD1ficlutToDeciferAppPassw0rd" -var mq_admin_password="AD1ficlutToDeciferAdminPassw0rd"
```
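Alternatively, Terraform reads any environment variable named `TF_VAR_<name>` as the value of variable `<name>`, so the passwords can be supplied via the environment instead of `-var` flags:

```shell
# Terraform picks these up automatically as mq_app_password / mq_admin_password
export TF_VAR_mq_app_password='AD1ficlutToDeciferAppPassw0rd'
export TF_VAR_mq_admin_password='AD1ficlutToDeciferAdminPassw0rd'
terraform apply
```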
Other MQ parameters default to:

```hcl
"LICENSE"       = "accept",
"MQ_QMGR_NAME"  = "QM1"
```
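As a sketch of the two kinds of variable (the bodies here are illustrative; the repository's `variables.tf` may declare them differently), a defaulted value and a mandatory password look like:

```hcl
variable "mq_qmgr_name" {
  type    = string
  default = "QM1" # can be overridden with -var or TF_VAR_mq_qmgr_name
}

variable "mq_admin_password" {
  type      = string
  sensitive = true
  # No default: terraform apply prompts for it unless supplied via -var
}
```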
This file configures the AWS Terraform provider. For the latest provider version and documentation, see the terraform aws doc.
A new VPC is created and a single IBM MQ Advanced for Developers image is placed in a private subnet.
A network load balancer is placed in a public subnet. The load balancer allows ingress traffic on ports 443 and 1414 only.
443 traffic is routed to the MQ container on port 9443; 1414 traffic is routed to MQ port 1414.
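A sketch of one such port mapping (the resource names here are illustrative, not necessarily those used in the repository's configuration):

```hcl
# Illustrative: NLB listener on 443 forwarding to the container's 9443 (MQ web console)
resource "aws_lb_target_group" "mq_console" {
  port        = 9443
  protocol    = "TCP"
  vpc_id      = var.vpc_id
  target_type = "ip" # required for Fargate tasks
}

resource "aws_lb_listener" "console" {
  load_balancer_arn = aws_lb.mq.arn
  port              = 443
  protocol          = "TCP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.mq_console.arn
  }
}
```

An equivalent listener / target group pair maps external 1414 straight through to the container's 1414.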
This is the container definition template file.
The output from `terraform apply` is the load balancer DNS name.
Run `terraform apply` to create all the AWS resources needed to run an IBM MQ queue manager as a service on ECS/Fargate.
You can partially remove some resources by setting `count` to `0`, e.g.

```hcl
resource "aws_ecs_service" "mq-dev-service" {
  count = 0
  ...
}
```

and running `terraform apply`.
Run `terraform destroy` to delete all the AWS resources created.
The multi step configuration is necessary because the external EFS drive needs to be initialised with the correct directory structure and restrictive access permissions.
Steps 1 and 2 could be combined, but we separated them so that step 1 creates only the VPC.
In step 1 all input variables are defaulted, but they can be overridden using the standard terraform mechanisms, e.g. environment settings or command line overrides.
The output consists of VPC details. Only the `vpc_id` is needed for subsequent configurations.
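A sketch of how such an output might be declared, assuming the VPC is created via the `vpc` module (the exact expression may differ in the repository):

```hcl
output "vpc_id" {
  description = "ID of the VPC, needed as an input to steps 2-4"
  value       = module.vpc.vpc_id
}
```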
Step 2 builds the EFS external file system. It needs two inputs:
- `region`, which defaults to `eu-west-2`
- `vpc_id`, which must be provided (output from step 1)
The EFS file system and an access point are created.
Mount targets are created in each of the private subnets, along with a security group, ingress rules and permissions to allow access to the storage from the VPC.
The output consists of `efs_id` and `efs_access_point_id`.
Step 3 initialises the EFS storage created in step 2.
This step needs four inputs:
- `region`, which defaults to `eu-west-2`
- `vpc_id`, which must be provided (output from step 1)
- `efs_id`, which must be provided (output from step 2)
- `efs_access_point`, which must be provided (output from step 2)

e.g.

```shell
terraform apply -var vpc_id="vpc-123" -var efs_id="fs-456" -var efs_access_point="fsap-789"
```
The storage initialisation is a single one-off task that completes, after which the container terminates. Once the storage has been initialised, i.e. you see `Created directory structure under /var/mqm` in the logs, the task has completed.
Note: you still need to run `terraform destroy` to clean up the resources.
There are no outputs from this configuration.
This configuration makes use of the VPC created in step 1 and the EFS created in step 2 and initialised in step 3, and creates:
- Security and target groups that allow:
  - 443 and 1414 traffic into the load balancer
  - 443 and 1414 traffic from the load balancer into the private subnets hosting the queue manager container
  - 9443 and 1414 traffic into the queue manager
- CloudWatch logging
- A load balancer acting as a firewall
- An ECS service and task to start up a single container running a queue manager in a private subnet
This step needs four inputs:
- `region`, which defaults to `eu-west-2`
- `vpc_id`, which must be provided (output from step 1)
- `efs_id`, which must be provided (output from step 2, initialised in step 3)
- `efs_access_point`, which must be provided (output from step 2)
To remove resources that are not being used, run `terraform destroy` in each of the steps in reverse order.
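Assuming each step's configuration lives in its own directory (the directory names below are hypothetical, for illustration only), the teardown sequence looks like:

```shell
# Destroy in reverse order of creation (hypothetical directory names)
(cd step4-queue-manager && terraform destroy)
(cd step3-efs-init      && terraform destroy)
(cd step2-efs           && terraform destroy)
(cd step1-vpc           && terraform destroy)
```

Destroying in reverse order ensures each configuration's resources are removed before the VPC and EFS they depend on.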