This repo covers how to set up a cluster running both Consul and Nomad and use it to deploy the HashiCups application.
There are several jobspec files for the application and each one builds on the previous, moving away from the monolithic design and towards microservices.
- Nomad CLI installed
- Consul CLI installed
- Packer CLI installed
- Terraform CLI installed
- AWS account with credentials environment variables set
- openssl and hey CLI tools installed
- Build the cluster
- Set up Consul and Nomad access
- Deploy the initial HashiCups application
- Deploy HashiCups with Consul service discovery and DNS on a single VM
- Deploy HashiCups with Consul service discovery and DNS on multiple VMs
- Deploy HashiCups with service mesh and API gateway
- Scale the HashiCups application
- Clean up jobs and infrastructure
Begin by creating the machine image with Packer.
Change into the aws directory.
cd aws
Copy variables.hcl.example to variables.hcl and open it in your text editor.
cp variables.hcl.example variables.hcl
Update the region variable with your preferred AWS region. In this example, the region is us-east-1. The remaining variables are for Terraform, and you can update them after building the AMI.
# Packer variables (all are required)
region = "us-east-1"
...
Initialize Packer to download the required plugins.
packer init image.pkr.hcl
Build the image and provide the variables file with the -var-file flag.
packer build -var-file=variables.hcl image.pkr.hcl
Example output from the above command.
Build 'amazon-ebs' finished after 14 minutes 32 seconds.
==> Wait completed after 14 minutes 32 seconds
==> Builds finished. The artifacts of successful builds are:
--> amazon-ebs: AMIs were created:
us-east-1: ami-0445eeea5e1406960
Open variables.hcl in your text editor and update the ami variable with the value output from the Packer build. In this example, the value is ami-0445eeea5e1406960.
# Packer variables (all are required)
region = "us-east-1"
# Terraform variables (all are required)
ami = "ami-0b2d23848882ae42d"
The Terraform code uses the Consul Terraform provider to create Consul ACL tokens. Consul is configured with TLS encryption, so the provider must trust the certificate presented by the Consul servers and requires the CONSUL_TLS_SERVER_NAME environment variable to be set.
The Terraform code defaults the datacenter and domain variables in variables.hcl to dc1 and global, so CONSUL_TLS_SERVER_NAME will be consul.dc1.global. You can update these variables with other values. If you do, be sure to also update the CONSUL_TLS_SERVER_NAME variable.
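As a hypothetical example (the values below are placeholders, not the repo defaults), overriding both variables changes the server name accordingly:
# Hypothetical overrides in variables.hcl -- not the repo defaults
datacenter = "dc2"
domain     = "internal"
# CONSUL_TLS_SERVER_NAME would then be "consul.dc2.internal"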
Export the CONSUL_TLS_SERVER_NAME environment variable.
export CONSUL_TLS_SERVER_NAME="consul.dc1.global"
Initialize the Terraform configuration to download the necessary providers and modules.
terraform init
Provision the resources and provide the variables file with the -var-file flag. Respond yes to the prompt to confirm the operation.
terraform apply -var-file=variables.hcl
From the Terraform output you can retrieve the links to connect to your newly created datacenter.
Apply complete! Resources: 85 added, 0 changed, 0 destroyed.
Outputs:
Configure-local-environment = "source ./datacenter.env"
Consul_UI = "https://52.202.91.53:8443"
Consul_UI_token = <sensitive>
Nomad_UI = "https://52.202.91.53:4646"
Nomad_UI_token = <sensitive>
Once Terraform finishes creating the infrastructure, you can set up access to Consul and Nomad from your local environment.
Source the datacenter.env script to set Consul and Nomad environment variables with values from the infrastructure Terraform created.
source ./datacenter.env
Open the Consul UI with the URL in the Consul_UI Terraform output variable and log in with the token in the Consul_UI_token output variable. You will need to trust the certificate in your browser.
Open the Nomad UI with the URL in Nomad_UI and log in with Nomad_UI_token.
Test connectivity to the Nomad cluster from your local environment.
nomad server members
HashiCups represents a monolithic application that has been broken apart into separate services and configured to run with Docker Compose. The initial version is a translation of the fictional Docker Compose file to a Nomad jobspec.
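The real jobspec is 01.hashicups.nomad.hcl; the sketch below only illustrates the shape of that translation, and the group, task, image, and port names are placeholders rather than the repo's actual values.
# Illustrative sketch only -- see 01.hashicups.nomad.hcl for the real jobspec.
job "hashicups" {
  datacenters = ["dc1"]

  # A single group keeps every service on the same client node,
  # mirroring the original Docker Compose deployment.
  group "hashicups" {
    network {
      port "nginx" {
        static = 80   # placeholder static port
      }
    }

    task "nginx" {
      driver = "docker"

      config {
        image = "nginx:latest"   # placeholder image
        ports = ["nginx"]
      }
    }

    # ...the remaining tasks (frontend, APIs, database) follow the same pattern.
  }
}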
Change to the jobs directory.
cd ../shared/jobs
Submit the job to Nomad.
nomad job run 01.hashicups.nomad.hcl
View the application by navigating to the public IP address of the NGINX service endpoint. This compound command finds the node on which the hashicups allocation is running (nomad job allocs) and uses the ID of that node to retrieve its public IP address (nomad node status). It then formats the output with the HTTP protocol.
nomad node status -verbose \
$(nomad job allocs hashicups | grep -i running | awk '{print $2}') | \
grep -i public-ipv4 | awk -F "=" '{print $2}' | xargs | \
awk '{print "http://"$1}'
Example output from the above command.
http://18.191.53.222
Copy the IP address and open it in your browser. You do not need to specify a port as NGINX is running on port 80.
Stop the deployment when you are ready to move on. The -purge flag removes the job from the UI.
nomad job stop -purge hashicups
This jobspec integrates Consul and uses service discovery and DNS to facilitate communication between the microservices but runs all of the services on a single node.
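The real definitions are in 02.hashicups.nomad.hcl. The sketch below shows the general pattern only; the service name, port label, image, and health check details are illustrative.
# Fragment of a task inside the single group -- illustrative values only.
task "public-api" {
  driver = "docker"

  config {
    image = "hashicorpdemoapp/public-api:latest"   # placeholder image tag
    ports = ["public-api"]
  }

  # Registers the task in Consul so other services can resolve it
  # at public-api.service.consul through Consul DNS.
  service {
    provider = "consul"
    name     = "public-api"
    port     = "public-api"

    check {
      type     = "http"
      path     = "/health"   # placeholder health endpoint
      interval = "10s"
      timeout  = "2s"
    }
  }
}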
Submit the job to Nomad.
nomad job run 02.hashicups.nomad.hcl
Open the Consul UI and navigate to the Services page to see that each microservice is now registered in Consul with health checks.
Click on the nginx service and then click on the instance name to view the instance details page. Copy the public hostname in the top right corner of the page and open it in your browser to see the application.
Stop the deployment when you are ready to move on.
nomad job stop -purge hashicups
This jobspec separates the services into their own task groups and allows them to run on different nodes.
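03.hashicups.nomad.hcl holds the actual configuration; the sketch below only shows the general shape, and the group names, images, and constraint attribute are made up for illustration.
# Illustrative fragment -- one group per service so Nomad can place them on different nodes.
group "nginx" {
  # Hypothetical constraint keeping the ingress tier on designated clients.
  constraint {
    attribute = "${meta.node_role}"
    operator  = "="
    value     = "ingress"
  }

  task "nginx" {
    driver = "docker"
    config {
      image = "nginx:latest"   # placeholder image
    }
  }
}

group "database" {
  task "db" {
    driver = "docker"
    config {
      image = "postgres:latest"   # placeholder image
    }
  }
}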
Submit the job to Nomad.
nomad job run 03.hashicups.nomad.hcl
Open the Consul UI and navigate to the Services page to see that each microservice is now registered in Consul with health checks.
Click on the nginx service and then click on the instance name to view the instance details page. Copy the public hostname in the top right corner of the page and open it in your browser to see the application.
Open the Nomad UI and navigate to the Topology page from the left navigation to see that the NGINX service is running on a different node than the other services.
Stop the deployment when you are ready to move on.
nomad job stop -purge hashicups
This jobspec further integrates Consul by using a service mesh and an API gateway. Services use localhost and the Envoy proxy to enable mutual TLS and upstream service configurations for better security. The API gateway allows external access to the NGINX service.
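The full configuration lives in 04.hashicups.nomad.hcl. As a sketch of the pattern (service names, ports, and upstreams below are illustrative), each group-level service gets a connect block with an Envoy sidecar, and downstream tasks reach their upstreams on localhost:
# Illustrative group-level service with a Connect sidecar -- values are placeholders.
service {
  provider = "consul"
  name     = "frontend"
  port     = "3000"

  connect {
    sidecar_service {
      proxy {
        upstreams {
          destination_name = "public-api"   # upstream service in the mesh
          local_bind_port  = 8081           # frontend calls localhost:8081
        }
      }
    }
  }
}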
Set up the API gateway configurations in Consul.
./04.api-gateway.config.sh
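The script applies Consul configuration entries for the gateway; check 04.api-gateway.config.sh for what it actually creates. A listener definition in an api-gateway config entry looks roughly like the following sketch, where the listener and certificate names are placeholders:
# Illustrative api-gateway config entry -- the script also registers the
# certificate and route entries that go with it.
Kind = "api-gateway"
Name = "api-gateway"

Listeners = [
  {
    Name     = "https-listener"   # placeholder listener name
    Port     = 8443
    Protocol = "http"
    TLS = {
      Certificates = [
        {
          Kind = "inline-certificate"
          Name = "api-gateway-certificate"   # placeholder certificate name
        }
      ]
    }
  }
]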
Set up the service intentions in Consul to allow the necessary services to communicate with each other.
./04.intentions.consul.sh
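Intentions are Consul configuration entries that allow or deny traffic between mesh services. The sketch below shows the format only; see 04.intentions.consul.sh for the source and destination services this repo actually allows.
# Illustrative service-intentions config entry -- service names are examples.
Kind = "service-intentions"
Name = "public-api"        # destination service

Sources = [
  {
    Name   = "nginx"       # allow nginx to call public-api
    Action = "allow"
  }
]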
Submit the API gateway job to Nomad.
nomad job run 04.api-gateway.nomad.hcl
Submit the HashiCups job to Nomad.
nomad job run 04.hashicups.nomad.hcl
Open the Consul UI and navigate to the Services page to see that each microservice and the API gateway service are registered in Consul.
View the application by navigating to the public IP address of the API gateway. Note the --namespace flag; the API gateway is running in another namespace. You will need to trust the certificate in your browser.
nomad node status -verbose \
$(nomad job allocs --namespace=ingress api-gateway | grep -i running | awk '{print $2}') | \
grep -i public-ipv4 | awk -F "=" '{print $2}' | xargs | \
awk '{print "https://"$1":8443"}'
Example output from the above command.
https://3.135.190.255:8443
Stop the deployment when you are ready to move on.
nomad job stop -purge hashicups
This jobspec is the same as the API gateway version with the addition of the scaling block. This block instructs the Nomad Autoscaler to scale the frontend service up and down based on traffic load.
The Nomad Autoscaler is a separate service and is run here as a Nomad job.
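A scaling block on the frontend task group looks roughly like the sketch below; the min/max counts, metric query, and target value are placeholders, and the real policy is in 05.hashicups.nomad.hcl.
# Illustrative scaling block for the frontend group -- values are placeholders.
scaling {
  enabled = true
  min     = 1
  max     = 5

  policy {
    cooldown            = "20s"
    evaluation_interval = "10s"

    check "avg_cpu" {
      # The Nomad APM plugin reads CPU metrics directly from Nomad.
      source = "nomad-apm"
      query  = "avg_cpu"

      strategy "target-value" {
        target = 70
      }
    }
  }
}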
Run the autoscaler configuration script.
./05.autoscaler.config.sh
Submit the autoscaler job to Nomad.
nomad job run 05.autoscaler.nomad.hcl
Submit the HashiCups job to Nomad.
nomad job run 05.hashicups.nomad.hcl
View the application by navigating to the public IP address of the API gateway. Note the --namespace flag; the API gateway is running in another namespace. You will need to trust the certificate in your browser.
nomad node status -verbose \
$(nomad job allocs --namespace=ingress api-gateway | grep -i running | awk '{print $2}') | \
grep -i public-ipv4 | awk -F "=" '{print $2}' | xargs | \
awk '{print "https://"$1":8443"}'
Example output from the above command.
https://3.135.190.255:8443
In another browser tab, open the Nomad UI, click on the hashicups job name, and then click on the frontend task from within the Task Groups list. This page displays a graph that shows scaling events at the bottom of the page. Keep this page open so you can reference it when scaling starts.
Open another terminal in your local environment to generate load with the hey tool.
Run the hey tool against the API gateway. In this example, the URL is https://3.135.190.255:8443. This command generates load for 20 seconds.
hey -z 20s -m GET https://3.135.190.255:8443
Navigate back to the frontend task group page in the Nomad UI and refresh it a few times to see additional allocations created as the autoscaler scales the frontend service up, and then removed as it scales back down.
Open up the terminal session from where you submitted the jobs and stop the deployment when you are ready to move on.
nomad job stop -purge hashicups
Stop and purge the hashicups and autoscaler jobs.
nomad job stop -purge hashicups autoscaler
Stop and purge the api-gateway job. Note that it runs in a different namespace.
nomad job stop -purge --namespace ingress api-gateway
Change to the aws directory.
cd ../../aws
Run the script to unset local environment variables.
source ../shared/scripts/unset_env_variables.sh
Use terraform destroy to remove the provisioned infrastructure. Respond yes to the prompt to confirm removal.
terraform destroy -var-file=variables.hcl
Delete the AMI built with Packer using the deregister-image command.
aws ec2 deregister-image --image-id ami-0445eeea5e1406960
To delete stored snapshots, first query for the snapshot using the describe-snapshots command.
aws ec2 describe-snapshots \
--owner-ids self \
--query "Snapshots[*].{ID:SnapshotId,Time:StartTime}"
Next, delete the stored snapshot using the delete-snapshot command by specifying the snapshot-id value.
aws ec2 delete-snapshot --snapshot-id snap-1234567890abcdef0
- Initial jobspec for HashiCups
- Translation of fictional Docker Compose file to Nomad jobspec
- Adds service blocks with provider="consul" and health checks
- Uses Consul DNS and static ports
- Separates tasks into different groups
- Uses client node constraints
- Uses Consul service mesh
- Defines service upstreams and mapped service ports
- Uses localhost and Envoy proxy instead of DNS for service communication
- Adds scaling block to the frontend service for horizontal application autoscaling
- Runs the API gateway on port 8443
- Constrains the job to a public client node
- Runs the Nomad Autoscaler agent
- Uses the Nomad APM plugin