Docker for AWS is a CloudFormation template that configures Docker in swarm mode, running on EC2 instances backed by custom AMIs. This allows you to create a cluster of Docker Engines in swarm mode with a single click. This workshop takes the multi-container application and deploys it on multiple hosts.
Creating this CloudFormation stack requires:
- An SSH key in AWS in the region where you want to deploy (required to access the completed Docker install). Amazon EC2 Key Pairs explains how to add an SSH key to your account.
- An AWS account that supports EC2-VPC.
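If you prefer the command line, the key pair can also be created with the AWS CLI. A minimal sketch, assuming a hypothetical key name docker-swarm and the us-east-1 region:
# create the key pair in the target region and save the private key
aws ec2 create-key-pair --region us-east-1 --key-name docker-swarm --query 'KeyMaterial' --output text > ~/.ssh/docker-swarm.pem
# restrict permissions so ssh accepts the key
chmod 400 ~/.ssh/docker-swarm.pem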
Click on Launch Stack to create the CloudFormation stack.
Click on Next.
Select the number of Swarm manager (3) and worker (5) nodes. This will create an 8-node cluster. Each node starts a new EC2 instance. Feel free to alter the number of manager and worker nodes; for example, a more reasonable size for testing may be 1 manager and 3 worker nodes.
Select the SSH key that will be used to access the cluster.
By default, the template is configured to redirect all log statements to CloudWatch. Until #30691 is fixed, the logs will only be available in CloudWatch. Alternatively, you may choose not to redirect logs to CloudWatch; in that case, the usual docker service logs command will work.
Scroll down to select manager and worker properties. m4.large (2 vCPUs and 8 GB memory) is a good start for a manager; m4.xlarge (4 vCPUs and 16 GB memory) is a good start for a worker node. Feel free to choose m3.medium (1 vCPU and 3.75 GB memory) for managers and m3.large (2 vCPUs and 7.5 GB memory) for workers in a smaller cluster. Make sure the EC2 instance size is chosen to accommodate the processing and memory needs of the containers that will run there.
Click on Next.
Take the defaults for all the options and click on Next.
Accept the checkbox for CloudFormation to create IAM resources. Click on Create to create the Swarm cluster.
It will take a few minutes for the CloudFormation stack to complete.
The EC2 Console will show the EC2 instances for the manager and worker nodes. Select one of the manager nodes and copy its public IP address.
Create an SSH tunnel that forwards local port 2374 to the Docker socket on the manager:
ssh -i ~/.ssh/arun-us-east1.pem -o StrictHostKeyChecking=no -NL localhost:2374:/var/run/docker.sock docker@<manager-public-ip> &
Docker for AWS only allows SSH as the docker user; substitute the public IP address copied in the previous step.
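Optionally, you can export DOCKER_HOST so that -H does not need to be repeated on every command. The rest of this workshop keeps -H explicit:
# point the local Docker CLI at the tunnel to the Swarm manager
export DOCKER_HOST=tcp://localhost:2374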
Get more details about the cluster using the command docker -H localhost:2374 info. This shows the output:
Containers: 5
Running: 4
Paused: 0
Stopped: 1
Images: 5
Server Version: 17.09.0-ce
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: awslogs
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: active
NodeID: rb6rju2eln0bn80z7lqocjkuy
Is Manager: true
ClusterID: t38bbbex5w3bpfmnogalxn5k1
Managers: 3
Nodes: 8
Orchestration:
Task History Retention Limit: 5
Raft:
Snapshot Interval: 10000
Number of Old Snapshots to Retain: 0
Heartbeat Tick: 1
Election Tick: 3
Dispatcher:
Heartbeat Period: 5 seconds
CA Configuration:
Expiry Duration: 3 months
Force Rotate: 0
Autolock Managers: false
Root Rotation In Progress: false
Node Address: 172.31.46.94
Manager Addresses:
172.31.26.163:2377
172.31.46.94:2377
172.31.8.136:2377
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 06b9cb35161009dcb7123345749fef02f7cea8e0
runc version: 3f2f8b84a77f73d38244dd690525642a72156c64
init version: 949e6fa
Security Options:
seccomp
Profile: default
Kernel Version: 4.9.49-moby
Operating System: Alpine Linux v3.5
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 7.785GiB
Name: ip-172-31-46-94.ec2.internal
ID: F65G:UTHH:7YEM:XPEZ:NBIZ:XN25:ONG6:QN5R:7MGJ:I3RS:BAX3:UO7A
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): true
File Descriptors: 299
Goroutines: 399
System Time: 2017-10-07T01:04:00.971903882Z
EventsListeners: 0
Registry: https://index.docker.io/v1/
Labels:
os=linux
region=us-east-1
availability_zone=us-east-1c
instance_type=m4.large
node_type=manager
Experimental: true
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
The list of nodes in the cluster can be seen using docker -H localhost:2374 node ls:
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
xdhwdiglfs5wsvkcl0j65wl04 ip-172-31-4-89.ec2.internal Ready Active
xbrejk2g7mk9v15hg9xzu3syq ip-172-31-8-136.ec2.internal Ready Active Leader
bhwc67r78cfqtquri82qdwtnk ip-172-31-13-38.ec2.internal Ready Active
ygxdfloly3x203x9p5wbpk34d ip-172-31-17-74.ec2.internal Ready Active
toyfec889wuqdix6z618mlj85 ip-172-31-26-163.ec2.internal Ready Active Reachable
37lzvgrtlnnq0lnr3cip0fwhw ip-172-31-28-204.ec2.internal Ready Active
k2aprr08b3q28nvze9uv26821 ip-172-31-39-252.ec2.internal Ready Active
rb6rju2eln0bn80z7lqocjkuy * ip-172-31-46-94.ec2.internal Ready Active Reachable
Use the Compose file to deploy the multi-container application to this cluster. Swarm will distribute the application's containers across multiple hosts.
Create a new directory and cd to it:
mkdir webapp
cd webapp
Create a new Compose definition docker-compose.yml using the configuration file from https://github.com/docker/labs/blob/master/developer-tools/java/chapters/ch05-compose.adoc#configuration-file.
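The linked file is authoritative; for orientation, here is a minimal sketch consistent with the service names, images, and ports shown in the output below. The environment values are placeholders, so use the ones from the linked configuration file (which also sets container_name, ignored by stack deploy as the output below shows):
version: '3'
services:
  web:
    image: arungupta/docker-javaee:dockerconeu17
    ports:
      - "8080:8080"
      - "9990:9990"
  db:
    image: mysql:8
    ports:
      - "3306:3306"
    environment:
      # placeholder credentials
      MYSQL_ROOT_PASSWORD: supersecret
      MYSQL_DATABASE: sample
      MYSQL_USER: mysql
      MYSQL_PASSWORD: mysql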
Deploy the application using the command:
docker -H localhost:2374 stack deploy --compose-file=docker-compose.yml webapp
The output is:
Ignoring deprecated options:
container_name: Setting the container name is not supported.
Creating network webapp_default
Creating service webapp_web
Creating service webapp_db
WildFly Swarm and MySQL services are started on this cluster. Each service has a single container. A new overlay network is created. This allows multiple containers on different hosts to communicate with each other.
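The overlay network can be inspected, using the network name created above, to see its subnet and the containers attached on each host:
docker -H localhost:2374 network inspect webapp_default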
Verify that the WildFly and MySQL services are running using docker -H localhost:2374 service ls:
ID NAME MODE REPLICAS IMAGE PORTS
q4d578ime45e webapp_db replicated 1/1 mysql:8 *:3306->3306/tcp
qt5qrzp1jpyq webapp_web replicated 1/1 arungupta/docker-javaee:dockerconeu17 *:8080->8080/tcp,*:9990->9990/tcp
The REPLICAS column shows that one of one replica is running for each service. It might take a few minutes for the services to be running, as the image needs to be downloaded on the host where the container is started.
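If more replicas are needed, a service can be scaled later; for example, with a hypothetical replica count of 2:
docker -H localhost:2374 service scale webapp_web=2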
Let’s find out which nodes the services are running on. Do this for the web application first:
docker -H localhost:2374 service ps webapp_web
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
npmunk4ll9f4 webapp_web.1 arungupta/docker-javaee:dockerconeu17 ip-172-31-39-252.ec2.internal Running Running 2 hours ago
The NODE column shows the internal hostname of the node where this service is running.
Now, do this for the database:
docker -H localhost:2374 service ps webapp_db
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
vzaji4xdi2qh webapp_db.1 mysql:8 ip-172-31-17-74.ec2.internal Running Running 2 hours ago
The NODE column for this service shows that the service is running on a different node.
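The placement can also be confirmed from the node's side using docker node ps, here with the database node's hostname from the output above:
docker -H localhost:2374 node ps ip-172-31-17-74.ec2.internal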
More details about the service can be obtained using docker -H localhost:2374 service inspect webapp_web:
[
{
"ID": "qt5qrzp1jpyq1ur7qhg55ijf1",
"Version": {
"Index": 58
},
"CreatedAt": "2017-10-07T01:09:32.519975146Z",
"UpdatedAt": "2017-10-07T01:09:32.535587602Z",
"Spec": {
"Name": "webapp_web",
"Labels": {
"com.docker.stack.image": "arungupta/docker-javaee:dockerconeu17",
"com.docker.stack.namespace": "webapp"
},
"TaskTemplate": {
"ContainerSpec": {
"Image": "arungupta/docker-javaee:dockerconeu17@sha256:6a403c35d2ab4442f029849207068eadd8180c67e2166478bc3294adbf578251",
"Labels": {
"com.docker.stack.namespace": "webapp"
},
"Privileges": {
"CredentialSpec": null,
"SELinuxContext": null
},
"StopGracePeriod": 10000000000,
"DNSConfig": {}
},
"Resources": {},
"RestartPolicy": {
"Condition": "any",
"Delay": 5000000000,
"MaxAttempts": 0
},
"Placement": {
"Platforms": [
{
"Architecture": "amd64",
"OS": "linux"
}
]
},
"Networks": [
{
"Target": "b0ig9m1qsjax95tp9m1i2m4yo",
"Aliases": [
"web"
]
}
],
"ForceUpdate": 0,
"Runtime": "container"
},
"Mode": {
"Replicated": {
"Replicas": 1
}
},
"UpdateConfig": {
"Parallelism": 1,
"FailureAction": "pause",
"Monitor": 5000000000,
"MaxFailureRatio": 0,
"Order": "stop-first"
},
"RollbackConfig": {
"Parallelism": 1,
"FailureAction": "pause",
"Monitor": 5000000000,
"MaxFailureRatio": 0,
"Order": "stop-first"
},
"EndpointSpec": {
"Mode": "vip",
"Ports": [
{
"Protocol": "tcp",
"TargetPort": 8080,
"PublishedPort": 8080,
"PublishMode": "ingress"
},
{
"Protocol": "tcp",
"TargetPort": 9990,
"PublishedPort": 9990,
"PublishMode": "ingress"
}
]
}
},
"Endpoint": {
"Spec": {
"Mode": "vip",
"Ports": [
{
"Protocol": "tcp",
"TargetPort": 8080,
"PublishedPort": 8080,
"PublishMode": "ingress"
},
{
"Protocol": "tcp",
"TargetPort": 9990,
"PublishedPort": 9990,
"PublishMode": "ingress"
}
]
},
"Ports": [
{
"Protocol": "tcp",
"TargetPort": 8080,
"PublishedPort": 8080,
"PublishMode": "ingress"
},
{
"Protocol": "tcp",
"TargetPort": 9990,
"PublishedPort": 9990,
"PublishMode": "ingress"
}
],
"VirtualIPs": [
{
"NetworkID": "i41xh4kmuwl5vc47h536l3mxs",
"Addr": "10.255.0.10/16"
},
{
"NetworkID": "b0ig9m1qsjax95tp9m1i2m4yo",
"Addr": "10.0.0.2/24"
}
]
}
}
]
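To extract a single field instead of the full JSON, service inspect accepts a Go template via --format; for example, to show only the published ports:
docker -H localhost:2374 service inspect --format '{{json .Endpoint.Ports}}' webapp_web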
Logs for the service are redirected to CloudWatch and thus cannot be seen using docker service logs. This will be fixed with #30691. Let’s view the logs using CloudWatch Logs.
In the CloudWatch Logs console, select the log group for the cluster, then pick the webapp_web.xxx log stream to see the log statements from WildFly Swarm.
The application is accessed using the manager’s public IP address on port 8080. By default, port 8080 is not open.
In the EC2 Console, select an EC2 instance named Docker-Manager and click on the Docker-Managerxxx entry under Security groups. Click on Inbound, Edit, and Add Rule, and create a rule to enable TCP traffic on port 8080. Click on Save to save the rules.
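The same rule can be added with the AWS CLI; a sketch, assuming a hypothetical security group ID:
# open TCP port 8080 to the world on the manager's security group
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 8080 --cidr 0.0.0.0/0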
Now the application is accessible using the command curl -v http://ec2-34-200-216-30.compute-1.amazonaws.com:8080/resources/employees and shows the output:
* Trying 34.200.216.30...
* TCP_NODELAY set
* Connected to ec2-34-200-216-30.compute-1.amazonaws.com (34.200.216.30) port 8080 (#0)
> GET /resources/employees HTTP/1.1
> Host: ec2-34-200-216-30.compute-1.amazonaws.com:8080
> User-Agent: curl/7.51.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Connection: keep-alive
< Content-Type: application/xml
< Content-Length: 478
< Date: Sat, 07 Oct 2017 02:53:11 GMT
<
* Curl_http_done: called premature == 0
* Connection #0 to host ec2-34-200-216-30.compute-1.amazonaws.com left intact
<?xml version="1.0" encoding="UTF-8" standalone="yes"?><collection><employee><id>1</id><name>Penny</name></employee><employee><id>2</id><name>Sheldon</name></employee><employee><id>3</id><name>Amy</name></employee><employee><id>4</id><name>Leonard</name></employee><employee><id>5</id><name>Bernadette</name></employee><employee><id>6</id><name>Raj</name></employee><employee><id>7</id><name>Howard</name></employee><employee><id>8</id><name>Priya</name></employee></collection>
Shut down the application using the command docker -H localhost:2374 stack rm webapp:
Removing service webapp_db
Removing service webapp_web
Removing network webapp_default
This stops the container in each service and removes the services. It also deletes any networks that were created as part of this application.
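Verify that the stack and its services are gone:
docker -H localhost:2374 stack ls
docker -H localhost:2374 service ls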