A distributed, fault-tolerant, production-ready FizzBuzz implementation for the microservice cloud.
Dependencies:

- aws cli (`pip install --upgrade --user awscli`)
- jq (`brew install jq`)
- kops (`brew install kops`)
- kubectl (`brew install kubectl`)
- An existing Route53 Hosted Zone that is correctly serving DNS queries.
  - Easiest way to do this is simply to set up a Hosted Zone for a subdomain already handled in Route53. Something like `cluster.example.com`.
- aws cli must be authenticated with a user that has the following group policies:
  - `arn:aws:iam::aws:policy/AmazonEC2FullAccess`
  - `arn:aws:iam::aws:policy/AmazonRoute53FullAccess`
  - `arn:aws:iam::aws:policy/AmazonS3FullAccess`
  - `arn:aws:iam::aws:policy/IAMFullAccess`
  - `arn:aws:iam::aws:policy/AmazonVPCFullAccess`
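Before running anything, it's worth confirming which identity the aws cli is actually authenticated as (a generic sanity check, not something the script itself runs):

```shell
# Prints the account ID, user ID, and ARN of the caller that
# the aws cli will act as (i.e. the user the policies above
# must be attached to)
aws sts get-caller-identity
```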
Then, finally:

```shell
./bin/get-programming-job.sh cluster.example.com
```
This will:

- Create an S3 bucket to store the `kops` state,
- Create a multi-zone Kubernetes cluster using `kops`,
- Wait for the cluster to spin up and schedule pods,
- Deploy an ElasticSearch `StatefulSet`,
- Deploy a fluentd `DaemonSet` running a pod on each node in the cluster, which forwards logs to ElasticSearch for indexing,
- Deploy a 100-replica `StatefulSet` of the `fizzbuzzer` container, which accepts a `StatefulSet` hostname (`fizzbuzzer-n`) as an argument and computes whether it should output `Fizz`, `Buzz`, `FizzBuzz`, or `n`,
- Wait for all of those pods to have spun up, and then
- Query the ElasticSearch Search API to collect all of the logs from the `fizzbuzzer` pods,
- Output the result of the FizzBuzz,
- Tear everything down (cluster, S3 bucket).
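For reference, the per-pod computation is just plain FizzBuzz on the `StatefulSet` ordinal. A minimal sketch of that step (not the actual `fizzbuzzer` image; the function name here is made up):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the fizzbuzzer logic: take a StatefulSet
# hostname like "fizzbuzzer-14", extract the ordinal n, and print
# Fizz/Buzz/FizzBuzz/n accordingly.
fizzbuzz_for_hostname() {
  local n="${1##*-}"   # strip everything up to the last "-": "fizzbuzzer-14" -> "14"
  local out=""
  (( n % 3 == 0 )) && out+="Fizz"
  (( n % 5 == 0 )) && out+="Buzz"
  echo "${out:-$n}"    # fall back to n itself when neither 3 nor 5 divides it
}

fizzbuzz_for_hostname "${1:-fizzbuzzer-15}"
```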
Someone had to.
Occasionally there are DNS and connectivity issues, but I'm just going to ignore them because I've spent enough time on this as it is and it works most of the time.
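On the DNS front, one quick sanity check (using the `cluster.example.com` zone from the usage example above) is to confirm the Hosted Zone's NS records actually resolve before creating the cluster:

```shell
# Should print the Route53 name servers for the zone; if it prints
# nothing, records created by kops in that zone won't resolve either
dig ns cluster.example.com +short
```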