This system is built heavily around Helm, which provides the mechanism to properly separate versions of code, amongst other things. Below are some highlights from the Helm part of the project, followed by a full explanation of the Helm charts, noting some interesting things about each chart.
One important part of this architecture is the goal of immutable, separate deployments. Helm helps achieve this goal by providing a strong template system which we can leverage to configure a deployment.
Everything that is deployed in the system gets a host name based on the deployment version. Think about how Zookeeper needs to know where all the other Zookeeper nodes are. Additionally, the node count needs to be configurable, meaning it cannot be hard coded. The server addresses cannot be hard coded either, as they change with the deployment version.
Helm simplifies this by allowing values to be swapped in, changed, and consumed by the powerful Go Templates based system.
This is the deployment configuration Helm chart that all deployments get. Because the deployments are immutable and well versioned, it is easy to know which configuration belongs to which deployment. This file is located at `/deployments/Helm/configs/templates/configs.yaml`:
```yaml
{{- $root := . -}}
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config-{{.Values.Version}}-{{.Values.Build}}
  namespace: default
data:
  nimbusnodes: | {{range $i, $e := until (int .Values.NimbusNodes)}}
    - nimbus-{{$root.Values.Version}}-{{$root.Values.Build}}-{{$i}}.nimbus-hs-{{$root.Values.Version}}-{{$root.Values.Build}}.default.svc.cluster.local
  {{- end }}
  zookeepernodes: | {{range $i, $e := until (int .Values.ZookeeperNodes)}}
    - zk-{{$root.Values.Version}}-{{$root.Values.Build}}-{{$i}}.zk-hs-{{$root.Values.Version}}-{{$root.Values.Build}}.default.svc.cluster.local
  {{- end }}
```
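For `Version: 2` and `Build: 136` (the values used elsewhere in this document), this template renders roughly as follows (illustrative output; the exact whitespace depends on the template's chomping markers):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config-2-136
  namespace: default
data:
  nimbusnodes: |
    - nimbus-2-136-0.nimbus-hs-2-136.default.svc.cluster.local
    - nimbus-2-136-1.nimbus-hs-2-136.default.svc.cluster.local
  zookeepernodes: |
    - zk-2-136-0.zk-hs-2-136.default.svc.cluster.local
    - zk-2-136-1.zk-hs-2-136.default.svc.cluster.local
    - zk-2-136-2.zk-hs-2-136.default.svc.cluster.local
```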
The associated `values.yaml` file includes:

```yaml
ZookeeperNodes: 3
NimbusNodes: 2
```
These values are adjusted at runtime by the deployment scripts using Helm's `--set` flag (see the Helm documentation for more information).
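As a quick illustration of that override behaviour (a hypothetical invocation, not one of the project's scripts), a value passed with `--set` takes precedence over the same key in `values.yaml`:

```bash
# ZookeeperNodes renders as 5 here, overriding the 3 defined in values.yaml.
helm template --set ZookeeperNodes=5 -f ../Helm/configs/values.yaml ../Helm/configs
```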
The `0.deploy_all_services.sh` script creates a `setter` variable:
```bash
export setter="--set Version=$ver --set Build=$build --set StorageAccount=$storage_account \
--set eventhub_read_policy_key=$eventhub_read_policy_key \
--set eventhub_read_policy_name=$eventhub_read_policy_name \
--set eventhub_name=$eventhub_name \
--set eventhub_namespace=$eventhub_namespace \
--set cosmos_service_endpoint=$cosmos_service_endpoint \
--set cosmos_key=$cosmos_key \
--set cosmos_collection_name=$cosmos_collection_name \
--set cosmos_database_name=$cosmos_database_name"
```
This variable is then passed to all `helm` calls to dynamically set values in the Helm deployments without having to modify files on the filesystem.
echo "Update Configs"
helm template $setter -f ../Helm/configs/values.yaml ../Helm/configs | kubectl $kcommand -f -
- `helm template` tells Helm to generate the template locally, rather than apply it to the cluster (we want to apply via `kubectl` as it gives us more flexibility)
- `$setter` sends the variables in to the templating engine for replacement over the top of the `values.yaml` file
- `-f ../Helm/configs/values.yaml` passes in the base `values.yaml` file - this is to allow some settings to be stored in the values file and some to be dynamically provided by the scripts
- `../Helm/configs` is the chart that is being used
- `| kubectl $kcommand -f -` pipes the data from the Helm template build to `kubectl` without it having to be written to the filesystem. `$kcommand` is the parameter passed in from the terminal (`apply`, `delete`, etc.) to apply the script to create or delete the deployment.
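Putting it together, a deployment can be created or torn down by passing the `kubectl` verb through to the script (illustrative invocations; the exact argument handling lives in the script itself):

```bash
./0.deploy_all_services.sh apply   # render the charts and create/update the deployment
./0.deploy_all_services.sh delete  # render the charts and delete the deployment
```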
This section explains the charts this project uses. Each chart takes a version and applies it so that the items that are deployed (StatefulSets, Deployments, Services, Configs, Secrets, etc.) are named and versioned per deployment.
/deployments/Helm/configs
The config chart deploys deployment-specific secrets and configuration. It includes settings that will be entered in the script config file, as described in the Deploying the Bits document:
```yaml
Version: 2
Build: 136
ZookeeperNodes: 3
NimbusNodes: 2
eventhub_read_policy_key: "val_eventhub_read_policy_key"
eventhub_read_policy_name: "val_eventhub_read_policy_name"
eventhub_name: "val_eventhub_name"
eventhub_namespace: "val_eventhub_namespace"
cosmos_service_endpoint: "val_cosmos_service_endpoint"
cosmos_key: "val_cosmos_key"
cosmos_database_name: "val_cosmos_database_name"
cosmos_collection_name: "val_cosmos_collection_name"
```
Of note is that secrets and configs are separated as proposed in the design document.
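A minimal sketch of what that separation can look like in a template (the names and field choices here are illustrative, not copied from the chart): sensitive values go into a Kubernetes Secret, base64-encoded with Helm's built-in `b64enc` function, rather than into the ConfigMap:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-secret-{{.Values.Version}}-{{.Values.Build}}
  namespace: default
type: Opaque
data:
  cosmos_key: {{.Values.cosmos_key | b64enc}}
  eventhub_read_policy_key: {{.Values.eventhub_read_policy_key | b64enc}}
```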
/deployments/Helm/heartbeat
This deploys the heartbeat service, which writes a heartbeat every few seconds to an Azure Files SMB share.
Note the volume mounts which load the Azure Files share that is created and configured during the Cluster Build.
```yaml
volumeMounts:
- mountPath: /hb
  name: hbvolume
volumes:
- name: hbvolume
  persistentVolumeClaim:
    claimName: azurefilecustom
```
Note that the heartbeat folder is modified so that writes go to a sub-folder unique to this deployment version. Also note that the cluster configuration is pulled from the ConfigMap and exposed to the pod as environment variables.
```yaml
env:
- name: HEART_BEAT_FOLDER
  value: /hb/v{{.Values.Version}}-{{.Values.Build}}
- name: THIS_CLUSTER
  valueFrom:
    configMapKeyRef:
      name: ravenswoodconfig
      key: this_cluster
- name: THAT_CLUSTER
  valueFrom:
    configMapKeyRef:
      name: ravenswoodconfig
      key: other_cluster
```
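The writer side is deliberately simple; a minimal sketch of the loop (illustrative only, not the project's actual implementation) looks like:

```bash
# Write a timestamp into the versioned heartbeat folder every few seconds.
mkdir -p "$HEART_BEAT_FOLDER"
while true; do
  date +%s > "$HEART_BEAT_FOLDER/heartbeat"
  sleep 5
done
```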
/deployments/Helm/nimbus
/deployments/Helm/zookeeper
These charts are similar: they deploy the Apache Storm Nimbus and Zookeeper nodes, each as a Kubernetes StatefulSet.
Note the PodDisruptionBudget and the updateStrategy, which allow the set to be upgraded within the limits of the design principles for Nimbus and Zookeeper (the number of nodes that must be running).
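As a sketch (with assumed label names and counts rather than the chart's exact values), a PodDisruptionBudget for the Zookeeper set might look like:

```yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb-{{.Values.Version}}-{{.Values.Build}}
spec:
  selector:
    matchLabels:
      app: zk-{{.Values.Version}}-{{.Values.Build}}
  minAvailable: 2
```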
Note the podAntiAffinity, which supports the highly available design by preventing a pod from being scheduled onto a node that already runs a pod of the same set.
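Such a rule looks roughly like the following (again a sketch; the label names are assumptions):

```yaml
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: zk-{{.Values.Version}}-{{.Values.Build}}
      topologyKey: kubernetes.io/hostname
```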
/deployments/Helm/services
Deploys multiple versions of the sample services.
Of note is that the services are quite simple - they reflect an environment variable (as described in the Docker document). This allows the same service to be deployed multiple times to demonstrate versioning and intelligent routing.
```yaml
env:
- name: WRITE_BACK
  value: svc1v1
```
/deployments/Helm/storage
Deploys the Azure Files shared location as a PersistentVolume which can be accessed by pods from multiple clusters - used by the heartbeat. Of note is that it creates a Kubernetes Secret containing the details of the share (rather than hard-coding them directly in the chart).
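The shape this takes (names here are illustrative, not the chart's actual values) is a PersistentVolume whose `azureFile` source references the Secret instead of embedding credentials:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: azurefile-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  azureFile:
    secretName: azure-files-secret
    shareName: heartbeat
    readOnly: false
```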
/deployments/Helm/supervisor
Deploys a configurable number of supervisor nodes - which are the workhorses of an Apache Storm cluster.
Of interest is that the Storm Nimbus and Zookeeper configuration is passed through from the ConfigMap and exposed as files, which are then used by the setup script (`/deployments/docker/base/configure.sh`):
```yaml
volumes:
- name: application-config
  configMap:
    name: app-config-{{.Values.Version}}-{{.Values.Build}}
    items:
    - key: nimbusnodes
      path: nimbusnodes
    - key: zookeepernodes
      path: zookeepernodes
```

The setup script then appends the mounted files to the Storm configuration:

```bash
echo "nimbus.seeds:" >> conf/storm.yaml
cat $CONFIG_BASE/nimbusnodes >> conf/storm.yaml
```
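With `Version: 2`, `Build: 136`, and two Nimbus nodes, the fragment appended to `conf/storm.yaml` would render roughly as (illustrative):

```yaml
nimbus.seeds:
- nimbus-2-136-0.nimbus-hs-2-136.default.svc.cluster.local
- nimbus-2-136-1.nimbus-hs-2-136.default.svc.cluster.local
```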
/deployments/Helm/topology
Deploys the Storm topology into the Storm cluster. Of note is that it uses an initContainer to block the deployment if it is running in the "b" cluster. This init container will unblock if the heartbeat flatlines.
```yaml
initContainers:
- name: heartmon
  imagePullPolicy: {{.Values.ImagePullPolicy}}
  image: {{.Values.ImageHeartMon}}
  resources:
    requests:
      memory: {{.Values.Memory}}
      cpu: {{.Values.Cpu}}
```
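The gate itself can be as simple as a loop that exits only once the heartbeat goes stale; a minimal sketch (the variable names and threshold are assumptions, not the project's actual code):

```bash
# Block the topology rollout in the "b" cluster until the other cluster's
# heartbeat flatlines (i.e. the timestamp stops being refreshed).
if [ "$THIS_CLUSTER" = "b" ]; then
  while true; do
    last=$(cat "$HEART_BEAT_FOLDER/heartbeat" 2>/dev/null || echo 0)
    now=$(date +%s)
    if [ $((now - last)) -gt 30 ]; then
      exit 0   # heartbeat is stale - unblock the deployment
    fi
    sleep 5
  done
fi
```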
/deployments/Helm/ui
Deploys the Storm UI, which can be used to check the status of the Storm cluster. See Deploying the Bits for information on how to access it (hint: `kubectl get svc`).