This repository contains all the files for the workshop on autoscaling in Kubernetes.
It showcases the usage of several solutions together with Dynatrace:
- OpenCost
- Keptn Lifecycle Toolkit
- HPA
The following tools need to be installed on your machine:
- jq
- kubectl
- git
- gcloud (if you are using GKE)
- Helm
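You can quickly check that these tools are available with the following sanity check (not part of the workshop scripts, just a convenience):
for tool in jq kubectl git gcloud helm; do command -v "$tool" >/dev/null || echo "$tool is missing"; done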
PROJECT_ID="<your-project-id>"
gcloud services enable container.googleapis.com --project ${PROJECT_ID}
gcloud services enable monitoring.googleapis.com \
cloudtrace.googleapis.com \
clouddebugger.googleapis.com \
cloudprofiler.googleapis.com \
--project ${PROJECT_ID}
ZONE=europe-west3-a
NAME=autoscaling-workshop
gcloud container clusters create ${NAME} --zone=${ZONE} --machine-type=e2-standard-8 --num-nodes=2
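If kubectl is not yet pointing at the new cluster, you can fetch the credentials (assuming the same project, zone and cluster name as above):
gcloud container clusters get-credentials ${NAME} --zone=${ZONE} --project ${PROJECT_ID}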
git clone https://github.com/henrikrexed/Autoscaling-workshop
cd Autoscaling-workshop
If you don't have a Dynatrace tenant, I suggest creating a trial using the following link: Dynatrace Trial
Once you have your tenant, save the tenant URL (including https) in the variable DT_TENANT_URL
(for example: https://dedededfrf.live.dynatrace.com)
DT_TENANT_URL=<YOUR TENANT URL>
The Dynatrace Operator requires several tokens:
- a token to deploy and configure the various components
- a token to ingest metrics and traces
Create one token for the operator with the following scopes:
- Create ActiveGate tokens
- Read entities
- Read Settings
- Write Settings
- Access problem and event feed, metrics and topology
- Read configuration
- Write configuration
- PaaS integration - Installer download
Save the value of the token. We will use it later to store it in a Kubernetes secret.
API_TOKEN=<YOUR TOKEN VALUE>
Create a second Dynatrace token with the following scopes:
- Ingest metrics (metrics.ingest)
- Ingest logs (logs.ingest)
- Ingest events (events.ingest)
- Ingest OpenTelemetry
- Read metrics
DATA_INGEST_TOKEN=<YOUR TOKEN VALUE>
cd ..
chmod +x deployment.sh
./deployment.sh --clustername "${NAME}" --dturl "${DT_TENANT_URL}" --dtingesttoken "${DATA_INGEST_TOKEN}" --dtoperatortoken "${API_TOKEN}"
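Before moving on, you may want to check that the Dynatrace components are up and running (assuming the script deploys the operator in the dynatrace namespace):
kubectl get pods -n dynatrace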
First, let's create the Cluster Efficiency dashboard:
curl -X 'POST' \
"${DT_TENANT_URL}/api/config/v1/dashboards" \
-H 'accept: application/json; charset=utf-8' \
-H 'Content-Type: application/json; charset=utf-8' \
-H "Authorization: Api-Token ${API_TOKEN}" \
-d @"dynatrace/Cluster efficiency.json"
Then the K6 dashboard:
curl -X 'POST' \
"${DT_TENANT_URL}/api/config/v1/dashboards" \
-H 'accept: application/json; charset=utf-8' \
-H 'Content-Type: application/json; charset=utf-8' \
-H "Authorization: Api-Token ${API_TOKEN}" \
-d @"dynatrace/K6 load test.json"
We can see that the deployed workload is not efficient and that the hipster-shop namespace is the most expensive. We can reduce the cost of the cluster by adjusting the resource requests & limits.
The repository has another version of the hipster-shop deployment file with lower values for the resources:
- requests
- limits
Let's apply the updated version of the hipster-shop:
kubectl apply -f hipstershop/k8s-manifest.yaml -n hipster-shop
kubectl apply -f k6/loadtest_job.yaml -n hipster-shop
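To compare the new requests & limits with the actual consumption while the load test runs, you can watch the pods (this assumes the metrics API is available in your cluster):
kubectl top pods -n hipster-shop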
To handle the load properly, let's deploy HPA rules on the following deployments (a sketch of such a rule is shown after the apply command below):
- frontend
- productcatalogservice
- cartservice
- checkoutservice
- recommendationservice
Let's stop the load test that was started:
kubectl delete -f k6/loadtest_job.yaml -n hipster-shop
kubectl apply -f hpa/hpa_cpu.yaml -n hipster-shop
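As an illustration, a CPU-based HPA rule for the frontend deployment could look like the sketch below; the thresholds and replica counts are assumptions and the actual hpa/hpa_cpu.yaml shipped in the repository may differ:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: frontend
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frontend
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80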
kubectl apply -f k6/loadtest_job.yaml -n hipster-shop
Once done, let's stop the load test:
kubectl delete -f k6/loadtest_job.yaml -n hipster-shop
By looking at Dynatrace, we can see that:
- the cost of the cluster has increased
- we have pending workloads
- we still have performance issues
Let's remove the hipster-shop deployment to make sure each of our workloads has only 1 replica:
kubectl delete -f hpa/hpa_cpu.yaml -n hipster-shop
kubectl delete -f hipstershop/k8s-manifest.yaml -n hipster-shop
sleep 5
kubectl apply -f hipstershop/k8s-manifest.yaml -n hipster-shop
kubectl apply -f keptn/metricProvider.yaml -n hipster-shop
In Dynatrace, let's create a metric expression to measure:
- the number of requests coming into the frontend service
- the % of CPU throttling
kubectl apply -f keptn/keptnmetric.yaml -n hipster-shop
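A KeptnMetric object ties a metric name to a provider query. A minimal sketch is shown below; the API version, the provider name (assumed to match the one created by keptn/metricProvider.yaml) and the Dynatrace metric selector are assumptions, and the real keptn/keptnmetric.yaml may differ:
apiVersion: metrics.keptn.sh/v1alpha3
kind: KeptnMetric
metadata:
  name: frontend-requests
spec:
  provider:
    name: dynatrace
  query: "builtin:service.requestCount.total:splitBy():sum"
  fetchIntervalSeconds: 10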
kubectl apply -f keptn/hpa.yaml -n hipster-shop
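The HPA rule can then scale on that metric instead of raw CPU. The sketch below assumes it references the KeptnMetric object through an Object metric; again, the real keptn/hpa.yaml may use different names and targets:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: frontend
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: frontend
  minReplicas: 1
  maxReplicas: 6
  metrics:
    - type: Object
      object:
        metric:
          name: frontend-requests
        describedObject:
          apiVersion: metrics.keptn.sh/v1alpha3
          kind: KeptnMetric
          name: frontend-requests
        target:
          type: Value
          value: "10"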
kubectl apply -f k6/loadtest_job.yaml -n hipster-shop
kubectl delete -f keptn/hpa.yaml -n hipster-shop
kubectl delete -f k6/loadtest_job.yaml -n hipster-shop
kubectl delete -f hipstershop/k8s-manifest.yaml -n hipster-shop
sleep 5
kubectl apply -f hipstershop/k8s-manifest.yaml -n hipster-shop
Let's create our new metrics:
kubectl apply -f keptn/v2/keptnmetric.yaml -n hipster-shop
And create our new HPA rule:
kubectl apply -f keptn/v2/hpa.yaml -n hipster-shop
And let's run our new load test:
kubectl apply -f k6/loadtest_job.yaml -n hipster-shop
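To observe the new rule reacting to the load, you can watch the HPA and the KeptnMetric values while the test runs (standard kubectl commands, not specific to the workshop scripts):
kubectl get hpa -n hipster-shop -w
kubectl get keptnmetrics -n hipster-shop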