In this section, we will demonstrate how to integrate service mesh components with a Kubernetes cluster. A service mesh is a layer that manages communication between microservices, and it is becoming increasingly popular for cloud-native applications. Some of its critical features are service discovery, load balancing, automatic retries, circuit breaking, and the collection of request/response metrics and tracing information.
We will look into two well-known service mesh integrations:
- Linkerd
- Istio
Linkerd is a layer 5/7 proxy that routes and load balances traffic over HTTP, Thrift, Mux, HTTP/2 and gRPC. It’s based on Finagle (built by Twitter), and in January 2017 linkerd became a member of the CNCF, alongside Kubernetes.
Linkerd adds visibility, control, and reliability to your application with a wide array of powerful techniques: circuit-breaking, latency-aware load balancing, eventually consistent (“advisory”) service discovery, deadline propagation, and tracing and instrumentation.
In this exercise, we will focus on visibility for your services running in a Kubernetes cluster. We will install linkerd in your k8s cluster, run a couple of simple microservices and demonstrate how linkerd captures top-line service metrics such as success rates, request volumes and latencies.
In this exercise we’ll look at a few of the features linkerd provides, such as:
- Monitoring the traffic within the service mesh
- Per-request routing
Install linkerd using kubectl. This will install linkerd as a DaemonSet (i.e., one instance per host) running in the default Kubernetes namespace.

```
kubectl apply -f https://raw.githubusercontent.com/linkerd/linkerd-examples/master/\
k8s-daemonset/k8s/linkerd.yml
configmap "l5d-config" created
daemonset "l5d" created
service "l5d" created
```
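Linkerd’s routing behaviour is driven by the delegation tables (dtabs) stored in the l5d-config ConfigMap created above. If you are curious, you can dump the ConfigMap to see the routing rules linkerd was configured with; this is a read-only inspection and changes nothing in the mesh:

```
# Show the linkerd configuration, including the dtabs used for routing
kubectl get configmap l5d-config -o yaml
```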
Check that the linkerd pods (named l5d) are running:

```
$ kubectl get pods
NAME        READY     STATUS    RESTARTS   AGE
l5d-4kg47   2/2       Running   0          55s
l5d-d0wvv   2/2       Running   0          55s
l5d-gfpc7   2/2       Running   0          55s
l5d-w0d95   2/2       Running   0          55s
l5d-xppqs   2/2       Running   0          55s
```
This output is from a cluster with five worker nodes.
Check that the linkerd service is running:
```
$ kubectl get svc
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP       PORT(S)                                        AGE
kubernetes   ClusterIP      100.64.0.1     <none>            443/TCP                                        15h
l5d          LoadBalancer   100.68.65.99   a306c8f52b92c...  4140:30609/TCP,4141:31489/TCP,9990:31710/TCP   24s
```
Get more details about the linkerd service:
```
$ kubectl describe svc/l5d
Name:                     l5d
Namespace:                default
Labels:                   <none>
Annotations:              kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"l5d","namespace":"default"},"spec":{"ports":[{"name":"outgoing","port":4140},{...
Selector:                 app=l5d
Type:                     LoadBalancer
IP:                       100.70.56.141
LoadBalancer Ingress:     abf44db7dbe1211e7a7d00220684043f-1319890531.eu-central-1.elb.amazonaws.com
Port:                     outgoing  4140/TCP
TargetPort:               4140/TCP
NodePort:                 outgoing  30108/TCP
Endpoints:                100.96.3.2:4140,100.96.4.2:4140,100.96.5.4:4140 + 2 more...
Port:                     incoming  4141/TCP
TargetPort:               4141/TCP
NodePort:                 incoming  31209/TCP
Endpoints:                100.96.3.2:4141,100.96.4.2:4141,100.96.5.4:4141 + 2 more...
Port:                     admin  9990/TCP
TargetPort:               9990/TCP
NodePort:                 admin  32117/TCP
Endpoints:                100.96.3.2:9990,100.96.4.2:9990,100.96.5.4:9990 + 2 more...
Session Affinity:         None
External Traffic Policy:  Cluster
Events:
  Type    Reason                Age   From                Message
  ----    ------                ----  ----                -------
  Normal  CreatingLoadBalancer  3m    service-controller  Creating load balancer
  Normal  CreatedLoadBalancer   3m    service-controller  Created load balancer
```
You can go to linkerd’s admin page (i.e., ELB:9990) to verify the installation. It may take a minute or two before the ELB DNS is available. If you have an issue accessing the load balancer endpoint, it could be because your firewall or VPN is blocking access to port 9990.
```
LINKERD_ELB=$(kubectl get svc l5d -o jsonpath="{.status.loadBalancer.ingress[0].*}")
open http://$LINKERD_ELB:9990
```
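If the dashboard does not load, a quick way to tell “ELB not ready yet” apart from “firewall blocking port 9990” is to probe the admin port from the command line and check the HTTP status code:

```
# Prints 200 once the admin UI is reachable through the ELB
curl -s -o /dev/null -w "%{http_code}\n" http://$LINKERD_ELB:9990/
```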
The default dashboard looks as shown:
The GitHub repo for linkerd examples has two microservice apps called “hello” and “world”. They communicate with each other to complete a request. Run this command to install these apps in the default namespace.
```
$ kubectl apply -f https://raw.githubusercontent.com/linkerd/linkerd-examples/master/\
k8s-daemonset/k8s/hello-world.yml
replicationcontroller "hello" created
service "hello" created
replicationcontroller "world-v1" created
service "world-v1" created
```
Generate some traffic by running this command:
```
http_proxy=$LINKERD_ELB:4140 curl -s http://hello
```
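If you would like the dashboard to show a steady stream of data rather than a single request, you can loop the same command (a simple sketch; stop it with Ctrl-C when you are done):

```
# Send one request per second through the linkerd proxy to the "hello" service
while true; do
  http_proxy=$LINKERD_ELB:4140 curl -s http://hello > /dev/null
  sleep 1
done
```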
Linkerd will show the number of requests being served, connections, and a wealth of other data:
linkerd-viz is a monitoring application based on Prometheus and Grafana. It can automatically find linkerd instances and services that are installed in your k8s cluster.
```
$ kubectl apply -f https://raw.githubusercontent.com/linkerd/linkerd-viz/master/k8s/linkerd-viz.yml
replicationcontroller "linkerd-viz" created
service "linkerd-viz" created
```
You can open the linkerd-viz ELB to view the dashboard:
```
LINKERD_VIZ_ELB=$(kubectl get svc linkerd-viz -o jsonpath="{.status.loadBalancer.ingress[0].*}")
open http://$LINKERD_VIZ_ELB
```
As with the previous example, it may take a minute or two before the ELB DNS is available.
We’ll use the same “hello-world” application used in the example above, but this time we’ll deploy version 2 of the “world” microservice, and we’ll specify on a per-request basis whether the request should use v1 or v2 of the “world” microservice.
If you haven’t already deployed the “hello-world” application, deploy it now.
```
kubectl apply -f https://raw.githubusercontent.com/linkerd/linkerd-examples/master/\
k8s-daemonset/k8s/hello-world.yml
```
Delete the previous linkerd DaemonSet, as we’re going to update the ConfigMap and install a new one:
```
$ kubectl delete ds/l5d
```
Deploy the linkerd ingress so we can access the application externally.
```
$ kubectl apply -f https://raw.githubusercontent.com/linkerd/linkerd-examples/master/\
k8s-daemonset/k8s/linkerd-ingress.yml
configmap "l5d-config" configured
daemonset "l5d" configured
service "l5d" configured
```
Now deploy version 2 of the “world” microservice.
```
$ kubectl apply -f https://raw.githubusercontent.com/linkerd/linkerd-examples/master/\
k8s-daemonset/k8s/world-v2.yml
replicationcontroller "world-v2" created
service "world-v2" created
```
Send a request to v1 of the service. It should reply with 'Hello world'.
```
INGRESS_LB=$(kubectl get svc l5d -o jsonpath="{.status.loadBalancer.ingress[0].*}")
curl -H 'Host: www.hello.world' $INGRESS_LB
```
After a minute or two, it should reply with “Hello world” as shown:

```
Hello (100.96.1.12) world (100.96.1.14)
```
Now send a request to v2 of the service by adding a routing override header to the request.
curl -H "Host: www.hello.world" -H "l5d-dtab: /host/world => /srv/world-v2;" $INGRESS_LB
It should reply with 'Hello earth' as shown:
```
Hello (100.96.1.11) earth (100.96.2.14)
```
This demonstrates that v1 and v2 of the world service are running in the cluster, and you can specify in the request header which version of the service to route individual requests to.
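The l5d-dtab header is just a per-request addition to linkerd’s delegation table, so the same mechanism can pin a request to v1 explicitly. Here is a quick sketch, reusing the world-v1 service name from the hello-world manifest; it should reply with “Hello world” again:

```
# Per-request override that routes the "world" call to the world-v1 service
curl -H "Host: www.hello.world" -H "l5d-dtab: /host/world => /srv/world-v1;" $INGRESS_LB
```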
That’s it!
You can look at the linkerd configuration files in the linkerd-examples repo to learn more.
Remove the installed components:
```
kubectl delete -f https://raw.githubusercontent.com/linkerd/linkerd-viz/master/k8s/linkerd-viz.yml
kubectl delete -f https://raw.githubusercontent.com/linkerd/linkerd-examples/master/\
k8s-daemonset/k8s/world-v2.yml
kubectl delete -f https://raw.githubusercontent.com/linkerd/linkerd-examples/master/\
k8s-daemonset/k8s/hello-world.yml
kubectl delete -f https://raw.githubusercontent.com/linkerd/linkerd-examples/master/\
k8s-daemonset/k8s/linkerd-ingress.yml
kubectl delete -f https://raw.githubusercontent.com/linkerd/linkerd-examples/master/\
k8s-daemonset/k8s/linkerd.yml
```
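To confirm that nothing linkerd-related is left behind, you can check for any remaining l5d resources; no output means the cleanup succeeded:

```
# Should print nothing once the linkerd DaemonSet, service and ConfigMap are gone
kubectl get ds,svc,configmap | grep l5d
```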
Istio is a layer 4/7 proxy that routes and load balances traffic over HTTP, WebSocket, HTTP/2 and gRPC, and supports application protocols such as MongoDB and Redis. Istio uses the Envoy proxy to manage all inbound/outbound traffic in the service mesh. Envoy was built by Lyft, and in September 2017 Envoy became a member of the CNCF, alongside Kubernetes. Istio is a joint collaboration between Google, IBM and Lyft.
Istio has a wide variety of traffic management features that live outside the application code, such as A/B testing, phased/canary rollouts, failure recovery, circuit breaker, layer 7 routing and policy enforcement (all provided by the Envoy proxy). Istio also supports ACLs, rate limits, quotas, authentication, request tracing and telemetry collection using its Mixer component. The goal of the Istio project is to support traffic management and security of microservices without requiring any changes to the application; it does this by injecting a sidecar into your pod that handles all network communications.
In this exercise we’ll look at a few of the features Istio provides, such as:
- Weighted routing
- Distributed tracing
- Mutual TLS authentication
Istio requires a binary installed on your laptop in order to inject the Envoy sidecar into your pods, which means you’ll need to download Istio. Istio can also inject the sidecar automatically; for more information, see the Istio quick start.
```
curl -L https://git.io/getLatestIstio | sh -
cd istio-*
export PATH=$PWD/bin:$PATH
```
You should now be able to run the istioctl CLI:

```
$ istioctl version
Version: 0.2.10
GitRevision: f27f2803f59994367c1cca47467c362b1702d605
GitBranch: release-0.2
User: sebastienvas@ee792364cfc2
GolangVersion: go1.8
```
Install Istio using kubectl. This will install Istio into its own namespace, istio-system. Change to the directory where you downloaded Istio in the step above.
```
kubectl apply -f install/kubernetes/istio.yaml
```
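The Istio control-plane pods can take a minute or two to pull their images and start. If you want to wait for them before moving on, you can watch the pods come up (press Ctrl-C to stop watching):

```
# Watch the istio-system pods until they all report Running
kubectl get pods --namespace istio-system -w
```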
Check that Istio has been installed. Note that Istio is installed into its own namespace.
```
$ kubectl get all --namespace istio-system
NAME                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/istio-ca        1         1         1            1           1m
deploy/istio-egress    1         1         1            1           1m
deploy/istio-ingress   1         1         1            1           1m
deploy/istio-mixer     1         1         1            1           2m
deploy/istio-pilot     1         1         1            1           1m

NAME                          DESIRED   CURRENT   READY     AGE
rs/istio-ca-2651333813        1         1         1         1m
rs/istio-egress-2836352731    1         1         1         1m
rs/istio-ingress-2873642151   1         1         1         1m
rs/istio-mixer-1999632368     1         1         1         2m
rs/istio-pilot-1811250569     1         1         1         1m

NAME                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/istio-ca        1         1         1            1           1m
deploy/istio-egress    1         1         1            1           1m
deploy/istio-ingress   1         1         1            1           1m
deploy/istio-mixer     1         1         1            1           2m
deploy/istio-pilot     1         1         1            1           1m

NAME                                READY     STATUS    RESTARTS   AGE
po/istio-ca-2651333813-pcr1f        1/1       Running   0          1m
po/istio-egress-2836352731-sfj7j    1/1       Running   0          1m
po/istio-ingress-2873642151-vzfxr   1/1       Running   0          1m
po/istio-mixer-1999632368-nz0mw     2/2       Running   0          2m
po/istio-pilot-1811250569-mmfdg     1/1       Running   0          1m
```
We’ll use a sample application developed by the Istio team to check out the Istio features. Since we are using the manual method of injecting the Envoy sidecar into the application, we need to use the istioctl CLI as shown below.

```
kubectl apply -f <(istioctl kube-inject -f samples/bookinfo/kube/bookinfo.yaml)
```
This will deploy the BookInfo application, which consists of four microservices, each written in a different language, that collaborate to show book product information, book details and book reviews. Each microservice is deployed in its own pod, with the Envoy proxy injected into the pod; Envoy will now take over all network communications between the pods.
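If you want to see exactly what kube-inject adds before applying it, you can render the manifest to a file first. The injected pod specs should contain an extra Envoy sidecar container and an init container; container names vary by Istio release, so the grep pattern below is only a rough filter:

```
# Render the injected manifest without applying it, then look for the added containers
istioctl kube-inject -f samples/bookinfo/kube/bookinfo.yaml > /tmp/bookinfo-injected.yaml
grep -n proxy /tmp/bookinfo-injected.yaml | head -20
```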
Let’s check that all components were installed:
```
$ kubectl get all
NAME                    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/details-v1       1         1         1            1           3h
deploy/productpage-v1   1         1         1            1           3h
deploy/ratings-v1       1         1         1            1           3h
deploy/reviews-v1       1         1         1            1           3h
deploy/reviews-v2       1         1         1            1           3h
deploy/reviews-v3       1         1         1            1           3h

NAME                           DESIRED   CURRENT   READY     AGE
rs/details-v1-39705650         1         1         1         3h
rs/productpage-v1-1382449686   1         1         1         3h
rs/ratings-v1-3906799406       1         1         1         3h
rs/reviews-v1-2953083044       1         1         1         3h
rs/reviews-v2-348355652        1         1         1         3h
rs/reviews-v3-4088116596       1         1         1         3h

NAME                    DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/details-v1       1         1         1            1           3h
deploy/productpage-v1   1         1         1            1           3h
deploy/ratings-v1       1         1         1            1           3h
deploy/reviews-v1       1         1         1            1           3h
deploy/reviews-v2       1         1         1            1           3h
deploy/reviews-v3       1         1         1            1           3h

NAME                                 READY     STATUS    RESTARTS   AGE
po/details-v1-39705650-vc2x0         2/2       Running   0          3h
po/productpage-v1-1382449686-b7frw   2/2       Running   0          3h
po/ratings-v1-3906799406-11pcn       2/2       Running   0          3h
po/reviews-v1-2953083044-sktvt       2/2       Running   0          3h
po/reviews-v2-348355652-xbbbv        2/2       Running   0          3h
po/reviews-v3-4088116596-pkkjk       2/2       Running   0          3h
```
If all components were installed successfully, you should be able to see the product page. This may take a minute or two: first for the Ingress to be created, and second for the Ingress to hook up with the services it exposes. Just keep refreshing the browser until the BookInfo product page appears.
```
ISTIO_INGRESS=$(kubectl get ingress gateway -o jsonpath="{.status.loadBalancer.ingress[0].*}")
open http://$ISTIO_INGRESS/productpage
```
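If you prefer to poll from the command line instead of refreshing the browser, you can check the HTTP status code of the product page; rerun this until it returns 200:

```
# Prints 200 once the ingress is routing to the productpage service
curl -s -o /dev/null -w "%{http_code}\n" http://$ISTIO_INGRESS/productpage
```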
It looks as shown:
The sample application is pretty useful. You can see in the kubectl get all command above that it deploys more than one version of the reviews microservice. We’re going to use weighted routing to route 50% of the traffic to v3 of the reviews microservice; v3 shows stars for each review, whereas v1 does not. We’ll then query the BookInfo product page a few times and count the number of times a review page appears containing stars for a review; this will indicate we are being routed to v3 of the reviews service.
```
$ kubectl create -f samples/bookinfo/kube/route-rule-all-v1.yaml
routerule "productpage-default" created
routerule "reviews-default" created
routerule "ratings-default" created
routerule "details-default" created

$ kubectl replace -f samples/bookinfo/kube/route-rule-reviews-50-v3.yaml
routerule "reviews-default" replaced
```
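If you want to see what the 50/50 rule actually contains, you can inspect the file you just applied; you should see a route rule for the reviews service with two weighted routes, one for version v1 and one for version v3, each with a weight of 50:

```
# Inspect the weighted route rule (path is relative to the Istio download directory)
cat samples/bookinfo/kube/route-rule-reviews-50-v3.yaml
```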
The Envoy proxy does not round-robin requests across the different versions of the microservice, so if you access the product page twice you are unlikely to see one request use v1 of reviews and a second request use v3. However, over a hundred requests roughly 50% of them should be routed to v3 of the reviews service. We can test this using the script below. Make sure you don’t have a file called mfile in your current folder before running it. The script sends 100 curl requests to the BookInfo product page, which may take around 30 seconds, and then counts those which have stars in the response. For the eagle-eyed amongst you, the divide by 2 is because the productpage HTML contains two reviewers, and we simply want to count the number of curls that returned “full stars” in the review page. Out of 100 curls we expect 50 of them to contain “full stars”.
```
ISTIO_INGRESS=$(kubectl get ingress gateway -o jsonpath="{.status.loadBalancer.ingress[0].*}")
for((i=1;i<=100;i+=1)); do curl -s http://$ISTIO_INGRESS/productpage >> mfile; done
a=$(grep 'full stars' mfile | wc -l) && echo Number of calls to v3 of reviews service "$(($a / 2))"
```
It shows output like:
```
Number of calls to v3 of reviews service 50
```
Finally, remove the temporary file:
```
rm mfile
```
This weighted routing was handled entirely by Istio, which split the traffic between the two versions of the reviews microservice without any changes to the application.
Istio is deployed as a sidecar proxy into each of your pods; this means it can see and monitor all the traffic flows between your microservices and generate a graphical representation of your mesh traffic. We’ll use the bookinfo application you deployed in the previous step to demonstrate this.
First, install Prometheus, which will obtain the metrics we need from Istio:
```
$ kubectl apply -f install/kubernetes/addons/prometheus.yaml
configmap "prometheus" created
service "prometheus" created
deployment "prometheus" created
```
Check that Prometheus is running:
```
$ kubectl -n istio-system get svc prometheus
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
prometheus   ClusterIP   100.69.199.148   <none>        9090/TCP   47s
```
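If you would like to browse the raw metrics that the Servicegraph addon (installed next) will consume, you can port-forward to Prometheus as well. The app=prometheus label selector is an assumption based on the addon manifest; adjust it if your pods are labelled differently:

```
# Forward the Prometheus UI to localhost (label selector is an assumption)
kubectl -n istio-system port-forward \
  $(kubectl -n istio-system get pod -l app=prometheus -o jsonpath='{.items[0].metadata.name}') \
  9090:9090 &
open http://localhost:9090
```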
Now install the Servicegraph addon. Servicegraph queries Prometheus, which obtains details of the mesh traffic flows from Istio:
```
$ kubectl apply -f install/kubernetes/addons/servicegraph.yaml
deployment "servicegraph" created
service "servicegraph" created
```
Check that the Servicegraph was deployed:
```
$ kubectl -n istio-system get svc servicegraph
NAME           TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
servicegraph   ClusterIP   100.65.77.1   <none>        8088/TCP   5m
```
Generate some traffic to the bookinfo application:
```
ISTIO_INGRESS=$(kubectl get ingress gateway -o jsonpath="{.status.loadBalancer.ingress[0].*}")
open http://$ISTIO_INGRESS/productpage
```
View the Servicegraph UI; we’ll use port forwarding to access it:
```
kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=servicegraph -o jsonpath='{.items[0].metadata.name}') 8088:8088 &
open http://localhost:8088/dotviz
```
You should see a distributed trace that looks something like this. It may take a few seconds for Servicegraph to become available, so refresh the browser if you do not receive a response.
Istio-auth enables secure communication between microservices by enforcing mutual TLS between the sidecar proxies. Enabling it is simple: we just install Istio with mutual TLS enabled.
If you have run the examples above, uninstall Istio:
```
kubectl delete -f install/kubernetes/istio.yaml
```
and reinstall it with the Auth module enabled:

```
kubectl apply -f install/kubernetes/istio-auth.yaml
```

All traffic between microservices will now be encrypted.
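If you redeploy the BookInfo application on top of the auth-enabled mesh, one rough way to verify that the sidecars have received certificates from istio-ca is to list the certificate directory inside one of the Envoy sidecars. The istio-proxy container name and the /etc/certs mount path are assumptions based on this Istio release, so adjust them if your pods differ:

```
# List the certs mounted into the productpage pod's sidecar (container name and path assumed)
PODNAME=$(kubectl get pod -l app=productpage -o jsonpath='{.items[0].metadata.name}')
kubectl exec $PODNAME -c istio-proxy -- ls /etc/certs
```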
Remove the installed components:
```
kubectl delete -f install/kubernetes/addons/servicegraph.yaml
kubectl delete -f install/kubernetes/addons/prometheus.yaml
kubectl delete -f install/kubernetes/istio-auth.yaml
kubectl delete -f install/kubernetes/istio.yaml
./samples/bookinfo/kube/cleanup.sh
```
Accept the default namespace in the cleanup script above.
Some errors may appear in the output when deleting Istio. These are related to Istio components you have not installed, so there is no need to worry about them. You can confirm that everything has been uninstalled as follows; no Istio or BookInfo components should remain:
```
kubectl get all
kubectl get all --namespace istio-system
```
You are now ready to continue on with the workshop!