
Merge pull request #27 from valencenet/add-envoy
Release 0.3.0: Add envoy
domenicrosati authored May 31, 2019
2 parents dbf03d5 + e942b6f commit 3fa1879
Showing 14 changed files with 254 additions and 112 deletions.
38 changes: 21 additions & 17 deletions README.md
@@ -16,7 +16,7 @@

## How it works

Valence is based on the notion of Declarative Performance. We believe you should be able to declare performance objectives and have an operator (Valence) figure out how to autoscale, right-size, and pack your Kubernetes resources. In contrast, current Kubernetes scaling and performance management tools are largely imperative, requiring overhead to determine right sizes, autoscaling metrics, and related configuration. Since code, traffic, and node utilization change, we believe this should be managed automatically by an operator rather than by manual calculation and intervention. We also think the right unit of scaling isn't a utilization or metrics threshold but is based, dynamically, on how an application's behaviour (utilization) responds to its use (such as HTTP Requests).
Valence is based on the notion of Declarative Performance. We believe you should be able to declare performance objectives and have an operator (Valence) figure out how to autoscale, right-size, and pack your Kubernetes resources. In contrast, current Kubernetes scaling and performance management tools are largely imperative, requiring overhead to determine right sizes, autoscaling metrics, and related configuration. Since code, traffic, and node utilization change, we believe this should be managed automatically by an operator rather than by manual calculation and intervention. We also think the right unit of scaling isn't a utilization or metrics threshold but is based, dynamically, on how an application's behaviour (utilization) responds to its use (such as HTTP or gRPC Requests).
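Declaring objectives takes the form of an SLO manifest, detailed in the next section; a condensed sketch, with illustrative field values drawn from the example manifests in this repo:

```yaml
apiVersion: optimizer.valence.io/v1alpha1
kind: ServiceLevelObjective
metadata:
  name: slo-webapps
spec:
  selector:
    slo: slo-webapps          # matched against deployment labels
  objectives:
  - type: HTTP
    http:
      latency:
        percentile: 95        # target the 95th percentile
        responseTime: 100ms   # keep p95 response time under 100ms
      throughput: 500         # illustrative throughput target
```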

## Declarative Performance: The Service Level Objective Manifest

@@ -132,7 +132,7 @@ make valence LICENSE=<YOUR.EMAIL>
kubectl apply -f valence.yaml
```

- **Metered** by adding the license key you provisioned during sign-up on Manifold and applying valence.
- **License** by adding the license key you provisioned during sign-up on Manifold and applying valence.

```
make valence LICENSE=<YOUR.LICENSE.KEY>
@@ -145,7 +145,6 @@ Valence can be removed by deleting valence.yaml
kubectl delete -f valence.yaml
```


Components installed in valence-system namespace:

- Prometheus (Valence’s own managed Prometheus)
@@ -184,7 +183,7 @@ spec:
throughput: 500
```

**2) Label the deployment with that SLO and add Prometheus Proxy:**
**2) Label the deployment with that SLO and add Envoy:**

#### Selecting SLO

@@ -211,9 +210,9 @@ metadata:

#### Adding Sidecar

Valence collects application metrics through a sidecar. If you’d prefer to collect metrics from your ingress, load balancer, envoy containers, linkerd, istio, or otherwise, let the Valence team know. This will eventually be automated; all feedback is appreciated!
Valence collects application metrics through a sidecar, [envoy](https://www.envoyproxy.io/). If you’d prefer to collect metrics from your ingress, load balancer, custom envoy containers, linkerd, istio, or otherwise, let the Valence team know; we are currently working on custom app metrics. This will eventually be automated; all feedback is appreciated!

Add the proxy container to your deployment and set the target address to where your application is normally serving.
Add the envoy proxy container to your deployment and set `SERVICE_PORT_VALUE` to the port your application is normally serving on.

Example: [todo-backend-django/deployment.yaml](./example/workloads/todo-backend-django-valence/deployment.yaml)

@@ -234,14 +233,17 @@ metadata:
...
spec:
containers:
- name: prometheus-proxy
image: valencenet/prometheus-proxy:0.2.11
imagePullPolicy: IfNotPresent
env:
- name: TARGET_ADDRESS
value: "http://127.0.0.1:8000" # the address where your app is serving
args:
- start
- name: envoy
image: valencenet/envoyproxy:latest
imagePullPolicy: IfNotPresent
env:
- name: SERVICE_PORT_VALUE
value: "8000" # this should be the port your app is serving on.
ports:
- containerPort: 8081
name: envoy-sidecar
- containerPort: 8181
name: envoy-metrics
...
```

@@ -260,7 +262,7 @@ spec:

It is also helpful to use readiness and liveness probes to ensure availability.
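A minimal probe sketch for the application container; the `/healthz` path, port, and timings are placeholder values, not from this repo:

```yaml
# Hypothetical readiness/liveness probes for the app container;
# adjust the path, port, and timings to your application.
readinessProbe:
  httpGet:
    path: /healthz
    port: 8000
  initialDelaySeconds: 5
  periodSeconds: 10
livenessProbe:
  httpGet:
    path: /healthz
    port: 8000
  initialDelaySeconds: 15
  periodSeconds: 20
```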

**3) Label your Kubernetes Service for that Deployment for Valence proxy metrics collection and replace your existing service with a Valence-compatible service.**
**3) Label your Kubernetes Service for that Deployment for envoy proxy metrics collection and replace your existing service with a Valence-compatible service.**

Example [todo-backend-django/service.yaml](./example/workloads/todo-backend-django-valence/service.yaml)
Change:
@@ -300,7 +302,7 @@ spec:
# This would be your port you were exposing your application on.
- name: headless # this name is arbitrary and can be changed to anything you want.
port: 80
targetPort: 8081 # this is the port prometheus-proxy is serving on
targetPort: 8081 # this is the port envoy is serving on
# These three lines allow us to scrape application metrics.
- name: prometheus
port: 8181
@@ -409,16 +411,18 @@ You will see:

## Example Workloads

If you want to test out valence on example workloads, we have provided example manifests that you can use. We generate synthetic workloads using our realistic workload generation tool, Majin (see the workload.yaml files). See the `example/workloads` dir for more details.
If you want to test out valence on example workloads, we have provided example manifests that you can use. We generate synthetic workloads using our realistic workload generation tool, Majin (see the workload.yaml files). See the `example/workloads` dir for more details. There are also additional gRPC workloads in `example/workloads/grpc`.

The workloads for testing are:

- todo-backend-django (this is a control workload not using valence)
- todo-backend-django-valence
- grpc (fortune-telling-app)

They will use the following SLO manifests:

- slo-webapps
- slo-grpc

Want to get started quickly with example workloads?

45 changes: 24 additions & 21 deletions example-workloads.yaml
@@ -61,17 +61,17 @@ spec:
app: todo-backend-django-valence
spec:
containers:
- args:
- start
env:
- name: TARGET_ADDRESS
value: http://127.0.0.1:8000
image: valencenet/prometheus-proxy:0.2.11
- env:
- name: SERVICE_PORT_VALUE
value: "8000"
image: valencenet/envoyproxy:0.3.0
imagePullPolicy: IfNotPresent
name: prometheus-proxy
resources:
requests:
cpu: 100m
name: envoy
ports:
- containerPort: 8081
name: envoy-sidecar
- containerPort: 8181
name: envoy-metrics
- env:
- name: PORT
value: "8000"
@@ -121,14 +121,17 @@ spec:
app: todo-backend-django
spec:
containers:
- args:
- start
env:
- name: TARGET_ADDRESS
value: http://127.0.0.1:8000
image: valencenet/prometheus-proxy:0.2.11
- env:
- name: SERVICE_PORT_VALUE
value: "8000"
image: valencenet/envoyproxy:0.3.0
imagePullPolicy: IfNotPresent
name: prometheus-proxy
name: envoy
ports:
- containerPort: 8081
name: envoy-sidecar
- containerPort: 8181
name: envoy-metrics
- env:
- name: PORT
value: "8000"
@@ -172,13 +175,13 @@ spec:
- args:
- attack
- --base-load
- "500"
- "300"
- --period
- "300"
env:
- name: TARGET
value: http://todo-backend-django-valence.default/todos
image: valencenet/majin:0.2.11
image: valencenet/majin:0.3.0
name: majin
restartPolicy: OnFailure
---
@@ -198,13 +201,13 @@
- args:
- attack
- --base-load
- "500"
- "300"
- --period
- "300"
env:
- name: TARGET
value: http://todo-backend-django.default/todos
image: valencenet/majin:0.2.11
image: valencenet/majin:0.3.0
name: majin
restartPolicy: OnFailure
---
35 changes: 35 additions & 0 deletions example/workloads/grpc/deployment.yaml
@@ -0,0 +1,35 @@
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: fortune-teller-app
labels:
k8s-app: fortune-teller-app
slo: slo-webapps
annotations:
valence.io/optimizer.configure: "true"
namespace: default
spec:
replicas: 1
template:
metadata:
labels:
k8s-app: fortune-teller-app
slo: slo-webapps
spec:
containers:
- name: envoy
image: valencenet/envoyproxy:0.3.0
imagePullPolicy: IfNotPresent
env:
- name: SERVICE_PORT_VALUE
value: "50051"
ports:
- containerPort: 8081
name: envoy-sidecar
- containerPort: 8181
name: envoy-metrics
- name: fortune-teller-app
image: quay.io/kubernetes-ingress-controller/grpc-fortune-teller:0.1
ports:
- containerPort: 50051
name: grpc
3 changes: 3 additions & 0 deletions example/workloads/grpc/grpc-load.sh
@@ -0,0 +1,3 @@
#!/usr/bin/env bash
# Basic load test for the gRPC service: port-forward the service, then fire requests.
kubectl port-forward svc/fortune-teller-app 8080:80 &
sleep 2  # give the port-forward a moment to establish
for i in {1..1200}; do grpcurl -v -plaintext localhost:8080 build.stack.fortune.FortuneTeller/Predict; done
17 changes: 17 additions & 0 deletions example/workloads/grpc/service.yaml
@@ -0,0 +1,17 @@
apiVersion: v1
kind: Service
metadata:
name: fortune-teller-app
namespace: default
labels:
valence.net/prometheus: "true"
spec:
selector:
k8s-app: fortune-teller-app
ports:
- name: http2
port: 80
targetPort: 8081
- name: prometheus
port: 8181
targetPort: 8181
13 changes: 13 additions & 0 deletions example/workloads/grpc/slo-grpc.yaml
@@ -0,0 +1,13 @@
apiVersion: optimizer.valence.io/v1alpha1
kind: ServiceLevelObjective
metadata:
name: slo-grpc
spec:
selector:
slo: slo-grpc
objectives:
- type: HTTP
http:
latency:
percentile: 95
responseTime: 100ms
@@ -7,11 +7,11 @@ spec:
spec:
containers:
- name: majin
image: valencenet/majin:0.2.11
image: valencenet/majin:0.3.0
args:
- attack
- --base-load
- "500"
- "300"
- --period
- "300"
env:
4 changes: 2 additions & 2 deletions example/workloads/load-simulations/todo-backend-django.yaml
@@ -7,11 +7,11 @@ spec:
spec:
containers:
- name: majin
image: valencenet/majin:0.2.11
image: valencenet/majin:0.3.0
args:
- attack
- --base-load
- "500"
- "300"
- --period
- "300"
env:
18 changes: 9 additions & 9 deletions example/workloads/todo-backend-django-valence/deployment.yaml
@@ -18,17 +18,17 @@ spec:
spec:
restartPolicy: Always
containers:
- name: prometheus-proxy
image: valencenet/prometheus-proxy:0.2.11
- name: envoy
image: valencenet/envoyproxy:0.3.0
imagePullPolicy: IfNotPresent
env:
- name: TARGET_ADDRESS
value: "http://127.0.0.1:8000"
args:
- start
resources:
requests:
cpu: 100m
- name: SERVICE_PORT_VALUE
value: "8000"
ports:
- containerPort: 8081
name: envoy-sidecar
- containerPort: 8181
name: envoy-metrics
- image: manifoldco/todo-backend-django:latest
imagePullPolicy: IfNotPresent
name: todo-backend-django-valence
15 changes: 9 additions & 6 deletions example/workloads/todo-backend-django/deployment.yaml
@@ -18,14 +18,17 @@ spec:
spec:
restartPolicy: Always
containers:
- name: prometheus-proxy
image: valencenet/prometheus-proxy:0.2.11
- name: envoy
image: valencenet/envoyproxy:0.3.0
imagePullPolicy: IfNotPresent
env:
- name: TARGET_ADDRESS
value: "http://127.0.0.1:8000"
args:
- start
- name: SERVICE_PORT_VALUE
value: "8000"
ports:
- containerPort: 8081
name: envoy-sidecar
- containerPort: 8181
name: envoy-metrics
- image: manifoldco/todo-backend-django:latest
imagePullPolicy: IfNotPresent
name: todo-backend-django

