diff --git a/README.md b/README.md index eedbe3d..68c3218 100644 --- a/README.md +++ b/README.md @@ -16,7 +16,7 @@ ## How it works -Valence is based on the notion of Declarative Performance. We believe you should be able to declare performance objectives and have an operator (Valence) which figures out how to autoscale, right size, and pack your Kubernetes resources. In contrast, current Kubernetes scaling and performance management tools are largely imperative requiring overhead to determine right size, autoscaling metrics, related configuration. Since code, traffic, and node utilization changes - we believe this should be managed automatically by an operator, rather than by manual calculation and intervention. We also think the right unit of scaling isn't utilization or metrics thresholds but based, dynamically, on how applications behavour (utilization) responds to its use (such as HTTP Requests). +Valence is based on the notion of Declarative Performance. We believe you should be able to declare performance objectives and have an operator (Valence) that figures out how to autoscale, right-size, and pack your Kubernetes resources. In contrast, current Kubernetes scaling and performance management tools are largely imperative, requiring overhead to determine right sizes, autoscaling metrics, and related configuration. Since code, traffic, and node utilization change, we believe this should be managed automatically by an operator rather than by manual calculation and intervention. We also think the right unit of scaling isn't a utilization or metrics threshold, but should be based, dynamically, on how an application's behaviour (utilization) responds to its use (such as HTTP or gRPC requests). ## Declarative Performance: The Service Level Objective Manifest @@ -132,7 +132,7 @@ make valence LICENSE= kubectl apply -f valence.yaml ``` -- **Metered** by adding your license key you provisioned through during sign up on manifold and applying valence. 
+- **Licensed** by adding the license key you provisioned during sign up on Manifold and applying valence. ``` make valence LICENSE= kubectl apply -f valence.yaml ``` @@ -145,7 +145,6 @@ Valence can be removed by deleting valence.yaml kubectl delete -f valence.yaml ``` - Components installed in valence-system namespace: - Prometheus (Valence’s own managed Prometheus) @@ -184,7 +183,7 @@ spec: throughput: 500 ``` -**2) Label the deployment with that SLO and add Prometheus Proxy:** +**2) Label the deployment with that SLO and add Envoy:** #### Selecting SLO @@ -211,9 +210,9 @@ metadata: #### Adding Sidecar -Valence collects application metrics through a sidecar. If you’d prefer to collect metrics based on your ingress, load-balancer, envoy containers, linkerd, istio or otherwise, let the Valence team know. This will eventually be automated, all feedback is appreciated! +Valence collects application metrics through a sidecar, [envoy](https://www.envoyproxy.io/). If you’d prefer to collect metrics based on your ingress, load-balancer, custom envoy containers, linkerd, istio or otherwise, let the Valence team know; we are currently working on custom app metrics. This will eventually be automated, and all feedback is appreciated! -Add the proxy container to your deployment and set the target address to where your application is normally serving. +Add the envoy proxy container to your deployment and set `SERVICE_PORT_VALUE` to the port your application is normally serving on. Example: [todo-backend-django/deployment.yaml](./example/workloads/todo-backend-django-valence/deployment.yaml) @@ -234,14 +233,17 @@ metadata: ... 
spec: containers: - - name: prometheus-proxy - image: valencenet/prometheus-proxy:0.2.11 - imagePullPolicy: IfNotPresent - env: - - name: TARGET_ADDRESS - value: "http://127.0.0.1:8000" # where your app is serving on - args: - - start + - name: envoy + image: valencenet/envoyproxy:0.3.0 + imagePullPolicy: IfNotPresent + env: + - name: SERVICE_PORT_VALUE + value: "8000" # this should be the port your app is serving on. + ports: + - containerPort: 8081 + name: envoy-sidecar + - containerPort: 8181 + name: envoy-metrics ... ``` @@ -260,7 +262,7 @@ spec: It is also helpful if you are using readiness and liveness probes to ensure availablity. -**3) Label your Kubernetes Service for that Deployment with the Valence proxy collection and replace your existing service with a Valence compatible service.** +**3) Label your Kubernetes Service for that Deployment with the envoy metrics collection label and replace your existing service with a Valence-compatible service.** Example [todo-backend-django/service.yaml](./example/workloads/todo-backend-django-valence/service.yaml) Change: @@ -300,7 +302,7 @@ spec: # This would be your port you were exposing your application on. - name: headless # this name is arbitrary and can be changed to anything you want. port: 80 - targetPort: 8081 # this is the port prometheus-proxy is serving on + targetPort: 8081 # this is the port envoy is serving on # These three lines allow us to scrape application metrics. - name: prometheus port: 8181 @@ -409,16 +411,18 @@ You will see: ## Example Workloads -If you want to test out valence on example workloads we have provided examples manifests that you can use. We generate synthetic workloads using our realistic workload generation tool Majin (see the workload.yaml files). See the `example/workloads` dir for more details. +If you want to test out valence on example workloads, we have provided example manifests that you can use. 
We generate synthetic workloads using our realistic workload generation tool Majin (see the workload.yaml files). See the `example/workloads` dir for more details. There are also additional gRPC workloads in `example/workloads/grpc`. The workloads for testing are: - todo-backend-django (this is a control workload not using valence) - todo-backend-django-valence +- grpc (fortune-teller-app) They will use the following SLO manifests: - slo-webapps +- slo-grpc Want to get started quickly with example workloads? diff --git a/example-workloads.yaml b/example-workloads.yaml index fb35191..2aedc3d 100644 --- a/example-workloads.yaml +++ b/example-workloads.yaml @@ -61,17 +61,17 @@ spec: app: todo-backend-django-valence spec: containers: - - args: - - start - env: - - name: TARGET_ADDRESS - value: http://127.0.0.1:8000 - image: valencenet/prometheus-proxy:0.2.11 + - env: + - name: SERVICE_PORT_VALUE + value: "8000" + image: valencenet/envoyproxy:0.3.0 imagePullPolicy: IfNotPresent - name: prometheus-proxy - resources: - requests: - cpu: 100m + name: envoy + ports: + - containerPort: 8081 + name: envoy-sidecar + - containerPort: 8181 + name: envoy-metrics - env: - name: PORT value: "8000" @@ -121,14 +121,17 @@ spec: app: todo-backend-django spec: containers: - - args: - - start - env: - - name: TARGET_ADDRESS - value: http://127.0.0.1:8000 - image: valencenet/prometheus-proxy:0.2.11 + - env: + - name: SERVICE_PORT_VALUE + value: "8000" + image: valencenet/envoyproxy:0.3.0 imagePullPolicy: IfNotPresent - name: prometheus-proxy + name: envoy + ports: + - containerPort: 8081 + name: envoy-sidecar + - containerPort: 8181 + name: envoy-metrics - env: - name: PORT value: "8000" @@ -172,13 +175,13 @@ spec: - args: - attack - --base-load - - "500" + - "300" - --period - "300" env: - name: TARGET value: http://todo-backend-django-valence.default/todos - image: valencenet/majin:0.2.11 + image: valencenet/majin:0.3.0 name: majin restartPolicy: OnFailure --- @@ -198,13 +201,13 @@ 
spec: - args: - attack - --base-load - - "500" + - "300" - --period - "300" env: - name: TARGET value: http://todo-backend-django.default/todos - image: valencenet/majin:0.2.11 + image: valencenet/majin:0.3.0 name: majin restartPolicy: OnFailure --- diff --git a/example/workloads/grpc/deployment.yaml b/example/workloads/grpc/deployment.yaml new file mode 100644 index 0000000..52a4f06 --- /dev/null +++ b/example/workloads/grpc/deployment.yaml @@ -0,0 +1,35 @@ +apiVersion: extensions/v1beta1 +kind: Deployment +metadata: + name: fortune-teller-app + labels: + k8s-app: fortune-teller-app + slo: slo-grpc + annotations: + valence.io/optimizer.configure: "true" + namespace: default +spec: + replicas: 1 + template: + metadata: + labels: + k8s-app: fortune-teller-app + slo: slo-grpc + spec: + containers: + - name: envoy + image: valencenet/envoyproxy:0.3.0 + imagePullPolicy: IfNotPresent + env: + - name: SERVICE_PORT_VALUE + value: "50051" + ports: + - containerPort: 8081 + name: envoy-sidecar + - containerPort: 8181 + name: envoy-metrics + - name: fortune-teller-app + image: quay.io/kubernetes-ingress-controller/grpc-fortune-teller:0.1 + ports: + - containerPort: 50051 + name: grpc \ No newline at end of file diff --git a/example/workloads/grpc/grpc-load.sh b/example/workloads/grpc/grpc-load.sh new file mode 100644 index 0000000..1e8c4d2 --- /dev/null +++ b/example/workloads/grpc/grpc-load.sh @@ -0,0 +1,3 @@ +# basic bash script for load testing the grpc service. 
+kubectl port-forward svc/fortune-teller-app 8080:80 & +for i in {1..1200}; do grpcurl -v -plaintext localhost:8080 build.stack.fortune.FortuneTeller/Predict; done diff --git a/example/workloads/grpc/service.yaml b/example/workloads/grpc/service.yaml new file mode 100644 index 0000000..b27d441 --- /dev/null +++ b/example/workloads/grpc/service.yaml @@ -0,0 +1,17 @@ +apiVersion: v1 +kind: Service +metadata: + name: fortune-teller-app + namespace: default + labels: + valence.net/prometheus: "true" +spec: + selector: + k8s-app: fortune-teller-app + ports: + - name: http2 + port: 80 + targetPort: 8081 + - name: prometheus + port: 8181 + targetPort: 8181 \ No newline at end of file diff --git a/example/workloads/grpc/slo-grpc.yaml b/example/workloads/grpc/slo-grpc.yaml new file mode 100644 index 0000000..a33ab47 --- /dev/null +++ b/example/workloads/grpc/slo-grpc.yaml @@ -0,0 +1,13 @@ +apiVersion: optimizer.valence.io/v1alpha1 +kind: ServiceLevelObjective +metadata: + name: slo-grpc +spec: + selector: + slo: slo-grpc + objectives: + - type: HTTP + http: + latency: + percentile: 95 + responseTime: 100ms diff --git a/example/workloads/load-simulations/todo-backend-django-valence.yaml b/example/workloads/load-simulations/todo-backend-django-valence.yaml index 70e2c3b..83a8cac 100644 --- a/example/workloads/load-simulations/todo-backend-django-valence.yaml +++ b/example/workloads/load-simulations/todo-backend-django-valence.yaml @@ -7,11 +7,11 @@ spec: spec: containers: - name: majin - image: valencenet/majin:0.2.11 + image: valencenet/majin:0.3.0 args: - attack - --base-load - - "500" + - "300" - --period - "300" env: diff --git a/example/workloads/load-simulations/todo-backend-django.yaml b/example/workloads/load-simulations/todo-backend-django.yaml index c1bfea2..5b2500d 100644 --- a/example/workloads/load-simulations/todo-backend-django.yaml +++ b/example/workloads/load-simulations/todo-backend-django.yaml @@ -7,11 +7,11 @@ spec: spec: containers: - name: majin - image: 
valencenet/majin:0.2.11 + image: valencenet/majin:0.3.0 args: - attack - --base-load - - "500" + - "300" - --period - "300" env: diff --git a/example/workloads/todo-backend-django-valence/deployment.yaml b/example/workloads/todo-backend-django-valence/deployment.yaml index 4bc1892..2b513aa 100644 --- a/example/workloads/todo-backend-django-valence/deployment.yaml +++ b/example/workloads/todo-backend-django-valence/deployment.yaml @@ -18,17 +18,17 @@ spec: spec: restartPolicy: Always containers: - - name: prometheus-proxy - image: valencenet/prometheus-proxy:0.2.11 + - name: envoy + image: valencenet/envoyproxy:0.3.0 imagePullPolicy: IfNotPresent env: - - name: TARGET_ADDRESS - value: "http://127.0.0.1:8000" - args: - - start - resources: - requests: - cpu: 100m + - name: SERVICE_PORT_VALUE + value: "8000" + ports: + - containerPort: 8081 + name: envoy-sidecar + - containerPort: 8181 + name: envoy-metrics - image: manifoldco/todo-backend-django:latest imagePullPolicy: IfNotPresent name: todo-backend-django-valence diff --git a/example/workloads/todo-backend-django/deployment.yaml b/example/workloads/todo-backend-django/deployment.yaml index 3444acd..4217729 100644 --- a/example/workloads/todo-backend-django/deployment.yaml +++ b/example/workloads/todo-backend-django/deployment.yaml @@ -18,14 +18,17 @@ spec: spec: restartPolicy: Always containers: - - name: prometheus-proxy - image: valencenet/prometheus-proxy:0.2.11 + - name: envoy + image: valencenet/envoyproxy:0.3.0 imagePullPolicy: IfNotPresent env: - - name: TARGET_ADDRESS - value: "http://127.0.0.1:8000" - args: - - start + - name: SERVICE_PORT_VALUE + value: "8000" + ports: + - containerPort: 8081 + name: envoy-sidecar + - containerPort: 8181 + name: envoy-metrics - image: manifoldco/todo-backend-django:latest imagePullPolicy: IfNotPresent name: todo-backend-django diff --git a/manifests/valence/grafana/dashboard-valence.yaml b/manifests/valence/grafana/dashboard-valence.yaml index 414ef4f..92ae44e 100644 --- 
a/manifests/valence/grafana/dashboard-valence.yaml +++ b/manifests/valence/grafana/dashboard-valence.yaml @@ -22,7 +22,7 @@ data: "gnetId": null, "graphTooltip": 0, "id": 1, - "iteration": 1551793537194, + "iteration": 1559313049824, "links": [], "panels": [ { @@ -314,13 +314,19 @@ data: "steppedLine": false, "targets": [ { - "expr": "sum(increase(promproxy_metric_handler_detailed_requests_count{service=\"$deployment\"}[1m])) / 60", + "expr": "sum(rate(envoy_http_downstream_rq_total{service=\"$deployment\"}[1m]))", "format": "time_series", "instant": false, "interval": "", "intervalFactor": 1, "legendFormat": "HTTP Queries Per Second", "refId": "A" + }, + { + "expr": "", + "format": "time_series", + "intervalFactor": 1, + "refId": "B" } ], "thresholds": [], @@ -400,13 +406,19 @@ data: "steppedLine": false, "targets": [ { - "expr": "avg(rate(promproxy_metric_handler_detailed_requests{service=\"$deployment\", quantile=\"$LatencyPercentile\", code!=\"502\"}[1m]))", + "expr": "avg(histogram_quantile($LatencyPercentile, sum(rate(envoy_http_downstream_rq_time_bucket{service=\"$deployment\"}[1m])) by(le, pod)))", "format": "time_series", "instant": false, "interval": "", "intervalFactor": 1, "legendFormat": "HTTP Request Latency", "refId": "A" + }, + { + "expr": "valence_slo_http_latency{name=\"$deployment\"}", + "format": "time_series", + "intervalFactor": 1, + "refId": "B" } ], "thresholds": [], @@ -428,7 +440,7 @@ data: }, "yaxes": [ { - "format": "s", + "format": "ms", "label": "", "logBase": 1, "max": null, @@ -556,8 +568,9 @@ data: "allValue": null, "current": { "selected": false, - "text": "todo-backend-java", - "value": "todo-backend-java" + "tags": [], + "text": "todo-backend-django", + "value": "todo-backend-django" }, "datasource": "DS_PROM_VALENCE", "hide": 0, @@ -566,7 +579,7 @@ data: "multi": false, "name": "deployment", "options": [], - "query": "label_values(promproxy_metric_handler_detailed_requests, service)", + "query": 
"label_values(envoy_http_downstream_rq_total, service)", "refresh": 1, "regex": "", "sort": 0, @@ -580,30 +593,49 @@ data: "allValue": null, "current": { "selected": true, + "tags": [], "text": "0.95", "value": "0.95" }, - "datasource": "DS_PROM_VALENCE", "hide": 0, "includeAll": false, "label": "Latency Percentile", "multi": false, "name": "LatencyPercentile", - "options": [], - "query": "label_values(promproxy_metric_handler_detailed_requests, quantile)", - "refresh": 1, - "regex": "", - "sort": 0, - "tagValuesQuery": "", - "tags": [], - "tagsQuery": "", - "type": "query", - "useTags": false + "options": [ + { + "selected": false, + "text": "0.5", + "value": "0.5" + }, + { + "selected": false, + "text": "0.75", + "value": "0.75" + }, + { + "selected": false, + "text": "0.9", + "value": "0.9" + }, + { + "selected": true, + "text": "0.95", + "value": "0.95" + }, + { + "selected": false, + "text": "0.99", + "value": "0.99" + } + ], + "query": "0.5, 0.75, 0.9, 0.95, 0.99", + "type": "custom" } ] }, "time": { - "from": "now-5m", + "from": "now-1h", "to": "now" }, "timepicker": { diff --git a/manifests/valence/operator/kustomization.yaml b/manifests/valence/operator/kustomization.yaml index 0f3773e..3c14a75 100644 --- a/manifests/valence/operator/kustomization.yaml +++ b/manifests/valence/operator/kustomization.yaml @@ -1,7 +1,7 @@ commonLabels: app.kubernetes.io/name: valence app.kubernetes.io/component: operator - app.kubernetes.io/version: 0.2.11 + app.kubernetes.io/version: 0.3.0 resources: - crds.yaml - rbac.yaml @@ -10,4 +10,4 @@ resources: - namespace.yaml imageTags: - name: valencenet/valence - newTag: 0.2.11 + newTag: 0.3.0 diff --git a/manifests/valence/prometheus/config-map.yaml b/manifests/valence/prometheus/config-map.yaml index 210ace5..f4d875b 100644 --- a/manifests/valence/prometheus/config-map.yaml +++ b/manifests/valence/prometheus/config-map.yaml @@ -117,13 +117,13 @@ data: - job_name: prometheus-valence scrape_interval: 5s scrape_timeout: 5s - 
metrics_path: /metrics + metrics_path: /stats/prometheus scheme: http kubernetes_sd_configs: - role: endpoints metric_relabel_configs: - source_labels: [__name__] - regex: (?i)(promproxy_metric_handler_detailed_requests_count|promproxy_metric_handler_detailed_requests) + regex: (?i)(envoy_http_downstream_rq_time_bucket|envoy_http_downstream_rq_total) action: keep relabel_configs: - source_labels: [__meta_kubernetes_service_label_valence_net_prometheus] diff --git a/valence.yaml b/valence.yaml index a84ebbb..9b9bdad 100644 --- a/valence.yaml +++ b/valence.yaml @@ -5,7 +5,7 @@ metadata: app.kubernetes.io/component: operator app.kubernetes.io/name: valence app.kubernetes.io/part-of: valence - app.kubernetes.io/version: 0.2.11 + app.kubernetes.io/version: 0.3.0 name: valence-system --- apiVersion: apiextensions.k8s.io/v1beta1 @@ -15,7 +15,7 @@ metadata: app.kubernetes.io/component: operator app.kubernetes.io/name: valence app.kubernetes.io/part-of: valence - app.kubernetes.io/version: 0.2.11 + app.kubernetes.io/version: 0.3.0 name: servicelevelobjectives.optimizer.valence.io spec: group: optimizer.valence.io @@ -41,7 +41,7 @@ metadata: app.kubernetes.io/component: operator app.kubernetes.io/name: valence app.kubernetes.io/part-of: valence - app.kubernetes.io/version: 0.2.11 + app.kubernetes.io/version: 0.3.0 name: valence-operator namespace: valence-system --- @@ -83,7 +83,7 @@ metadata: app.kubernetes.io/component: operator app.kubernetes.io/name: valence app.kubernetes.io/part-of: valence - app.kubernetes.io/version: 0.2.11 + app.kubernetes.io/version: 0.3.0 name: valence:optimization-operator rules: - apiGroups: @@ -153,7 +153,7 @@ metadata: app.kubernetes.io/component: operator app.kubernetes.io/name: valence app.kubernetes.io/part-of: valence - app.kubernetes.io/version: 0.2.11 + app.kubernetes.io/version: 0.3.0 name: valence:optimization-operator roleRef: apiGroup: rbac.authorization.k8s.io @@ -185,7 +185,7 @@ data: "gnetId": null, "graphTooltip": 0, "id": 1, - 
"iteration": 1551793537194, + "iteration": 1559313049824, "links": [], "panels": [ { @@ -477,13 +477,19 @@ data: "steppedLine": false, "targets": [ { - "expr": "sum(increase(promproxy_metric_handler_detailed_requests_count{service=\"$deployment\"}[1m])) / 60", + "expr": "sum(rate(envoy_http_downstream_rq_total{service=\"$deployment\"}[1m]))", "format": "time_series", "instant": false, "interval": "", "intervalFactor": 1, "legendFormat": "HTTP Queries Per Second", "refId": "A" + }, + { + "expr": "", + "format": "time_series", + "intervalFactor": 1, + "refId": "B" } ], "thresholds": [], @@ -563,13 +569,19 @@ data: "steppedLine": false, "targets": [ { - "expr": "avg(rate(promproxy_metric_handler_detailed_requests{service=\"$deployment\", quantile=\"$LatencyPercentile\", code!=\"502\"}[1m]))", + "expr": "avg(histogram_quantile($LatencyPercentile, sum(rate(envoy_http_downstream_rq_time_bucket{service=\"$deployment\"}[1m])) by(le, pod)))", "format": "time_series", "instant": false, "interval": "", "intervalFactor": 1, "legendFormat": "HTTP Request Latency", "refId": "A" + }, + { + "expr": "valence_slo_http_latency{name=\"$deployment\"}", + "format": "time_series", + "intervalFactor": 1, + "refId": "B" } ], "thresholds": [], @@ -591,7 +603,7 @@ data: }, "yaxes": [ { - "format": "s", + "format": "ms", "label": "", "logBase": 1, "max": null, @@ -719,8 +731,9 @@ data: "allValue": null, "current": { "selected": false, - "text": "todo-backend-java", - "value": "todo-backend-java" + "tags": [], + "text": "todo-backend-django", + "value": "todo-backend-django" }, "datasource": "DS_PROM_VALENCE", "hide": 0, @@ -729,7 +742,7 @@ data: "multi": false, "name": "deployment", "options": [], - "query": "label_values(promproxy_metric_handler_detailed_requests, service)", + "query": "label_values(envoy_http_downstream_rq_total, service)", "refresh": 1, "regex": "", "sort": 0, @@ -743,30 +756,49 @@ data: "allValue": null, "current": { "selected": true, + "tags": [], "text": "0.95", 
"value": "0.95" }, - "datasource": "DS_PROM_VALENCE", "hide": 0, "includeAll": false, "label": "Latency Percentile", "multi": false, "name": "LatencyPercentile", - "options": [], - "query": "label_values(promproxy_metric_handler_detailed_requests, quantile)", - "refresh": 1, - "regex": "", - "sort": 0, - "tagValuesQuery": "", - "tags": [], - "tagsQuery": "", - "type": "query", - "useTags": false + "options": [ + { + "selected": false, + "text": "0.5", + "value": "0.5" + }, + { + "selected": false, + "text": "0.75", + "value": "0.75" + }, + { + "selected": false, + "text": "0.9", + "value": "0.9" + }, + { + "selected": true, + "text": "0.95", + "value": "0.95" + }, + { + "selected": false, + "text": "0.99", + "value": "0.99" + } + ], + "query": "0.5, 0.75, 0.9, 0.95, 0.99", + "type": "custom" } ] }, "time": { - "from": "now-5m", + "from": "now-1h", "to": "now" }, "timepicker": { @@ -955,13 +987,13 @@ data: - job_name: prometheus-valence scrape_interval: 5s scrape_timeout: 5s - metrics_path: /metrics + metrics_path: /stats/prometheus scheme: http kubernetes_sd_configs: - role: endpoints metric_relabel_configs: - source_labels: [__name__] - regex: (?i)(promproxy_metric_handler_detailed_requests_count|promproxy_metric_handler_detailed_requests) + regex: (?i)(envoy_http_downstream_rq_time_bucket|envoy_http_downstream_rq_total) action: keep relabel_configs: - source_labels: [__meta_kubernetes_service_label_valence_net_prometheus] @@ -1102,7 +1134,7 @@ metadata: app.kubernetes.io/component: operator app.kubernetes.io/name: valence app.kubernetes.io/part-of: valence - app.kubernetes.io/version: 0.2.11 + app.kubernetes.io/version: 0.3.0 name: optimization-operator namespace: valence-system spec: @@ -1114,7 +1146,7 @@ spec: app.kubernetes.io/component: operator app.kubernetes.io/name: valence app.kubernetes.io/part-of: valence - app.kubernetes.io/version: 0.2.11 + app.kubernetes.io/version: 0.3.0 type: NodePort --- apiVersion: v1 @@ -1208,7 +1240,7 @@ metadata: 
app.kubernetes.io/component: operator app.kubernetes.io/name: valence app.kubernetes.io/part-of: valence - app.kubernetes.io/version: 0.2.11 + app.kubernetes.io/version: 0.3.0 name: optimization-operator namespace: valence-system spec: @@ -1218,14 +1250,14 @@ spec: app.kubernetes.io/component: operator app.kubernetes.io/name: valence app.kubernetes.io/part-of: valence - app.kubernetes.io/version: 0.2.11 + app.kubernetes.io/version: 0.3.0 template: metadata: labels: app.kubernetes.io/component: operator app.kubernetes.io/name: valence app.kubernetes.io/part-of: valence - app.kubernetes.io/version: 0.2.11 + app.kubernetes.io/version: 0.3.0 spec: containers: - args: @@ -1239,7 +1271,7 @@ spec: value: "20" - name: PROMETHEUS_URL value: http://prometheus-valence.valence-system.svc:9090 - image: valencenet/valence:0.2.11 + image: valencenet/valence:0.3.0 imagePullPolicy: Always name: optimization-operator resources:
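
A note for reviewers on the dashboard change: the old `promproxy_metric_handler_detailed_requests` metric exposed pre-computed quantiles, while Envoy exposes raw histogram buckets, so the new latency panel derives percentiles with `histogram_quantile`. A minimal standalone form of the new query, assuming the `service` label and the `todo-backend-django` example deployment used elsewhere in this diff, is:

```
# 95th-percentile request latency per deployment, computed from Envoy's
# downstream request-time histogram buckets and averaged across pods:
avg(
  histogram_quantile(0.95,
    sum(rate(envoy_http_downstream_rq_time_bucket{service="todo-backend-django"}[1m])) by (le, pod)
  )
)
```

This is also why the panel's y-axis format changes from `s` to `ms`: Envoy records `envoy_http_downstream_rq_time` in milliseconds, whereas the old proxy metric was in seconds.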