For latency: we need to ensure that the nginx ingress is either included or excluded consistently across all three setups; including it in all three is probably easier than excluding it. We will measure end-to-end time via the bash script, since we might not be able to get latencies from tracing for the plain K8s Bookinfo setup: the services might only propagate the span context without actually exporting spans. (The OpenTelemetry example might actually export spans.)
Latency observations:
Bookinfo with Prose: Presidio takes about 45ms on a cold start and 20ms on a warm start. OPA takes about 5ms.
For the three setups: we could run multiple Flux clusters, but namespaces are also an option. Each namespace would have the tested (Bookinfo) repo; use pointers in the Kustomization instead of duplicating the tested kustomization.
Use a bash script to measure end-to-end latency, sending requests to the different front-end features programmatically. We can use tracing in the Istio and Istio-with-Prose cases to double-check that the overheads are approximately the sum of the Golang filter span latencies.
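As a rough illustration of what that script needs to do, here is a minimal Python sketch of the same measurement loop (the front-end address, endpoints, and request count are placeholders; the actual script will be bash around curl):

```python
# Sketch of the end-to-end latency measurement loop; the bash script would do
# the equivalent with curl + date. URLs and iteration count are placeholders.
import statistics
import time

import requests

FRONTEND = "http://localhost:8080"  # placeholder: ingress address of the bookinfo productpage
ENDPOINTS = ["/productpage", "/api/v1/products/0/reviews"]  # placeholder front-end features

def measure(url: str, n: int = 100) -> list[float]:
    """Send n sequential requests and return per-request latencies in milliseconds."""
    latencies = []
    for _ in range(n):
        start = time.perf_counter()
        requests.get(url, timeout=10)
        latencies.append((time.perf_counter() - start) * 1000)
    return latencies

for path in ENDPOINTS:
    samples = measure(FRONTEND + path)
    print(f"{path}: median={statistics.median(samples):.1f}ms "
          f"p95={statistics.quantiles(samples, n=20)[18]:.1f}ms")
```

Running it once per namespace (plain, with-envoy, with-filter) gives the three latency distributions to compare.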
Which test repo: Start with Bookinfo, plus one of OT or Online Boutique. It should have services that send HTTP traffic (rather than gRPC). We will need to fork the service source code: services in interpreted languages can be patched dynamically, so we would only need to add the Git patch; for compiled languages we would need to rebuild the services in the forked repo.
Injections: Add a GET and a POST request to some third party. Change the purpose label to quickly check purpose-of-use violations. Possibly also modify some calls to add extra PII items in order to test Presidio; a hypothetical example of such an injection is sketched below.
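For the interpreted-language case, the injected call could be as small as the following Python sketch applied as a Git patch to one of the forked services; the third-party URL, the purpose header name, and the extra PII field are all hypothetical, chosen only to trigger the checks above:

```python
# Hypothetical injection into a forked (interpreted-language) service handler.
# The third-party endpoint, header name, and extra PII field are placeholders.
import requests

def handle_review(review: dict) -> dict:
    # Injected third-party POST: should be flagged as an external data transfer.
    requests.post(
        "https://analytics.example.com/collect",        # hypothetical third party
        json={"email": review.get("email")},             # extra PII item for Presidio to catch
        headers={"x-purpose-of-use": "advertising"},     # changed purpose label for the policy check
        timeout=5,
    )
    return review
```

Keeping the change as a patch in the forked repo leaves the upstream Bookinfo sources untouched.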
VM setup: The first step is a single script that clones our repository. The second step sets up the prerequisites (minikube, kubectl) via Brew or Flux. The third runs the experiments that ultimately produce the graphs. To create the image we can use VirtualBox or Vagrant: Vagrant might make it easier to recreate the image, while VirtualBox might be quicker for a one-time creation. The image will already have the first and second steps done, so the artifact reviewer only needs to run the experiment step.
We could also use Docker images; however, we would then be running Minikube (i.e. Docker) inside Docker, so we need to make sure that works correctly.
Deploy three copies of bookinfo app into different namespaces with three
different networking setups:
- No envoy service proxy (`plain`)
- With envoy service proxy but not our golang filter (`with-envoy`)
- With envoy service proxy and golang filter (`with-filter`)
Related to #99
Bookinfo source code: https://github.com/istio/istio/tree/master/samples/bookinfo
Latency observations:
Presidio: We can pass the span context through headers and use the Zipkin Python library, but the library's internal functions may not be instrumented for tracing: https://microsoft.github.io/presidio/api/analyzer_python/
We can use OpenTelemetry's automatic instrumentation to trace the time taken by the server (https://opentelemetry.io/docs/languages/python/automatic/#configuring-the-agent), but again we won't know how long each of the internal functions takes.
It can potentially be configured to log details (https://github.com/microsoft/presidio/blob/733cca26cfd5d8f4f0cc0be2eaf630fe442fce9c/presidio-analyzer/presidio_analyzer/app_tracer.py#L6), though it is not clear whether those logs include timing.
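Independent of tracing, a minimal sketch that times `analyze()` calls directly would already expose the cold-start vs. warm-start gap noted above (the sample text is arbitrary):

```python
# Sketch: time Presidio analyze() calls directly to separate cold-start
# (model loading, first call) from warm-start latency. Sample text is arbitrary.
import time

from presidio_analyzer import AnalyzerEngine

analyzer = AnalyzerEngine()  # loads the NLP model; dominates the cold start
text = "Contact Jane Doe at jane.doe@example.com or 212-555-0100."

for i in range(5):
    start = time.perf_counter()
    results = analyzer.analyze(text=text, language="en")
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"call {i}: {elapsed_ms:.1f}ms, entities={[r.entity_type for r in results]}")
```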
OPA: Consider whether policy logic can be optimized: https://www.openpolicyagent.org/docs/latest/policy-performance/
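For a quick baseline of the ~5ms figure, here is a minimal sketch that times a single decision through OPA's REST Data API; the policy path and input document are placeholders, not Prose's actual schema:

```python
# Sketch: time one OPA decision via the REST Data API (POST /v1/data/<path>).
# The policy path and input document are placeholders.
import time

import requests

OPA_URL = "http://localhost:8181/v1/data/prose/allow"  # hypothetical policy path

payload = {"input": {"purpose_of_use": "billing", "pii_types": ["EMAIL_ADDRESS"]}}

start = time.perf_counter()
resp = requests.post(OPA_URL, json=payload, timeout=5)
elapsed_ms = (time.perf_counter() - start) * 1000

print(f"decision={resp.json().get('result')} latency={elapsed_ms:.1f}ms")
```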
See OT and Online Boutique examples.
For correctness:
Test cases
Sockshop only uses HTTP
https://github.com/pixie-labs/sock-shop-microservices-demo/blob/master/internal-docs/design.md
Jaeger Tracing - HotROD probably only uses HTTP
https://github.com/jaegertracing/jaeger/tree/main/examples/hotrod
OT uses HTTP for only two services: the Email service (Ruby) and the Quote service (PHP). Both are patchable.
OpenTelemetry Services info:
Architecture: https://opentelemetry.io/docs/demo/architecture/ (only the email and quote services accept HTTP traffic)
https://opentelemetry.io/docs/demo/services/
The Locust LoadGenerator: https://opentelemetry.io/docs/demo/services/load-generator/
Its source code exercises specific front-end functionalities: https://github.com/open-telemetry/opentelemetry-demo/blob/main/src/loadgenerator/locustfile.py
It includes certain PII: https://github.com/open-telemetry/opentelemetry-demo/blob/main/src/loadgenerator/people.json
So we can use it to check whether we detect each of these PII items; see the sketch below.
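A minimal sketch of that check, running Presidio over the values in people.json and reporting which entity types are detected; the assumption that each record is a flat object of string fields is ours, not the demo's documented schema:

```python
# Sketch: feed the load generator's people.json through Presidio and report
# which PII entity types are detected per field. Assumes a list of objects
# with (mostly) string values; non-string fields are skipped.
import json

from presidio_analyzer import AnalyzerEngine

analyzer = AnalyzerEngine()

with open("people.json") as f:
    people = json.load(f)

for person in people:
    for field, value in person.items():
        if not isinstance(value, str):
            continue
        results = analyzer.analyze(text=value, language="en")
        detected = sorted({r.entity_type for r in results})
        print(f"{field}: {value!r} -> {detected or 'not detected'}")
```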
Online Boutique also talks over gRPC
https://github.com/GoogleCloudPlatform/microservices-demo?tab=readme-ov-file#architecture
It also uses a Locust load generator.
Pitstop follows an event-based architecture:
https://github.com/EdwinVW/pitstop/blob/main/src/solution-architecture.png
Jellyfin is a monolith. It might still be meaningful to check it for requests to third parties.
Handling gRPC traffic
MAD IDEA
Envoy has a series of filters that we can use to go from the gRPC client, through our HTTP Golang filter, and back to the gRPC server:
https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/other_protocols/grpc#grpc
https://blog.envoyproxy.io/envoy-and-grpc-web-a-fresh-new-alternative-to-rest-6504ce7eb880
Can modify services to use HTTP instead of gRPC or even just serialize PB messages into JSON.
E.g. serializing PB to JSON in Python https://stackoverflow.com/questions/65242456/convert-protobuf-serialized-messages-to-json-without-precompiling-go-code
E.g. in Golang: https://pkg.go.dev/google.golang.org/protobuf/encoding/protojson
Or alternatively, just take the gRPC request as it is sent over HTTP, assuming its body contains serialized PB.
Then deserialize it dynamically; you will need the proto files, though, and need to know exactly which PB message type to deserialize into:
https://stackoverflow.com/questions/65242456/convert-protobuf-serialized-messages-to-json-without-precompiling-go-code
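Along the lines of the linked answer, a Python sketch of decoding a raw protobuf payload to JSON at runtime from a compiled descriptor set (produced with `protoc --include_imports --descriptor_set_out=...`); the descriptor file name and message type are placeholders, and the payload is assumed to already have the 5-byte gRPC length-prefix frame stripped:

```python
# Sketch: dynamically decode a serialized protobuf payload to JSON given a
# descriptor set file. File name and fully-qualified message name are placeholders.
from google.protobuf import descriptor_pb2, descriptor_pool, json_format, message_factory

def decode_to_json(payload: bytes, descriptor_set_path: str, message_name: str) -> str:
    # Load the compiled descriptor set and register every file in a pool.
    fds = descriptor_pb2.FileDescriptorSet()
    with open(descriptor_set_path, "rb") as f:
        fds.ParseFromString(f.read())

    pool = descriptor_pool.DescriptorPool()
    for fd in fds.file:
        pool.Add(fd)

    # Build a message class at runtime (protobuf >= 4.x; older versions use
    # message_factory.MessageFactory().GetPrototype(...)).
    msg_descriptor = pool.FindMessageTypeByName(message_name)
    msg_cls = message_factory.GetMessageClass(msg_descriptor)

    msg = msg_cls()
    msg.ParseFromString(payload)
    return json_format.MessageToJson(msg)

# Example (placeholders): decode_to_json(body, "bundle.pb", "hipstershop.PlaceOrderRequest")
```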