diff --git a/docs/misc/inter-thread-context-propagation.md b/docs/misc/inter-thread-context-propagation.md
index 54882f28334f..51d043f89817 100644
--- a/docs/misc/inter-thread-context-propagation.md
+++ b/docs/misc/inter-thread-context-propagation.md
@@ -4,109 +4,114 @@
 Take a look at the following two pseudo-code snippets (see below for explanations).
 
-```
-Executor pool = Executors.newFixedThreadPool(10)
+```java
+Executor pool = Executors.newFixedThreadPool(10);
 
 public void doGet(HttpServletRequest request, HttpServletResponse response) {
-  Future f1 = pool.submit(() -> {
-    return userRepository.queryShippingAddress(request)
-  })
-  Future f2 = pool.submit(() -> {
-    return warehouse.currentState(request)
-  })
-  writeResponse(response, f1.get(), f2.get())
+  Future f1 = pool.submit(() -> {
+    return userRepository.queryShippingAddress(request);
+  });
+  Future f2 = pool.submit(() -> {
+    return warehouse.currentState(request);
+  });
+  writeResponse(response, f1.get(), f2.get());
 }
 ```
 
-```
-Executor pool = Executors.newFixedThreadPool(10)
+```java
+Executor pool = Executors.newFixedThreadPool(10);
 
 public void doGet(HttpServletRequest request, HttpServletResponse response) {
-  final AsyncContext acontext = request.startAsync();
-  acontext.start(() -> {
-    String address = userRepository.queryShippingAddress(request)
-    HttpServletResponse response = acontext.getResponse();
-    writeResponse(response, address)
-    acontext.complete();
-  }
+  final AsyncContext asyncContext = request.startAsync();
+  asyncContext.start(() -> {
+    String address = userRepository.queryShippingAddress(request);
+    HttpServletResponse response = asyncContext.getResponse();
+    writeResponse(response, address);
+    asyncContext.complete();
+  });
 }
 ```
 
-In both cases request processing requires some potentially long operation and application developer
-wants to do them off the main thread. In the first case this hand-off between request accepting thread
-and request processing thread happens manually, by submitting work into some thread pool.
-In the second case it is the framework that handles separate thread pool and passing work to it.
+In both cases, the request processing requires some potentially long operations and the application
+developer wants to do them off the main thread. In the first case this hand-off between the request
+accepting thread and the request processing thread happens manually by submitting work into some
+thread pool. In the second case it is the framework that handles the separate thread pool and
+passes work to it.
 
-In cases like this proper tracing solution should still combine into a single trace all the work
-required for request processing, regardless in what thread that work happened. With proper
-parent-child relationship between span: span representing shipping address query should be the child
-of the span which denotes accepting HTTP request.
+In cases like this, a proper tracing solution should still combine all the work required for request
+processing into a single trace, regardless of what thread that work happened on. With a proper
+parent-child relationship between spans, the span representing the shipping address query should be
+the child of the span which denotes accepting the HTTP request.
 
 ## The solution
 
-Java auto instrumentation uses an obvious solution to the requirement above: we attach current execution
-context (represented in the code by `Context`) with each `Runnable`, `Callable` and `ForkJoinTask`.
-"Current" means the context active on the thread which calls `Executor.execute` (and its analogues
-such as `submit`, `invokeAll` etc) at the moment of that call. Whenever some other thread starts
-actual execution of that `Runnable` (or `Callable` or `ForkJoinTask`), that context get restored
-on that thread for the duration of the execution.
This can be illustrated by the following pseudo-code:
-
-```
- var job = () -> {
-   try(Scope scope = this.context.makeCurrent()) {
-     return userRepository.queryShippingAddress(request)
-   }}
- job.context = Context.current()
- Future f1 = pool.submit()
-
+Java auto instrumentation uses an obvious solution to the requirement above: we attach the current
+execution context (represented in the code by `Context`) with each `Runnable`, `Callable` and
+`ForkJoinTask`. "Current" means the context that is active on the thread which calls
+`Executor.execute` (and its analogues such as `submit`, `invokeAll`, etc.) at the moment of the
+call. Whenever some other thread starts the actual execution of the `Runnable` (or `Callable` or
+`ForkJoinTask`), that context gets restored on that thread for the duration of the execution. This
+can be illustrated by the following pseudo-code:
+
+```java
+var job = () -> {
+  try (Scope scope = this.context.makeCurrent()) {
+    return userRepository.queryShippingAddress(request);
+  }
+};
+job.context = Context.current();
+Future f1 = pool.submit(job);
 ```
 
 ## The drawback
 
-Here is a simplified example of what async servlet processing may look like
+Here is a simplified example of what async servlet processing may look like:
 
-```
+```java
 protected void service(HttpServletRequest req, HttpServletResponse resp) {
-  //This method is instrumented and we start new scope here
-  AsyncContext context = req.startAsync()
-  // When the runnable below is being submitted by servlet engine to an executor service
-  // it will capture the current context (together with the current span) with it
-  context.start {
-    // When Runnable starts, we reactive the captured context
-    // So this method is executed with the same context as the original "service" method
-    resp.writer.print("Hello world!")
-    context.complete()
-  }
+  // This method is instrumented and we start new scope here
+  AsyncContext context = req.startAsync();
+  // When the runnable below is being submitted by the servlet engine to an executor service
+  // it will capture the current context (together with the current span) with it
+  context.start(() -> {
+    // When Runnable starts, we reactivate the captured context
+    // So this method is executed with the same context as the original "service" method
+    resp.getWriter().print("Hello world!");
+    context.complete();
+  });
 }
 ```
 
-If we now take a look inside `context.complete` method from above it may be implemented like this:
+If we now take a look inside the `context.complete` method from above, it may be implemented like
+this:
 
-```
-//Here we still have the same context from above active
-//It gets attached to this new runnable
+```java
+// Here we still have the same active context from above.
+// It then gets attached to this new runnable
 pool.submit(new AcceptRequestRunnable() {
-// The same context from above is propagated here as well
-// Thus new reqeust processing will start while having a context active with some span inside
-// That span will be used as parent spans for new spans created for a new request
-  ...
-})
+  // The same context from above is propagated here as well
+  // Thus new request processing will start while having a context active with some span inside
+  // That span will be used as the parent span for new spans created for a new request
+  ...
+});
 ```
 
-This means that mechanism described in the previous section will propagate the execution context
-of one request processing to a thread accepting some next, unrelated, request.
-This will result in spans representing the accepting and processing of the second request will join
-the same trace as those of the first span. This mistakenly correlates unrelated requests and may lead
-to huge traces being active for hours and hours.
+This means that the mechanism described in the previous section can inadvertently propagate the
+execution context of one request to a thread handling an entirely unrelated request. As a result,
+the spans representing the acceptance and processing of the second request may be incorrectly linked
+to the same trace as those of the first request. This erroneous correlation of unrelated requests
+can lead to excessively large traces that remain active for extended periods, potentially lasting
+hours.
 
-In addition this makes some of our tests extremely flaky.
+In addition, this makes some of our tests extremely flaky.
 
 ## The currently accepted trade-offs
 
-We acknowledge the problem with too active context propagation. We still think that out of the box
-support for asynchronous multi-threaded traces is very important. We have diagnostics in place to
-help us with detecting when we too eagerly propagate the execution context too far. We hope to
-gradually find framework-specific countermeasures to such problem and solve them one by one.
+We recognize the issue of overly aggressive context propagation. However, we believe that providing
+out-of-the-box support for asynchronous multi-threaded traces is crucial. To address this, we have
+implemented diagnostics to help detect instances where the execution context is propagated too
+eagerly. Our goal is to gradually identify and implement framework-specific countermeasures to
+address these issues, resolving them one by one.
 
-In the meantime, processing new incoming request in the given JVM and creating new `SERVER` span
-always starts with a clean context.
+In the meantime, processing a new incoming request within the given JVM and creating a new `SERVER`
+span will always begin with a clean context.
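
For reviewers who want to see the capture-and-restore mechanism from "The solution" in isolation,
here is a self-contained sketch using only JDK primitives. A `ThreadLocal<String>` stands in for the
real `Context` storage, and `wrap` plays the role of the instrumentation that decorates every
submitted `Runnable`; the class and method names (`ContextPropagationSketch`, `wrap`, `demo`) are
illustrative only, not the actual agent code.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ContextPropagationSketch {

  // Stand-in for the real Context storage: one value per thread.
  static final ThreadLocal<String> CONTEXT = ThreadLocal.withInitial(() -> "none");

  // Capture the submitter's context now; restore it around the task body later.
  static Runnable wrap(Runnable task) {
    String captured = CONTEXT.get(); // the "current" context at submit time
    return () -> {
      String previous = CONTEXT.get();
      CONTEXT.set(captured); // restore for the duration of the execution
      try {
        task.run();
      } finally {
        CONTEXT.set(previous); // undo, so pool threads do not leak context
      }
    };
  }

  // Submits one wrapped and one unwrapped task; returns the context each one saw.
  static String[] demo() throws Exception {
    ExecutorService pool = Executors.newFixedThreadPool(1);
    String[] seen = new String[2];
    CONTEXT.set("request-1"); // context of the submitting (request-accepting) thread
    Future<?> wrapped = pool.submit(wrap(() -> { seen[0] = CONTEXT.get(); }));
    Future<?> plain = pool.submit(() -> { seen[1] = CONTEXT.get(); });
    wrapped.get();
    plain.get();
    pool.shutdown();
    return seen;
  }

  public static void main(String[] args) throws Exception {
    String[] seen = demo();
    System.out.println("wrapped task saw: " + seen[0]); // request-1
    System.out.println("plain task saw: " + seen[1]);   // none
  }
}
```

The unwrapped task shows why the instrumentation is needed at all: without the wrapper, the worker
thread never sees the submitter's context. The OpenTelemetry context API exposes the same shape via
`Context.current().wrap(runnable)`.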