
[Feature] Prevent running interceptor twice #17

Open
r0fls opened this issue Jun 8, 2022 · 4 comments

r0fls commented Jun 8, 2022

It seems this interceptor is run twice for each request. Is it possible to have it run only once, or detect which way the request is going (incoming, outgoing)?

Thanks!


d5h commented Jun 10, 2022

The interceptor should only run once. There's actually a test for that. Could you post some code and output showing it running twice?


r0fls commented Aug 22, 2022

Hi @d5h, sorry for the long delay in responding. I'm seeing this with the interceptor below:

import traceback
import time
from typing import Callable, Any

import grpc
from grpc_interceptor import ServerInterceptor
from grpc_interceptor.exceptions import GrpcException

from api.logger import log

class ExceptionInterceptor(ServerInterceptor):
    def intercept(
        self,
        method: Callable,
        request: Any,
        context: grpc.ServicerContext,
        method_name: str,
    ) -> Any:
        """Catchall exception interceptor. Wraps all unhandled exceptions,
        and extracts the traceback into a field on the log message.
         Args:
             method: The next interceptor, or method implementation.
             request: The RPC request, as a protobuf message.
             context: The ServicerContext pass by gRPC to the service.
             method_name: A string of the form
                 "/protobuf.package.Service/Method"
         Returns:
             This should generally return the result of
             method(request, context), which is typically the RPC
             method response, as a protobuf message. The interceptor
             is free to modify this in some way, however.
         """
        try:
            log.info("Received incoming request")
            # Capture response time
            start = time.time()
            response = method(request, context)
            end = time.time()
            # Convert seconds to ms
            completion_time = (end - start) * 1000
            # Add a field to the log line
            extra = {'elapsed_ms': completion_time}
            log.info(f'Generated response in {completion_time}ms', extra=extra)
            return response
        except Exception:
            # TODO: what proto should we return here? This will return None
            # format_exc() captures the exception traceback; format_stack()
            # would only capture the current call stack.
            error = traceback.format_exc()
            log.error('Unknown server error', error=error)

When I generate a request with this, I see:

{"level": "info", "ts": "2022-08-22T17:13:54.115348Z", "name": "deployment-api", "elapsed_ms": 0.051021575927734375, "msg": "Generated response in 0.051021575927734375ms"}
{"level": "info", "ts": "2022-08-22T17:13:54.163390Z", "name": "deployment-api", "msg": "Received incoming request"}
{"level": "info", "ts": "2022-08-22T17:13:54.198927Z", "name": "deployment-api", "elapsed_ms": 35.25853157043457, "msg": "Generated response in 35.25853157043457ms"}


d5h commented Aug 23, 2022

I don't see anything wrong here. Is this in a production environment with anything like GraphQL or Istio involved? They can make multiple requests to the backend for a single client request.

In order to narrow it down as much as possible, could you run the service on your own machine, and make the request from the same machine with something like grpc_cli? That will rule out any service mesh stuff or other middleware.
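
As a rough sketch (the localhost address, generated modules, stub, and RPC names below are placeholders, not the actual deployment-api definitions), a direct local call in Python would look something like this:

import grpc

# Hypothetical generated modules and RPC; substitute the real ones for this service.
from api.protos import deployment_pb2, deployment_pb2_grpc

with grpc.insecure_channel("localhost:50051") as channel:
    stub = deployment_pb2_grpc.DeploymentServiceStub(channel)
    response = stub.GetDeployment(deployment_pb2.GetDeploymentRequest())
    print(response)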

I don't think this is caused by the interceptor. I've never seen this myself, and as mentioned above, there's a test that it only runs once. I'm guessing there may be something else that's making redundant calls. I'm also curious why the first call was ~700x faster than the second. Are you sure this is the same request? I'd expect the same request to have similar response times, unless there's caching involved. But here the first request is faster, so caching doesn't make sense.

@marcindulak

The reason the server appears to receive a request twice could be that the first request is a grpc_reflection.v1alpha.reflection_pb2.ServerReflectionRequest.
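
If reflection is indeed what shows up as the extra request, one way to confirm it (a minimal sketch; the class name and prefix constant below are illustrative, not part of grpc_interceptor) is to check method_name in the interceptor, since reflection RPCs arrive under the /grpc.reflection. service path:

from typing import Any, Callable

import grpc
from grpc_interceptor import ServerInterceptor

# Reflection RPCs (e.g. ServerReflectionInfo) are served under this path.
REFLECTION_PREFIX = "/grpc.reflection."

class ReflectionAwareInterceptor(ServerInterceptor):
    def intercept(
        self,
        method: Callable,
        request: Any,
        context: grpc.ServicerContext,
        method_name: str,
    ) -> Any:
        # Pass reflection traffic straight through so that only application
        # RPCs are logged and timed.
        if method_name.startswith(REFLECTION_PREFIX):
            return method(request, context)
        print(f"application RPC: {method_name}")  # replace with real logging/timing
        return method(request, context)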
