
handle batch EVM requests #84

Merged: 25 commits from rp-batch-request-processing into main on Feb 8, 2024
Conversation

pirtleshell (Member) commented Feb 6, 2024:

Adds support for batch EVM queries.

Batch queries have bodies containing a list of EVMRPCRequestEnvelopes instead of a single one. Currently these fail to decode, so they skip the cache, skip metrics, and are simply forwarded to the appropriate backend as-is.

This PR handles decoding the batch requests and supports getting responses to their sub-requests from the cache. To do this, it adds two new middlewares:

  • DecodeRequestMiddleware - a rewrite of RequestLoggingMiddleware. It now attempts to parse the incoming request as a []*EVMRPCRequestEnvelope if it fails to parse as a *EVMRPCRequestEnvelope, and forks the middleware sequence if the request is a batch.
  • BatchProcessingMiddleware - pulls the batch request out of the context and separates it into individual requests that can leverage response caching (and metrics). It then concurrently makes the individual requests through the same middleware sequence as the current single-request processing, and combines all of the individual responses back into a single response to the client.

For minimal changes to the existing architecture, the BatchProcessingMiddleware makes use of the exact same middleware sequence as the handling of single requests. Optimizations can be made in the future to recombine sub-requests that will be directed to the same backend if performance becomes a problem (we now have the metrics to know!). I excluded those changes for now to minimize potential impact on the existing single-request response flow.
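The single-vs-batch decode fork described above can be sketched as a fallback parse. This is a minimal illustration, not the PR's actual code; the envelope fields and helper name are assumed for the example.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// EVMRPCRequestEnvelope mirrors the general shape of a JSON-RPC request.
// The field set here is illustrative, not the service's actual struct.
type EVMRPCRequestEnvelope struct {
	JSONRPCVersion string        `json:"jsonrpc"`
	ID             interface{}   `json:"id"`
	Method         string        `json:"method"`
	Params         []interface{} `json:"params"`
}

// decodeRequest first tries to parse the body as a single envelope and,
// if that fails, falls back to parsing it as a batch
// ([]*EVMRPCRequestEnvelope), mirroring the fork in
// DecodeRequestMiddleware.
func decodeRequest(body []byte) (*EVMRPCRequestEnvelope, []*EVMRPCRequestEnvelope, error) {
	var single EVMRPCRequestEnvelope
	if err := json.Unmarshal(body, &single); err == nil {
		return &single, nil, nil
	}
	var batch []*EVMRPCRequestEnvelope
	if err := json.Unmarshal(body, &batch); err == nil {
		return nil, batch, nil
	}
	return nil, nil, fmt.Errorf("body is neither a single nor a batch EVM request")
}
```

Because a JSON object cannot unmarshal into a slice and a JSON array cannot unmarshal into a struct, exactly one of the two parses succeeds for well-formed input.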

Opening for code review, will not merge until architecture docs have been updated.

@pirtleshell force-pushed the rp-batch-request-processing branch from 847d788 to 29004a1 on February 7, 2024 19:31
@pirtleshell marked this pull request as ready for review on February 7, 2024 19:36
@pirtleshell force-pushed the rp-batch-request-processing branch from 29004a1 to 94ec347 on February 7, 2024 19:38

// fakeResponseWriter is a custom implementation of http.ResponseWriter that writes all content
// to a buffer.
type fakeResponseWriter struct {
Contributor commented:
bufferedResponseWriter a la https://pkg.go.dev/bufio?

Member (Author) replied:

I think that's for buffered reading/writing, as in ensuring writes happen in chunks no bigger than a certain size, not for simply writing to a bytes.Buffer. I think the buffering capabilities are overkill for what's needed here.

service/cachemdw/cache.go (review thread, resolved)
Proxy service responds 413 if it receives a request with >ProxyMaximumBatchSize sub-requests.
Instead of arbitrarily sleeping and praying that the metric was created, wait for the metrics to be created. If the expected metrics are not created, the test will fail with a timeout.
@pirtleshell pirtleshell merged commit d7f1fa5 into main Feb 8, 2024
4 checks passed
@pirtleshell pirtleshell deleted the rp-batch-request-processing branch February 8, 2024 21:34