perf(ScheduleController): Fetch schedules by individual stop for more cache hits #231
Conversation
… cache hits

As implemented, if any request for a stop fails, the entire request fails. This preserves the existing all-or-nothing behavior for fetching schedules for all stops together. If we want partial failure states in the future, we'll want to adopt frontend changes to represent the partial failure state.
gc_interval: :timer.hours(2),
allocated_memory: 250_000_000
Setting to 2 hours based on the recommendation here that ttl should be less than gc_interval
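For context, a minimal sketch of how that might look in the cache configuration, assuming a Nebulex local cache (the `:mobile_app_backend` app name is an assumption; `gc_interval` and `allocated_memory` come from the diff, and the per-entry ttl is set at the call site rather than here):

```elixir
# config/config.exs (sketch)
# gc_interval controls how often the generational GC sweeps the cache;
# per the linked recommendation, per-entry ttl should be shorter than it.
config :mobile_app_backend, MBTAV3API.RepositoryCache,
  gc_interval: :timer.hours(2),
  allocated_memory: 250_000_000
```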
lib/mbta_v3_api/repository.ex
@impl true
@decorate cacheable(cache: MBTAV3API.RepositoryCache, on_error: :nothing, opts: [ttl: @ttl])
def alerts(params, opts \\ []), do: all(MBTAV3API.Alert, params, opts)
pulled from dotcom
|> Task.async_stream(fn filter_params ->
  {filter_params, fetch_schedules(filter_params)}
end)
Would it make sense to call this with ordered: false?
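For illustration, a self-contained sketch of that suggestion (the stop params and the inline function are stand-ins for the real filter_params and fetch_schedules/1): since the results are re-keyed by their params anyway, arrival order doesn't matter, and ordered: false lets fast responses flow through without waiting behind a slow stop.

```elixir
# Stand-in params and fetch; with ordered: false, results are yielded
# as tasks finish rather than in input order.
[%{stop: "place-a"}, %{stop: "place-b"}]
|> Task.async_stream(
  fn filter_params -> {filter_params, {:ok, [:schedule]}} end,
  ordered: false
)
|> Map.new(fn {:ok, {params, result}} -> {params, result} end)
```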
Summary
Ticket: Caching schedules data [backend only]
What is this PR for?
As implemented, if any request for a stop fails, the entire request fails. This is to preserve the existing all-or-nothing behavior for fetching schedules for all stops together.
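The all-or-nothing aggregation described above can be sketched as a reduce over the per-stop results (the result shapes here are hypothetical stand-ins, not the actual repository code):

```elixir
# The first {:error, _} halts the whole fetch, mirroring the
# pre-existing behavior of fetching all stops in a single request.
results = [{:ok, [:sched_a]}, {:error, :timeout}, {:ok, [:sched_b]}]

combined =
  Enum.reduce_while(results, {:ok, []}, fn
    {:ok, schedules}, {:ok, acc} -> {:cont, {:ok, acc ++ schedules}}
    {:error, reason}, _acc -> {:halt, {:error, reason}}
  end)
# combined == {:error, :timeout}
```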
During load testing (notes), there were still big discrepancies between the duration logged for a request from the V3 API vs the duration logged to receive it on our backend (602ms vs 4.99s). This PR addresses that problem in the lowest-effort way: adding TTL-based caching to the other static endpoints too, to limit the number of concurrent requests to the API.

With these changes (and load testing script changes from #236), there were a handful of failures when fetching schedules due to timeouts, but they still represent a low percentage of overall requests. There is more optimization we can do here, but this PR feels like a meaningful enough improvement on its own.