Batch Airnode feed calls to Signed API by small buffer #212
Comments
What are the expected batch sizes with a 50-100 ms period?
This surprises me, considering that Nodary Airnode feed batches all of its calls. Maybe you included the Nodary signed API logs?
Sorry. I mistyped. It's 1800 per MINUTE not second. This means it makes 30 calls every second. But when you check the link, you can see that the distribution is not really uniform.
I think you are looking at the wrong file. I believe the actual deployment is this file. I spoke to @metobom about this and he mentioned he wants to switch it, but there is some stuff he needs to think about first. I also see 5s
Yeah, I was looking at Signed API logs on Grafana, but I mistyped "second" instead of "minute". The idea still stands though.
Say we have 300 feeds, fetchInterval 1 second, none of the calls is batched (300 API calls) and our staggering distributes them equally. Normally we would make a call with 1 signed data request every ~3.3ms (call this
This would also help with reducing the Signed API costs reported here.
I think 300 feeds with 1 second fetchInterval is a bit extreme (in that most API keys won't support that). If we think about 150 feeds with 5 second fetchInterval (which is much more realistic), that's a request every ~33ms. I think this scheme starts becoming worth it at batching periods of 200ms and above. I assume this is a future thing to implement, right?
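To make the arithmetic above concrete, here is a small TypeScript sketch (a hypothetical helper, not part of the codebase) that estimates the push rate, the expected batch size, and the resulting Signed API call rate for a given feed count, fetch interval, and batching period, under the simplifying assumption that staggering spreads the pushes uniformly:

```ts
// Rough estimates only: assumes staggering spreads the pushes uniformly.
function batchingEstimate(feedCount: number, fetchIntervalMs: number, batchPeriodMs: number) {
  const pushesPerSecond = (feedCount * 1000) / fetchIntervalMs; // unbatched Signed API calls/s
  const msBetweenPushes = 1000 / pushesPerSecond;
  const expectedBatchSize = pushesPerSecond * (batchPeriodMs / 1000); // signed datas per batched call
  const batchedCallsPerSecond = Math.min(1000 / batchPeriodMs, pushesPerSecond);
  return { pushesPerSecond, msBetweenPushes, expectedBatchSize, batchedCallsPerSecond };
}

// 150 feeds, 5 s fetch interval, 200 ms batching period:
// 30 pushes/s (one every ~33 ms), ~6 signed datas per batch, at most 5 Signed API calls/s.
console.log(batchingEstimate(150, 5000, 200));

// 300 feeds, 1 s fetch interval, 100 ms batching period:
// 300 pushes/s (one every ~3.3 ms), ~30 signed datas per batch, at most 10 Signed API calls/s.
console.log(batchingEstimate(300, 1000, 100));
```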
I am fine with 200ms as well (or any period we agree on). I quite like the idea of having the upper bound on the Signed API calls.
Yeah, but as I mention in
Because both the Airnode feed INFO logs for outgoing Signed API requests and the Signed API logs for incoming requests would decrease.
Logging every 200ms is still a lot though. Decreasing the logs is not meaningful if it doesn't bring them down to an acceptable level.
Is it though? The message
I'll actually propose logging more on the next call :^)
I'd suggest the following implementation:
Let's sync on this on the call.
I would keep this in the backlog even if there is no intention to implement it |
As part of investigating Signed API errors from Airnode feeds, this is not that high considering the number of running instances and the time range, but this feature could help reduce this number.
Currently, Nodary Airnode feed makes ~1800 Signed API calls per second (counting only those that reach our server). These numbers are similar for many other Airnode feeds. This is because the non-batched feeds make a single API call per data feed, and after the data is fetched it is pushed to the Signed API. Put shortly, we have as many Signed API POST requests as we have data provider requests. There are these issues:
What I propose is to batch the Airnode feed calls by a short period (e.g. 50ms or 100ms). We want to push the data as soon as possible, but such a short delay seems acceptable to me. This sets an upper bound on the number of Signed API requests made per second.
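For illustration, a minimal sketch of such a buffer could look like the following (all names, including `postToSignedApi` and `SignedApiBatcher`, are hypothetical and this is not the actual Airnode feed implementation): each signed data is pushed into an in-memory buffer, and the buffer is flushed in a single POST at most once per batching period.

```ts
// Minimal time-based batching buffer: collects signed data for up to
// `batchPeriodMs` and then flushes everything in a single Signed API call.
// All names are illustrative; this is not the actual Airnode feed code.
interface SignedData {
  airnode: string;
  templateId: string;
  timestamp: string;
  encodedValue: string;
  signature: string;
}

class SignedApiBatcher {
  private buffer: SignedData[] = [];
  private timer: NodeJS.Timeout | null = null;

  constructor(
    private readonly batchPeriodMs: number,
    private readonly postToSignedApi: (batch: SignedData[]) => Promise<void>
  ) {}

  // Called whenever a data feed response has been fetched and signed.
  push(data: SignedData) {
    this.buffer.push(data);
    // Start the flush timer on the first item of a new batch, so no signed
    // data is delayed by more than `batchPeriodMs`.
    if (!this.timer) {
      this.timer = setTimeout(() => void this.flush(), this.batchPeriodMs);
    }
  }

  private async flush() {
    const batch = this.buffer;
    this.buffer = [];
    this.timer = null;
    if (batch.length === 0) return;
    // One POST per batch instead of one POST per data feed. A log line here
    // could include `batch.length`, as suggested for the "Request received" message.
    await this.postToSignedApi(batch);
  }
}
```

With e.g. a 100ms period, this caps the traffic at 10 Signed API calls per second per Airnode feed instance, regardless of the number of data feeds.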
If we do this, we might not want to remove the "Request received" message in this issue, but instead log it with how many "signed datas" it received in the batch.
@bbenligiray let me know wdyt.