Allow data pushing specs to be updated remotely #47
The signed API should not be trusted, so this should ideally come directly from the pusher and be verifiable (e.g., by signature).
I dislike this for all the reasons you mention. I think there are two discussion points:
Alternative ideas
Good point
I think we'll add new data feeds very frequently (mostly because each chain integration means adding a dAPI for the native currency, and we'll be doing a lot of chain integrations). Even if the effort of redeployment is manageable, redeployments are very error-prone, so we'd be taking a reliability hit by requiring them more often. That being said, requiring a redeployment for each added data feed is definitely not out of the question.
Maybe each signed API could control the remote configuration of the extra data that will be pushed to it? That's a bit complex, though.
That's already the plan, but the "approval" is them redeploying. I can't imagine a configuration update method that would be simpler than this plus the diff frontend.
There are way too many potential feeds for this to be feasible, especially if you consider feeds such as https://github.com/api3dao/tasks/issues/42. It's also pretty common for us to ask API providers to add assets specifically for us, so they will not exist at deployment time.
I think updating just the file can be much simpler. You could do it with a simple POST request or something, without going through AWS, where there's much more room to screw up. That said, the improvement is arguably marginal (and users would then have two "redeployment" paths).
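A sketch of what "updating just the file" could mean, assuming the simplest possible merge semantics: remotely supplied feed specs are added to the running config, but entries from the static (deploy-time) config always win. All type and field names here are illustrative, not the actual pusher schema.

```typescript
// Hypothetical config shapes; not the real pusher config format.
type BeaconSpec = { beaconId: string; parameters: string };
type PusherConfig = { beacons: Record<string, BeaconSpec> };

// Merge remotely supplied specs into the running config. Beacons already
// present in the static config are never overwritten, so a compromised
// remote update cannot tamper with deploy-time feeds.
function applyRemoteUpdate(config: PusherConfig, updates: BeaconSpec[]): PusherConfig {
  const beacons = { ...config.beacons };
  for (const spec of updates) {
    if (!(spec.beaconId in beacons)) beacons[spec.beaconId] = spec;
  }
  return { beacons };
}

const staticConfig: PusherConfig = {
  beacons: { '0xabc': { beaconId: '0xabc', parameters: 'ETH/USD' } },
};

const updated = applyRemoteUpdate(staticConfig, [
  { beaconId: '0xdef', parameters: 'BTC/USD' },
  { beaconId: '0xabc', parameters: 'tampered' }, // ignored: static entries win
]);

console.log(Object.keys(updated.beacons).length); // 2
console.log(updated.beacons['0xabc'].parameters); // 'ETH/USD'
```

In a real deployment the update payload would also need to be authenticated (e.g., signed by the party allowed to add feeds), per the verifiability concern raised above.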
I see. I'd revisit this issue in a couple of weeks/months if there is demand for this.
Okay. An interim (or even permanent, tbh) solution is to depend on the HTTP gateway for this: we would run a bot at our end that fetches this kind of data and pushes it to the signed API, and every once in a while the API provider would redeploy and the bot would be wound down.
I'm dropping the v0.2 milestone. @metobom, see the discussion above.
Currently, when we need to fetch a new kind of signed data from an API provider, we just call their signed HTTP gateway with the new parameters. In the current state of the signed API pusher, a redeployment would be needed for each new dAPI. This is by design, as it prevents the availability of the signed data from being interrupted by us. However, @metobom thinks this will require far too many redeployments, annoying the user.
A middle ground could be to have two groups of pushed data: one specified by the static config and one controlled remotely.
This could be implemented by the signed API having an endpoint that signals to the pusher the additional Beacons that it wants to push data for. In the signed API's GET interface, each Beacon would be marked (for example, with a flag) to indicate whether its signed data is pushed based on the static config (safe to use) or the remotely controlled config (not as safe to use), for transparency.
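The transparency flag described above could look something like the following in a GET response entry. This is a sketch under assumed field names (the actual signed API schema may differ), with the flag exposed so consumers can decide how much to trust each Beacon.

```typescript
// Hypothetical shape of one entry in a signed API GET response.
// Field names are assumptions for illustration, not the real schema.
type SignedDataEntry = {
  beaconId: string;
  timestamp: string;
  encodedValue: string;
  signature: string;
  // true if this Beacon was added via the remotely controlled config
  // (not as safe), false if it comes from the static deploy-time config.
  isRemotelyConfigured: boolean;
};

// Consumers that only want deploy-time-approved feeds can filter on the flag.
function isSafeToUse(entry: SignedDataEntry): boolean {
  return !entry.isRemotelyConfigured;
}

const entry: SignedDataEntry = {
  beaconId: '0xabc',
  timestamp: '1700000000',
  encodedValue: '0x01',
  signature: '0xsig',
  isRemotelyConfigured: false,
};

console.log(isSafeToUse(entry)); // true
```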
Note that this could also be implemented with a scheme where the pusher config is specified to be remote (as in https://api3workspace.slack.com/archives/C05S589E7B4/p1695730031236799), with the API provider deploying two separate pushers, but this approach has some problems.