Implement fetch/retry from multiple sources server side? #38
Comments
@artwyman: Zupass is probably the best example we have for this. Do you have any figures to share?
Could maybe be as simple as a static set of directories by version, if you structure the path right? Manageability and size depend on how many versions you need to deal with at once.
Good thought. I'll ask what we might have to share, though I don't think we have measurements around artifact downloads in particular. We might have numbers on overall usage, which could give a ballpark. Our peak usage was in Istanbul for DevConnect, with ~5K attendees. We had only a small number of ZK proof artifacts at the time, and our service worker pre-downloaded and cached them all on the client, which would have smoothed out potential server load (extra downloads when a new client version launches, but not for every proof).
Unfortunately all of our stats have rolled off of the active window. I found one old chart showing 1,000 check-ins over a 4-hour period, but check-ins are a much more explicit action (what happened at the doors of the DevConnect conference) than artifact downloading, which is automatic. The highest-rate use of ZK proofs in Zupass would be Semaphore signatures, which we use to authenticate every 5 minutes when fetching feeds. There's caching involved there, though, both of proofs (cached for 1 hour) and of artifact downloads (cached forever), so the peak load on the server shouldn't be huge.
Context
snark-artifacts is about providing a reliable and performant distribution mechanism for snark artifacts.
Challenges
Given 1 and 3, we face rate-limiting risks.
Solution
Fetch/retry mechanism client side
In #65, we attempt to mitigate that risk by retrying the fetch from alternative sources when the sources tried first fail.
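A minimal sketch of what that client-side fallback could look like; the mirror URLs and function name below are illustrative, not the actual list or API used in #65:

```typescript
// Illustrative mirror list: each source is tried in order until one succeeds.
const SOURCES = ["https://unpkg.com", "https://cdn.jsdelivr.net"];

async function fetchArtifact(path: string): Promise<ArrayBuffer> {
  let lastError: unknown = new Error("no sources configured");
  for (const base of SOURCES) {
    try {
      const res = await fetch(`${base}/${path}`);
      if (!res.ok) throw new Error(`HTTP ${res.status} from ${base}`);
      return await res.arrayBuffer();
    } catch (err) {
      lastError = err; // remember the failure and try the next mirror
    }
  }
  throw lastError; // every source failed
}
```

The drawback this issue raises: the retry logic (and the list of sources) lives in every client, so it can only be updated by shipping a new client release.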
Fetch/retry mechanism server side
Use e.g. a Cloudflare Worker (or an alternative like AWS Lambda@Edge).
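A hypothetical sketch of the same retry logic moved to the edge, written as a Cloudflare Worker in module syntax. The upstream origins are placeholders; the point is that clients hit a single stable URL while the worker handles fallback server side:

```typescript
// Placeholder upstream origins the worker proxies to, in preference order.
const ORIGINS = ["https://unpkg.com", "https://cdn.jsdelivr.net"];

async function handleRequest(request: Request): Promise<Response> {
  const { pathname } = new URL(request.url);
  for (const origin of ORIGINS) {
    try {
      const res = await fetch(origin + pathname);
      if (res.ok) return res; // first healthy origin wins
    } catch {
      // network error: fall through to the next origin
    }
  }
  return new Response("all upstream sources failed", { status: 502 });
}

// Cloudflare Workers module-syntax entry point.
export default { fetch: handleRequest };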
Static server
Store and serve all artifacts directly from our own server, which would be the single source.
Pros of the previous solution, but with fewer intermediate layers?
I am not sure how we would manage versioning in that case. Would we self-host e.g. unpkg or an unpkg-like server?
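One way versioning could work without an unpkg-like server is the directory-per-version layout suggested in the comments above, where the version is encoded in the path; package and file names below are illustrative:

```
artifacts/
  semaphore/
    3.0.0/
      semaphore.wasm
      semaphore.zkey
    4.0.0/
      semaphore.wasm
      semaphore.zkey
```

Since published artifacts are immutable, each versioned path can be cached forever, matching the "cached forever" artifact-download behavior described for Zupass above.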