Implement fetch/retry from multiple sources server side? #38

Closed
sripwoud opened this issue May 16, 2024 · 4 comments · Fixed by #66
Labels
enhancement New feature or request

Comments

@sripwoud
Member

sripwoud commented May 16, 2024

Context

snark-artifacts is about providing a reliable and performant distribution mechanism for snark artifacts.

Challenges

  1. size of artifacts
  2. different client environments: Node/server applications, browser applications, desktop/mobile
  3. number of users, possible concurrent requests, max spikes: ??

Given 1 and 3, we face rate limitation risks.

Solution

Fetch/retry mechanism client side

In #65, we attempt to mitigate that risk by retrying the artifact fetch from alternative sources when the sources tried first fail (a minimal sketch follows the pros/cons list below).

  • cons
    • redundant: each client instance implements the fetch/retry logic, so higher chances of rate limiting
    • client code is more complex
    • more difficult to ensure consistency across the different client environments
    • not very scalable
  • pros
    • straightforward to implement
    • cheap
    • control: easy to modify behavior in client code
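
For illustration only, a minimal client-side fallback could look like the sketch below (the source URLs and artifact path are placeholders, not the actual mirrors used in #65):

```ts
// Hypothetical sketch of client-side fetch with fallback sources.
// SOURCES and the path format are placeholders, not the real snark-artifacts mirrors.
const SOURCES = [
  "https://cdn-a.example.com/artifacts",
  "https://cdn-b.example.com/artifacts"
]

async function fetchArtifact(path: string): Promise<ArrayBuffer> {
  let lastError: unknown

  for (const source of SOURCES) {
    try {
      const response = await fetch(`${source}/${path}`)

      if (!response.ok) throw new Error(`HTTP ${response.status} from ${source}`)

      return await response.arrayBuffer()
    } catch (error) {
      // Remember the failure and fall back to the next source.
      lastError = error
    }
  }

  throw lastError
}
```

Every client instance ships and runs this loop independently, which is exactly the redundancy/rate-limit concern listed under cons above.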

Fetch/retry mechanism server side

Use e.g. a Cloudflare Worker (or an alternative such as AWS Lambda@Edge); a minimal worker sketch follows the pros/cons list below.

  • cons
    • an extra layer (trust, centralization concerns) between the artifacts and the clients
    • cost
  • pros
    • less redundancy: the fetch/retry code is implemented only on the worker instances
    • better performance and efficiency: requests would be intercepted at the edge, so workers would reduce the load on the origin servers, potentially reducing the number of requests and therefore effectively protecting against the rate limitation risks
    • scalability: such a cloud-based solution would handle traffic spikes better
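
As a rough sketch of this option (placeholders only, not the implementation that landed in #66), a Cloudflare Worker could try the upstream sources in order and cache successful responses at the edge:

```ts
// Hypothetical Cloudflare Worker sketch; SOURCES are placeholders.
const SOURCES = [
  "https://cdn-a.example.com/artifacts",
  "https://cdn-b.example.com/artifacts"
]

export default {
  async fetch(request: Request, _env: unknown, ctx: ExecutionContext): Promise<Response> {
    const { pathname } = new URL(request.url)
    const cache = caches.default

    // Serve from the edge cache whenever possible, so the origins are not hit at all.
    const cached = await cache.match(request)
    if (cached) return cached

    for (const source of SOURCES) {
      const response = await fetch(`${source}${pathname}`)

      if (response.ok) {
        // Cache the artifact at the edge for subsequent requests.
        ctx.waitUntil(cache.put(request, response.clone()))
        return response
      }
    }

    return new Response("Artifact not available from any source", { status: 502 })
  }
}
```

Because the fallback and caching live in one place, clients would only need a plain fetch against the worker URL.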

Static server

Store and serve all artifacts directly from our own server, which would be the single source.
Pros of the previous solution but with fewer intermediate layers?
I am not sure how we would manage the versioning in that case. Would we self-host e.g. unpkg or an unpkg-like server?

@sripwoud
Member Author

sripwoud commented May 16, 2024

number of users, possible concurrent requests, max spikes: ??

@artwyman: ZuPass is probably the best example we have for this. Do you have any figures to share?

@artwyman

Store and serve all artifacts directly from our own server, which would be the single source.
Pros of the previous solution but with fewer intermediate layers?
I am not sure how we would manage the versioning in that case. Would we self-host e.g. unpkg or an unpkg-like server?

Could maybe be as simple as a static set of directories by version, if you structure the path right? Manageability and size depend on how many versions you need to deal with at once.
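
For example (host and file names purely illustrative), the path could encode project and version, so each release is just another static directory and clients build the URL from the package version they depend on:

```ts
// Hypothetical versioned layout on a static host, e.g.
//   https://artifacts.example.com/semaphore/1.0.0/semaphore.wasm
//   https://artifacts.example.com/semaphore/1.0.0/semaphore.zkey
function artifactUrl(project: string, version: string, file: string): string {
  return `https://artifacts.example.com/${project}/${version}/${file}`
}
```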

@artwyman: ZuPass is probably the best example we have for this. Do you have any figures to share?

Good thought. I'll ask what we might have to share, though I don't think we have measurements around artifact download in particular. We might have numbers on overall usage which could give a ballpark. Our peak usage was in Istanbul for DevConnect with ~5K attendees. We had only a small number of ZK proof artifacts at the time, and our service worker pre-downloaded and cached them all in the client, which would've had the effect of smoothing out potential server load (extra downloads when a new client version launches, but not for every proof).

@cedoor cedoor added the enhancement New feature or request label May 17, 2024
@cedoor cedoor moved this to ♻️ Grooming in SNARK Artifacts May 17, 2024
@artwyman

@artwyman: ZuPass is probably the best example we have for this. Do you have any figures to share?
Good thought. I'll ask what we might have to share, though I don't think we have measurements around artifact download in particular.

Unfortunately all of our stats have rolled off of the active window. I found one old chart showing 1000 checkins over a 4 hour period, but checkins are a much more explicit action (what happened at the doors of the DevConnect conference) vs. artifact downloading which is automatic. The highest-rate use of ZK proofs in Zupass would be Semaphore signatures, which we use to authenticate every 5m when fetching feeds. There's caching involved there, though, both of proofs (cached for 1hr) and of artifact downloads (cached forever) so the peak load to the server shouldn't be huge.

@cedoor
Member

cedoor commented May 21, 2024
