[Bug/Suggestion] Better Rate Limit Handling #263

Open
devonzara opened this issue Jun 30, 2024 · 2 comments
@devonzara

Situation
During the early stages of development, before getting around to implementing caching & API throttling, I inadvertently triggered a re-render loop on the frontend. This caused an excessive number of requests to the backend, which in turn spammed the Spotify API.

Consequently, my app was suspended for approximately 24 hours and I began receiving the following responses:

GET /v1/artists/5zixe6AbgXPqt4c1uSl94L HTTP/1.1
Content-Type: application/json
Authorization: Bearer ████████████████████████████████████████████████████████████████████████████████████████████████████████████
User-Agent: PostmanRuntime/7.37.3
Accept: */*
Postman-Token: ███████████████████████████████████
Host: api.spotify.com
Accept-Encoding: gzip, deflate, br
Connection: keep-alive
 
HTTP/1.1 429 Too Many Requests
cache-control: private, max-age=0
retry-after: 74523
access-control-allow-origin: *
access-control-allow-headers: Accept, App-Platform, Authorization, Content-Type, Origin, Retry-After, Spotify-App-Version, X-Cloud-Trace-Context, client-token, content-access-token
access-control-allow-methods: GET, POST, OPTIONS, PUT, DELETE, PATCH
access-control-allow-credentials: true
access-control-max-age: 604800
content-encoding: gzip
strict-transport-security: max-age=31536000
x-content-type-options: nosniff
date: Sun, 30 Jun 2024 02:39:13 GMT
server: envoy
Via: HTTP/2 edgeproxy, 1.1 google
Alt-Svc: h3=":443"; ma=2592000,h3-29=":443"; ma=2592000
Transfer-Encoding: chunked
 
Too many requests

Notice that the retry-after header is 74523 (seconds, roughly 20.7 hours) and the response body is plain text rather than the expected JSON. This causes zmb3/spotify to throw a "spotify: couldn't decode error: (17) [Too many requests]" error.

Suggestions

  1. Handle the plain text response body as an edge case.
  2. Implement a maxWait (also suggested in #241, "Client is (too) quiet if it has to retry") or a similar mechanism for retries.
  3. Provide access to the raw response or the retry-after header value so the application can decide how to handle the situation (a rough application-side sketch follows below).
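
For illustration, here is a rough sketch of what suggestion 3 could look like from the application side today: wrap the HTTP transport that the spotify client uses and record the Retry-After value of any 429 yourself. The type, field, and variable names are made up for the example and are not part of zmb3/spotify.

package spotifyratelimit

import (
    "net/http"
    "strconv"
    "time"
)

// retryAfterTransport remembers the most recent Retry-After value observed on a
// 429 response so the application can decide how to react (not concurrency-safe
// without extra synchronization).
type retryAfterTransport struct {
    base       http.RoundTripper
    RetryAfter time.Duration // zero until a 429 has been seen
}

func (t *retryAfterTransport) RoundTrip(req *http.Request) (*http.Response, error) {
    resp, err := t.base.RoundTrip(req)
    if err == nil && resp.StatusCode == http.StatusTooManyRequests {
        if secs, convErr := strconv.Atoi(resp.Header.Get("Retry-After")); convErr == nil {
            t.RetryAfter = time.Duration(secs) * time.Second
        }
    }
    return resp, err
}

The wrapper would be installed on the OAuth-authenticated http.Client before handing it to spotify.New, e.g. httpClient.Transport = &retryAfterTransport{base: httpClient.Transport} (falling back to http.DefaultTransport if the Transport is nil).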
@ATroschke

You can (and very likely should) use contexts to achieve the desired behaviour.

What I did was something like this:

// Cap the total time the call (including the library's internal retries) may take.
timeoutCtx, cancel := context.WithDeadline(parentCtx, time.Now().Add(30*time.Second))
artist, err := client.GetArtist(timeoutCtx, spotify.ID(event.SpotifyID))
cancel()

Then you can handle the error, which, when using WithRetry, should be the latest error the client received for this call.
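
For completeness, the same idea pulled together as a self-contained helper (a sketch; the function name and wiring are illustrative, the GetArtist call is the zmb3/spotify v2 API):

package artists

import (
    "context"
    "log"
    "time"

    "github.com/zmb3/spotify/v2"
)

// fetchArtist caps a single lookup (including the client's internal retries) at
// 30 seconds and logs when the deadline, rather than Spotify, ended the call.
func fetchArtist(parentCtx context.Context, client *spotify.Client, id spotify.ID) (*spotify.FullArtist, error) {
    timeoutCtx, cancel := context.WithDeadline(parentCtx, time.Now().Add(30*time.Second))
    defer cancel()

    artist, err := client.GetArtist(timeoutCtx, id)
    if err != nil {
        if timeoutCtx.Err() == context.DeadlineExceeded {
            log.Printf("giving up on Spotify for now: %v", err)
        }
        return nil, err
    }
    return artist, nil
}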

If you're very fancy you can then block any request to the Spotify API with that client ID/secret combo. In my case I have a global service orchestrating things. I forked this package and played a bit with exposing the retry-after header, and set a value in Redis that marks the Spotify API as "invalid" until a specific RFC 3339 timestamp (rough sketch below).
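
As a rough illustration of that Redis idea (sketch only; assumes the go-redis client, and the key name and helpers are made up):

package spotifycache

import (
    "context"
    "time"

    "github.com/redis/go-redis/v9"
)

// markSpotifyCooldown stores an "unavailable until" timestamp when a 429 comes
// back, using the Retry-After duration both as the value and as the key's TTL.
func markSpotifyCooldown(ctx context.Context, rdb *redis.Client, retryAfter time.Duration) error {
    until := time.Now().Add(retryAfter)
    return rdb.Set(ctx, "spotify:rate_limited_until", until.Format(time.RFC3339), retryAfter).Err()
}

// spotifyAvailable reports whether the cooldown key has expired; for simplicity
// any Redis error other than a missing key is treated as "not available".
func spotifyAvailable(ctx context.Context, rdb *redis.Client) bool {
    _, err := rdb.Get(ctx, "spotify:rate_limited_until").Result()
    return err == redis.Nil
}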

There's sadly no great way around this if you want up-to-date data in a cached fashion.

In my case, I query Spotify's API once every 24 hours for artists my users are interested in, to cache them and run some checks (does the artist have a new release, do I need to update their profile picture, etc.), and once a month for artists without any interest from my userbase.

I still hit this issue from time to time with just ~10 active test users because of how tightly Spotify limits in-development apps.

@Pineapple217

The method with context is not ideal: the program blocks for 30 seconds when it could already exit (return an error). The Retry-After header tells you how long the wait will be, so you don't have to sit through it. It would not be that hard to implement a maximum retry time in the client (rough sketch of the check below). I will have a go at it if this is a feature the maintainers would accept.
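
To make that concrete, the check such a feature would need is small. A sketch of the decision the retry loop could make (names are illustrative, not the library's actual API):

package spotifyretry

import (
    "net/http"
    "strconv"
    "time"
)

// retryDecision inspects a 429 response and returns the server-requested wait
// plus whether the client should sleep and retry. When the wait exceeds maxWait
// (e.g. the 74523 seconds from the report above), it returns retry == false so
// the caller can surface an error immediately instead of blocking.
func retryDecision(resp *http.Response, maxWait time.Duration) (wait time.Duration, retry bool) {
    if resp.StatusCode != http.StatusTooManyRequests {
        return 0, false
    }
    secs, err := strconv.Atoi(resp.Header.Get("Retry-After"))
    if err != nil {
        return 0, false
    }
    wait = time.Duration(secs) * time.Second
    if wait > maxWait {
        return wait, false
    }
    return wait, true
}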
