Be wary of rate limits #580
I think another improvement would be for the GitHub utils to cache results from the API. We run the …
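A minimal sketch of what that caching could look like, assuming a plain `fetch`-based GET helper rather than Danger's actual utils:

```ts
// Sketch: memoize GET responses so repeated lookups in a single run
// hit the network only once. Caching the promise (not the value)
// means concurrent callers share one in-flight request.
const cache = new Map<string, Promise<unknown>>()

const cachedGet = (url: string): Promise<unknown> => {
  let hit = cache.get(url)
  if (!hit) {
    hit = fetch(url).then(res => res.json())
    cache.set(url, hit)
  }
  return hit
}
```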
I guess maybe we're already hitting the rate limits on that PR by even fetching them once?
Hrm, yeah, good point, I didn't consider that it might be coming from your plugins.
Yeah, it's triggering ~100 x 2 simultaneous API requests to GH for file contents, so I'm not too surprised that it's hitting the rate limits.
Maybe I should provide a rate-limited fileContents API for that sort of thing in the DSL.
It'd be neat if …
We might be able to just wrap it in …
As it returns a promise, it can be whatever under the hood, so having that be queued makes sense.
Yeah, agreed, I'd recommend doing that in your plugins right now, and I'll integrate it into Danger properly.
I think it's probably actually easier to just do it in Danger; it should just be a matter of wrapping the …
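A minimal sketch of that wrapping, using the `p-limit` package to queue promise-returning calls; the `fileContents` signature here is an assumption stood up for illustration, not Danger's real DSL API:

```ts
import pLimit from "p-limit"

// Sketch: cap simultaneous GitHub API calls so ~200 file-content
// fetches queue up instead of all firing at once.
const limit = pLimit(5)

// Hypothetical stand-in for the DSL's promise-returning file fetcher.
declare const fileContents: (path: string, ref: string) => Promise<string>

const rateLimitedFileContents = (path: string, ref: string): Promise<string> =>
  limit(() => fileContents(path, ref))
```

Because callers already get a promise back, the queueing is invisible to dangerfiles; only the timing changes.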
👉 #582
This is still happening on really big PRs with the flow plugin. IMO, the …
The reason it is still happening is that concurrency is not the issue; it's the number of requests in a time interval. We should be reading the … response headers. Here are the GitHub docs on rate limiting: https://developer.github.com/v3/#rate-limiting
The problem here is not the regular rate limits but GitHub's abuse rate limits. Apart from the GitHub search API, the interval for … Honestly, I'd suggest a concurrency limit of 1. That makes it more likely to work reliably, especially given the possibility of concurrent PRs. Or make it env-configurable so developers can tune it themselves.
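A minimal sketch of reading those headers, assuming plain `fetch` against the REST API; `Retry-After` on 403 responses is documented GitHub behaviour, but the single-retry policy and helper name are illustrative:

```ts
// Sketch: respect GitHub's rate-limit headers instead of guessing at
// a safe request rate. Retries once after the server-suggested delay.
const githubGet = async (url: string, token: string): Promise<Response> => {
  const headers = { Authorization: `token ${token}` }
  const res = await fetch(url, { headers })
  // Abuse rate limits answer 403 with a Retry-After header (seconds).
  if (res.status === 403 && res.headers.has("retry-after")) {
    const waitSeconds = Number(res.headers.get("retry-after"))
    await new Promise(resolve => setTimeout(resolve, waitSeconds * 1000))
    return fetch(url, { headers })
  }
  return res
}
```

The pool size in the earlier queueing sketch could likewise come from a hypothetical environment variable, e.g. `Number(process.env.DANGER_API_CONCURRENCY ?? "1")`.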
We had to remove Danger from our pipeline because it used something like 1,000 requests per run (we have very big PRs right now).
Danger makes at least 12 requests to the GitHub API by default. This action runs an empty dangerfile: https://github.com/gpressutto5/danger-api-limit-example/runs/6071421115?check_suite_focus=true (check the action output entries prefixed with "Sending:").
And, as we know, requests made with the built-in GITHUB_TOKEN are limited to 1,000 per hour per repository.
It is a problem on big monorepo projects because we usually have other steps using the same token, and 1,000 requests are too few when 12 of them go to fetching data that is never used. Can we lazy-load this info so the requests only happen when we need the data?
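A minimal sketch of that lazy loading, as a memoized async getter: the request fires only when a dangerfile first reads the value, and repeat reads share the same promise. The `reviews` getter and URL are purely illustrative, not Danger's real DSL internals:

```ts
// Sketch: defer an API call until the value is first read, then
// reuse the same in-flight promise for every later read.
const lazy = <T>(load: () => Promise<T>): (() => Promise<T>) => {
  let pending: Promise<T> | undefined
  return () => (pending ??= load())
}

// Hypothetical usage: nothing is fetched unless `reviews()` is awaited.
const reviews = lazy(() =>
  fetch("https://api.github.com/repos/octocat/hello-world/pulls/1/reviews")
    .then(res => res.json())
)
```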
@gpressutto5 it's clear that there are a few duplicate requests, notably … Alternatively, it's possible we could rewrite things to use GraphQL and fold a bunch of requests into a single network request, but that feels like it could be a big change due to error handling.
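For reference, a minimal sketch of the GraphQL direction, assuming the public `https://api.github.com/graphql` endpoint; the field selection is illustrative, not the exact set Danger needs:

```ts
// Sketch: fold several REST round-trips into one GraphQL request.
const query = `
  query ($owner: String!, $name: String!, $number: Int!) {
    repository(owner: $owner, name: $name) {
      pullRequest(number: $number) {
        title
        body
        comments(first: 100) { nodes { body } }
      }
    }
  }
`

const fetchPullRequestBundle = (token: string, variables: object) =>
  fetch("https://api.github.com/graphql", {
    method: "POST",
    headers: { Authorization: `bearer ${token}` },
    body: JSON.stringify({ query, variables }),
  }).then(res => res.json())
```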
We have the same problem with our monorepo.
We grab enough info from GitHub/Bitbucket/GitLab/etc. to make a full DSL which you can use synchronously in dangerfiles. I think that's still the right route to go. If you'd like to take a look at reducing the number of requests mentioned above, give it a shot, but I don't think we'd accept dangerfile-breaking changes for this 👍🏻
Perhaps if you're doing a lot of manual file reads in your Dangerfile it could be; that's what the OP was doing.
Our …
And each of these files just uses …
withspectrum/spectrum#2632
Looks like when a PR gets big enough, Danger and Peril can hit the rate limits. We'll probably need to slow down diff/comment pagination.