Docker push to Pulp registry gives 429 #1716
Comments
@git-hyagi, can you take a look at this if you are available?
We tend to return 429 in cases where background tasks fail to commit changes to a repository (see pulp_container/pulp_container/app/registry_api.py, line 1332 at commit aa482bc).
Can you please verify that you are not seeing any canceled add_and_remove tasks in your environment?
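For anyone who wants to run that check, here is a minimal sketch against the tasks REST API. The base URL and credentials are placeholders, and the filter parameter names (`state`, `name__contains`) are assumptions that may vary between Pulp versions.

```python
# Sketch: list canceled add_and_remove tasks via the Pulp 3 REST API.
# PULP_URL, AUTH, and the filter names are assumptions; adjust for your
# deployment and Pulp version.
import requests

PULP_URL = "https://pulp.example.com"  # placeholder
AUTH = ("admin", "password")           # placeholder credentials

resp = requests.get(
    f"{PULP_URL}/pulp/api/v3/tasks/",
    params={"state": "canceled", "name__contains": "add_and_remove"},
    auth=AUTH,
)
resp.raise_for_status()
for task in resp.json()["results"]:
    print(task["name"], task["state"], task.get("finished_at"))
```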
I have noticed this comment on your related issue: pulp/pulp-operator#1308. Does it mean that scaling down the number of pods helped you resolve the problem? It looks like this is more related to the way you deploy Pulp. I am not sure we have any best-practices recommendations for larger deployments.
Yeah, scaling down the number of pods (but increasing the number of gunicorn workers) seems to have mitigated the problems we were seeing, which is kind of strange because it's the same number of worker processes, but maybe Pulp treats it differently somehow. I guess the thing is that 429s are a perfectly acceptable way to handle excessive traffic, and the real problem is that docker/build-push-action doesn't have retry capability, unlike the Docker CLI. The registry telling clients to back off seems reasonable to me, and this issue was more about finding ways to scale our deployment correctly. So I think Pulp is probably doing an okay thing here, and we can close this issue in favor of providing some kind of guidance around deployment scalability (as in pulp/pulp-operator#1308).
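Since docker/build-push-action itself has no retry option, one client-side workaround is to wrap the push in a retry loop with backoff. Below is a minimal sketch that shells out to the Docker CLI; the image name, attempt count, and backoff values are placeholders, not anything provided by Pulp or the action.

```python
# Sketch: client-side retry with exponential backoff for pushes that fail
# transiently (e.g. the registry answering 429 Too Many Requests).
# Image name, attempt limit, and delays are placeholders.
import subprocess
import time

def push_with_retry(image: str, attempts: int = 5, base_delay: float = 2.0) -> None:
    for attempt in range(1, attempts + 1):
        result = subprocess.run(["docker", "push", image])
        if result.returncode == 0:
            return
        if attempt == attempts:
            raise RuntimeError(f"push of {image} failed after {attempts} attempts")
        delay = base_delay * 2 ** (attempt - 1)
        print(f"push failed (attempt {attempt}), retrying in {delay:.0f}s")
        time.sleep(delay)

push_with_retry("pulp.example.com/my-org/my-image:latest")  # placeholder image
```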
Version
Deployed via Pulp Operator v1.0.0-beta.4 on K8s 1.26.
Describe the bug
We occasionally see docker pushes fail with 429 Too Many Requests from the API pod.
GitHub Actions logs:
Pulp API logs:
We currently have 10 API pods, 5 content pods, and 10 worker pods, and the throughput on all of them doesn't seem particularly large. I understand it's Django returning this, but I don't see any settings in the Django docs to raise the threshold for 429s. What's the best way to approach this? Do we make the API pods larger, or add more of them (and if we do, do we risk bottlenecking on the PostgreSQL DB or something else)?
To Reproduce
Not sure, probably deploy a Pulp instance and hammer it with docker pushes?
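For a rough reproduction, something like the following could generate many concurrent pushes against a registry. The registry host, repository, tag count, and concurrency level are all arbitrary placeholders.

```python
# Rough load-generation sketch: push the same image under many tags in
# parallel to provoke 429 responses. Registry host, repository, and
# concurrency are placeholders.
import subprocess
from concurrent.futures import ThreadPoolExecutor

REGISTRY = "pulp.example.com"   # placeholder
REPO = "load-test/busybox"      # placeholder
SOURCE = "busybox:latest"

subprocess.run(["docker", "pull", SOURCE], check=True)

def push(tag: str) -> int:
    target = f"{REGISTRY}/{REPO}:{tag}"
    subprocess.run(["docker", "tag", SOURCE, target], check=True)
    return subprocess.run(["docker", "push", target]).returncode

with ThreadPoolExecutor(max_workers=20) as pool:
    codes = list(pool.map(push, (f"v{i}" for i in range(100))))

print(f"{codes.count(0)} pushes succeeded, {len(codes) - codes.count(0)} failed")
```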
Expected behavior
The Docker push should succeed.
Additional context
N/A