many "Client.Timeout Exceeded while waiting header" on keda operator log #3610
Comments
The URL seems to be the aad-pod-identity instance. Could you check the logs you have there?
@JorTurFer: that makes sense, as there is a limitation of a maximum of 20 concurrent calls to the IMDS, and we are making many more.
So, could this issue be more related to that than to KEDA itself? I mean, do you think it is KEDA-related? We can keep this open until you move from pod identity to workload identity, but just to know whether we need to go deeper or not.
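For context on the throttling constraint mentioned above: the comment cites a limit of roughly 20 concurrent calls to Azure's IMDS endpoint. Below is a minimal, purely illustrative Go sketch of capping in-flight token requests with a semaphore; the package, function name, and structure are hypothetical and are not KEDA or aad-pod-identity code.

```go
// Illustrative only: cap the number of in-flight IMDS token requests with a
// buffered channel used as a semaphore. The limit of 20 comes from the
// comment above; everything else here is hypothetical.
package imdslimit

import (
	"context"
	"net/http"
)

const maxConcurrentIMDSCalls = 20 // throttling limit mentioned above

var imdsSlots = make(chan struct{}, maxConcurrentIMDSCalls)

// fetchIMDSToken blocks while the limit of in-flight calls is reached, then
// performs a single metadata request with the caller's context.
func fetchIMDSToken(ctx context.Context, client *http.Client, url string) (*http.Response, error) {
	select {
	case imdsSlots <- struct{}{}:
		defer func() { <-imdsSlots }()
	case <-ctx.Done():
		return nil, ctx.Err()
	}

	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Metadata", "true") // required by Azure IMDS
	return client.Do(req)
}
```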
I think the ask for the KEDA team is maybe to catch this exception and ignore it if the 2nd/3rd attempt succeeds.
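For illustration only, here is a hypothetical sketch of the catch-and-retry idea suggested above; the maintainer's following comments explain why KEDA does not retry inside a reconciliation cycle today. All names are made up for the example and are not KEDA code.

```go
// Hypothetical sketch of "retry the request instead of failing the loop".
package tokenretry

import (
	"fmt"
	"net/http"
	"os"
	"time"
)

// doWithRetry retries a body-less request up to `attempts` times when the
// failure is a client timeout, and only surfaces the error if every attempt
// failed. Note: if the timeout came from the caller's context deadline,
// retrying will not help, which is part of the maintainers' point below.
func doWithRetry(client *http.Client, req *http.Request, attempts int) (*http.Response, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		resp, err := client.Do(req)
		if err == nil {
			return resp, nil
		}
		lastErr = err
		// os.IsTimeout matches "Client.Timeout exceeded while awaiting headers".
		if !os.IsTimeout(err) {
			break // not a timeout, don't retry
		}
		time.Sleep(time.Duration(i+1) * 500 * time.Millisecond) // naive backoff
	}
	return nil, fmt.Errorf("request failed after %d attempt(s): %w", attempts, lastErr)
}
```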
We should also check if there is a way to optimize the integration to reduce the amount of calls. What do you think?
I don't think that we should ignore the error, because we don't retry it within the same cycle. The reconciliation loop fails and will be retried, but not only the request: the whole reconciliation loop. I mean, inside the cycle we don't know whether another cycle will be executed; we could assume it, but we aren't 100% sure, so we cannot rely on future executions.

WRT timeouts, this uses the default timeout for every HTTP request inside KEDA (3 seconds). This value (and how to change it) is reflected in the docs; you can modify it just by setting the corresponding environment variable.

Finally, regarding the verbosity, the log already prints the scaler type (azure_servicebus_scaler), the ScaledObject (frameextraction) where it happened, and the namespace (vi-be-map-dev9):

ERROR azure_servicebus_scaler error {"type": "ScaledObject", "namespace": "vi-be-map-dev9", "name": "frameextraction", "error": "Get \"http://169.254.169.254/metadata/identity/oauth2/token?api-version=2018-02-01&resource=https%3A%2F%2Fservicebus.azure.net%2F\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"}

What do you think we could append to make it clearer? I mean, the timeout is already known, but maybe something like …
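As a side note on where the quoted error text comes from: Go's http.Client aborts a request and reports "Client.Timeout exceeded while awaiting headers" when its Timeout elapses before the response headers arrive. The sketch below shows a client with the 3-second default stated above, overridable via an environment variable; the variable name in the sketch is a placeholder, not necessarily the one documented by KEDA.

```go
// Minimal sketch of a client whose per-request timeout produces the quoted
// error when IMDS does not answer in time. The env var name is a placeholder;
// use the variable documented in the KEDA docs to change the real default.
package httptimeout

import (
	"net/http"
	"os"
	"strconv"
	"time"
)

func newClient() *http.Client {
	timeout := 3 * time.Second // the default per-request HTTP timeout mentioned above
	if v := os.Getenv("HTTP_DEFAULT_TIMEOUT_MS"); v != "" { // placeholder name
		if ms, err := strconv.Atoi(v); err == nil {
			timeout = time.Duration(ms) * time.Millisecond
		}
	}
	return &http.Client{Timeout: timeout}
}
```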
Yes, that was my thought, as the Go stack is almost meaningless.
@zroubalik and I have been talking recently about how to apply this. It'll improve the amount of calls for sure, but maybe we can try to improve it more, IDK.
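One common way to cut IMDS traffic in general is to cache the AAD token until shortly before it expires instead of requesting a new one on every reconciliation. The sketch below only illustrates that general technique and is not necessarily the optimization being discussed here; every name in it is hypothetical.

```go
// Illustrative token cache: refresh from IMDS only when the cached token is
// missing or close to expiry. Not KEDA code; all names are hypothetical.
package tokencache

import (
	"context"
	"sync"
	"time"
)

type token struct {
	Value     string
	ExpiresOn time.Time
}

type cachingProvider struct {
	mu      sync.Mutex
	cached  token
	refresh func(ctx context.Context) (token, error) // e.g. the actual IMDS call
}

// Token returns the cached token while it is still valid, refreshing it
// only when it is missing or within 5 minutes of expiry.
func (p *cachingProvider) Token(ctx context.Context) (token, error) {
	p.mu.Lock()
	defer p.mu.Unlock()
	if p.cached.Value != "" && time.Until(p.cached.ExpiresOn) > 5*time.Minute {
		return p.cached, nil
	}
	t, err := p.refresh(ctx)
	if err != nil {
		return token{}, err
	}
	p.cached = t
	return t, nil
}
```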
Do you think that adding something like … ?
I think that would be helpful, yes.
Report
When the KEDA pod launches, it starts normally and manages to reconcile all queues and all definitions correctly.
There are a lot of "healthy" info reports.
Then, after a few minutes, the log is flooded with "Client.Timeout exceeded while awaiting headers".
Expected Behavior
Actual Behavior
Steps to Reproduce the Problem
Logs from KEDA operator
KEDA Version
2.7.1
Kubernetes Version
1.23
Platform
Microsoft Azure
Scaler Details
Azure Service Bus
Anything else?
No response