
scaler polling active check is crashing our prometheus server #2918

Closed
bamboo12366 opened this issue Apr 19, 2022 · 5 comments
Labels
feature-request, needs-discussion

Comments

@bamboo12366
Contributor

Proposal

In our case, most of our scalers are Prometheus scalers, and they all point to the same Prometheus server address.

When the ScaledObject controller runs the active check in the checkScalers func, it can put a lot of load on the scaler backend (Prometheus in our case, but the scenario should be similar for other scalers). The HPA controller is also constantly querying the scaler, so the load is double what we would expect.

code here:
https://github.com/kedacore/keda/blob/main/pkg/scaling/scale_handler.go#L262

Can we do only one check for scalers that are the same?
For example, implement a hash map called scalerActiveMap and introduce a new function on the Scaler interface, like scalerHash.
Every time we need to check whether a scaler is active, we could read its active status from scalerActiveMap.
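
A minimal sketch of that idea, assuming a hypothetical hashableScaler interface and cache type (these names are illustrative and not KEDA's actual API):

```go
// Hypothetical sketch only: KEDA's Scaler interface has no Hash() method
// today; this just illustrates the proposed scalerActiveMap idea.
package scaling

import (
	"context"
	"sync"
	"time"
)

// hashableScaler is an assumed extension of the existing Scaler interface.
type hashableScaler interface {
	IsActive(ctx context.Context) (bool, error)
	// Hash returns a key derived from the scaler metadata (e.g. the server
	// address), so identical scalers can share one active check.
	Hash() string
}

type activeEntry struct {
	active    bool
	checkedAt time.Time
}

type activeCache struct {
	mu  sync.Mutex
	ttl time.Duration
	m   map[string]activeEntry
}

func newActiveCache(ttl time.Duration) *activeCache {
	return &activeCache{ttl: ttl, m: map[string]activeEntry{}}
}

// isActive returns a cached result when a scaler with the same hash was
// checked recently; otherwise it queries the scaler and stores the result.
func (c *activeCache) isActive(ctx context.Context, s hashableScaler) (bool, error) {
	key := s.Hash()

	c.mu.Lock()
	if e, ok := c.m[key]; ok && time.Since(e.checkedAt) < c.ttl {
		c.mu.Unlock()
		return e.active, nil
	}
	c.mu.Unlock()

	active, err := s.IsActive(ctx)
	if err != nil {
		return false, err
	}

	c.mu.Lock()
	c.m[key] = activeEntry{active: active, checkedAt: time.Now()}
	c.mu.Unlock()

	return active, nil
}
```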

Use-Case

To avoid flooding the scaler backend with requests.

Anything else?

No response

bamboo12366 added the feature-request and needs-discussion labels Apr 19, 2022
@zroubalik
Member

There are 2 parts: for the KEDA Operator you can control this by setting pollingInterval. For the Metrics Server there is an open issue that proposes a solution for this: #2282

@bamboo12366
Contributor Author

Sure, thanks. I can close this issue.

@zroubalik
Member

@bamboo12366 there is currently nobody assigned to tackle #2282, feel free to sign up for it 😄

@bamboo12366
Contributor Author

Sorry @zroubalik, I think there are still a few things that can be discussed.

I totally agree that the external metrics can be cached as proposed in #2282.

But for the active status, I still recommend caching it separately according to the scaler metadata (it could be hashed from some of the metadata fields, decided by each scaler respectively).

Consider a case where a lot of ScaledObjects are configured to read from the same external metrics source (Prometheus, Redis, Kafka, etc.) but with different queries/keys: the operator will check the external source for activity thousands of times, which may be inappropriate.

And if we blindly increase the polling interval, that will also hurt system availability.
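
As a rough illustration of the "hash decided by each scaler" idea, here is a sketch with a made-up metadata struct (not the real Prometheus scaler metadata); which fields to include in the key is the open design question, as noted in the reply below:

```go
// Illustrative only: the real prometheus scaler metadata struct differs.
package scaling

import (
	"crypto/sha256"
	"fmt"
)

type prometheusMetadata struct {
	ServerAddress string
	Query         string
	Threshold     float64
}

// activeCheckKey decides which metadata fields identify "the same" check.
// Hashing only the server address would deduplicate aggressively, but since
// activation also depends on the query, including it is the safer (though
// less deduplicating) choice.
func (m prometheusMetadata) activeCheckKey() string {
	sum := sha256.Sum256([]byte(m.ServerAddress + "|" + m.Query))
	return fmt.Sprintf("prometheus-%x", sum)
}
```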

@zroubalik
Member

zroubalik commented Apr 20, 2022

I see what you are trying to achieve, though the activation is calculated not just from the external metrics source but also from the query/key.
So you can have 2 SOs pointing to the same Kafka broker, but each one to a different topic. One could be active while the other is inactive.
