
Bugfix/#18545 check druid default rules #102

Closed

Conversation

ljblancoredborder
Member

Checks that the Druid coordinator has default rules and that none of them loads data forever.
We want to limit the retention duration of all datasources by default, so that Druid does not keep infinite data and end up getting stuck.

The test included here tries to get the rules from the service. The curl itself won't fail; however, its stdout can be empty depending on the situation:
If we are testing against the host running the service (e.g. 10.0.209.22), the response is an array of rules, and no default load rule may be "forever".
If we are testing against another node (e.g. 10.0.209.20), the response is empty, and the rest of the test is skipped.
For now we accept both outcomes as correct; a sketch of the check is included below. So the question is:
Do all nodes in a cluster behave the same, and should the test be improved so that it never skips?
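To make the intent of the check concrete, here is a minimal sketch (not the actual test shipped in this PR), assuming the standard Druid coordinator rules endpoint /druid/coordinator/v1/rules on the default coordinator port 8081, and that cluster-wide default rules appear under the "_default" key of the response:

```python
import json
import subprocess
import sys

# Hypothetical target; the real check runs against the node under test
# (e.g. 10.0.209.22). 8081 is Druid's default coordinator port.
RULES_URL = "http://10.0.209.22:8081/druid/coordinator/v1/rules"


def fetch_rules_json():
    """Run curl against the coordinator rules endpoint and return its stdout."""
    result = subprocess.run(
        ["curl", "-s", RULES_URL],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()


def main():
    body = fetch_rules_json()
    if not body:
        # Empty stdout: this node does not answer with rules (it is not the
        # coordinator host), so the rest of the check is skipped.
        print("SKIP: empty response, node does not expose coordinator rules")
        return 0

    rules = json.loads(body)
    # Cluster-wide default rules are returned under the "_default" key.
    default_rules = rules.get("_default", [])
    if not default_rules:
        print("FAIL: coordinator has no default rules")
        return 1

    forever_loads = [r for r in default_rules if r.get("type") == "loadForever"]
    if forever_loads:
        print(f"FAIL: default load rule keeps data forever: {forever_loads}")
        return 1

    print("PASS: default rules exist and none of them loads data forever")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```

Whether the empty-response case should really count as a pass, rather than the check being run only on the coordinator host, is exactly the open question above.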

@ljblancoredborder added the bug, help wanted, question and size/S labels Sep 24, 2024

the-label-bot bot commented Sep 24, 2024

The Label Bot has predicted the following:


@manegron
Member

manegron commented Oct 7, 2024

I'm closing this because the rules are optional, and we shouldn't enforce them at the test level.

@manegron manegron closed this Oct 7, 2024