Validate Dask Cluster Names #871
Conversation
This commit introduces cluster name validation in order to avoid the invalid state in which a `DaskCluster` resource with a too-long or RFC-1123-noncompliant name is created but cannot be deleted while the operator retries infinitely to create a scheduler service (see dask#826 for more details on this bug). Issues fixed: dask#870 dask#826
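The check described above can be sketched roughly as follows. This is an illustrative sketch, not the operator's actual code: the function name and exception type are assumptions, but the constraints follow Kubernetes conventions (Service names are RFC 1123/1035 labels of at most 63 characters, and the scheduler Service is named `<cluster-name>-scheduler`, leaving 53 characters for the cluster name).

```python
import re

# Service names must be valid RFC 1123 labels: lowercase alphanumerics
# and hyphens, starting and ending with an alphanumeric character.
RFC_1123_PATTERN = re.compile(r"^[a-z0-9]([-a-z0-9]*[a-z0-9])?$")

SCHEDULER_SUFFIX = "-scheduler"
MAX_SERVICE_NAME_LENGTH = 63  # Kubernetes Service name length limit


def validate_cluster_name(name: str) -> None:
    """Raise ValueError if `name` cannot yield a valid scheduler Service name.

    (Hypothetical helper for illustration; the real implementation lives
    in the dask-kubernetes operator.)
    """
    # Compute the limit from the suffix rather than hard-coding 53.
    max_len = MAX_SERVICE_NAME_LENGTH - len(SCHEDULER_SUFFIX)
    if len(name) > max_len:
        raise ValueError(
            f"Cluster name {name!r} exceeds {max_len} characters, so the "
            f"scheduler service name would exceed {MAX_SERVICE_NAME_LENGTH}"
        )
    if not RFC_1123_PATTERN.match(name):
        raise ValueError(f"Cluster name {name!r} is not a valid RFC 1123 label")
```

Performing this check both in the operator (before creating child resources) and client-side in `KubeCluster` surfaces the error immediately instead of leaving the resource stuck.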
Thanks for raising this!
I agree an admission time check would be cleaner. I wonder if there is a way to use validation rules to achieve this.
We have a long-term goal to add better support for hooks in the controller. I don't mind adding the dependencies, but the CI and test machinery is the biggest roadblock, mainly because our workflow is to run the controller outside of the cluster when developing/testing.
In the meantime I'm happy to merge this PR. Would you mind adding some tests that check that clusters with invalid names go into an error state as expected?
Also it would be nice to move the checking logic somewhere common and also use it in KubeCluster as a client-side check.
…Cluster init, and add tests
Thanks @jacobtomlinson for your quick review! I've added some tests and updated the PR. Good point about the validation rules. It looks like you could specify them directly in the CRD schema.
The only minor downside of that approach is that the 53-character limit would be hard-coded instead of computed from the scheduler service name suffix. Let me know which you prefer to go with.
Thanks @jo-migo.
I think given that validation rules are only Kubernetes 1.29+ we should definitely accept the changes here. It leaves us in a much better state than we are currently in.
However, I think I would prefer to go down the route of validation rules over an admission webhook in the future. I agree hardcoding the length isn't ideal; perhaps a mitigation for future issues is to add a comment where we set the `-scheduler` suffix explaining that the CRDs need updating if this is ever changed.
I'll merge this now. @jo-migo do you have any interest in exploring the validation rules change in a follow-on PR?
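For reference, the CRD validation rule discussed above might look roughly like this at the root of the `DaskCluster` schema. This is a hypothetical sketch, not the actual CRD: CEL rules at the schema root can reference `self.metadata.name`, and the 53 here (63-character Service name limit minus `len("-scheduler")`) would indeed be hard-coded rather than computed.

```yaml
# Hypothetical excerpt of the DaskCluster CRD's openAPIV3Schema root.
x-kubernetes-validations:
  - rule: "size(self.metadata.name) <= 53"
    message: "name must be at most 53 characters so '<name>-scheduler' fits the 63-character Service name limit"
```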
Issues
#870
#826
Details
This PR introduces cluster name validation in order to avoid the invalid state in which a `DaskCluster` resource with a too-long or RFC-1123-noncompliant name is created but cannot be deleted while the operator retries infinitely to create a scheduler service (see #826 for more details).
Alternative Suggestion
It would be nicer (from a UX perspective) to do this validation with a validating admission webhook, which would directly reject the cluster creation with an error message, but this would require adding `cryptography` and `certbuilder` as dependencies to the operator, or mounting user-provided SSL certs into the operator container. In the current approach, the request to create the cluster succeeds, and one has to look into the events to understand why the cluster is then in an `Error` state.
Proof
In those examples, an `error` event is emitted (as shown) and the clusters are able to be manually deleted via `kubectl`.