fix(preconfigured job): Fix for k8s manifest resources limit/request … #4048
base: master
Conversation
// Solution patch to resolve spec.template.spec.container.resources.limit
// and spec.template.spec.container.resources.requests parsed by io.kubernetes.client library
if (preconfiguredMap["manifest"] != null) {
PreconfiguredJobStage is used for both Kubernetes and Titus, so it shouldn't have any Kubernetes-specific code here. I think a good approach would be to replace the line

Map<String, Object> preconfiguredMap = objectMapper.convertValue(preconfiguredJob, Map.class)

with a new method call implemented by PreconfiguredJobStageProperties that does the actual conversion. That way you can have one implementation for Titus in TitusPreconfiguredJobProperties and another, Kubernetes-specific one in KubernetesPreconfiguredJobProperties.

Please take a look at io.kubernetes.client.util.Yaml for converting between Java objects and YAML for Kubernetes. We're trying to avoid special cases like this, where we need to manually set fields as part of a transformation. Today it's requests/limits, but we don't know how many other edge cases there are where SnakeYAML/ObjectMapper produces the wrong result when serializing k8s manifests. Using the official YAML converter from the Kubernetes library could potentially avoid this.
I'll check the required changes and update the code.
Hi - running into this exact issue, will start watching this changeset.
Fix for #6257
Issue Summary:
When putting limits/requests on a custom job through ~/.hal/default/profiles/orca-local.yml, the custom stage breaks with an error.
About the fix:
To produce a manifest map that kubectl accepts, each "resources.requests/limits.memory/cpu" field must hold a single value. This can be achieved by intercepting the preconfiguredMap object in com.netflix.spinnaker.orca.clouddriver.pipeline.job.PreconfiguredJobStage and updating it with the required "resources" field format before passing it further.
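The sketch below illustrates one way that interception could look. The helper class, its method names, and the assumed {number, format} shape that a default ObjectMapper conversion gives each Quantity are illustrative assumptions, not the actual PR code.

```java
import java.math.BigDecimal;
import java.util.Arrays;
import java.util.List;
import java.util.Map;

import io.kubernetes.client.custom.Quantity;

// Hypothetical helper: walks the converted manifest map and rewrites each
// resources.requests/limits entry into the single scalar value kubectl expects.
class ResourceQuantityNormalizer {

  @SuppressWarnings("unchecked")
  static void normalize(Map<String, Object> preconfiguredMap) {
    Map<String, Object> manifest = (Map<String, Object>) preconfiguredMap.get("manifest");
    if (manifest == null) {
      return;
    }
    Map<String, Object> spec = (Map<String, Object>) manifest.get("spec");
    Map<String, Object> template = spec == null ? null : (Map<String, Object>) spec.get("template");
    Map<String, Object> podSpec = template == null ? null : (Map<String, Object>) template.get("spec");
    List<Map<String, Object>> containers =
        podSpec == null ? null : (List<Map<String, Object>>) podSpec.get("containers");
    if (containers == null) {
      return;
    }
    for (Map<String, Object> container : containers) {
      Map<String, Object> resources = (Map<String, Object>) container.get("resources");
      if (resources == null) {
        continue;
      }
      for (String section : Arrays.asList("requests", "limits")) {
        Map<String, Object> quantities = (Map<String, Object>) resources.get(section);
        if (quantities != null) {
          // Rewrite e.g. {number=512, format=BINARY_SI} into the single value "512Mi".
          quantities.replaceAll((name, value) -> toSingleValue(value));
        }
      }
    }
  }

  @SuppressWarnings("unchecked")
  private static Object toSingleValue(Object value) {
    if (!(value instanceof Map)) {
      return value; // already a plain scalar such as "500m"
    }
    Map<String, Object> quantityMap = (Map<String, Object>) value;
    Object number = quantityMap.get("number");
    Object format = quantityMap.get("format");
    if (number == null || format == null) {
      return value; // unexpected shape: leave it untouched
    }
    return new Quantity(new BigDecimal(number.toString()), Quantity.Format.valueOf(format.toString()))
        .toSuffixedString();
  }
}
```

PreconfiguredJobStage could then call ResourceQuantityNormalizer.normalize(preconfiguredMap) right after the ObjectMapper conversion, before the values are copied into the stage context.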