
✨ add VersionTemplate support for HelmChartProxy #292

Open · wants to merge 1 commit into base: main
Conversation


@Dentrax Dentrax commented Oct 11, 2024

This PR introduces a new field called VersionTemplate in the HelmChartProxy CRD.


Background

Most apps that provide Helm Charts include a version compatibility matrix as part of their versioning policy. For example, see the Descheduler and KEDA documentation. Given this, it's important to offer a way to override or set the Helm Chart version by comparing values against cluster metadata (such as version, region, data center, etc.). This ensures that the desired Helm Chart version is installed based on specific requirements.

Motivation

As early adopters of this Helm provider, we've found that adjusting the Helm Chart version based on metadata can be difficult and often requires workarounds. This negatively impacts the overall developer experience when encountering such cases. Currently, there is no straightforward method for managing chart versions effectively.

Current Workarounds

  1. Use #@ tags

In your HelmChartProxy.yaml file, add the following ytt annotations at the top of the file:

#@ load("@ytt:data", "data")
#@ version = data.values.k8s_version
#@ chart_version = ""
#@ if version.startswith("v1.26"):
#@   chart_version = "2.12.1"
#@ elif version.startswith("v1.23"):
#@   chart_version = "2.9.4"
#@ elif version.startswith("v1.20"):
#@   chart_version = "2.8.4"
#@ else:
#@   chart_version = "unsupported"
#@ end

Then pass version: #@ chart_version

  2. Separate kustomization directories by version

Example:

├── metrics-server-k8s-not-v1.16
│   ├── base
│   │   ├── helmchartproxy.yml
│   │   └── kustomization.yml
│   ├── kustomization.yml
│   └── overlays
│       ├── staging
│       │   ├── kustomization.yml
│       │   └── patch-helmchartproxy.yml
│       └── production
│           ├── kustomization.yml
│           └── patch-helmchartproxy.yml
├── metrics-server-k8s-v1.16
│   ├── base
│   │   ...
│   ├── kustomization.yml
│   └── overlays
│       ...

And then, you will need to set clusterSelector.matchExpressions to add:

- key: version
  operator: <OP>
  values:
    - v1.16

The <OP> should be set to In for metrics-server-k8s-v1.16 and NotIn for metrics-server-k8s-not-v1.16.

Current Limitation

If you manage a flat mono-repo structure for kustomization manifests and need to upgrade a Helm Chart, but some clusters must rely on an older version of the chart, you'll have to completely restructure your folders and kustomization setup to accommodate this. This would create additional complexity and effort in maintaining the repository and HelmChartProxies.

Proposal

This PR introduces a new field to the HelmChartProxy CRD called VersionTemplate:

// VersionTemplate is an inline Go template representing the version for the Helm chart. This template supports Go templating
// to reference fields from each selected workload Cluster and programmatically create and set the version.
// If Version is also specified, VersionTemplate takes precedence.
// +optional
VersionTemplate string `json:"versionTemplate,omitempty"`

It's quite similar to ValuesTemplate. By leveraging Go templates, we can gain greater control over the current version field.

  • You don't need to deal with #@ tags
  • No need to restructure your repository
  • No breaking changes
  • Pretty basic implementation
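For illustration, the ytt workaround above could collapse into a single manifest. This is only a sketch: the apiVersion, the availability of sprig-style helpers such as hasPrefix, and the exact template data path (e.g. .Cluster.spec.topology.version) are assumptions based on how ValuesTemplate works today, not something this PR specifies:

```yaml
apiVersion: addons.cluster.x-k8s.io/v1alpha1
kind: HelmChartProxy
metadata:
  name: descheduler
spec:
  clusterSelector:
    matchLabels:
      deschedulerEnabled: "true"
  repoURL: https://kubernetes-sigs.github.io/descheduler
  chartName: descheduler
  # Hypothetical usage of the proposed field: pick the chart version
  # from the workload cluster's Kubernetes version.
  versionTemplate: |-
    {{- if hasPrefix "v1.26" .Cluster.spec.topology.version -}}2.12.1
    {{- else if hasPrefix "v1.23" .Cluster.spec.topology.version -}}2.9.4
    {{- else if hasPrefix "v1.20" .Cluster.spec.topology.version -}}2.8.4
    {{- else -}}unsupported
    {{- end -}}
```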

Use Cases

  • Set the Helm Chart version based on specific conditions in your metadata values.
  • I have an API that manages the metadata labels for the Cluster CR, allowing full automation of which Helm Chart version should be installed on each Kubernetes version. This enables dynamically retrieving the desired version based on the labels set by the API.
  • Both version and values are core Helm flags that are typically set dynamically during the initial installation of an app, so they shouldn't be static fields. This also aligns versionTemplate with ValuesTemplate.

/kind feature

What this PR does / why we need it:

See above.

Which issue(s) this PR fixes

-

/cc @developer-guy @melikeiremguler

Signed-off-by: Furkan <[email protected]>
Co-authored-by: Batuhan <[email protected]>
Signed-off-by: Furkan <[email protected]>
@k8s-ci-robot
Contributor

@Dentrax: GitHub didn't allow me to request PR reviews from the following users: developer-guy.

Note that only kubernetes-sigs members and repo collaborators can review this PR, and authors cannot review their own PRs.

In response to this: [the PR description quoted above]

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added kind/feature Categorizes issue or PR as related to a new feature. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. labels Oct 11, 2024
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: Dentrax
Once this PR has been reviewed and has the lgtm label, please assign sbueringer for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot
Contributor

Welcome @Dentrax!

It looks like this is your first PR to kubernetes-sigs/cluster-api-addon-provider-helm 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes-sigs/cluster-api-addon-provider-helm has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot k8s-ci-robot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Oct 11, 2024
@k8s-ci-robot
Contributor

Hi @Dentrax. Thanks for your PR.

I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.


@k8s-ci-robot k8s-ci-robot added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Oct 11, 2024
@Jont828
Contributor

Jont828 commented Oct 14, 2024

/ok-to-test

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Oct 14, 2024
@k8s-ci-robot
Contributor

@Dentrax: The following test failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name: pull-cluster-api-addon-provider-helm-e2e
Commit: b8575d7 (link) · Required: true
Rerun command: /test pull-cluster-api-addon-provider-helm-e2e

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.


@Jont828
Contributor

Jont828 commented Oct 14, 2024

Thanks for the contribution! This is definitely an interesting idea. I'm wondering, however, if we can simplify this. If all we need is to map the compatibility matrix of K8s versions to Helm chart versions, it would be pretty complicated to have a ton of statements like {{ if eq .Cluster.metadata.labels.version "1.26" }}v2{{ else }}v1{{ end }}. Instead, I think we might be better off creating a map, and looking up the K8s version of the cluster from the field .ControlPlane.spec.version, something like this:

spec:
  versionMap:
    v1.27: v0.27
    v1.26: v0.26
    v1.25: v0.25

Do you have any use case where we would need to look up any field other than the K8s version to determine which Helm chart version to install?
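For what it's worth, the lookup half of the versionMap idea is small. A rough stdlib-only sketch, where chartVersionFor and the map shape are hypothetical (mirroring the spec.versionMap example above, not anything in the provider today):

```go
package main

import (
	"fmt"
	"strings"
)

// chartVersionFor looks up the chart version for a workload cluster's
// Kubernetes version by matching on its "vMAJOR.MINOR" prefix.
func chartVersionFor(k8sVersion string, versionMap map[string]string) (string, bool) {
	parts := strings.SplitN(strings.TrimPrefix(k8sVersion, "v"), ".", 3)
	if len(parts) < 2 {
		return "", false
	}
	key := "v" + parts[0] + "." + parts[1]
	chart, ok := versionMap[key]
	return chart, ok
}

func main() {
	versionMap := map[string]string{
		"v1.27": "v0.27",
		"v1.26": "v0.26",
		"v1.25": "v0.25",
	}
	chart, ok := chartVersionFor("v1.26.4", versionMap)
	fmt.Println(chart, ok) // v0.26 true
}
```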

@Jont828
Contributor

Jont828 commented Oct 14, 2024

@mboersma @jackfrancis Any thoughts?

@Dentrax
Author

Dentrax commented Oct 14, 2024

Thanks @Jont828 for the versionMap alternative idea! Some questions:

  1. Wouldn't we be dependent only on the 'version' if we applied this method? What if I want to make it more dynamic based on zone info, custom label, environment, etc?
  2. How would you handle version ranges with this method? For example, less than 1.26 but greater than or equal to 1.23?

@Jont828
Contributor

Jont828 commented Oct 15, 2024

My thinking was that if our goal is only to support a compatibility matrix like the one linked for Descheduler, we might want to design a scoped solution around that use case. For ranges, my thinking was that we could try to leverage the semver package, which supports Compare(). We could then take the K8s version and compare it to the values in a map like this, iterating through the chart versions and checking whether the K8s version is either a direct match or falls in the inclusive range.

versionMap:
- v0.10: v1.17+
- v0.9: v1.9-v1.16
- v0.4: v1.8
- v0.3: v1.7

This is, however, just some brainstorming and the idea would need to be fleshed out if we wanted to implement something similar.
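To flesh the brainstorm out slightly, matching a cluster version against range expressions like those above could look roughly like the sketch below. It is a toy: it compares major/minor only, ignores patch and pre-release components, and inRange plus its range grammar ("v1.17+", "v1.9-v1.16", exact "v1.8") are hypothetical; a real implementation would presumably lean on a semver library as suggested above.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor parses "v1.16" or "1.16.3" into major*1000+minor for ordering.
func minor(v string) (int, error) {
	parts := strings.SplitN(strings.TrimPrefix(v, "v"), ".", 3)
	if len(parts) < 2 {
		return 0, fmt.Errorf("not a MAJOR.MINOR version: %q", v)
	}
	maj, err := strconv.Atoi(parts[0])
	if err != nil {
		return 0, err
	}
	min, err := strconv.Atoi(parts[1])
	if err != nil {
		return 0, err
	}
	return maj*1000 + min, nil
}

// inRange reports whether k8sVersion falls in a range expression of the
// form "v1.17+", "v1.9-v1.16", or an exact "v1.8".
func inRange(k8sVersion, expr string) bool {
	v, err := minor(k8sVersion)
	if err != nil {
		return false
	}
	switch {
	case strings.HasSuffix(expr, "+"):
		lo, err := minor(strings.TrimSuffix(expr, "+"))
		return err == nil && v >= lo
	case strings.Contains(expr, "-"):
		bounds := strings.SplitN(expr, "-", 2)
		lo, err1 := minor(bounds[0])
		hi, err2 := minor(bounds[1])
		return err1 == nil && err2 == nil && v >= lo && v <= hi
	default:
		exact, err := minor(expr)
		return err == nil && v == exact
	}
}

func main() {
	// First matching entry wins, most-recent chart first.
	ranges := []struct{ chart, expr string }{
		{"v0.10", "v1.17+"},
		{"v0.9", "v1.9-v1.16"},
		{"v0.4", "v1.8"},
	}
	for _, r := range ranges {
		if inRange("v1.12.5", r.expr) {
			fmt.Println(r.chart) // v0.9
			break
		}
	}
}
```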

@Jont828
Contributor

Jont828 commented Oct 15, 2024

I think if we need information other than just the K8s version, however, we should use a more generic approach like yours. Is it possible to use semver in the versionTemplate field you propose to handle ranges, or did you have any other ideas? I think it would be nice to leverage an existing solution for version validation and comparison.

@Dentrax
Author

Dentrax commented Oct 21, 2024

I think if we need information other than just the K8s version, however, we should use a more generic approach like yours. Is it possible to use semver in the versionTemplate field you propose to handle ranges, or did you have any other ideas? I think it would be nice to leverage an existing solution for version validation and comparison.

I'm not sure I'm following you here. Could you provide an example, as above, but for the versionTemplate (string) field?

Since versionTemplate is a Go template, we would need regex parsing to determine whether a given input is in range format or not. We could then evaluate the ranges with semver.

@Jont828 Jont828 self-requested a review October 21, 2024 17:49
@Jont828
Contributor

Jont828 commented Oct 21, 2024

Sure thing. I'm thinking that because of how Go templates work, it can get pretty long to try to encode the version selection logic. Also, since there aren't that many functions available, we would be limited in what we can express. For example, in your use case here, we only have the function eq to compare the versions and would be unable to use something like startswith (unless there's a way I don't know about).

#@ load("@ytt:data", "data")
#@ version = data.values.k8s_version
#@ chart_version = ""
#@ if version.startswith("v1.26"):
#@   chart_version = "2.12.1"
#@ elif version.startswith("v1.23"):
#@   chart_version = "2.9.4"
#@ elif version.startswith("v1.20"):
#@   chart_version = "2.8.4"
#@ else:
#@   chart_version = "unsupported"
#@ end

Also, it's not very easy to handle version ranges. It would be nice to be able to do something like this, but that would require us to define new functions, since a basic ge comparison wouldn't work on semver strings.

{{ if versionGeq .ControlPlane.spec.version "1.30" }}
v2.5
{{ else if and (versionGeq .ControlPlane.spec.version "1.26") (versionLt .ControlPlane.spec.version "1.30") }}
v2
{{ else }}
v1
{{ end }}

I think that the version templating worked well for valuesTemplate because it primarily involves basic value substitution and maybe an if statement. But I think in order to make templates work for determining the chart version, we would need to find a way to make it easier to use.
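For reference, defining such helpers is straightforward with text/template's FuncMap. The sketch below registers hypothetical versionGeq/versionLt functions (these do not exist in the provider today) backed by a toy major/minor comparison; a real implementation would use a proper semver comparison instead:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"text/template"
)

// rank turns "v1.30.2" into a comparable integer (major*1000 + minor),
// ignoring the patch component. A real implementation would use semver.
func rank(v string) int {
	parts := strings.SplitN(strings.TrimPrefix(v, "v"), ".", 3)
	maj, _ := strconv.Atoi(parts[0])
	min := 0
	if len(parts) > 1 {
		min, _ = strconv.Atoi(parts[1])
	}
	return maj*1000 + min
}

func main() {
	// Hypothetical version-comparison helpers, registered as template funcs.
	funcs := template.FuncMap{
		"versionGeq": func(v, min string) bool { return rank(v) >= rank(min) },
		"versionLt":  func(v, max string) bool { return rank(v) < rank(max) },
	}
	tmpl := template.Must(template.New("version").Funcs(funcs).Parse(
		`{{ if versionGeq .Version "1.30" }}v2.5` +
			`{{ else if and (versionGeq .Version "1.26") (versionLt .Version "1.30") }}v2` +
			`{{ else }}v1{{ end }}`))

	var out strings.Builder
	if err := tmpl.Execute(&out, struct{ Version string }{"v1.27.3"}); err != nil {
		panic(err)
	}
	fmt.Println(out.String()) // v2
}
```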

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Nov 1, 2024
@k8s-ci-robot
Contributor

PR needs rebase.

