Helm Release shows spurious diff when inputs are unknown #2660
Comments
Hi @ffMathy, I'm still going to ask you to provide a repro example. Issues without a code snippet are not actionable for us. Maybe you can isolate a small program from your NDA-covered code and remove all proprietary details. Thank you for your understanding. |
Hi @ffMathy, I will catch up with you on this and will create an example without needing code snippets from you! |
Wow, that sounds great! Thank you. |
@mjeffryes which commit was this fixed in? Or what release? |
My apologies @ffMathy, was just grooming tickets that have been awaiting-feedback for more than 2 weeks; missed that the ball is actually in our court for this one! |
@mjeffryes we're meeting with @ffMathy tomorrow to take a deeper look into this! |
I have a possible explanation: some of the inputs - maybe one of the chart values - contain unknowns. In that case, the provider behaves differently, in a way that would produce the above diff. I would advocate for a fix on the provider side. To explain further, here's the relevant code in pulumi-kubernetes/provider/pkg/provider/helm_release.go, lines 357 to 375 at 1944a52.
Simply put, |
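As a minimal sketch of the pattern described above (the resource names here are illustrative, not taken from the issue): any output of a resource that has not been created yet is unknown during preview, and that unknown-ness propagates to whatever consumes it, including Helm Release inputs.

```typescript
import * as pulumi from "@pulumi/pulumi";
import * as command from "@pulumi/command";

// On the first `pulumi preview`, this resource does not exist yet,
// so its stdout output is an unknown value.
const probe = new command.local.Command("probe", { create: "echo hello" });

// Anything derived from an unknown is itself unknown at preview time.
// Passing a value like this into a helm.v3.Release input (for example its
// description or values) exercises the unknown-inputs code path discussed above.
export const derived = pulumi.interpolate`result: ${probe.stdout}`;
```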
That sounds very plausible! None of the values that are in the diff are specified by us. It would be great if we could also seal this off with a unit test. |
@EronWright, I had the session with @ffMathy, and here is code you can use which is very close to @ffMathy's setup:

import * as pulumi from "@pulumi/pulumi";
import * as kubernetes from "@pulumi/kubernetes";
import * as command from "@pulumi/command";

const someNamespace = new kubernetes.core.v1.Namespace('some-namespace', {
    metadata: {
        name: 'some-namespace'
    }
});

const x = new command.local.Command('some-command', {
    update: "ls -la",
    create: "ls -la",
});

export const commandResult = x.stdout;

new kubernetes.helm.v3.Release('some-chart', {
    chart: 'oci://ghcr.io/dirien/charts/minecraft-exporter',
    version: '0.11.1',
    name: 'some-stuff',
    timeout: 60 * 60 * 3,
    namespace: someNamespace.metadata.name,
    atomic: true,
    cleanupOnFail: true,
    description: x.stdout,
    values: {
        replicaCount: 1,
        dsd: 1,
        sds: 12,
    },
}, {
    customTimeouts: {
        create: '30m',
        update: '6h',
    },
});

This results in the following pulumi preview:

➜ pulumi preview
Previewing update (dev)
View in Browser (Ctrl+O): https://app.pulumi.com/dirien/lego-helm/dev/previews/26dabdc2-8f56-4736-b405-fdec73ce526c
Type Name Plan Info
pulumi:pulumi:Stack lego-helm-dev
+ ├─ command:local:Command some-command create
~ └─ kubernetes:helm.sh/v3:Release some-chart update [diff: +compat-allowNullValues,apiVersion,checksum,createNamespace,dependencyUpdate,devel,disableCRDHooks,disableOpenapiValidation,disableWebhooks,forceUpdate,keyring,kind,lint,pos
Outputs:
+ commandResult: output<string>
Resources:
+ 1 to create
~ 1 to update
2 changes. 2 unchanged

So using a resource output from a computed field as an input to the Release object results in the situation @ffMathy reported! |
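A possible stopgap while waiting for a provider-side fix, offered as an assumption rather than anything proposed in this thread: the engine-level ignoreChanges resource option can suppress diffs on the listed input properties. Whether it is appropriate depends on which properties your preview actually reports; the property names below are purely illustrative.

```typescript
import * as kubernetes from "@pulumi/kubernetes";

// Sketch only: tell the engine not to act on diffs for properties the program
// never sets explicitly. Replace the list with whatever your own preview
// reports as spuriously changing.
new kubernetes.helm.v3.Release("some-chart", {
    chart: "oci://ghcr.io/dirien/charts/minecraft-exporter",
    version: "0.11.1",
    name: "some-stuff",
    values: { replicaCount: 1 },
}, {
    ignoreChanges: ["checksum", "createNamespace", "devel"],
});
```

This only hides the symptom; the provider fix discussed later in the thread addresses the root cause.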
Is there an ETA on this? Right now, it causes our Pulumi to reprovision the full Helm release in production every time, leading to around 3 - 5 minutes of downtime per deploy. |
CC @dirien. This is quite critical to us. |
I second that. I'm currently encountering the same behavior. |
Maybe we need to tag @mjeffryes to get an ETA instead. Not sure if this has been forgotten. At some point it seemed to be progressing, but now it seems to have stagnated. |
To provide an update, this is my current task and I expect to deliver a fix next week. |
@ffMathy would you clarify what you think the expected behavior should be? In the repro case, the |
I'd just like to know which of the values are varying, because right now it shouldn't deploy every time; at first glance, these values shouldn't change. So fixing the diff could be enough; then at least I'd understand the cause of it. Logging some warnings could also help, but that might create more confusion than clarity. |
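To see exactly which properties the engine thinks changed (the one-line summary in the preview above is truncated), the Pulumi CLI's detailed diff output is one way to inspect this; the flag below is a general CLI feature, not specific to this issue:

```sh
pulumi preview --diff
```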
Brief update: this is still my main work task. Some framework code was needed first to make this issue practical to solve. The root cause is that the handling of unknown inputs is very coarse-grained, further aggravated by a bug. |
Interesting. Thanks for the updates. Please keep that coming! 😍 |
I posted a PR to solve the issue: #2822 |
Awesome work! How long does it usually take for it to be released after merge? |
We plan to cut a new release on Tuesday. |
The fix is now available in v4.8.0, enjoy! |
Yay, thanks! Great work. |
I'd hate to spoil the party, but for me, this issue persists even in version
The first time I run this, I get the expected plan to install the cluster autoscaler. Then I change nothing and run
This is going to constantly offer a change, notice the
Please advise, |
Hi @IdoOzeri, this issue is a few months old now, so your comment is likely to get lost here; I suggest opening a new issue and linking to this one. |
What happened?
Getting the following for my Helm release:
It always happens after doing a refresh and then an up. For those exact values.
Example
No example. Under NDA.
Output of pulumi about
CLI
Version 3.92.0
Go Version go1.21.3
Go Compiler gc
Host
OS debian
Version 12.2
Arch aarch64
Additional context
No response
Contributing
Vote on this issue by adding a 👍 reaction.
To contribute a fix for this issue, leave a comment (and link to your pull request, if you've opened one already).