Add aks cluster extension feature azurerm_kubernetes_cluster_extension #805
Conversation
Force-pushed from 06797ad to 30726e8.
Force-pushed from 11c9e10 to efcdda3.
Pipeline run failure
Then eventually the field gets populated:
But then there is not enough time for the extension to get created: in this run, the field wasn't populated at all.
Thank you for your contribution @drew0ps, I left a few comments for you to consider.
Review comments on examples/kubernetesconfiguration/v1beta1/kubernetesclusterextension.yaml (4 threads, outdated and resolved).
/test-examples="examples/containerservice/v1beta1/kubernetescluster.yaml" |
Regardless of the problem of not being able to create the Kubernetes cluster, what we need to pay attention to in this PR is the group name of the resource we added. We should add
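For illustration only (the specific group name the reviewer suggested is cut off above): the group chosen in the provider config determines both the example's directory and the manifest's apiVersion. Judging from the test paths used later in this thread, the resource ended up under the containerservice group, so a manifest header would look roughly like this hypothetical sketch:

```yaml
# Hypothetical sketch: group and version inferred from the example paths used
# later in this PR (examples/containerservice/v1beta1/kubernetesclusterextension.yaml).
apiVersion: containerservice.azure.upbound.io/v1beta1
kind: KubernetesClusterExtension
metadata:
  name: example-extension
```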
Could you also squash your commits? It would help keep the commit history cleaner and more organized.
Force-pushed from 5537229 to 45d3e5a.
ShortGroup config fixed here, commits squashed. Will squash them again when it's ready for merge. Regenerating code and testing again.
Force-pushed from f9d30ae to 9f4f9dd.
@turkenf I ran uptest locally with make e2e and that successfully created my resources - although it seems make e2e doesn't work as I expected, as it produces an error.
And the error it produces:
Force-pushed from f46893c to eb7e3ec.
/test-examples="examples/containerservice/v1beta1/kubernetesclusterextension.yaml" |
I stopped the kind cluster and recreated it with make e2e - now I get a different behaviour. My resources are still created successfully:
but I no longer see the error message complaining about a missing resource. Instead, it just hangs and then times out:
Fixed the issue creating the Kubernetes Cluster resource. Could you please add the
Force-pushed from eb7e3ec to 526ae9b.
@turkenf Thanks for fixing the AKS deployment topic. I'm afraid this still doesn't unblock this particular PR - the extension is stuck in the creating state seemingly indefinitely. This is really weird, because the issue is not present when I apply with make run or even with make e2e (even though the annotation command seems to be broken in the make e2e case, so local uptests can never succeed).
@turkenf Thank you for the fix regarding the missing error messages, indeed the last run is more vocal about the issue:
/test-examples="examples/containerservice/v1beta1/kubernetesclusterextension.yaml"
/test-examples="examples/containerservice/v1beta1/kubernetescluster.yaml"
The tests never get past the apply stage, even for an already merged resource examples/containerservice/v1beta1/kubernetescluster.yaml, even though all the statuses are set to true and ready. I was able to reproduce this issue using make e2e on my own Azure sub and I saw the error event in the resource group level activity logs complaining about the following:
I suspect the same thing is happening with the pipeline.
/test-examples="examples/containerservice/v1beta2/kubernetesclusterextension.yaml" Edit: This was an error. |
/test-examples="examples/containerservice/v1beta2/kubernetescluster.yaml"
Thanks @sergenyalcin for analyzing the Uptest control plane dump. Let me report the findings he shared with me in private:

2024-09-11T16:20:48Z DEBUG provider-azure Diff detected {"uid": "0fef3b05-9b1d-425f-ac87-94f6bd4bf38c", "name": "example-kc-kce", "gvk": "containerservice.azure.upbound.io/v1beta1, Kind=KubernetesCluster", "instanceDiff": "*terraform.InstanceDiff{mu:sync.Mutex{state:0, sema:0x0}, Attributes:map[string]*terraform.ResourceAttrDiff{\"default_node_pool.0.upgrade_settings.#\":*terraform.ResourceAttrDiff{Old:\"1\", New:\"0\", NewComputed:false, NewRemoved:false, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}, \"default_node_pool.0.upgrade_settings.0.max_surge\":*terraform.ResourceAttrDiff{Old:\"10%\", New:\"\", NewComputed:false, NewRemoved:true, NewExtra:interface {}(nil), RequiresNew:false, Sensitive:false, Type:0x0}}, Destroy:false, DestroyDeposed:false, DestroyTainted:false, RawConfig:cty.NilVal, RawState:cty.NilVal, RawPlan:cty.NilVal, Meta:map[string]interface {}(nil)}"}

@sergenyalcin noted that a Terraform provider version bump might have introduced the regression, if the Uptests were ever successful. I triggered another Uptest for the cluster resource, this time with
Of course the Uptest failed, because of the region error 😒: "Creating a free tier cluster is unavailable at this time in region westeurope."
@mergenci Thanks a lot for following up and sharing insights - it seems we are unable to proceed with the uptests because of the issue you described. What's weird to me is that even though the update loop occurs and the tests fail, the KubernetesCluster still reports a ready state in its status field.
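For context, the diff above shows the provider repeatedly trying to drop the server-populated default_node_pool.0.upgrade_settings.0.max_surge value ("10%") because the example manifest omits it. A minimal sketch of the corresponding fix later referenced in the commit message (field names assume upjet's camelCase mapping of the Terraform schema and are not copied verbatim from the PR):

```yaml
apiVersion: containerservice.azure.upbound.io/v1beta1
kind: KubernetesCluster
metadata:
  name: example-kc-kce
spec:
  forProvider:
    # ...other required fields (location, dnsPrefix, identity, ...) omitted...
    defaultNodePool:
      - name: default
        nodeCount: 1
        vmSize: Standard_D2_v2
        upgradeSettings:
          - maxSurge: "10%"  # matches the Azure-side default, so no diff is detected
```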
Force-pushed from fbdc471 to e5caa7a.
Way better state now - uptests progressed until the case-2 import, where I ran into an issue though:
Looking at past issues, I've noticed that this usually indicates a problem with the external name config. However, it seems good to me. TF doc:
External name config:
Which seems to be correct.
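For reference, a rough sketch of what the import identity could look like for this resource, assuming the import ID format documented for azurerm_kubernetes_cluster_extension in the Terraform registry and assuming the external name is configured to hold the full Azure resource ID, as it is for many provider-azure resources (the actual external name configuration and the TF doc excerpt are not quoted above):

```yaml
apiVersion: containerservice.azure.upbound.io/v1beta1
kind: KubernetesClusterExtension
metadata:
  name: example-extension
  annotations:
    # Assumed layout: the managed cluster's ARM ID followed by the
    # Microsoft.KubernetesConfiguration extensions segment and the extension name.
    crossplane.io/external-name: /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/example-rg/providers/Microsoft.ContainerService/managedClusters/example-cluster/providers/Microsoft.KubernetesConfiguration/extensions/example-extension
```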
Force-pushed from f561813 to a28de04.
…s to the examples folder and made slight changes. Added a simple flux extension install for automated tests. Specifying maxSurge in the AKS cluster spec prevents the update loop of the cluster and unblocks tests. Signed-off-by: drew0ps <[email protected]>
Force-pushed from 18abe85 to 96ee046.
/test-examples="examples/containerservice/v1beta1/kubernetesclusterextension.yaml"
Closing this in favor of the much cleaner #818.
Description of your changes
Added the resource azurerm_kubernetes_cluster_extension, which is already present in the underlying Terraform release but was not made available through the provider.
Fixes #731
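A hypothetical minimal example manifest for the new resource, loosely modeled on the Flux extension install mentioned in the commit message (field names follow upjet's usual camelCase mapping and are not copied from the PR diff):

```yaml
apiVersion: containerservice.azure.upbound.io/v1beta1
kind: KubernetesClusterExtension
metadata:
  name: example-extension
spec:
  forProvider:
    # "microsoft.flux" is the Azure-defined extension type for the Flux (GitOps) add-on.
    extensionType: microsoft.flux
    # A selector lets the provider resolve the cluster ID from a KubernetesCluster
    # managed resource in the same example, which is what automated tests rely on.
    clusterIdSelector:
      matchLabels:
        testing.upbound.io/example-name: example-kc-kce
```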
I have:
- Run make reviewable to ensure this PR is ready for review.
- Added backport release-x.y labels to auto-backport this PR if necessary.
How has this code been tested
A manual test according to the documented process ran through. Caveat: used clusterId instead of a clusterIdSelector.
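For context on the caveat, the two reference styles differ only in how the cluster ID is supplied (illustrative snippet, not taken from the PR):

```yaml
spec:
  forProvider:
    # Manual test: the cluster's ARM ID is set directly.
    clusterId: /subscriptions/.../resourceGroups/.../providers/Microsoft.ContainerService/managedClusters/example-cluster
    # Automated tests would instead use a selector so the ID is resolved at runtime:
    # clusterIdSelector:
    #   matchLabels:
    #     testing.upbound.io/example-name: example-kc-kce
```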
Creation:
Deletion:
The resource remains in k8s until the external resource is deleted, as expected.
Edit:
This does not seem to work, however. I added some diffs under the non-immutable field configurationSettings. The monolith logs the following debug messages:
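(The debug messages themselves are not included above.) For reference, the kind of edit described, a change under the mutable configurationSettings map, would look roughly like this hypothetical snippet; the keys are illustrative, not the ones actually tested:

```yaml
spec:
  forProvider:
    extensionType: microsoft.flux
    configurationSettings:
      # Hypothetical settings; any key/value change here should trigger an
      # in-place update of the extension rather than a replacement.
      helm-controller.enabled: "true"
      multiTenancy.enforce: "false"
```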