
clusterctl upgrade tests are flaky #9688

Closed
killianmuldoon opened this issue Nov 8, 2023 · 41 comments
Labels

  • area/clusterctl: Issues or PRs related to clusterctl
  • area/e2e-testing: Issues or PRs related to e2e testing
  • help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
  • kind/flake: Categorizes issue or PR as related to a flaky test.
  • priority/important-soon: Must be staffed and worked on either currently, or very soon, ideally in time for the next release.
  • triage/accepted: Indicates an issue or PR is ready to be actively worked on.

Comments

@killianmuldoon
Contributor

The clusterctl upgrade tests have been significantly flaky in the last couple of weeks, with flakes occurring on main, release-1.4, and release-1.5.

The flakes are occurring across many forms of the clusterctl upgrade tests, including v0.4=>current, v1.3=>current, and v1.0=>current.

The failures take a number of forms, including but not limited to those captured in the triage below.

There's an overall triage for tests with "clusterctl upgrades" in the name here: https://storage.googleapis.com/k8s-triage/index.html?date=2023-11-08&job=.*-cluster-api-.*&test=.*clusterctl%20upgrades.*&xjob=.*-provider-.*

/kind flake

@k8s-ci-robot k8s-ci-robot added kind/flake Categorizes issue or PR as related to a flaky test. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Nov 8, 2023
@killianmuldoon
Contributor Author

@kubernetes-sigs/cluster-api-release-team These flakes are very disruptive to the test signal right now. It would be great if someone could prioritize investigating and fixing them ahead of the releases.

/triage accepted

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Nov 8, 2023
@killianmuldoon
Contributor Author

/help

@k8s-ci-robot
Contributor

@killianmuldoon:
This request has been marked as needing help from a contributor.

Guidelines

Please ensure that the issue body includes answers to the following questions:

  • Why are we solving this issue?
  • To address this issue, are there any code changes? If there are code changes, what needs to be done in the code and what places can the assignee treat as reference points?
  • Does this issue have zero to low barrier of entry?
  • How can the assignee reach out to you for help?

For more details on the requirements of such an issue, please see here and ensure that they are met.

If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-help command.

In response to this:

/help

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. label Nov 8, 2023
@killianmuldoon
Contributor Author

Note that each branch has a different number of variants of this test (enumerated below), which may be responsible for some unevenness in the signal:

  • release-1.4: 7
  • release-1.5: 6
  • main: 5

@adilGhaffarDev
Contributor

I am looking into this one.

@furkatgofurov7
Member

I will be pairing up with @adilGhaffarDev on this one since it is happening more frequently.

/assign @adilGhaffarDev

@adilGhaffarDev
Contributor

Adding a bit more explanation regarding the failures. We have three failures in clusterctl upgrade:

  • exec.ExitError: this one happens at "Applying the cluster template yaml to the cluster". I opened a PR against release-1.4 and changed KubectlApply to a controller-runtime Create, and also added an ignore for alreadyExists as @killianmuldoon suggested; so far I haven't seen this failure on my 1.4 PR (ref: 🌱 Adding to the test framework the equivalent to kubectl create -f. #9731). It still fails, but not at Apply, so I think changing KubectlApply to Create and ignoring alreadyExists fixes this one. I will create a PR on main too. (A rough sketch of this approach is shown after this list.)
  • failed to discovery ownerGraph types: this one happens at "Running Post-upgrade steps against the management cluster". I have looked into the logs and I am seeing this error:
{"ts":1700405055471.4797,"caller":"builder/webhook.go:184","msg":"controller-runtime/builder: Conversion webhook enabled","v":0,"GVK":"infrastructure.cluster.x-k8s.io/v1beta1, Kind=DockerClusterTemplate"}
{"ts":1700405055471.7637,"caller":"builder/webhook.go:139","msg":"controller-runtime/builder: skip registering a mutating webhook, object does not implement admission.Defaulter or WithDefaulter wasn't called","v":0,"GVK":"infrastructure.cluster.x-k8s.io/v1beta1, Kind=DockerMachinePool"}
{"ts":1700405055472.0557,"caller":"builder/webhook.go:168","msg":"controller-runtime/builder: skip registering a validating webhook, object does not implement admission.Validator or WithValidator wasn't called","v":0,"GVK":"infrastructure.cluster.x-k8s.io/v1beta1, Kind=DockerMachinePool"}

It might be something related to DockerMachinePool; we might need to backport the recent fixes related to DockerMachinePool. Another interesting thing is that I don't see this failure on main; it is only happening on v1.4 and v1.5.

  • failed to find releases: this one happens at clusterctl init. I am still looking into this one.
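
A rough sketch of the create-and-ignore-alreadyExists approach from the first bullet (helper name, package, and shape are illustrative, not the exact change from #9731):

package framework

import (
    "context"

    apierrors "k8s.io/apimachinery/pkg/api/errors"
    "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
    "sigs.k8s.io/controller-runtime/pkg/client"
)

// createOrSkip creates each object with a controller-runtime client instead of
// shelling out to kubectl apply, and treats an AlreadyExists error as success so
// that retries are idempotent.
func createOrSkip(ctx context.Context, c client.Client, objs []*unstructured.Unstructured) error {
    for _, obj := range objs {
        if err := c.Create(ctx, obj); err != nil && !apierrors.IsAlreadyExists(err) {
            return err
        }
    }
    return nil
}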

@sbueringer
Member

sbueringer commented Nov 20, 2023

I have looked into the logs and I am seeing this error:

{"ts":1700405055471.4797,"caller":"builder/webhook.go:184","msg":"controller-runtime/builder: Conversion webhook enabled","v":0,"GVK":"infrastructure.cluster.x-k8s.io/v1beta1, Kind=DockerClusterTemplate"}
{"ts":1700405055471.7637,"caller":"builder/webhook.go:139","msg":"controller-runtime/builder: skip registering a mutating webhook, object does not implement admission.Defaulter or WithDefaulter wasn't called","v":0,"GVK":"infrastructure.cluster.x-k8s.io/v1beta1, Kind=DockerMachinePool"}
{"ts":1700405055472.0557,"caller":"builder/webhook.go:168","msg":"controller-runtime/builder: skip registering a validating webhook, object does not implement admission.Validator or WithValidator wasn't called","v":0,"GVK":"infrastructure.cluster.x-k8s.io/v1beta1, Kind=DockerMachinePool"}

This is not an error. These are just info messages that surface that we are calling ctrl.NewWebhookManagedBy(mgr).For(c).Complete() for an object that has no validating or defaulting webhooks (we still get the same on main, as we should).
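
For reference, these messages come from a registration along these lines (a minimal sketch; the concrete object type is whatever gets passed in, e.g. DockerMachinePool in CAPD):

package main

import (
    "k8s.io/apimachinery/pkg/runtime"
    ctrl "sigs.k8s.io/controller-runtime"
)

// setupWebhook registers webhooks for obj via the controller-runtime builder.
// If obj implements neither admission.Defaulter nor admission.Validator, the
// builder just logs the "skip registering a mutating/validating webhook" info
// messages quoted above instead of registering admission webhooks for it.
func setupWebhook(mgr ctrl.Manager, obj runtime.Object) error {
    return ctrl.NewWebhookManagedBy(mgr).
        For(obj).
        Complete()
}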

@adilGhaffarDev
Contributor

Update on this issue.
I am not seeing the following flakes anymore:

  • exec.ExitError
  • failed to find releases

The failed to discovery ownerGraph types flake is still happening, but only when upgrading from v0.4=>current.

Ref: https://storage.googleapis.com/k8s-triage/index.html?job=.*-cluster-api-.*&xjob=.*-provider-.*#3c64d10ff3eda504da75

@fabriziopandini fabriziopandini added area/clusterctl Issues or PRs related to clusterctl area/e2e-testing Issues or PRs related to e2e testing labels Jan 19, 2024
@sbueringer
Member

sbueringer commented Jan 22, 2024

@adilGhaffarDev So the clusterctl upgrade test is 100% stable apart from "failed to discovery ownerGraph types flake is still happening but only when upgrading from (v0.4=>current)"?

Ref: https://storage.googleapis.com/k8s-triage/index.html?job=.*-cluster-api-.*&xjob=.*-provider-.*#3c64d10ff3eda504da75

It is not showing anything for me.

@adilGhaffarDev
Contributor

@adilGhaffarDev So the clusterctl upgrade test is 100% stable apart from "failed to discovery ownerGraph types flake is still happening but only when upgrading from (v0.4=>current)"?

Sorry for the bad link, here is a more persistent link: https://storage.googleapis.com/k8s-triage/index.html?job=.*-cluster-api-.*&test=clusterctl%20upgrades%20&xjob=.*-provider-.*

Maybe not 100% stable; there are very minor flakes that happen sometimes. But failed to find releases and exec.ExitError are not happening anymore.

@sbueringer
Member

sbueringer commented Jan 22, 2024

@adilGhaffarDev exec.ExitError does not occur anymore because I improved the error output here:

return pkgerrors.New(fmt.Sprintf("%s: stderr: %s", err.Error(), exitErr.Stderr))

(https://github.com/kubernetes-sigs/cluster-api/pull/9737/files)

That doesn't mean the underlying errors are fixed, unfortunately.
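
For reference, the wrapping pattern is roughly the following (a self-contained sketch with an assumed helper name, not the exact test-framework code):

package framework

import (
    "errors"
    "os/exec"

    pkgerrors "github.com/pkg/errors"
)

// runAndWrap runs cmd via Output(), which captures stderr into exec.ExitError.Stderr,
// and wraps failures so the error message carries the command's stderr instead of
// only "exit status 1".
func runAndWrap(cmd *exec.Cmd) ([]byte, error) {
    out, err := cmd.Output()
    if err != nil {
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) {
            return nil, pkgerrors.Errorf("%s: stderr: %s", err.Error(), exitErr.Stderr)
        }
        return nil, err
    }
    return out, nil
}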

@adilGhaffarDev
Contributor

@adilGhaffarDev exec.ExitError does not occur anymore because I improved the error output here:

exec.ExitError was happening at the step INFO: Applying the cluster template yaml to the cluster. I don't see any failure happening at that same step anymore. Do you see any related failure on triage? I am unable to find it.

@sbueringer
Member

Sounds good! Nope, I didn't see any. Just wanted to clarify that the errors would look different now. But if the same step works now, it should be fine.

Just not sure what changed as I don't remember fixing/changing anything there.

@adilGhaffarDev
Contributor

Just not sure what changed as I don't remember fixing/changing anything there.

This is the new error that was happening after your PR; it seems like it stopped happening after 07-12-2023.
https://storage.googleapis.com/k8s-triage/index.html?date=2023-12-10&job=.*-cluster-api-.*&xjob=.*-provider-.*#6710a9c85a9bbdb4d278

The only PR on 07-12-2023 that might have fixed this seems to be #9819, but I am not sure.

@sbueringer
Member

sbueringer commented Jan 23, 2024

#9819 should not be related: that func is called later in clusterctl_upgrade.go (l.516), while the issue happens in l.389.

So this is the error we get there:

{Expected success, but got an error:
    <*errors.fundamental | 0xc000912948>: 
    exit status 1: stderr: 
    {
        msg: "exit status 1: stderr: ",
        stack: [0x1f3507a, 0x2010aa2, 0x84e4db, 0x862a98, 0x4725a1],
    } failed [FAILED] Expected success, but got an error:
    <*errors.fundamental | 0xc000912948>: 
    exit status 1: stderr: 
    {
        msg: "exit status 1: stderr: ",
        stack: [0x1f3507a, 0x2010aa2, 0x84e4db, 0x862a98, 0x4725a1],
    }

This is the corresponding output (under "open stdout")

Running kubectl apply --kubeconfig /tmp/e2e-kubeconfig3133952171 -f -
stderr:
Unable to connect to the server: net/http: request canceled (Client.Timeout exceeded while awaiting headers)

stdout:

So it looks like the mgmt cluster was not reachable.

Thx for digging into this. I would say let's ignore this error for now, as it's not occurring anymore. It's good enough for me to know the issue stopped happening (I assumed it might still be there and just look different).

@adilGhaffarDev
Contributor

A little more explanation of the clusterctl upgrade failure: as mentioned before, we are now seeing only one flake, when upgrading from 0.4->1.4 or 0.4->1.5. It's failing with the following error:

failed to discovery ownerGraph types: action failed after 9 attempts: failed to list "infrastructure.cluster.x-k8s.io/v1beta1, Kind=DockerCluster" resources: conversion webhook for infrastructure.cluster.x-k8s.io/v1alpha4, Kind=DockerCluster failed: Post "https://capd-webhook-service.capd-system.svc:443/convert?timeout=30s": x509: certificate signed by unknown authority

This failure happens in the post-upgrade step, where we are calling ValidateOwnerReferencesOnUpdate. We have this post-upgrade step only when upgrading from v1alpha to v1beta. I believe @killianmuldoon has worked on it; can you check this when you get time?

@chrischdi
Member

A little more explanation of the clusterctl upgrade failure: as mentioned before, we are now seeing only one flake, when upgrading from 0.4->1.4 or 0.4->1.5. It's failing with the following error:

failed to discovery ownerGraph types: action failed after 9 attempts: failed to list "infrastructure.cluster.x-k8s.io/v1beta1, Kind=DockerCluster" resources: conversion webhook for infrastructure.cluster.x-k8s.io/v1alpha4, Kind=DockerCluster failed: Post "https://capd-webhook-service.capd-system.svc:443/convert?timeout=30s": x509: certificate signed by unknown authority

This failure happens in the post-upgrade step, where we are calling ValidateOwnerReferencesOnUpdate. We have this post-upgrade step only when upgrading from v1alpha to v1beta. I believe @killianmuldoon has worked on it; can you check this when you get time?

🤔 It may be helpful to collect cert-manager resources + logs to analyse this. Or is this locally reproducible?

@adilGhaffarDev
Contributor

🤔 : may be helpful to collect cert-manager resources + logs to analyse this. Or is this locally reproducible?

I haven't been able to reproduce it locally. I have run it multiple times.

@adilGhaffarDev
Contributor

@chrischdi thank you for working on it; we are not seeing this flake much anymore, nice work. On k8s triage I can see that the ownerGraph flake is now only happening in the (v0.4=>v1.6=>current) tests; the other flakes seem to be fixed or are much less flaky.
ref: https://storage.googleapis.com/k8s-triage/index.html?job=.*-cluster-api-.*&xjob=.*-provider-.*#4f4c67c927112191922f

@chrischdi
Member

@chrischdi thank you for working on it; we are not seeing this flake much anymore, nice work. On k8s triage I can see that the ownerGraph flake is now only happening in the (v0.4=>v1.6=>current) tests; the other flakes seem to be fixed or are much less flaky. ref: https://storage.googleapis.com/k8s-triage/index.html?job=.*-cluster-api-.*&xjob=.*-provider-.*#4f4c67c927112191922f

Note: this is a different flake, not directly ownergraph but similar. It happens at a different place though.

Consistently(func() bool {
    postUpgradeMachineList := &unstructured.UnstructuredList{}
    postUpgradeMachineList.SetGroupVersionKind(schema.GroupVersionKind{
        Group:   clusterv1.GroupVersion.Group,
        Version: coreCAPIStorageVersion,
        Kind:    "MachineList",
    })
    err = managementClusterProxy.GetClient().List(
        ctx,
        postUpgradeMachineList,
        client.InNamespace(workloadCluster.GetNamespace()),
        client.MatchingLabels{clusterv1.ClusterNameLabel: workloadCluster.GetName()},
    )
    Expect(err).ToNot(HaveOccurred())
    return validateMachineRollout(preUpgradeMachineList, postUpgradeMachineList)
}, "3m", "30s").Should(BeTrue(), "Machines should remain the same after the upgrade")

We could probably also ignore the x509 errors here and ensure that the last try in Consistently succeeded (by storing and checking the last error outside of Consistently).

@sbueringer
Member

sbueringer commented Mar 13, 2024

We could probably also ignore the x509 errors here and ensure that the last try in Consistently succeeded (by storing and checking the last error outside of Consistently).

We could also add an Eventually before it, to wait until the List call works, and then keep the Consistently the same.
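
A sketch of that suggestion against the snippet quoted above (polling intervals are assumptions; the variables come from that snippet):

// Wait until the List call itself succeeds (e.g. the conversion webhook certs are
// ready), then keep the existing Consistently block unchanged.
Eventually(func() error {
    postUpgradeMachineList := &unstructured.UnstructuredList{}
    postUpgradeMachineList.SetGroupVersionKind(schema.GroupVersionKind{
        Group:   clusterv1.GroupVersion.Group,
        Version: coreCAPIStorageVersion,
        Kind:    "MachineList",
    })
    return managementClusterProxy.GetClient().List(
        ctx,
        postUpgradeMachineList,
        client.InNamespace(workloadCluster.GetNamespace()),
        client.MatchingLabels{clusterv1.ClusterNameLabel: workloadCluster.GetName()},
    )
}, "3m", "10s").Should(Succeed(), "List should succeed before checking machine rollout")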

Btw, thx folks, really nice work on this issue!

@adilGhaffarDev
Contributor

We could also add an Eventually before it, to wait until the List call works, and then keep the Consistently the same.

I will open a PR with your suggestion.

@adilGhaffarDev
Contributor

#10301 did not fix the issue; the failure is still there for the (v0.4=>v1.6=>current) tests. I will try to reproduce it locally.

@fabriziopandini
Member

/priority important-soon

@chrischdi
Member

I implemented a fix in #10469, which should resolve the situation.

@k8s-triage-robot

This issue is labeled with priority/important-soon but has not been updated in over 90 days, and should be re-triaged.
Important-soon issues must be staffed and worked on either currently, or very soon, ideally in time for the next release.

You can:

  • Confirm that this issue is still relevant with /triage accepted (org members only)
  • Deprioritize it with /priority important-longterm or /priority backlog
  • Close this issue with /close

For more details on the triage process, see https://www.kubernetes.dev/docs/guide/issue-triage/

/remove-triage accepted

@k8s-ci-robot k8s-ci-robot added needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. and removed triage/accepted Indicates an issue or PR is ready to be actively worked on. labels Jul 22, 2024
@fabriziopandini
Member

/triage accepted
Let's also consider closing this and opening a new one with the current state.

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Jul 31, 2024
@cahillsf
Member

cahillsf commented Aug 17, 2024

/triage accepted Let's also consider closing this and opening a new one with the current state

Agreed, I think a new issue would be helpful. The incoming release CI team can prioritize this. @chandankumar4 @adilGhaffarDev @Sunnatillo is there a summary of where we stand? If not, I'll take a shot at refreshing the investigation and can open the new issue.

Seems like we do have flakes on main, with a few different patterns shown for today: https://storage.googleapis.com/k8s-triage/index.html?date=2024-08-17&job=.*periodic-cluster-api-e2e.*&test=.*clusterctl%20upgrades.*

@Sunnatillo
Contributor

Sunnatillo commented Aug 19, 2024

/triage accepted Let's also consider closing this and opening a new one with the current state

Agreed, I think a new issue would be helpful. The incoming release CI team can prioritize this. @chandankumar4 @adilGhaffarDev @Sunnatillo is there a summary of where we stand? If not, I'll take a shot at refreshing the investigation and can open the new issue.

Seems like we do have flakes on main, with a few different patterns shown for today: https://storage.googleapis.com/k8s-triage/index.html?date=2024-08-17&job=.*periodic-cluster-api-e2e.*&test=.*clusterctl%20upgrades.*

From my observation, I would say there are two main flakes occurring in the clusterctl upgrade tests:

  1. https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/periodic-cluster-api-e2e-mink8s-main/1824761651627298816
{Expected success, but got an error:
    <errors.aggregate | len:3, cap:4>: 
    [Internal error occurred: failed calling webhook "default.dockercluster.infrastructure.cluster.x-k8s.io": failed to call webhook: Post "https://capd-webhook-service.capd-system.svc:443/mutate-infrastructure-cluster-x-k8s-io-v1beta1-dockercluster?timeout=10s": dial tcp 10.96.44.105:443: connect: connection refused, Internal error occurred: failed calling webhook "validation.dockermachinetemplate.infrastructure.cluster.x-k8s.io": failed to call webhook: Post "https://capd-webhook-service.capd-system.svc:443/validate-infrastructure-cluster-x-k8s-io-v1beta1-dockermachinetemplate?timeout=10s": dial tcp 10.96.44.105:443: connect: connection refused]
  2. https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/periodic-cluster-api-e2e-mink8s-release-1-8/1823102356049367040
Timed out after 300.001s.
Timed out waiting for all Machines to exist
Expected
    <int64>: 0
to equal
    <int64>: 2
[FAILED] Timed out after 300.001s.
Timed out waiting for all Machines to exist
Expected
    <int64>: 0
to equal
    <int64>: 2
In [It] at: /home/prow/go/src/sigs.k8s.io/cluster-api/test/e2e/clusterctl_upgrade.go:500 

The first flake is happening more often, and when upgrading from the latest versions; the second flake is happening mostly when upgrading from older releases.

I agree that we should close this issue and open a new one separately for each flake.

@sbueringer
Member

@chrischdi was looking into some of these issues and is about to write an update here. Let's wait for that before closing this issue.

@chrischdi
Member

chrischdi commented Sep 2, 2024

Sorry folks, took longer than expected.

According to the aggregated failures of the last two weeks, we still have some flakiness in our clusterctl upgrade tests.

But it looks like none of them are the ones in the initial post:

  • 36 failures: Timed out waiting for all Machines to exist

    • Component: unknown
    • Branches:
      • main
      • release-1.8
      • release-1.7
  • 16 Failures: Failed to create kind cluster

    • Component: e2e setup
    • Branches:
      • main
      • release-1.7
  • 14 Failures: Internal error occurred: failed calling webhook [...] connect: connection refused

    • Component: CAPD
    • Branches:
      • main
      • release-1.8
  • 7 Failures: x509: certificate signed by unknown authority

    • Component: unknown
    • Branches:
      • main
      • release-1.8
      • release-1.7
  • 5 Failures: Timed out waiting for Machine Deployment clusterctl-upgrade/clusterctl-upgrade-workload-... to have 2 replicas

    • Component: unknown
    • Branches:
      • release-1.8
      • main
  • 2 Failures: Timed out waiting for Cluster clusterctl-upgrade/clusterctl-upgrade-workload-... to provision

    • Component: unknown
    • Branches:
      • release-1.8
      • main

Link to check if messages changed or we have new flakes on clusterctl upgrade tests: here

@cahillsf
Member

cahillsf commented Sep 3, 2024

Thank you for putting this together @chrischdi -- do you mind if I copy-paste this refreshed summary into a new issue and close the current one?

@sbueringer
Member

Feel free to go ahead with that

@sbueringer
Member

Doesn't hurt to start with a clean slate to reduce confusion :)

@cahillsf
Member

cahillsf commented Sep 3, 2024

/close

in favor of #11133

@k8s-ci-robot
Contributor

@cahillsf: Closing this issue.

In response to this:

/close

in favor of #11133

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
