relax dependabot config #517

Merged: 1 commit, Mar 21, 2024

Commit c60a2cb: relax dependabot config
Google Cloud Build / kne-presubmit (kne-external) failed Mar 21, 2024 in 9m 24s

Summary

Build Information

Trigger: kne-presubmit
Build: a8dcbcd1-8533-4a2a-bcfb-a9e6c8046d13
Start: 2024-03-21T09:29:04-07:00
Duration: 8m41.204s
Status: FAILURE

Steps

Step Status Duration
kne_test FAILURE 8m37.188s
vendors_test CANCELLED 8m37.748s
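
The two steps run on the remote-builder community image; the actual cloudbuild.yaml behind this trigger is not shown in the log, but based on the COMMAND, ZONE, and INSTANCE_ARGS values visible in the step traces below, it would look roughly like the following. This is a hypothetical sketch, not the checked-in file:

    # Hypothetical reconstruction of the trigger config, inferred from the trace below.
    cat > cloudbuild.yaml <<'EOF'
    steps:
      - id: kne_test
        name: gcr.io/kne-external/remote-builder
        env:
          - COMMAND=source /tmp/workspace/cloudbuild/kne_test.sh 2>&1
          - ZONE=us-central1-a
          - INSTANCE_ARGS=--network cloudbuild-workers --image-project gep-kne --image-family kne --machine-type e2-standard-4 --boot-disk-size 200GB
      - id: vendors_test
        name: gcr.io/kne-external/remote-builder
        env:
          - COMMAND=source /tmp/workspace/cloudbuild/vendors_test.sh 2>&1
          - ZONE=us-central1-a
          - INSTANCE_ARGS=--network cloudbuild-workers --image-project gep-kne --image-family kne --machine-type n2-standard-32 --boot-disk-size 200GB --enable-nested-virtualization
    EOF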

Details

starting build "a8dcbcd1-8533-4a2a-bcfb-a9e6c8046d13"

FETCHSOURCE
hint: Using 'master' as the name for the initial branch. This default branch name
hint: is subject to change. To configure the initial branch name to use in all
hint: of your new repositories, which will suppress this warning, call:
hint: 
hint: 	git config --global init.defaultBranch <name>
hint: 
hint: Names commonly chosen instead of 'master' are 'main', 'trunk' and
hint: 'development'. The just-created branch can be renamed via this command:
hint: 
hint: 	git branch -m <name>
Initialized empty Git repository in /workspace/.git/
From https://github.com/openconfig/kne
 * branch            c60a2cbd3df6158dc447a7c601a34fda3bd583a3 -> FETCH_HEAD
HEAD is now at c60a2cb relax dependabot config
BUILD
Starting Step #1 - "vendors_test"
Starting Step #0 - "kne_test"
Step #1 - "vendors_test": Pulling image: gcr.io/kne-external/remote-builder
Step #0 - "kne_test": Pulling image: gcr.io/kne-external/remote-builder
Step #0 - "kne_test": Using default tag: latest
Step #1 - "vendors_test": Using default tag: latest
Step #0 - "kne_test": latest: Pulling from kne-external/remote-builder
Step #0 - "kne_test": 86467c57892b: Pulling fs layer
Step #0 - "kne_test": f77e78017e9a: Pulling fs layer
Step #0 - "kne_test": f8aba497fc29: Pulling fs layer
Step #0 - "kne_test": 6f2753bac371: Pulling fs layer
Step #0 - "kne_test": ac2cfad852ec: Pulling fs layer
Step #0 - "kne_test": b23e897de184: Pulling fs layer
Step #0 - "kne_test": a9ce5a33dc9c: Pulling fs layer
Step #0 - "kne_test": 30c9afd0e435: Pulling fs layer
Step #0 - "kne_test": ac6efb705cfa: Pulling fs layer
Step #0 - "kne_test": 6f2753bac371: Waiting
Step #0 - "kne_test": ac2cfad852ec: Waiting
Step #0 - "kne_test": b23e897de184: Waiting
Step #0 - "kne_test": a9ce5a33dc9c: Waiting
Step #0 - "kne_test": 30c9afd0e435: Waiting
Step #0 - "kne_test": ac6efb705cfa: Waiting
Step #1 - "vendors_test": latest: Pulling from kne-external/remote-builder
Step #1 - "vendors_test": 86467c57892b: Pulling fs layer
Step #1 - "vendors_test": f77e78017e9a: Pulling fs layer
Step #1 - "vendors_test": f8aba497fc29: Pulling fs layer
Step #1 - "vendors_test": 6f2753bac371: Pulling fs layer
Step #1 - "vendors_test": ac2cfad852ec: Pulling fs layer
Step #1 - "vendors_test": b23e897de184: Pulling fs layer
Step #1 - "vendors_test": a9ce5a33dc9c: Pulling fs layer
Step #1 - "vendors_test": 30c9afd0e435: Pulling fs layer
Step #1 - "vendors_test": ac6efb705cfa: Pulling fs layer
Step #1 - "vendors_test": ac6efb705cfa: Waiting
Step #1 - "vendors_test": 6f2753bac371: Waiting
Step #1 - "vendors_test": ac2cfad852ec: Waiting
Step #1 - "vendors_test": b23e897de184: Waiting
Step #1 - "vendors_test": a9ce5a33dc9c: Waiting
Step #1 - "vendors_test": 30c9afd0e435: Waiting
Step #1 - "vendors_test": f8aba497fc29: Verifying Checksum
Step #1 - "vendors_test": f8aba497fc29: Download complete
Step #0 - "kne_test": f8aba497fc29: Verifying Checksum
Step #0 - "kne_test": f8aba497fc29: Download complete
Step #1 - "vendors_test": f77e78017e9a: Verifying Checksum
Step #1 - "vendors_test": f77e78017e9a: Download complete
Step #0 - "kne_test": f77e78017e9a: Verifying Checksum
Step #0 - "kne_test": f77e78017e9a: Download complete
Step #0 - "kne_test": 86467c57892b: Verifying Checksum
Step #0 - "kne_test": 86467c57892b: Download complete
Step #1 - "vendors_test": 86467c57892b: Verifying Checksum
Step #1 - "vendors_test": 86467c57892b: Download complete
Step #0 - "kne_test": b23e897de184: Verifying Checksum
Step #0 - "kne_test": b23e897de184: Download complete
Step #1 - "vendors_test": b23e897de184: Verifying Checksum
Step #1 - "vendors_test": b23e897de184: Download complete
Step #1 - "vendors_test": ac2cfad852ec: Verifying Checksum
Step #1 - "vendors_test": ac2cfad852ec: Download complete
Step #0 - "kne_test": ac2cfad852ec: Verifying Checksum
Step #0 - "kne_test": ac2cfad852ec: Download complete
Step #0 - "kne_test": a9ce5a33dc9c: Verifying Checksum
Step #0 - "kne_test": a9ce5a33dc9c: Download complete
Step #1 - "vendors_test": a9ce5a33dc9c: Verifying Checksum
Step #1 - "vendors_test": a9ce5a33dc9c: Download complete
Step #1 - "vendors_test": 30c9afd0e435: Verifying Checksum
Step #1 - "vendors_test": 30c9afd0e435: Download complete
Step #0 - "kne_test": 30c9afd0e435: Verifying Checksum
Step #0 - "kne_test": 30c9afd0e435: Download complete
Step #0 - "kne_test": ac6efb705cfa: Download complete
Step #1 - "vendors_test": ac6efb705cfa: Download complete
Step #1 - "vendors_test": 86467c57892b: Pull complete
Step #0 - "kne_test": 86467c57892b: Pull complete
Step #1 - "vendors_test": f77e78017e9a: Pull complete
Step #0 - "kne_test": f77e78017e9a: Pull complete
Step #0 - "kne_test": f8aba497fc29: Pull complete
Step #1 - "vendors_test": f8aba497fc29: Pull complete
Step #0 - "kne_test": 6f2753bac371: Verifying Checksum
Step #0 - "kne_test": 6f2753bac371: Download complete
Step #1 - "vendors_test": 6f2753bac371: Verifying Checksum
Step #1 - "vendors_test": 6f2753bac371: Download complete
Step #1 - "vendors_test": 6f2753bac371: Pull complete
Step #0 - "kne_test": 6f2753bac371: Pull complete
Step #1 - "vendors_test": ac2cfad852ec: Pull complete
Step #0 - "kne_test": ac2cfad852ec: Pull complete
Step #1 - "vendors_test": b23e897de184: Pull complete
Step #0 - "kne_test": b23e897de184: Pull complete
Step #0 - "kne_test": a9ce5a33dc9c: Pull complete
Step #1 - "vendors_test": a9ce5a33dc9c: Pull complete
Step #0 - "kne_test": 30c9afd0e435: Pull complete
Step #1 - "vendors_test": 30c9afd0e435: Pull complete
Step #1 - "vendors_test": ac6efb705cfa: Pull complete
Step #0 - "kne_test": ac6efb705cfa: Pull complete
Step #1 - "vendors_test": Digest: sha256:e0a4ca8bd58caa14035ce437dd361442dd72a76380d065902d44c182b64dbedc
Step #0 - "kne_test": Digest: sha256:e0a4ca8bd58caa14035ce437dd361442dd72a76380d065902d44c182b64dbedc
Step #0 - "kne_test": Status: Downloaded newer image for gcr.io/kne-external/remote-builder:latest
Step #1 - "vendors_test": Status: Downloaded newer image for gcr.io/kne-external/remote-builder:latest
Step #0 - "kne_test": gcr.io/kne-external/remote-builder:latest
Step #1 - "vendors_test": gcr.io/kne-external/remote-builder:latest
Step #0 - "kne_test": + '[' -z 'source /tmp/workspace/cloudbuild/kne_test.sh 2>&1' ']'
Step #0 - "kne_test": + USERNAME=user
Step #0 - "kne_test": + REMOTE_WORKSPACE=/tmp/workspace
Step #0 - "kne_test": + INSTANCE_NAME=kne-presubmit-kne-a8dcbcd1-8533-4a2a-bcfb-a9e6c8046d13
Step #0 - "kne_test": + ZONE=us-central1-a
Step #0 - "kne_test": + INSTANCE_ARGS='--network cloudbuild-workers --image-project gep-kne --image-family kne --machine-type e2-standard-4 --boot-disk-size 200GB'
Step #0 - "kne_test": + SSH_ARGS='--internal-ip --ssh-key-expire-after=1d'
Step #0 - "kne_test": + GCLOUD=gcloud
Step #0 - "kne_test": + RETRIES=10
Step #0 - "kne_test": + gcloud config set compute/zone us-central1-a
Step #1 - "vendors_test": + '[' -z 'source /tmp/workspace/cloudbuild/vendors_test.sh 2>&1' ']'
Step #1 - "vendors_test": + USERNAME=user
Step #1 - "vendors_test": + REMOTE_WORKSPACE=/tmp/workspace
Step #1 - "vendors_test": + INSTANCE_NAME=kne-presubmit-vendors-a8dcbcd1-8533-4a2a-bcfb-a9e6c8046d13
Step #1 - "vendors_test": + ZONE=us-central1-a
Step #1 - "vendors_test": + INSTANCE_ARGS='--network cloudbuild-workers --image-project gep-kne --image-family kne --machine-type n2-standard-32 --boot-disk-size 200GB --enable-nested-virtualization'
Step #1 - "vendors_test": + SSH_ARGS='--internal-ip --ssh-key-expire-after=1d'
Step #1 - "vendors_test": + GCLOUD=gcloud
Step #1 - "vendors_test": + RETRIES=10
Step #1 - "vendors_test": + gcloud config set compute/zone us-central1-a
Step #0 - "kne_test": Updated property [compute/zone].
Step #1 - "vendors_test": Updated property [compute/zone].
Step #0 - "kne_test": + KEYNAME=builder-key
Step #0 - "kne_test": + ssh-keygen -t rsa -N '' -f builder-key -C user
Step #0 - "kne_test": Generating public/private rsa key pair.
Step #0 - "kne_test": Your identification has been saved in builder-key.
Step #0 - "kne_test": Your public key has been saved in builder-key.pub.
Step #0 - "kne_test": The key fingerprint is:
Step #0 - "kne_test": SHA256:8E54V0D2oJM+G45GWaaA78FCBJhoxkne/DeRaAugZvM user
Step #0 - "kne_test": The key's randomart image is:
Step #0 - "kne_test": +---[RSA 2048]----+
Step #0 - "kne_test": |*=o      .=      |
Step #0 - "kne_test": |**+.  . .+ +     |
Step #0 - "kne_test": |+=o+.o.o*   o    |
Step #0 - "kne_test": |o.oo+..X.. .     |
Step #0 - "kne_test": |  .E+o=oS .      |
Step #0 - "kne_test": |   o o.*.=       |
Step #0 - "kne_test": |    . o +        |
Step #0 - "kne_test": |     .           |
Step #0 - "kne_test": |                 |
Step #0 - "kne_test": +----[SHA256]-----+
Step #0 - "kne_test": + chmod 400 builder-key builder-key.pub
Step #0 - "kne_test": + cat
Step #0 - "kne_test": ++ cat builder-key.pub
Step #0 - "kne_test": + gcloud compute instances create --network cloudbuild-workers --image-project gep-kne --image-family kne --machine-type e2-standard-4 --boot-disk-size 200GB kne-presubmit-kne-a8dcbcd1-8533-4a2a-bcfb-a9e6c8046d13 --metadata block-project-ssh-keys=TRUE --metadata-from-file ssh-keys=ssh-keys
Step #1 - "vendors_test": + KEYNAME=builder-key
Step #1 - "vendors_test": + ssh-keygen -t rsa -N '' -f builder-key -C user
Step #1 - "vendors_test": Generating public/private rsa key pair.
Step #1 - "vendors_test": builder-key already exists.
Step #1 - "vendors_test": + true
Step #1 - "vendors_test": + chmod 400 builder-key builder-key.pub
Step #1 - "vendors_test": + cat
Step #1 - "vendors_test": ++ cat builder-key.pub
Step #1 - "vendors_test": + gcloud compute instances create --network cloudbuild-workers --image-project gep-kne --image-family kne --machine-type n2-standard-32 --boot-disk-size 200GB --enable-nested-virtualization kne-presubmit-vendors-a8dcbcd1-8533-4a2a-bcfb-a9e6c8046d13 --metadata block-project-ssh-keys=TRUE --metadata-from-file ssh-keys=ssh-keys
Step #0 - "kne_test": Created [https://www.googleapis.com/compute/v1/projects/kne-external/zones/us-central1-a/instances/kne-presubmit-kne-a8dcbcd1-8533-4a2a-bcfb-a9e6c8046d13].
Step #0 - "kne_test": WARNING: Some requests generated warnings:
Step #0 - "kne_test":  - Disk size: '200 GB' is larger than image size: '50 GB'. You might need to resize the root repartition manually if the operating system does not support automatic resizing. See https://cloud.google.com/compute/docs/disks/add-persistent-disk#resize_pd for details.
Step #0 - "kne_test": 
Step #0 - "kne_test": NAME                                                    ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP    STATUS
Step #0 - "kne_test": kne-presubmit-kne-a8dcbcd1-8533-4a2a-bcfb-a9e6c8046d13  us-central1-a  e2-standard-4               10.128.0.11  146.148.67.21  RUNNING
Step #0 - "kne_test": + trap cleanup EXIT
Step #0 - "kne_test": + RETRY_COUNT=1
Step #0 - "kne_test": ++ ssh 'printf pass'
Step #0 - "kne_test": ++ gcloud compute ssh --internal-ip --ssh-key-expire-after=1d --ssh-key-file=builder-key user@kne-presubmit-kne-a8dcbcd1-8533-4a2a-bcfb-a9e6c8046d13 -- printf pass
Step #0 - "kne_test": Updating instance ssh metadata...
Step #0 - "kne_test": .................Updated [https://www.googleapis.com/compute/v1/projects/kne-external/zones/us-central1-a/instances/kne-presubmit-kne-a8dcbcd1-8533-4a2a-bcfb-a9e6c8046d13].
Step #0 - "kne_test": .done.
Step #0 - "kne_test": Waiting for SSH key to propagate.
Step #1 - "vendors_test": Created [https://www.googleapis.com/compute/v1/projects/kne-external/zones/us-central1-a/instances/kne-presubmit-vendors-a8dcbcd1-8533-4a2a-bcfb-a9e6c8046d13].
Step #1 - "vendors_test": WARNING: Some requests generated warnings:
Step #1 - "vendors_test":  - Disk size: '200 GB' is larger than image size: '50 GB'. You might need to resize the root repartition manually if the operating system does not support automatic resizing. See https://cloud.google.com/compute/docs/disks/add-persistent-disk#resize_pd for details.
Step #1 - "vendors_test": 
Step #1 - "vendors_test": Overwrite (y/n)? NAME                                                        ZONE           MACHINE_TYPE    PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
Step #1 - "vendors_test": kne-presubmit-vendors-a8dcbcd1-8533-4a2a-bcfb-a9e6c8046d13  us-central1-a  n2-standard-32               10.128.0.15  35.222.231.237  RUNNING
Step #1 - "vendors_test": + trap cleanup EXIT
Step #1 - "vendors_test": + RETRY_COUNT=1
Step #1 - "vendors_test": ++ ssh 'printf pass'
Step #1 - "vendors_test": ++ gcloud compute ssh --internal-ip --ssh-key-expire-after=1d --ssh-key-file=builder-key user@kne-presubmit-vendors-a8dcbcd1-8533-4a2a-bcfb-a9e6c8046d13 -- printf pass
Step #1 - "vendors_test": Updating instance ssh metadata...
Step #1 - "vendors_test": .......................Updated [https://www.googleapis.com/compute/v1/projects/kne-external/zones/us-central1-a/instances/kne-presubmit-vendors-a8dcbcd1-8533-4a2a-bcfb-a9e6c8046d13].
Step #1 - "vendors_test": .done.
Step #1 - "vendors_test": Waiting for SSH key to propagate.
Step #0 - "kne_test": ssh: connect to host 10.128.0.11 port 22: Connection refused
Step #1 - "vendors_test": ssh: connect to host 10.128.0.15 port 22: Connection refused
Step #0 - "kne_test": ssh: connect to host 10.128.0.11 port 22: Connection refused
Step #1 - "vendors_test": ssh: connect to host 10.128.0.15 port 22: Connection refused
Step #0 - "kne_test": Failed to add the host to the list of known hosts (/builder/home/.ssh/google_compute_known_hosts).
Step #1 - "vendors_test": ssh: connect to host 10.128.0.15 port 22: Connection refused
Step #0 - "kne_test": Pseudo-terminal will not be allocated because stdin is not a terminal.
Step #0 - "kne_test": Failed to add the host to the list of known hosts (/builder/home/.ssh/google_compute_known_hosts).
Step #0 - "kne_test": + '[' pass '!=' pass ']'
Step #0 - "kne_test": ++ pwd
Step #0 - "kne_test": + gcloud compute scp --internal-ip --ssh-key-expire-after=1d --compress --recurse /workspace user@kne-presubmit-kne-a8dcbcd1-8533-4a2a-bcfb-a9e6c8046d13:/tmp/workspace --ssh-key-file=builder-key
Step #0 - "kne_test": Updating instance ssh metadata...
Step #1 - "vendors_test": Failed to add the host to the list of known hosts (/builder/home/.ssh/google_compute_known_hosts).
Step #0 - "kne_test": ...............Updated [https://www.googleapis.com/compute/v1/projects/kne-external/zones/us-central1-a/instances/kne-presubmit-kne-a8dcbcd1-8533-4a2a-bcfb-a9e6c8046d13].
Step #1 - "vendors_test": Pseudo-terminal will not be allocated because stdin is not a terminal.
Step #1 - "vendors_test": Failed to add the host to the list of known hosts (/builder/home/.ssh/google_compute_known_hosts).
Step #0 - "kne_test": .done.
Step #0 - "kne_test": Waiting for SSH key to propagate.
Step #0 - "kne_test": Failed to add the host to the list of known hosts (/builder/home/.ssh/google_compute_known_hosts).
Step #1 - "vendors_test": + '[' pass '!=' pass ']'
Step #1 - "vendors_test": ++ pwd
Step #1 - "vendors_test": + gcloud compute scp --internal-ip --ssh-key-expire-after=1d --compress --recurse /workspace user@kne-presubmit-vendors-a8dcbcd1-8533-4a2a-bcfb-a9e6c8046d13:/tmp/workspace --ssh-key-file=builder-key
Step #0 - "kne_test": Failed to add the host to the list of known hosts (/builder/home/.ssh/google_compute_known_hosts).
Step #0 - "kne_test": + ssh 'source /tmp/workspace/cloudbuild/kne_test.sh 2>&1'
Step #0 - "kne_test": + gcloud compute ssh --internal-ip --ssh-key-expire-after=1d --ssh-key-file=builder-key user@kne-presubmit-kne-a8dcbcd1-8533-4a2a-bcfb-a9e6c8046d13 -- source /tmp/workspace/cloudbuild/kne_test.sh '2>&1'
Step #1 - "vendors_test": Updating instance ssh metadata...
Step #0 - "kne_test": Writing 3 keys to /builder/home/.ssh/google_compute_known_hosts
Step #0 - "kne_test": Updating instance ssh metadata...
Step #1 - "vendors_test": .....................Updated [https://www.googleapis.com/compute/v1/projects/kne-external/zones/us-central1-a/instances/kne-presubmit-vendors-a8dcbcd1-8533-4a2a-bcfb-a9e6c8046d13].
Step #1 - "vendors_test": .done.
Step #1 - "vendors_test": Waiting for SSH key to propagate.
Step #1 - "vendors_test": Warning: Permanently added 'compute.8759564234008944198' (ECDSA) to the list of known hosts.
Step #0 - "kne_test": ...............Updated [https://www.googleapis.com/compute/v1/projects/kne-external/zones/us-central1-a/instances/kne-presubmit-kne-a8dcbcd1-8533-4a2a-bcfb-a9e6c8046d13].
Step #0 - "kne_test": ..done.
Step #0 - "kne_test": Waiting for SSH key to propagate.
Step #0 - "kne_test": Pseudo-terminal will not be allocated because stdin is not a terminal.
Step #0 - "kne_test": ++ export PATH=/usr/local/bin:/usr/bin:/bin:/usr/games:/usr/local/go/bin
Step #0 - "kne_test": ++ PATH=/usr/local/bin:/usr/bin:/bin:/usr/games:/usr/local/go/bin
Step #0 - "kne_test": +++ go env GOPATH
Step #0 - "kne_test": ++ gopath=/home/user/go
Step #0 - "kne_test": ++ export PATH=/usr/local/bin:/usr/bin:/bin:/usr/games:/usr/local/go/bin:/home/user/go/bin
Step #0 - "kne_test": ++ PATH=/usr/local/bin:/usr/bin:/bin:/usr/games:/usr/local/go/bin:/home/user/go/bin
Step #0 - "kne_test": ++ rm -r /home/user/kne
Step #1 - "vendors_test": + ssh 'source /tmp/workspace/cloudbuild/vendors_test.sh 2>&1'
Step #1 - "vendors_test": + gcloud compute ssh --internal-ip --ssh-key-expire-after=1d --ssh-key-file=builder-key user@kne-presubmit-vendors-a8dcbcd1-8533-4a2a-bcfb-a9e6c8046d13 -- source /tmp/workspace/cloudbuild/vendors_test.sh '2>&1'
Step #0 - "kne_test": ++ cp -r /tmp/workspace /home/user/kne
Step #0 - "kne_test": ++ pushd /home/user/kne/kne_cli
Step #0 - "kne_test": ~/kne/kne_cli ~
Step #0 - "kne_test": ++ go build -o kne
Step #1 - "vendors_test": Existing host keys found in /builder/home/.ssh/google_compute_known_hosts
Step #1 - "vendors_test": Updating instance ssh metadata...
Step #1 - "vendors_test": .....................Updated [https://www.googleapis.com/compute/v1/projects/kne-external/zones/us-central1-a/instances/kne-presubmit-vendors-a8dcbcd1-8533-4a2a-bcfb-a9e6c8046d13].
Step #1 - "vendors_test": ..done.
Step #1 - "vendors_test": Waiting for SSH key to propagate.
Step #1 - "vendors_test": Pseudo-terminal will not be allocated because stdin is not a terminal.
Step #1 - "vendors_test": ++ export PATH=/usr/local/bin:/usr/bin:/bin:/usr/games:/usr/local/go/bin
Step #1 - "vendors_test": ++ PATH=/usr/local/bin:/usr/bin:/bin:/usr/games:/usr/local/go/bin
Step #1 - "vendors_test": +++ go env GOPATH
Step #1 - "vendors_test": ++ gopath=/home/user/go
Step #1 - "vendors_test": ++ export PATH=/usr/local/bin:/usr/bin:/bin:/usr/games:/usr/local/go/bin:/home/user/go/bin
Step #1 - "vendors_test": ++ PATH=/usr/local/bin:/usr/bin:/bin:/usr/games:/usr/local/go/bin:/home/user/go/bin
Step #1 - "vendors_test": ++ rm -r /home/user/kne
Step #1 - "vendors_test": ++ cp -r /tmp/workspace /home/user/kne
Step #1 - "vendors_test": ++ pushd /home/user/kne/kne_cli
Step #1 - "vendors_test": ~/kne/kne_cli ~
Step #1 - "vendors_test": ++ go build -o kne
Step #1 - "vendors_test": ++ cli=/home/user/kne/kne_cli/kne
Step #1 - "vendors_test": ++ popd
Step #1 - "vendors_test": ~
Step #1 - "vendors_test": ++ pushd /home/user
Step #1 - "vendors_test": ~ ~
Step #1 - "vendors_test": ++ /home/user/kne/kne_cli/kne deploy kne/cloudbuild/vendors/deployment.yaml --report_usage=false
Step #1 - "vendors_test": I0321 16:32:32.933772    2660 deploy.go:191] Deploying cluster...
Step #1 - "vendors_test": I0321 16:32:33.324872    2660 deploy.go:590] kind version valid: got 0.22.0 want 0.17.0
Step #1 - "vendors_test": I0321 16:32:33.324968    2660 deploy.go:601] Attempting to recycle existing cluster "kne"...
Step #1 - "vendors_test": W0321 16:32:34.818132    2660 run.go:29] (kubectl): error: context "kind-kne" does not exist
Step #1 - "vendors_test": I0321 16:32:34.819191    2660 deploy.go:626] Creating kind cluster with: [create cluster --name kne --image kindest/node:v1.26.0 --config /home/user/kne/manifests/kind/config.yaml]
Step #1 - "vendors_test": W0321 16:32:38.604901    2660 run.go:29] (kind): Creating cluster "kne" ...
Step #1 - "vendors_test": W0321 16:32:38.604922    2660 run.go:29] (kind):  • Ensuring node image (kindest/node:v1.26.0) 🖼  ...
Step #0 - "kne_test": ++ cli=/home/user/kne/kne_cli/kne
Step #0 - "kne_test": ++ popd
Step #0 - "kne_test": ~
Step #0 - "kne_test": ++ pushd /home/user
Step #0 - "kne_test": ~ ~
Step #0 - "kne_test": ++ /home/user/kne/kne_cli/kne deploy kne/deploy/kne/kind-bridge.yaml --report_usage=false
Step #0 - "kne_test": I0321 16:32:41.475738    1642 deploy.go:191] Deploying cluster...
Step #0 - "kne_test": I0321 16:32:41.853542    1642 deploy.go:590] kind version valid: got 0.22.0 want 0.17.0
Step #0 - "kne_test": I0321 16:32:41.853667    1642 deploy.go:601] Attempting to recycle existing cluster "kne"...
Step #0 - "kne_test": W0321 16:32:44.807210    1642 run.go:29] (kubectl): error: context "kind-kne" does not exist
Step #0 - "kne_test": I0321 16:32:44.808677    1642 deploy.go:626] Creating kind cluster with: [create cluster --name kne --image kindest/node:v1.26.0 --config /home/user/kne/manifests/kind/config.yaml]
Step #1 - "vendors_test": W0321 16:32:48.156252    2660 run.go:29] (kind):  ✓ Ensuring node image (kindest/node:v1.26.0) 🖼
Step #1 - "vendors_test": W0321 16:32:48.156271    2660 run.go:29] (kind):  • Preparing nodes 📦   ...
Step #0 - "kne_test": W0321 16:32:48.838546    1642 run.go:29] (kind): Creating cluster "kne" ...
Step #0 - "kne_test": W0321 16:32:48.838574    1642 run.go:29] (kind):  • Ensuring node image (kindest/node:v1.26.0) 🖼  ...
Step #1 - "vendors_test": W0321 16:33:01.554206    2660 run.go:29] (kind):  ✓ Preparing nodes 📦 
Step #1 - "vendors_test": W0321 16:33:01.580273    2660 run.go:29] (kind):  • Writing configuration 📜  ...
Step #1 - "vendors_test": W0321 16:33:01.804781    2660 run.go:29] (kind):  ✓ Writing configuration 📜
Step #1 - "vendors_test": W0321 16:33:01.804797    2660 run.go:29] (kind):  • Starting control-plane 🕹️  ...
Step #0 - "kne_test": W0321 16:33:02.258909    1642 run.go:29] (kind):  ✓ Ensuring node image (kindest/node:v1.26.0) 🖼
Step #0 - "kne_test": W0321 16:33:02.258935    1642 run.go:29] (kind):  • Preparing nodes 📦   ...
Step #1 - "vendors_test": W0321 16:33:13.481352    2660 run.go:29] (kind):  ✓ Starting control-plane 🕹️
Step #1 - "vendors_test": W0321 16:33:13.481376    2660 run.go:29] (kind):  • Installing StorageClass 💾  ...
Step #1 - "vendors_test": W0321 16:33:14.082052    2660 run.go:29] (kind):  ✓ Installing StorageClass 💾
Step #1 - "vendors_test": W0321 16:33:14.306789    2660 run.go:29] (kind): Set kubectl context to "kind-kne"
Step #1 - "vendors_test": W0321 16:33:14.306808    2660 run.go:29] (kind): You can now use your cluster with:
Step #1 - "vendors_test": W0321 16:33:14.306811    2660 run.go:29] (kind): kubectl cluster-info --context kind-kne
Step #1 - "vendors_test": W0321 16:33:14.306814    2660 run.go:29] (kind): Have a question, bug, or feature request? Let us know! https://kind.sigs.k8s.io/#community 🙂
Step #1 - "vendors_test": I0321 16:33:14.307566    2660 deploy.go:638] Deployed kind cluster: kne
Step #1 - "vendors_test": W0321 16:33:14.367882    2660 run.go:29] (/home/user/kne-internal/set_pid_max.sh): + sudo sysctl kernel.pid_max=1048575
Step #1 - "vendors_test": I0321 16:33:14.524871    2660 run.go:26] (/home/user/kne-internal/set_pid_max.sh): kernel.pid_max = 1048575
Step #1 - "vendors_test": I0321 16:33:14.525582    2660 deploy.go:661] Found manifest "/home/user/kne/manifests/kind/bridge.yaml"
Step #1 - "vendors_test": I0321 16:33:15.027918    2660 run.go:26] (kubectl): clusterrole.rbac.authorization.k8s.io/kindnet created
Step #1 - "vendors_test": I0321 16:33:15.033660    2660 run.go:26] (kubectl): clusterrolebinding.rbac.authorization.k8s.io/kindnet created
Step #1 - "vendors_test": I0321 16:33:15.038013    2660 run.go:26] (kubectl): serviceaccount/kindnet created
Step #1 - "vendors_test": I0321 16:33:15.045107    2660 run.go:26] (kubectl): daemonset.apps/kindnet created
Step #1 - "vendors_test": I0321 16:33:15.047900    2660 deploy.go:668] Setting up GAR access for [us-west1-docker.pkg.dev]
Step #1 - "vendors_test": W0321 16:33:15.766669    2660 run.go:29] (docker): WARNING! Your password will be stored unencrypted in /tmp/kne_kind_docker2315054904/config.json.
Step #1 - "vendors_test": W0321 16:33:15.766703    2660 run.go:29] (docker): Configure a credential helper to remove this warning. See
Step #1 - "vendors_test": W0321 16:33:15.766708    2660 run.go:29] (docker): https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Step #1 - "vendors_test": I0321 16:33:15.766864    2660 run.go:26] (docker): Login Succeeded
Step #1 - "vendors_test": I0321 16:33:16.099283    2660 kind.go:127] Setup GAR access for [us-west1-docker.pkg.dev]
Step #1 - "vendors_test": I0321 16:33:16.099428    2660 deploy.go:195] Cluster deployed
Step #1 - "vendors_test": I0321 16:33:16.169263    2660 run.go:26] (kubectl): �[0;32mKubernetes control plane�[0m is running at �[0;33mhttps://127.0.0.1:35313�[0m
Step #1 - "vendors_test": I0321 16:33:16.169285    2660 run.go:26] (kubectl): �[0;32mCoreDNS�[0m is running at �[0;33mhttps://127.0.0.1:35313/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy�[0m
Step #1 - "vendors_test": I0321 16:33:16.169293    2660 run.go:26] (kubectl): To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Step #1 - "vendors_test": I0321 16:33:16.170668    2660 deploy.go:199] Cluster healthy
Step #1 - "vendors_test": I0321 16:33:16.171761    2660 deploy.go:210] Validating kubectl version
Step #1 - "vendors_test": I0321 16:33:16.240011    2660 deploy.go:242] Deploying ingress...
Step #1 - "vendors_test": I0321 16:33:16.240059    2660 deploy.go:845] Creating metallb namespace
Step #1 - "vendors_test": I0321 16:33:16.240064    2660 deploy.go:864] Deploying MetalLB from: /home/user/kne/manifests/metallb/manifest.yaml
Step #1 - "vendors_test": I0321 16:33:16.385570    2660 run.go:26] (kubectl): namespace/metallb-system created
Step #1 - "vendors_test": I0321 16:33:16.394526    2660 run.go:26] (kubectl): customresourcedefinition.apiextensions.k8s.io/addresspools.metallb.io created
Step #1 - "vendors_test": I0321 16:33:16.403532    2660 run.go:26] (kubectl): customresourcedefinition.apiextensions.k8s.io/bfdprofiles.metallb.io created
Step #1 - "vendors_test": I0321 16:33:16.409827    2660 run.go:26] (kubectl): customresourcedefinition.apiextensions.k8s.io/bgpadvertisements.metallb.io created
Step #1 - "vendors_test": I0321 16:33:16.416901    2660 run.go:26] (kubectl): customresourcedefinition.apiextensions.k8s.io/bgppeers.metallb.io created
Step #1 - "vendors_test": I0321 16:33:16.423959    2660 run.go:26] (kubectl): customresourcedefinition.apiextensions.k8s.io/communities.metallb.io created
Step #1 - "vendors_test": I0321 16:33:16.429784    2660 run.go:26] (kubectl): customresourcedefinition.apiextensions.k8s.io/ipaddresspools.metallb.io created
Step #1 - "vendors_test": I0321 16:33:16.446748    2660 run.go:26] (kubectl): customresourcedefinition.apiextensions.k8s.io/l2advertisements.metallb.io created
Step #1 - "vendors_test": I0321 16:33:16.454172    2660 run.go:26] (kubectl): serviceaccount/controller created
Step #1 - "vendors_test": I0321 16:33:16.458771    2660 run.go:26] (kubectl): serviceaccount/speaker created
Step #1 - "vendors_test": I0321 16:33:16.463725    2660 run.go:26] (kubectl): role.rbac.authorization.k8s.io/controller created
Step #1 - "vendors_test": I0321 16:33:16.468986    2660 run.go:26] (kubectl): role.rbac.authorization.k8s.io/pod-lister created
Step #1 - "vendors_test": I0321 16:33:16.472385    2660 run.go:26] (kubectl): clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
Step #1 - "vendors_test": I0321 16:33:16.476518    2660 run.go:26] (kubectl): clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
Step #1 - "vendors_test": I0321 16:33:16.480851    2660 run.go:26] (kubectl): rolebinding.rbac.authorization.k8s.io/controller created
Step #1 - "vendors_test": I0321 16:33:16.485716    2660 run.go:26] (kubectl): rolebinding.rbac.authorization.k8s.io/pod-lister created
Step #1 - "vendors_test": I0321 16:33:16.489915    2660 run.go:26] (kubectl): clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
Step #1 - "vendors_test": I0321 16:33:16.492945    2660 run.go:26] (kubectl): clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
Step #1 - "vendors_test": I0321 16:33:16.496839    2660 run.go:26] (kubectl): secret/webhook-server-cert created
Step #1 - "vendors_test": I0321 16:33:16.508558    2660 run.go:26] (kubectl): service/webhook-service created
Step #1 - "vendors_test": I0321 16:33:16.514973    2660 run.go:26] (kubectl): deployment.apps/controller created
Step #1 - "vendors_test": I0321 16:33:16.521573    2660 run.go:26] (kubectl): daemonset.apps/speaker created
Step #1 - "vendors_test": I0321 16:33:16.527857    2660 run.go:26] (kubectl): validatingwebhookconfiguration.admissionregistration.k8s.io/metallb-webhook-configuration created
Step #1 - "vendors_test": I0321 16:33:16.531399    2660 deploy.go:869] Creating metallb secret
Step #1 - "vendors_test": I0321 16:33:16.533763    2660 deploy.go:1308] Waiting on deployment "metallb-system" to be healthy
Step #0 - "kne_test": W0321 16:33:16.712029    1642 run.go:29] (kind):  ✓ Preparing nodes 📦 
Step #0 - "kne_test": W0321 16:33:16.758686    1642 run.go:29] (kind):  • Writing configuration 📜  ...
Step #0 - "kne_test": W0321 16:33:17.249581    1642 run.go:29] (kind):  ✓ Writing configuration 📜
Step #0 - "kne_test": W0321 16:33:17.249607    1642 run.go:29] (kind):  • Starting control-plane 🕹️  ...
Step #0 - "kne_test": W0321 16:33:37.765192    1642 run.go:29] (kind):  ✓ Starting control-plane 🕹️
Step #0 - "kne_test": W0321 16:33:37.765220    1642 run.go:29] (kind):  • Installing StorageClass 💾  ...
Step #0 - "kne_test": W0321 16:33:38.750702    1642 run.go:29] (kind):  ✓ Installing StorageClass 💾
Step #0 - "kne_test": W0321 16:33:39.010683    1642 run.go:29] (kind): Set kubectl context to "kind-kne"
Step #0 - "kne_test": W0321 16:33:39.010708    1642 run.go:29] (kind): You can now use your cluster with:
Step #0 - "kne_test": W0321 16:33:39.010715    1642 run.go:29] (kind): kubectl cluster-info --context kind-kne
Step #0 - "kne_test": W0321 16:33:39.010725    1642 run.go:29] (kind): Not sure what to do next? 😅  Check out https://kind.sigs.k8s.io/docs/user/quick-start/
Step #0 - "kne_test": I0321 16:33:39.011757    1642 deploy.go:638] Deployed kind cluster: kne
Step #0 - "kne_test": W0321 16:33:39.045906    1642 run.go:29] (/home/user/kne-internal/set_pid_max.sh): + sudo sysctl kernel.pid_max=1048575
Step #0 - "kne_test": I0321 16:33:39.432516    1642 run.go:26] (/home/user/kne-internal/set_pid_max.sh): kernel.pid_max = 1048575
Step #0 - "kne_test": I0321 16:33:39.433924 
...
[Logs truncated due to log size limitations. For full logs, see https://console.cloud.google.com/cloud-build/builds;region=us-central1/a8dcbcd1-8533-4a2a-bcfb-a9e6c8046d13?project=94286565069.]
...
s.network.keysight.com configured
Step #0 - "kne_test": I0321 16:35:28.213443    6450 run.go:26] (kubectl): serviceaccount/ixiatg-op-controller-manager unchanged
Step #0 - "kne_test": I0321 16:35:28.218426    6450 run.go:26] (kubectl): role.rbac.authorization.k8s.io/ixiatg-op-leader-election-role unchanged
Step #0 - "kne_test": I0321 16:35:28.227021    6450 run.go:26] (kubectl): clusterrole.rbac.authorization.k8s.io/ixiatg-op-manager-role configured
Step #0 - "kne_test": I0321 16:35:28.231549    6450 run.go:26] (kubectl): clusterrole.rbac.authorization.k8s.io/ixiatg-op-metrics-reader unchanged
Step #0 - "kne_test": I0321 16:35:28.236937    6450 run.go:26] (kubectl): clusterrole.rbac.authorization.k8s.io/ixiatg-op-proxy-role unchanged
Step #0 - "kne_test": I0321 16:35:28.242101    6450 run.go:26] (kubectl): rolebinding.rbac.authorization.k8s.io/ixiatg-op-leader-election-rolebinding unchanged
Step #0 - "kne_test": I0321 16:35:28.247140    6450 run.go:26] (kubectl): clusterrolebinding.rbac.authorization.k8s.io/ixiatg-op-manager-rolebinding unchanged
Step #0 - "kne_test": I0321 16:35:28.252412    6450 run.go:26] (kubectl): clusterrolebinding.rbac.authorization.k8s.io/ixiatg-op-proxy-rolebinding unchanged
Step #0 - "kne_test": I0321 16:35:28.255059    6450 run.go:26] (kubectl): configmap/ixiatg-op-manager-config unchanged
Step #0 - "kne_test": I0321 16:35:28.258654    6450 run.go:26] (kubectl): service/ixiatg-op-controller-manager-metrics-service unchanged
Step #0 - "kne_test": I0321 16:35:28.264079    6450 run.go:26] (kubectl): deployment.apps/ixiatg-op-controller-manager unchanged
Step #0 - "kne_test": I0321 16:35:28.266055    6450 deploy.go:1251] Deploying IxiaTG config map from: /home/user/kne/manifests/keysight/ixiatg-configmap.yaml
Step #0 - "kne_test": I0321 16:35:28.472880    6450 run.go:26] (kubectl): configmap/ixiatg-release-config unchanged
Step #0 - "kne_test": I0321 16:35:28.475142    6450 deploy.go:1255] IxiaTG controller deployed
Step #0 - "kne_test": I0321 16:35:28.475179    6450 deploy.go:1308] Waiting on deployment "ixiatg-op-system" to be healthy
Step #0 - "kne_test": I0321 16:35:28.477552    6450 deploy.go:1335] Deployment "ixiatg-op-system" healthy
Step #0 - "kne_test": I0321 16:35:28.477577    6450 deploy.go:264] Deploying controller...
Step #0 - "kne_test": I0321 16:35:28.477585    6450 deploy.go:1166] Deploying SRLinux controller from: /home/user/kne/manifests/controllers/srlinux/manifest.yaml
Step #0 - "kne_test": I0321 16:35:28.720250    6450 run.go:26] (kubectl): namespace/srlinux-controller unchanged
Step #0 - "kne_test": I0321 16:35:28.734895    6450 run.go:26] (kubectl): customresourcedefinition.apiextensions.k8s.io/srlinuxes.kne.srlinux.dev configured
Step #0 - "kne_test": I0321 16:35:28.738128    6450 run.go:26] (kubectl): serviceaccount/srlinux-controller-controller-manager unchanged
Step #0 - "kne_test": I0321 16:35:28.744636    6450 run.go:26] (kubectl): role.rbac.authorization.k8s.io/srlinux-controller-leader-election-role unchanged
Step #0 - "kne_test": I0321 16:35:28.757172    6450 run.go:26] (kubectl): clusterrole.rbac.authorization.k8s.io/srlinux-controller-manager-role configured
Step #0 - "kne_test": I0321 16:35:28.763937    6450 run.go:26] (kubectl): clusterrole.rbac.authorization.k8s.io/srlinux-controller-metrics-reader unchanged
Step #0 - "kne_test": I0321 16:35:28.769314    6450 run.go:26] (kubectl): clusterrole.rbac.authorization.k8s.io/srlinux-controller-proxy-role unchanged
Step #0 - "kne_test": I0321 16:35:28.776212    6450 run.go:26] (kubectl): rolebinding.rbac.authorization.k8s.io/srlinux-controller-leader-election-rolebinding unchanged
Step #0 - "kne_test": I0321 16:35:28.784047    6450 run.go:26] (kubectl): clusterrolebinding.rbac.authorization.k8s.io/srlinux-controller-manager-rolebinding unchanged
Step #0 - "kne_test": I0321 16:35:28.789623    6450 run.go:26] (kubectl): clusterrolebinding.rbac.authorization.k8s.io/srlinux-controller-proxy-rolebinding unchanged
Step #0 - "kne_test": I0321 16:35:28.792793    6450 run.go:26] (kubectl): service/srlinux-controller-controller-manager-metrics-service unchanged
Step #0 - "kne_test": I0321 16:35:28.797092    6450 run.go:26] (kubectl): deployment.apps/srlinux-controller-controller-manager unchanged
Step #0 - "kne_test": I0321 16:35:28.798945    6450 deploy.go:1170] SRLinux controller deployed
Step #0 - "kne_test": I0321 16:35:28.798974    6450 deploy.go:1308] Waiting on deployment "srlinux-controller" to be healthy
Step #0 - "kne_test": I0321 16:35:28.801348    6450 deploy.go:1335] Deployment "srlinux-controller" healthy
Step #0 - "kne_test": I0321 16:35:28.801371    6450 deploy.go:264] Deploying controller...
Step #0 - "kne_test": I0321 16:35:28.801379    6450 deploy.go:1068] Deploying CEOSLab controller from: /home/user/kne/manifests/controllers/ceoslab/manifest.yaml
Step #0 - "kne_test": I0321 16:35:29.030485    6450 run.go:26] (kubectl): namespace/arista-ceoslab-operator-system unchanged
Step #0 - "kne_test": I0321 16:35:29.058371    6450 run.go:26] (kubectl): customresourcedefinition.apiextensions.k8s.io/ceoslabdevices.ceoslab.arista.com configured
Step #0 - "kne_test": I0321 16:35:29.061312    6450 run.go:26] (kubectl): serviceaccount/arista-ceoslab-operator-controller-manager unchanged
Step #0 - "kne_test": I0321 16:35:29.066267    6450 run.go:26] (kubectl): role.rbac.authorization.k8s.io/arista-ceoslab-operator-leader-election-role unchanged
Step #0 - "kne_test": I0321 16:35:29.077173    6450 run.go:26] (kubectl): clusterrole.rbac.authorization.k8s.io/arista-ceoslab-operator-manager-role configured
Step #0 - "kne_test": I0321 16:35:29.082246    6450 run.go:26] (kubectl): clusterrole.rbac.authorization.k8s.io/arista-ceoslab-operator-metrics-reader unchanged
Step #0 - "kne_test": I0321 16:35:29.089053    6450 run.go:26] (kubectl): clusterrole.rbac.authorization.k8s.io/arista-ceoslab-operator-proxy-role unchanged
Step #0 - "kne_test": I0321 16:35:29.094267    6450 run.go:26] (kubectl): rolebinding.rbac.authorization.k8s.io/arista-ceoslab-operator-leader-election-rolebinding unchanged
Step #0 - "kne_test": I0321 16:35:29.100833    6450 run.go:26] (kubectl): clusterrolebinding.rbac.authorization.k8s.io/arista-ceoslab-operator-manager-rolebinding unchanged
Step #0 - "kne_test": I0321 16:35:29.105968    6450 run.go:26] (kubectl): clusterrolebinding.rbac.authorization.k8s.io/arista-ceoslab-operator-proxy-rolebinding unchanged
Step #0 - "kne_test": I0321 16:35:29.108873    6450 run.go:26] (kubectl): configmap/arista-ceoslab-operator-manager-config unchanged
Step #0 - "kne_test": I0321 16:35:29.111889    6450 run.go:26] (kubectl): service/arista-ceoslab-operator-controller-manager-metrics-service unchanged
Step #0 - "kne_test": I0321 16:35:29.117431    6450 run.go:26] (kubectl): deployment.apps/arista-ceoslab-operator-controller-manager unchanged
Step #0 - "kne_test": I0321 16:35:29.119747    6450 deploy.go:1072] CEOSLab controller deployed
Step #0 - "kne_test": I0321 16:35:29.119781    6450 deploy.go:1308] Waiting on deployment "arista-ceoslab-operator-system" to be healthy
Step #0 - "kne_test": I0321 16:35:29.124471    6450 deploy.go:1335] Deployment "arista-ceoslab-operator-system" healthy
Step #0 - "kne_test": I0321 16:35:29.124506    6450 deploy.go:264] Deploying controller...
Step #0 - "kne_test": I0321 16:35:29.124514    6450 deploy.go:1117] Deploying Lemming controller from: /home/user/kne/manifests/controllers/lemming/manifest.yaml
Step #0 - "kne_test": I0321 16:35:29.352128    6450 run.go:26] (kubectl): namespace/lemming-operator unchanged
Step #0 - "kne_test": I0321 16:35:29.372680    6450 run.go:26] (kubectl): customresourcedefinition.apiextensions.k8s.io/lemmings.lemming.openconfig.net configured
Step #0 - "kne_test": I0321 16:35:29.376242    6450 run.go:26] (kubectl): serviceaccount/lemming-controller-manager unchanged
Step #0 - "kne_test": I0321 16:35:29.385838    6450 run.go:26] (kubectl): role.rbac.authorization.k8s.io/lemming-leader-election-role unchanged
Step #0 - "kne_test": I0321 16:35:29.394903    6450 run.go:26] (kubectl): clusterrole.rbac.authorization.k8s.io/lemming-manager-role configured
Step #0 - "kne_test": I0321 16:35:29.399783    6450 run.go:26] (kubectl): clusterrole.rbac.authorization.k8s.io/lemming-metrics-reader unchanged
Step #0 - "kne_test": I0321 16:35:29.407262    6450 run.go:26] (kubectl): clusterrole.rbac.authorization.k8s.io/lemming-proxy-role unchanged
Step #0 - "kne_test": I0321 16:35:29.412110    6450 run.go:26] (kubectl): rolebinding.rbac.authorization.k8s.io/lemming-leader-election-rolebinding unchanged
Step #0 - "kne_test": I0321 16:35:29.417472    6450 run.go:26] (kubectl): clusterrolebinding.rbac.authorization.k8s.io/lemming-manager-rolebinding unchanged
Step #0 - "kne_test": I0321 16:35:29.422262    6450 run.go:26] (kubectl): clusterrolebinding.rbac.authorization.k8s.io/lemming-proxy-rolebinding unchanged
Step #0 - "kne_test": I0321 16:35:29.425305    6450 run.go:26] (kubectl): configmap/lemming-manager-config unchanged
Step #0 - "kne_test": I0321 16:35:29.428572    6450 run.go:26] (kubectl): service/lemming-controller-manager-metrics-service unchanged
Step #0 - "kne_test": I0321 16:35:29.433668    6450 run.go:26] (kubectl): deployment.apps/lemming-controller-manager unchanged
Step #0 - "kne_test": I0321 16:35:29.435942    6450 deploy.go:1121] Lemming controller deployed
Step #0 - "kne_test": I0321 16:35:29.435975    6450 deploy.go:1308] Waiting on deployment "lemming-operator" to be healthy
Step #0 - "kne_test": I0321 16:35:29.439012    6450 deploy.go:1335] Deployment "lemming-operator" healthy
Step #0 - "kne_test": I0321 16:35:29.439080    6450 deploy.go:275] Controllers deployed and healthy
Step #0 - "kne_test": I0321 16:35:29.439853    6450 deploy.go:119] Deployment complete, ready for topology
Step #0 - "kne_test": Log files can be found in:
Step #0 - "kne_test":     /tmp/kne.kne-presubmit-kne-a8dcbcd1-8533-4a2a-bcfb-a9e6c8046d13.user.log.INFO.20240321-163525.6450
Step #0 - "kne_test":     /tmp/kne.kne-presubmit-kne-a8dcbcd1-8533-4a2a-bcfb-a9e6c8046d13.user.log.WARNING.20240321-163526.6450
Step #0 - "kne_test": ++ kubectl get pods -A
Step #0 - "kne_test": NAMESPACE                        NAME                                                          READY   STATUS    RESTARTS   AGE
Step #0 - "kne_test": arista-ceoslab-operator-system   arista-ceoslab-operator-controller-manager-768d797f66-5txgb   2/2     Running   0          36s
Step #0 - "kne_test": ixiatg-op-system                 ixiatg-op-controller-manager-676f668ddb-f6qwv                 2/2     Running   0          69s
Step #0 - "kne_test": kube-system                      coredns-787d4945fb-g97p5                                      1/1     Running   0          102s
Step #0 - "kne_test": kube-system                      coredns-787d4945fb-z24x9                                      1/1     Running   0          102s
Step #0 - "kne_test": kube-system                      etcd-kne-control-plane                                        1/1     Running   0          115s
Step #0 - "kne_test": kube-system                      kindnet-hq8g6                                                 1/1     Running   0          102s
Step #0 - "kne_test": kube-system                      kube-apiserver-kne-control-plane                              1/1     Running   0          115s
Step #0 - "kne_test": kube-system                      kube-controller-manager-kne-control-plane                     1/1     Running   0          115s
Step #0 - "kne_test": kube-system                      kube-proxy-f8lpc                                              1/1     Running   0          102s
Step #0 - "kne_test": kube-system                      kube-scheduler-kne-control-plane                              1/1     Running   0          115s
Step #0 - "kne_test": lemming-operator                 lemming-controller-manager-77f8fc45df-4kwbz                   2/2     Running   0          14s
Step #0 - "kne_test": local-path-storage               local-path-provisioner-c8855d4bb-64qgx                        1/1     Running   0          102s
Step #0 - "kne_test": meshnet                          meshnet-26tql                                                 1/1     Running   0          70s
Step #0 - "kne_test": metallb-system                   controller-8bb68977b-r5bq5                                    1/1     Running   0          102s
Step #0 - "kne_test": metallb-system                   speaker-tfbwl                                                 1/1     Running   0          96s
Step #0 - "kne_test": srlinux-controller               srlinux-controller-controller-manager-6c7cc8dd47-qvbzr        2/2     Running   0          47s
Step #0 - "kne_test": ++ /home/user/kne/kne_cli/kne teardown kne/deploy/kne/kind-bridge.yaml
Step #0 - "kne_test": I0321 16:35:29.578445    6552 deploy.go:329] Deleting cluster...
Step #0 - "kne_test": W0321 16:35:29.597057    6552 run.go:29] (kind): Deleting cluster "kne" ...
Step #0 - "kne_test": W0321 16:35:32.253836    6552 run.go:29] (kind): Deleted nodes: ["kne-control-plane"]
Step #0 - "kne_test": I0321 16:35:32.254540    6552 deploy.go:333] Cluster deleted
Step #0 - "kne_test": I0321 16:35:32.254573    6552 deploy.go:134] Cluster deployment teardown complete
Step #0 - "kne_test": Log files can be found in:
Step #0 - "kne_test":     /tmp/kne.kne-presubmit-kne-a8dcbcd1-8533-4a2a-bcfb-a9e6c8046d13.user.log.INFO.20240321-163529.6552
Step #0 - "kne_test":     /tmp/kne.kne-presubmit-kne-a8dcbcd1-8533-4a2a-bcfb-a9e6c8046d13.user.log.WARNING.20240321-163529.6552
Step #0 - "kne_test": ++ cat
Step #0 - "kne_test": ++ /home/user/kne/kne_cli/kne deploy /tmp/dep-cfg.yaml --report_usage=false
Step #0 - "kne_test": I0321 16:35:32.312966    6620 deploy.go:96] no controllers specified
Step #0 - "kne_test": I0321 16:35:32.313664    6620 deploy.go:191] Deploying cluster...
Step #0 - "kne_test": I0321 16:35:32.318928    6620 deploy.go:590] kind version valid: got 0.22.0 want 0.17.0
Step #0 - "kne_test": I0321 16:35:32.318983    6620 deploy.go:601] Attempting to recycle existing cluster "kne"...
Step #0 - "kne_test": W0321 16:35:32.381090    6620 run.go:29] (kubectl): error: context "kind-kne" does not exist
Step #0 - "kne_test": I0321 16:35:32.382279    6620 deploy.go:626] Creating kind cluster with: [create cluster --name kne --image kindest/node:v1.26.0 --config /home/user/kne/manifests/kind/config.yaml]
Step #0 - "kne_test": W0321 16:35:32.485506    6620 run.go:29] (kind): Creating cluster "kne" ...
Step #0 - "kne_test": W0321 16:35:32.485533    6620 run.go:29] (kind):  • Ensuring node image (kindest/node:v1.26.0) 🖼  ...
Step #0 - "kne_test": W0321 16:35:32.525019    6620 run.go:29] (kind):  ✓ Ensuring node image (kindest/node:v1.26.0) 🖼
Step #0 - "kne_test": W0321 16:35:32.525038    6620 run.go:29] (kind):  • Preparing nodes 📦   ...
Step #0 - "kne_test": W0321 16:35:37.477928    6620 run.go:29] (kind):  ✓ Preparing nodes 📦 
Step #0 - "kne_test": W0321 16:35:37.525160    6620 run.go:29] (kind):  • Writing configuration 📜  ...
Step #0 - "kne_test": W0321 16:35:37.869717    6620 run.go:29] (kind):  ✓ Writing configuration 📜
Step #0 - "kne_test": W0321 16:35:37.869736    6620 run.go:29] (kind):  • Starting control-plane 🕹️  ...
Step #0 - "kne_test": W0321 16:35:57.366645    6620 run.go:29] (kind):  ✓ Starting control-plane 🕹️
Step #0 - "kne_test": W0321 16:35:57.366669    6620 run.go:29] (kind):  • Installing StorageClass 💾  ...
Step #0 - "kne_test": W0321 16:35:58.382828    6620 run.go:29] (kind):  ✓ Installing StorageClass 💾
Step #0 - "kne_test": W0321 16:35:58.656442    6620 run.go:29] (kind): Set kubectl context to "kind-kne"
Step #0 - "kne_test": W0321 16:35:58.656495    6620 run.go:29] (kind): You can now use your cluster with:
Step #0 - "kne_test": W0321 16:35:58.656504    6620 run.go:29] (kind): kubectl cluster-info --context kind-kne
Step #0 - "kne_test": W0321 16:35:58.656515    6620 run.go:29] (kind): Have a nice day! 👋
Step #0 - "kne_test": I0321 16:35:58.657292    6620 deploy.go:638] Deployed kind cluster: kne
Step #0 - "kne_test": W0321 16:35:58.659229    6620 run.go:29] (/home/user/kne-internal/set_pid_max.sh): + sudo sysctl kernel.pid_max=1048575
Step #0 - "kne_test": I0321 16:35:58.673554    6620 run.go:26] (/home/user/kne-internal/set_pid_max.sh): kernel.pid_max = 1048575
Step #0 - "kne_test": I0321 16:35:58.675450    6620 deploy.go:661] Found manifest "/home/user/kne/manifests/kind/bridge.yaml"
Step #0 - "kne_test": I0321 16:35:58.904244    6620 run.go:26] (kubectl): clusterrole.rbac.authorization.k8s.io/kindnet created
Step #0 - "kne_test": I0321 16:35:58.911099    6620 run.go:26] (kubectl): clusterrolebinding.rbac.authorization.k8s.io/kindnet created
Step #0 - "kne_test": I0321 16:35:58.923406    6620 run.go:26] (kubectl): serviceaccount/kindnet created
Step #0 - "kne_test": I0321 16:35:58.936220    6620 run.go:26] (kubectl): daemonset.apps/kindnet created
Step #0 - "kne_test": I0321 16:35:58.937987    6620 deploy.go:668] Setting up GAR access for [us-west1-docker.pkg.dev]
Step #0 - "kne_test": W0321 16:35:59.760998    6620 run.go:29] (docker): WARNING! Your password will be stored unencrypted in /tmp/kne_kind_docker3650811232/config.json.
Step #0 - "kne_test": W0321 16:35:59.761022    6620 run.go:29] (docker): Configure a credential helper to remove this warning. See
Step #0 - "kne_test": W0321 16:35:59.761027    6620 run.go:29] (docker): https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Step #0 - "kne_test": I0321 16:35:59.761518    6620 run.go:26] (docker): Login Succeeded
Step #0 - "kne_test": I0321 16:36:00.064256    6620 kind.go:127] Setup GAR access for [us-west1-docker.pkg.dev]
Step #0 - "kne_test": I0321 16:36:00.064605    6620 deploy.go:675] Loading container images
Step #0 - "kne_test": I0321 16:36:00.064636    6620 deploy.go:730] Loading "us-west1-docker.pkg.dev/kne-external/kne/networkop/init-wait:ga" as "networkop/init-wait:latest"
Step #0 - "kne_test": W0321 16:36:08.801576    6620 run.go:29] (kind): Image: "networkop/init-wait:latest" with ID "sha256:9469cb38beaf99320df54231cbeee278db9e45c5b19a32e123b0a6b1eb0fec78" not yet present on node "kne-control-plane", loading...
Step #0 - "kne_test": I0321 16:36:09.603300    6620 deploy.go:765] Loaded all container images
Step #0 - "kne_test": I0321 16:36:09.603326    6620 deploy.go:195] Cluster deployed
Step #0 - "kne_test": I0321 16:36:09.687366    6620 run.go:26] (kubectl): �[0;32mKubernetes control plane�[0m is running at �[0;33mhttps://127.0.0.1:34221�[0m
Step #0 - "kne_test": I0321 16:36:09.687390    6620 run.go:26] (kubectl): �[0;32mCoreDNS�[0m is running at �[0;33mhttps://127.0.0.1:34221/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy�[0m
Step #0 - "kne_test": I0321 16:36:09.687397    6620 run.go:26] (kubectl): To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Step #0 - "kne_test": I0321 16:36:09.688886    6620 deploy.go:199] Cluster healthy
Step #0 - "kne_test": I0321 16:36:09.690544    6620 deploy.go:210] Validating kubectl version
Step #0 - "kne_test": I0321 16:36:09.780453    6620 deploy.go:242] Deploying ingress...
Step #0 - "kne_test": I0321 16:36:09.780731    6620 deploy.go:845] Creating metallb namespace
Step #0 - "kne_test": I0321 16:36:09.780788    6620 deploy.go:864] Deploying MetalLB from: /home/user/kne/manifests/metallb/manifest.yaml
Step #0 - "kne_test": I0321 16:36:10.031768    6620 run.go:26] (kubectl): namespace/metallb-system created
Step #0 - "kne_test": I0321 16:36:10.048340    6620 run.go:26] (kubectl): customresourcedefinition.apiextensions.k8s.io/addresspools.metallb.io created
Step #0 - "kne_test": I0321 16:36:10.062584    6620 run.go:26] (kubectl): customresourcedefinition.apiextensions.k8s.io/bfdprofiles.metallb.io created
Step #0 - "kne_test": I0321 16:36:10.078741    6620 run.go:26] (kubectl): customresourcedefinition.apiextensions.k8s.io/bgpadvertisements.metallb.io created
Step #0 - "kne_test": I0321 16:36:10.099514    6620 run.go:26] (kubectl): customresourcedefinition.apiextensions.k8s.io/bgppeers.metallb.io created
Step #0 - "kne_test": I0321 16:36:10.113504    6620 run.go:26] (kubectl): customresourcedefinition.apiextensions.k8s.io/communities.metallb.io created
Step #0 - "kne_test": I0321 16:36:10.130855    6620 run.go:26] (kubectl): customresourcedefinition.apiextensions.k8s.io/ipaddresspools.metallb.io created
Step #0 - "kne_test": I0321 16:36:10.148737    6620 run.go:26] (kubectl): customresourcedefinition.apiextensions.k8s.io/l2advertisements.metallb.io created
Step #0 - "kne_test": I0321 16:36:10.174571    6620 run.go:26] (kubectl): serviceaccount/controller created
Step #0 - "kne_test": I0321 16:36:10.197698    6620 run.go:26] (kubectl): serviceaccount/speaker created
Step #0 - "kne_test": I0321 16:36:10.220985    6620 run.go:26] (kubectl): role.rbac.authorization.k8s.io/controller created
Step #0 - "kne_test": I0321 16:36:10.235171    6620 run.go:26] (kubectl): role.rbac.authorization.k8s.io/pod-lister created
Step #0 - "kne_test": I0321 16:36:10.241813    6620 run.go:26] (kubectl): clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
Step #0 - "kne_test": I0321 16:36:10.250937    6620 run.go:26] (kubectl): clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
Step #0 - "kne_test": I0321 16:36:10.261111    6620 run.go:26] (kubectl): rolebinding.rbac.authorization.k8s.io/controller created
Step #0 - "kne_test": I0321 16:36:10.274969    6620 run.go:26] (kubectl): rolebinding.rbac.authorization.k8s.io/pod-lister created
Step #0 - "kne_test": I0321 16:36:10.284438    6620 run.go:26] (kubectl): clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
Step #0 - "kne_test": I0321 16:36:10.292794    6620 run.go:26] (kubectl): clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
Step #0 - "kne_test": I0321 16:36:10.301810    6620 run.go:26] (kubectl): secret/webhook-server-cert created
Step #0 - "kne_test": I0321 16:36:10.314524    6620 run.go:26] (kubectl): service/webhook-service created
Step #0 - "kne_test": I0321 16:36:10.326891    6620 run.go:26] (kubectl): deployment.apps/controller created
Step #0 - "kne_test": I0321 16:36:10.337869    6620 run.go:26] (kubectl): daemonset.apps/speaker created
Step #0 - "kne_test": I0321 16:36:10.348384    6620 run.go:26] (kubectl): validatingwebhookconfiguration.admissionregistration.k8s.io/metallb-webhook-configuration created
Step #0 - "kne_test": I0321 16:36:10.352865    6620 deploy.go:869] Creating metallb secret
Step #0 - "kne_test": I0321 16:36:10.357395    6620 deploy.go:1308] Waiting on deployment "metallb-system" to be healthy
Step #1 - "vendors_test": I0321 16:36:24.641467    8024 juniper.go:219] ncptx - pod running.
Step #0 - "kne_test": I0321 16:36:39.126621    6620 deploy.go:1335] Deployment "metallb-system" healthy
Step #0 - "kne_test": I0321 16:36:39.129906    6620 deploy.go:893] Applying metallb ingress config
Step #0 - "kne_test": W0321 16:36:39.136474    6620 deploy.go:931] Failed to create address polling (will retry 5 times)
Step #0 - "kne_test": I0321 16:36:44.173827    6620 deploy.go:1308] Waiting on deployment "metallb-system" to be healthy
Step #0 - "kne_test": I0321 16:36:44.176373    6620 deploy.go:1335] Deployment "metallb-system" healthy
Step #0 - "kne_test": I0321 16:36:44.176395    6620 deploy.go:251] Ingress healthy
Step #0 - "kne_test": I0321 16:36:44.176402    6620 deploy.go:252] Deploying CNI...
Step #0 - "kne_test": I0321 16:36:44.176408    6620 deploy.go:994] Deploying Meshnet from: /home/user/kne/manifests/meshnet/grpc/manifest.yaml
Step #0 - "kne_test": I0321 16:36:44.994473    6620 run.go:26] (kubectl): namespace/meshnet created
Step #0 - "kne_test": I0321 16:36:45.008681    6620 run.go:26] (kubectl): customresourcedefinition.apiextensions.k8s.io/gwirekobjs.networkop.co.uk created
Step #0 - "kne_test": I0321 16:36:45.020613    6620 run.go:26] (kubectl): customresourcedefinition.apiextensions.k8s.io/topologies.networkop.co.uk created
Step #0 - "kne_test": I0321 16:36:45.034204    6620 run.go:26] (kubectl): serviceaccount/meshnet created
Step #0 - "kne_test": I0321 16:36:45.043130    6620 run.go:26] (kubectl): clusterrole.rbac.authorization.k8s.io/meshnet-clusterrole created
Step #0 - "kne_test": I0321 16:36:45.050511    6620 run.go:26] (kubectl): clusterrolebinding.rbac.authorization.k8s.io/meshnet-clusterrolebinding created
Step #0 - "kne_test": W0321 16:36:45.063154    6620 run.go:29] (kubectl): Warning: spec.template.spec.nodeSelector[beta.kubernetes.io/arch]: deprecated since v1.14; use "kubernetes.io/arch" instead
Step #0 - "kne_test": I0321 16:36:45.063728    6620 run.go:26] (kubectl): daemonset.apps/meshnet created
Step #0 - "kne_test": I0321 16:36:45.066230    6620 deploy.go:998] Meshnet Deployed
Step #0 - "kne_test": I0321 16:36:45.066259    6620 deploy.go:1003] Waiting on Meshnet to be Healthy
Step #0 - "kne_test": I0321 16:36:45.068676    6620 deploy.go:1024] Meshnet Healthy
Step #0 - "kne_test": I0321 16:36:45.068704    6620 deploy.go:262] CNI healthy
Step #0 - "kne_test": I0321 16:36:45.068718    6620 deploy.go:275] Controllers deployed and healthy
Step #0 - "kne_test": I0321 16:36:45.068839    6620 deploy.go:119] Deployment complete, ready for topology
Step #0 - "kne_test": Log files can be found in:
Step #0 - "kne_test":     /tmp/kne.kne-presubmit-kne-a8dcbcd1-8533-4a2a-bcfb-a9e6c8046d13.user.log.INFO.20240321-163532.6620
Step #0 - "kne_test":     /tmp/kne.kne-presubmit-kne-a8dcbcd1-8533-4a2a-bcfb-a9e6c8046d13.user.log.WARNING.20240321-163532.6620
Step #0 - "kne_test": ++ kubectl get pods -A
Step #0 - "kne_test": NAMESPACE            NAME                                        READY   STATUS              RESTARTS   AGE
Step #0 - "kne_test": kube-system          coredns-787d4945fb-nwqb4                    1/1     Running             0          33s
Step #0 - "kne_test": kube-system          coredns-787d4945fb-tlfmf                    1/1     Running             0          33s
Step #0 - "kne_test": kube-system          etcd-kne-control-plane                      1/1     Running             0          50s
Step #0 - "kne_test": kube-system          kindnet-mxr6k                               1/1     Running             0          34s
Step #0 - "kne_test": kube-system          kube-apiserver-kne-control-plane            1/1     Running             0          50s
Step #0 - "kne_test": kube-system          kube-controller-manager-kne-control-plane   1/1     Running             0          50s
Step #0 - "kne_test": kube-system          kube-proxy-pxm2n                            1/1     Running             0          34s
Step #0 - "kne_test": kube-system          kube-scheduler-kne-control-plane            1/1     Running             0          50s
Step #0 - "kne_test": local-path-storage   local-path-provisioner-c8855d4bb-w4w64      1/1     Running             0          33s
Step #0 - "kne_test": meshnet              meshnet-sdcn2                               0/1     ContainerCreating   0          0s
Step #0 - "kne_test": metallb-system       controller-8bb68977b-2drcf                  1/1     Running             0          33s
Step #0 - "kne_test": metallb-system       speaker-sht8p                               1/1     Running             0          27s
Step #0 - "kne_test": ++ docker exec kne-control-plane crictl images
Step #0 - "kne_test": ++ grep docker.io/networkop/init-wait
Step #0 - "kne_test": docker.io/networkop/init-wait                                 latest               9469cb38beaf9       5.86MB
Step #0 - "kne_test": ++ /home/user/kne/kne_cli/kne teardown kne/deploy/kne/kind-bridge.yaml
Step #0 - "kne_test": I0321 16:36:45.335226    9554 deploy.go:329] Deleting cluster...
Step #0 - "kne_test": W0321 16:36:45.357018    9554 run.go:29] (kind): Deleting cluster "kne" ...
Step #0 - "kne_test": W0321 16:36:48.235895    9554 run.go:29] (kind): Deleted nodes: ["kne-control-plane"]
Step #0 - "kne_test": I0321 16:36:48.236490    9554 deploy.go:333] Cluster deleted
Step #0 - "kne_test": I0321 16:36:48.236545    9554 deploy.go:134] Cluster deployment teardown complete
Step #0 - "kne_test": Log files can be found in:
Step #0 - "kne_test":     /tmp/kne.kne-presubmit-kne-a8dcbcd1-8533-4a2a-bcfb-a9e6c8046d13.user.log.INFO.20240321-163645.9554
Step #0 - "kne_test":     /tmp/kne.kne-presubmit-kne-a8dcbcd1-8533-4a2a-bcfb-a9e6c8046d13.user.log.WARNING.20240321-163645.9554
Step #0 - "kne_test": ++ /home/user/kne/kne_cli/kne deploy kne/deploy/kne/kubeadm.yaml --report_usage=false
Step #0 - "kne_test": I0321 16:36:48.282975    9653 deploy.go:191] Deploying cluster...
Step #0 - "kne_test": I0321 16:36:50.658984    9653 run.go:26] (sudo): [init] Using Kubernetes version: v1.29.3
Step #0 - "kne_test": I0321 16:36:50.659016    9653 run.go:26] (sudo): [preflight] Running pre-flight checks
Step #0 - "kne_test": W0321 16:36:55.085663    9653 run.go:29] (sudo): error execution phase preflight: [preflight] Some fatal errors occurred:
Step #0 - "kne_test": W0321 16:36:55.085991    9653 run.go:29] (sudo): 	[ERROR CRI]: container runtime is not running: output: time="2024-03-21T16:36:54Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
Step #0 - "kne_test": W0321 16:36:55.086008    9653 run.go:29] (sudo): , error: exit status 1
Step #0 - "kne_test": W0321 16:36:55.086014    9653 run.go:29] (sudo): [preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
Step #0 - "kne_test": W0321 16:36:55.086024    9653 run.go:29] (sudo): To see the stack trace of this error execute with --v=5 or higher
Step #0 - "kne_test": Error: failed to deploy cluster: "/usr/bin/sudo kubeadm init --cri-socket unix:///var/run/cri-dockerd.sock --pod-network-cidr 10.244.0.0/16 --token-ttl 0" failed: exit status 1
Step #0 - "kne_test": Log files can be found in:
Step #0 - "kne_test":     /tmp/kne.kne-presubmit-kne-a8dcbcd1-8533-4a2a-bcfb-a9e6c8046d13.user.log.INFO.20240321-163648.9653
Step #0 - "kne_test":     /tmp/kne.kne-presubmit-kne-a8dcbcd1-8533-4a2a-bcfb-a9e6c8046d13.user.log.WARNING.20240321-163655.9653
Step #0 - "kne_test": + cleanup
Step #0 - "kne_test": + gcloud compute instances delete kne-presubmit-kne-a8dcbcd1-8533-4a2a-bcfb-a9e6c8046d13 --quiet
Step #0 - "kne_test": Deleted [https://www.googleapis.com/compute/v1/projects/kne-external/zones/us-central1-a/instances/kne-presubmit-kne-a8dcbcd1-8533-4a2a-bcfb-a9e6c8046d13].
Finished Step #0 - "kne_test"
ERROR
ERROR: build step 0 "gcr.io/kne-external/remote-builder" failed: step exited with non-zero status: 1
Step #1 - "vendors_test": ++ cleanup
Step #1 - "vendors_test": ++ gcloud compute instances delete kne-presubmit-vendors-a8dcbcd1-8533-4a2a-bcfb-a9e6c8046d13 --quiet

Build Log: https://console.cloud.google.com/cloud-build/builds;region=us-central1/a8dcbcd1-8533-4a2a-bcfb-a9e6c8046d13?project=94286565069