Improve tagging & deployment for install-contour/provisioner-working scripts #5854
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

```
@@           Coverage Diff           @@
##             main    #5854   +/-   ##
=======================================
  Coverage   78.62%   78.62%
=======================================
  Files         138      138
  Lines       19631    19631
=======================================
  Hits        15434    15434
  Misses       3894     3894
  Partials      303      303
```
Thank you @harshil1973!

Looks good to me. Small remark from me inline.

A side effect of patching with the timestamp annotation is that even at the initial `make install-*` (when the cluster is being created), the `kubectl patch` will trigger an extra restart that would not otherwise be necessary. But at least to me, that seems a small price to pay for not spamming the developer's disk with temp images.

@tsaarni Done!
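For context, a minimal sketch of the timestamp trick being discussed: bumping an annotation under `spec.template` makes the Deployment controller roll the pods, so the freshly loaded image is picked up even though its tag is unchanged. The patch command is the one used in the scripts in this PR; the `rollout status` line is just an optional way to watch the restart, not part of the scripts.

```bash
# Changing the pod template annotation forces a new ReplicaSet and therefore a pod restart.
kubectl patch deployment -n projectcontour contour-gateway-provisioner \
  --patch '{"spec": {"template": {"metadata": {"annotations": {"timestamp": "'$(date +%s)'"}}}}}'

# Optional: watch the rollout triggered by the patch (not part of the scripts).
kubectl rollout status deployment/contour-gateway-provisioner -n projectcontour
```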
Force-pushed from 045f121 to a60bdc0
```diff
 # Install the Gateway provisioner using the loaded image.
-export CONTOUR_IMG=ghcr.io/projectcontour/contour:latest
+export CONTOUR_IMG=ghcr.io/projectcontour/contour:dev
```
I think there's an issue here: even though the provisioner itself will be updated to use the newly built `dev` image, any Contour instances managed by it don't get updated, since the tag hasn't changed, which can be problematic for the dev workflow.
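To make the problem concrete, here is a small sketch (not part of the PR) that lists the provisioner-managed Deployments and the image each one runs; the `app.kubernetes.io/managed-by=contour-gateway-provisioner` label is the one used later in this thread. Because the image reference still reads the same unchanged tag after a rebuild, Kubernetes sees no spec change and never restarts these pods.

```bash
# Show each provisioner-managed Deployment and its image. After rebuilding and
# reloading the image under the same tag, this output is unchanged, so no rollout happens.
kubectl get deployment -n projectcontour \
  -l app.kubernetes.io/managed-by=contour-gateway-provisioner \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.template.spec.containers[0].image}{"\n"}{end}'
```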
In that case, maybe the old pseudo-random image tag scheme would be nice, with a small change: we could add the prefix `dev` to it so that we can later remove all the images whose tags start with `dev`.
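For illustration only, a rough sketch of that idea (the PR ultimately went a different way): build with a `dev-` prefixed pseudo-random tag, similar to the existing `v$$` scheme, and clean up every `dev-` tagged image afterwards. The `make container` invocation mirrors the one in `install-provisioner-working.sh`; the `REPO` value and the cleanup pipeline are assumptions, not an existing script.

```bash
# Assumes this runs from the Contour repo root.
REPO="."

# Hypothetical dev- prefixed pseudo-random tag, unique per script run (e.g. dev-12345).
VERSION="dev-$$"
make -C "${REPO}" container IMAGE=ghcr.io/projectcontour/contour VERSION="${VERSION}"

# Later, delete every locally built dev- image in one sweep.
docker images ghcr.io/projectcontour/contour --format '{{.Tag}}' \
  | grep '^dev-' \
  | xargs -r -I{} docker rmi "ghcr.io/projectcontour/contour:{}"
```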
The Contour project currently lacks enough contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:
You can:
Please send feedback to the #contour channel in the Kubernetes Slack
One path forward here could be to just make this change for the non-provisioner use case, which will at least solve some of the problem. We can continue to work on the provisioner scenario separately.
Force-pushed from 0248bf5 to 808ed01
@skriss Done 👍
If removing provisioner-working from the scope, then I think ... But I wonder: isn't it possible to follow your original idea for the managed Contours as well, with something like this?

```diff
diff --git a/test/scripts/install-provisioner-working.sh b/test/scripts/install-provisioner-working.sh
@@ -51,6 +52,9 @@ ${KUBECTL} apply -f <(cat examples/gateway-provisioner/03-gateway-provisioner.ya
 # Patching the deployment with timestamp will trigger pod restart.
 ${KUBECTL} patch deployment -n projectcontour contour-gateway-provisioner --patch '{"spec": {"template": {"metadata": {"annotations": {"timestamp": "'$(date +%s)'"}}}}}'
+for deployment in $(kubectl get deployment -n projectcontour -l app.kubernetes.io/managed-by=contour-gateway-provisioner -o name) ; do
+  ${KUBECTL} patch $deployment -n projectcontour --patch '{"spec": {"template": {"metadata": {"annotations": {"timestamp": "'$(date +%s)'"}}}}}'
+done
 # Wait for the provisioner to report "Ready" status.
 ${KUBECTL} wait --timeout="${WAITTIME}" -n projectcontour -l control-plane=contour-gateway-provisioner deployments --for=condition=Available
```

If there are not any (Gateway instances not created yet), the loop is just skipped.
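And when at least one Gateway does exist, the same label selector makes it easy to confirm the loop had an effect; a small follow-up check (my addition, not part of the proposed diff) could be:

```bash
# Wait for each provisioner-managed Deployment to finish rolling onto the
# freshly loaded image after the timestamp patch above.
for deployment in $(kubectl get deployment -n projectcontour \
    -l app.kubernetes.io/managed-by=contour-gateway-provisioner -o name); do
  kubectl rollout status "$deployment" -n projectcontour
done
```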
Force-pushed from 808ed01 to 3903564
There was a small confusion: the loop that restarts the managed Contours should have applied to `install-provisioner-working.sh`, not `install-contour-working.sh`. On the other hand, I'll include a diff here on top of the current PR that shows what I meant:

```diff
diff --git a/test/scripts/install-contour-working.sh b/test/scripts/install-contour-working.sh
index f5ce10d6..88e595bc 100755
--- a/test/scripts/install-contour-working.sh
+++ b/test/scripts/install-contour-working.sh
@@ -75,9 +75,7 @@ for file in ${REPO}/examples/contour/02-job-certgen.yaml ${REPO}/examples/contou
 done
 # Patching the deployment with timestamp will trigger pod restart.
-for deployment in $(kubectl get deployment -n projectcontour -l app.kubernetes.io/managed-by=contour-gateway-provisioner -o name) ; do
-  ${KUBECTL} patch $deployment -n projectcontour --patch '{"spec": {"template": {"metadata": {"annotations": {"timestamp": "'$(date +%s)'"}}}}}'
-done
+${KUBECTL} patch deployment -n projectcontour contour --patch '{"spec": {"template": {"metadata": {"annotations": {"timestamp": "'$(date +%s)'"}}}}}'
 # Wait for Contour and Envoy to report "Ready" status.
 ${KUBECTL} wait --timeout="${WAITTIME}" -n projectcontour -l app=contour deployments --for=condition=Available
diff --git a/test/scripts/install-provisioner-working.sh b/test/scripts/install-provisioner-working.sh
index ddbe02e8..70ec461d 100755
--- a/test/scripts/install-provisioner-working.sh
+++ b/test/scripts/install-provisioner-working.sh
@@ -33,13 +33,13 @@ fi
 VERSION="v$$"
 # Build the Contour Provisioner image.
-make -C ${REPO} container IMAGE=ghcr.io/projectcontour/contour VERSION=${VERSION}
+make -C ${REPO} container IMAGE=ghcr.io/projectcontour/contour VERSION=dev
 # Push the Contour Provisioner image into the cluster.
-kind::cluster::load::docker ghcr.io/projectcontour/contour:${VERSION}
+kind::cluster::load::docker ghcr.io/projectcontour/contour:dev
 # Install the Gateway provisioner using the loaded image.
-export CONTOUR_IMG=ghcr.io/projectcontour/contour:${VERSION}
+export CONTOUR_IMG=ghcr.io/projectcontour/contour:dev
 ${KUBECTL} apply -f examples/gateway-provisioner/00-common.yaml
 ${KUBECTL} apply -f examples/gateway-provisioner/01-roles.yaml
@@ -52,6 +52,11 @@ ${KUBECTL} apply -f <(cat examples/gateway-provisioner/03-gateway-provisioner.ya
 # Patching the deployment with timestamp will trigger pod restart.
 ${KUBECTL} patch deployment -n projectcontour contour-gateway-provisioner --patch '{"spec": {"template": {"metadata": {"annotations": {"timestamp": "'$(date +%s)'"}}}}}'
+# Trigger restart of all pods managed by the provisioner.
+for deployment in $(kubectl get deployment -n projectcontour -l app.kubernetes.io/managed-by=contour-gateway-provisioner -o name) ; do
+  ${KUBECTL} patch $deployment -n projectcontour --patch '{"spec": {"template": {"metadata": {"annotations": {"timestamp": "'$(date +%s)'"}}}}}'
+done
+
 # Wait for the provisioner to report "Ready" status.
 ${KUBECTL} wait --timeout="${WAITTIME}" -n projectcontour -l control-plane=contour-gateway-provisioner deployments --for=condition=Available
 ${KUBECTL} wait --timeout="${WAITTIME}" -n projectcontour -l control-plane=contour-gateway-provisioner pods --for=condition=Ready
```

That is, the loop that restarts the provisioner-managed Contours moves from `install-contour-working.sh` to `install-provisioner-working.sh`.
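Putting it together, the resulting inner loop looks roughly like this. It is only a sketch: the `make install-provisioner-working` target name is an assumption based on the `make install-*` wording earlier in the thread, and the `docker images` check just illustrates that no per-run `v$$` tags pile up anymore.

```bash
# Rebuild, reload into kind, and restart everything on the single :dev tag.
# (Target name assumed to wrap test/scripts/install-provisioner-working.sh.)
make install-provisioner-working

# Only one local dev image remains, instead of a new pseudo-random tag per run.
docker images ghcr.io/projectcontour/contour
```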
Force-pushed from 3903564 to a1db613 (Signed-off-by: Harshil Patel <[email protected]>)
closes #5277