
Improve tagging & deployment for install-contour/provisioner-working scripts #5854

Closed
wants to merge 1 commit

Conversation

harshil1973
Contributor

closes #5277

@harshil1973 harshil1973 requested a review from a team as a code owner October 14, 2023 11:47
@harshil1973 harshil1973 requested review from stevesloka and sunjayBhatia and removed request for a team October 14, 2023 11:47
@codecov

codecov bot commented Oct 14, 2023

Codecov Report

All modified and coverable lines are covered by tests ✅

Comparison is base (55aa95d) 78.62% compared to head (a1db613) 78.62%.
Report is 69 commits behind head on main.

Additional details and impacted files


@@           Coverage Diff           @@
##             main    #5854   +/-   ##
=======================================
  Coverage   78.62%   78.62%           
=======================================
  Files         138      138           
  Lines       19631    19631           
=======================================
  Hits        15434    15434           
  Misses       3894     3894           
  Partials      303      303           

@tsaarni tsaarni added the release-note/none-required Marks a PR as not requiring a release note. Should only be used for very small changes. label Oct 16, 2023
Member

@tsaarni tsaarni left a comment


Thank you @harshil1973!

Looks good to me. Small remark from me inline.

The side effect of patching with a timestamp label is that the initial make install-* (when the cluster is being created) will also trigger an extra restart via kubectl patch that would not otherwise be necessary. But at least to me, this seems a small price to pay for not spamming the developer's disk with temp images.

test/scripts/install-contour-working.sh (resolved)
test/scripts/install-provisioner-working.sh (outdated, resolved)
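The timestamp-annotation trick discussed above can be sketched in isolation. This is a minimal sketch: only the patch payload is built and printed, and the kubectl call (matching the deployment names used in the scripts in this PR) is left commented out since it needs a live cluster:

```shell
# Build the timestamp patch used by the install scripts: changing a
# pod-template annotation makes the Deployment's pod spec differ from
# the running one, which triggers a rolling restart of its pods.
TS="$(date +%s)"
PATCH='{"spec": {"template": {"metadata": {"annotations": {"timestamp": "'"${TS}"'"}}}}}'
echo "${PATCH}"

# Against a live cluster it would be applied like this (not run here):
# kubectl patch deployment -n projectcontour contour --patch "${PATCH}"
```

Because the annotation value changes on every invocation, re-running the script always forces a restart, even when the image tag itself is unchanged.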
@harshil1973
Contributor Author

@tsaarni Done!


 # Install the Gateway provisioner using the loaded image.
-export CONTOUR_IMG=ghcr.io/projectcontour/contour:latest
+export CONTOUR_IMG=ghcr.io/projectcontour/contour:dev
Member


I think there's an issue here: even though the provisioner itself will be updated to use the newly built -dev image, any Contour instances managed by it won't be updated, since the tag hasn't changed. That can be problematic for the dev workflow.

Contributor Author


In that case, maybe the old pseudo-random image tag scheme would work with some changes: we could add a dev prefix to the tag, so that all images starting with the dev prefix can be removed later.
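The dev-prefixed pseudo-random tag idea could look roughly like this. This is a hypothetical sketch, not code from the PR: the exact `dev-` prefix and the docker cleanup pipeline are assumptions:

```shell
# Unique per-build tag: "dev-" prefix plus the shell PID, so managed
# pods always see a new tag, while stale images stay easy to find.
TAG="dev-$$"
IMAGE="ghcr.io/projectcontour/contour:${TAG}"
echo "${IMAGE}"

# Cleanup sketch: delete every previously built dev-* image
# (assumes the docker CLI is available; not run here).
# docker images ghcr.io/projectcontour/contour --format '{{.Repository}}:{{.Tag}}' \
#   | grep ':dev-' | xargs -r docker rmi
```

This keeps the restart-on-new-tag property of the PID-based scheme while making accumulated temp images trivially identifiable.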


github-actions bot commented Nov 8, 2023

The Contour project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 14d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, the PR is closed

You can:

  • Mark this PR as fresh by commenting or pushing a commit
  • Close this PR
  • Offer to help out with triage

Please send feedback to the #contour channel in the Kubernetes Slack

@github-actions github-actions bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 8, 2023
@skriss
Member

skriss commented Nov 8, 2023

One path forward here could be to just make this change for the non-provisioner use case, which will at least solve some of the problem. We can continue to work on the provisioner scenario separately.

@github-actions github-actions bot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Nov 9, 2023
@harshil1973
Contributor Author

@skriss Done 👍

@tsaarni
Member

tsaarni commented Nov 30, 2023

If provisioner-working is removed from the scope, then I think test/scripts/install-provisioner-working.sh should not patch the timestamp at all (as it currently still does), but instead we'd just rely on the random PID-based image tag. Otherwise it will be restarted twice 😄

But I wonder, isn't it possible to follow your original idea for managed contours as well, with something like:

diff --git a/test/scripts/install-provisioner-working.sh b/test/scripts/install-provisioner-working.sh
@@ -51,6 +52,9 @@ ${KUBECTL} apply -f <(cat examples/gateway-provisioner/03-gateway-provisioner.ya

 # Patching the deployment with timestamp will trigger pod restart.
 ${KUBECTL} patch deployment -n projectcontour contour-gateway-provisioner --patch '{"spec": {"template": {"metadata": {"annotations": {"timestamp": "'$(date +%s)'"}}}}}'
+for deployment in $(kubectl get deployment -n projectcontour -l app.kubernetes.io/managed-by=contour-gateway-provisioner -o name) ; do
+    ${KUBECTL} patch $deployment -n projectcontour --patch '{"spec": {"template": {"metadata": {"annotations": {"timestamp": "'$(date +%s)'"}}}}}'
+done

 # Wait for the provisioner to report "Ready" status.
 ${KUBECTL} wait --timeout="${WAITTIME}" -n projectcontour -l control-plane=contour-gateway-provisioner deployments --for=condition=Available

If there are not any (Gateway instance not created yet) the loop would be just skipped.
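That skip-when-empty behavior relies on ordinary shell word splitting: when the command substitution produces no output, the for loop has no words to iterate and its body never runs. A minimal stand-in, with kubectl's output simulated as an empty string:

```shell
# Simulate `kubectl get deployment ... -o name` returning nothing:
# the loop body is skipped entirely, so the script is safe to run
# before any Gateway (and thus any managed Deployment) exists.
deployments=""   # stand-in for $(kubectl get deployment ... -o name)
patched=0
for deployment in ${deployments} ; do
  patched=$((patched + 1))
done
echo "patched ${patched} deployments"   # → patched 0 deployments
```

Note the deliberately unquoted `${deployments}`: quoting it would instead produce one empty word and run the loop body once.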

@tsaarni
Member

tsaarni commented Dec 4, 2023

There was a small confusion: the loop that restarts managed contours should have been applied to install-provisioner-working.sh, which is for testing the contour provisioner & managed contours.

On the other hand, install-contour-working.sh has just a single (unmanaged) contour, so what you had previously was correct.

I'll include diff here on top of the current PR that shows what I meant:

diff --git a/test/scripts/install-contour-working.sh b/test/scripts/install-contour-working.sh
index f5ce10d6..88e595bc 100755
--- a/test/scripts/install-contour-working.sh
+++ b/test/scripts/install-contour-working.sh
@@ -75,9 +75,7 @@ for file in ${REPO}/examples/contour/02-job-certgen.yaml ${REPO}/examples/contou
 done

 # Patching the deployment with timestamp will trigger pod restart.
-for deployment in $(kubectl get deployment -n projectcontour -l app.kubernetes.io/managed-by=contour-gateway-provisioner -o name) ; do
-  ${KUBECTL} patch $deployment -n projectcontour --patch '{"spec": {"template": {"metadata": {"annotations": {"timestamp": "'$(date +%s)'"}}}}}'
-done
+${KUBECTL} patch deployment -n projectcontour contour --patch '{"spec": {"template": {"metadata": {"annotations": {"timestamp": "'$(date +%s)'"}}}}}'

 # Wait for Contour and Envoy to report "Ready" status.
 ${KUBECTL} wait --timeout="${WAITTIME}" -n projectcontour -l app=contour deployments --for=condition=Available
diff --git a/test/scripts/install-provisioner-working.sh b/test/scripts/install-provisioner-working.sh
index ddbe02e8..70ec461d 100755
--- a/test/scripts/install-provisioner-working.sh
+++ b/test/scripts/install-provisioner-working.sh
@@ -33,13 +33,13 @@ fi
 VERSION="v$$"

 # Build the Contour Provisioner image.
-make -C ${REPO} container IMAGE=ghcr.io/projectcontour/contour VERSION=${VERSION}
+make -C ${REPO} container IMAGE=ghcr.io/projectcontour/contour VERSION=dev

 # Push the Contour Provisioner image into the cluster.
-kind::cluster::load::docker ghcr.io/projectcontour/contour:${VERSION}
+kind::cluster::load::docker ghcr.io/projectcontour/contour:dev

 # Install the Gateway provisioner using the loaded image.
-export CONTOUR_IMG=ghcr.io/projectcontour/contour:${VERSION}
+export CONTOUR_IMG=ghcr.io/projectcontour/contour:dev

 ${KUBECTL} apply -f examples/gateway-provisioner/00-common.yaml
 ${KUBECTL} apply -f examples/gateway-provisioner/01-roles.yaml
@@ -52,6 +52,11 @@ ${KUBECTL} apply -f <(cat examples/gateway-provisioner/03-gateway-provisioner.ya
 # Patching the deployment with timestamp will trigger pod restart.
 ${KUBECTL} patch deployment -n projectcontour contour-gateway-provisioner --patch '{"spec": {"template": {"metadata": {"annotations": {"timestamp": "'$(date +%s)'"}}}}}'

+# Trigger restart of all pods managed by the provisioner.
+for deployment in $(kubectl get deployment -n projectcontour -l app.kubernetes.io/managed-by=contour-gateway-provisioner -o name) ; do
+  ${KUBECTL} patch $deployment -n projectcontour --patch '{"spec": {"template": {"metadata": {"annotations": {"timestamp": "'$(date +%s)'"}}}}}'
+done
+
 # Wait for the provisioner to report "Ready" status.
 ${KUBECTL} wait --timeout="${WAITTIME}" -n projectcontour -l control-plane=contour-gateway-provisioner deployments --for=condition=Available
 ${KUBECTL} wait --timeout="${WAITTIME}" -n projectcontour -l control-plane=contour-gateway-provisioner pods --for=condition=Ready

That is, install-contour-working.sh is changed back to what you had previously, and install-provisioner-working.sh applies the idea to the provisioner use case by patching both the provisioner and the managed contours.


The Contour project currently lacks enough contributors to adequately respond to all PRs.

This bot triages PRs according to the following rules:

  • After 14d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, the PR is closed

You can:

  • Ensure your PR is passing all CI checks. PRs that are fully green are more likely to be reviewed. If you are having trouble with CI checks, reach out to the #contour channel in the Kubernetes Slack workspace.
  • Mark this PR as fresh by commenting or pushing a commit
  • Close this PR
  • Offer to help out with triage

Please send feedback to the #contour channel in the Kubernetes Slack

@github-actions github-actions bot added lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Dec 19, 2023

@github-actions github-actions bot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jan 17, 2024

@github-actions github-actions bot closed this Feb 20, 2024
@harshil1973 harshil1973 deleted the rollout_feature branch March 1, 2024 08:15