Add latest Tag to Docker Release #10498
base: master
Conversation
.github/workflows/create-release.yml (outdated)
```diff
@@ -68,6 +68,6 @@ jobs:
           context: .
           push: true
           platforms: linux/amd64,linux/arm64
-          tags: ghcr.io/badges/shields:server-${{ steps.date.outputs.date }}
+          tags: ghcr.io/badges/shields:server-${{ steps.date.outputs.date }},shieldsio/shields:latest
```
Suggested change:

```diff
-          tags: ghcr.io/badges/shields:server-${{ steps.date.outputs.date }},shieldsio/shields:latest
+          tags: ghcr.io/badges/shields:server-${{ steps.date.outputs.date }},ghcr.io/badges/shields:latest
```
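For orientation, here is a rough sketch of how the full build step could read with the suggested `ghcr.io` rolling tag. The step name and action version pin are assumptions, not copied from the actual workflow; the tags mirror the suggestion above.

```yaml
# Sketch only -- not the exact contents of create-release.yml.
- name: Build and push Docker image
  uses: docker/build-push-action@v5
  with:
    context: .
    push: true
    platforms: linux/amd64,linux/arm64
    # tags accepts a comma- or newline-separated list
    tags: |
      ghcr.io/badges/shields:server-${{ steps.date.outputs.date }}
      ghcr.io/badges/shields:latest
```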
I think having a rolling tag that points at the latest snapshot makes sense. I reckon using `ghcr.io/badges/shields` rather than `shieldsio/shields` makes more sense here, per the suggestion above.

Note: before we merge this, update https://github.com/badges/shields/blob/master/doc/self-hosting.md#docker again.
as far as i know, we made the decision to not include it intentionally when we started producing our own images, and i'm not keen on reversing that decision, at least not absent exceptionally compelling reasons
To be clear: are you saying you think we shouldn't use the name `latest`? Or are you completely against having a rolling tag pointing to the latest snapshot with any name?
TL;DR:

Yes, I'm adamantly opposed to adding any `latest` tag.

Yes, I'm also generally opposed to adding any new rolling tag (though I'm not pushing for us to drop the existing rolling `next` tag).

I think our starting position needs to be opposition to adding any new tags, with the paradigm for any new proposals to change that position needing to sufficiently make the case as to why we should make an exception to that, as opposed to starting from the position of having to defend why we don't want a new tag.

If we're going to entertain this then I'd really like to get a better understanding of the problem we're trying to solve.

To expand on my prior comment with some receipts: deploying images using a `latest` tag has well-documented pitfalls; see the Kubernetes docs on image pull policy: https://kubernetes.io/docs/concepts/containers/images/#image-pull-policy
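To illustrate the defaulting behaviour those Kubernetes docs describe, here is a hedged sketch; the pod names and the pinned tag value are hypothetical.

```yaml
# Hypothetical manifests illustrating Kubernetes imagePullPolicy defaults.
apiVersion: v1
kind: Pod
metadata:
  name: shields-pinned
spec:
  containers:
    - name: shields
      # Non-latest tag: imagePullPolicy defaults to IfNotPresent, so nodes keep
      # the image they already have until the manifest itself changes.
      image: ghcr.io/badges/shields:server-2024-07-01 # hypothetical tag
---
apiVersion: v1
kind: Pod
metadata:
  name: shields-latest
spec:
  containers:
    - name: shields
      # ':latest' tag: imagePullPolicy defaults to Always, so each node pulls
      # whatever 'latest' resolves to when its pod starts; nodes in the same
      # cluster can end up running different builds of the "same" tag.
      image: ghcr.io/badges/shields:latest
```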
That's a short and sweet summary, but the underlying reasons/blast radius from deploying rolling tags (`latest` or otherwise) go deeper.

The docker images we produce are for the Shields server, so it's something that gets deployed, as opposed to being used as a base image (where rolling tags are less problematic) by developers constructing their own deployable images. Accordingly, or at least from my perspective, if as a project we start producing a `latest` tag, we have to think about how any of our users might consume it.
Conversely, I don't see what substantive gains would be achieved from doing this.

shields.io needs to deploy basically every update we get, but i believe it's still the case that that level of cadence would be extremely rare for self-hosters, many (perhaps most if not all?) of whom can just deploy and forget about it, as their surface of badge consumption is much narrower than the entire corpus of shields.io users (e.g. how many self-hosting users absolutely urgently need #10489, #10497, etc.?).

i run a self-hosted shields instance within a highly regulated corporate environment that has resultant policies which set a relatively low ceiling on the age of running images, which in turn results in me having to redeploy more frequently than needed from a shields-updates/features perspective. i'd posit my own self-hosted deployment cadence is much more frequent than the overwhelming majority of self-hosters. i have no issues tweaking the variable i use to grab the new monthly tag. it's not something i find burdensome whatsoever, i wouldn't switch to any rolling tag to spare myself that highly marginal effort i go through every < 45 days, and if i was adequately motivated i'm sure i could script that away.

we provide the `next` tag already, and finally, our monthly server tags aren't anything special relative to it.

As such, if there's a self-hosting use case that wants to completely automate the process of upgrading their self-hosted instance by removing the human element of specifying a calver tag, why can't they use the existing, rolling `next` tag? e.g. if there's a self-hoster that deployed a
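As a concrete picture of the "tweak the variable, redeploy" flow described a couple of paragraphs up, here is a sketch; the service name, variable name, and default tag value are hypothetical, and the rest of the service config is omitted.

```yaml
# docker-compose.yml sketch: the monthly calver tag lives in one variable,
# so an upgrade is editing SHIELDS_TAG (e.g. in an .env file) and redeploying.
# The default tag value shown here is hypothetical.
services:
  shields:
    image: ghcr.io/badges/shields:${SHIELDS_TAG:-server-2024-07-01}
    restart: unless-stopped
```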
Having shot us in the foot with #9391 (comment), I can see the argument against deploying from a rolling tag. Certainly not in an environment where you've got more than one instance in a cluster.
I think this is quite a good point. First off, in that situation (base image) you only pull from the upstream tag at build time, and then you're locked once your image is built. But also, in a situation where the tag you're tracking is

In any case, this is not an issue I have really strong views on. Hence tagging you in - I figured you have probably thought about this one more than I have. I think I'll leave it with @smashedr to say why this would be good/useful if they are not convinced. Otherwise we can just close it.

tangent:
When this issue popped up, it actually got me thinking about why we use
In a production environment, you should always pin versions. But for a simple self-deployment, using a `latest` tag makes it very easy to update to the latest version.

All I would have to do to update is click one button. And as far as knowing what version is running, all I have to do is click on the image, and it's clear as day.

But when pinning versions, I have to get the latest version tag, open my IDE, commit the changes, then redeploy the application: https://github.com/smashedr/shields.io
@smashedr thanks for the quick response. i'll reply to a couple items inline below, but i do want to center on what i view as the main question that still remains unanswered (or at least unclear in my mind): for your particular use case, what gaps/challenges do you have with the existing tags we publish?
I get the simplicity that comes with a model that utilizes rolling tags, though I'm unsure what your personal distinction is between "simple self-deployment" and "production". Regardless though, we have to think about the artifacts (which includes tags) that we produce, how those could potentially be consumed by any of our users, and the potential effects that could have on us.

We can reasonably say that we don't know if/how the app may behave on e.g. powerpc or any other non-x86 instruction set architecture, and that we don't have the time/expertise/funds to be able to test, validate, and support it. But we do already spend precious cycles supporting self-hosting users, and extending our main image with more rolling tags (especially a `latest` tag) would add to that.

To be clear, I'm not suggesting that any and every usage of a rolling tag will result in a catastrophic explosion, and it may not be an issue for your particular circumstances. But we can't focus solely on that, as it's not an image tag we'd be producing for your use case alone.
Good shout, i forgot we put that env var in there. Though worth noting that just makes troubleshooting a bit easier, in that it simplifies the process of identifying one of the problematic situations (e.g. different versions running on different nodes) that can and do arise with deploying rolling tags.
This is where I come back to trying to differentiate between one user, one use case (the perspective you're understandably focused on) versus all users, any use case (the perspective from which we have to evaluate this).

What's the scenario where someone is:

a. not in production, and by extension presumably open to accepting some risk
I am not asking anyone else to switch to using the rolling `latest` tag. All this does is add the ability for additional workflows to be used, vs the one you want people to use: pinning versions. It is up to the end users to decide what they want in their environment.

There is a big difference in

As far as the a, b, c scenario you mentioned: I am that person. A, I fully understand how it works, and it makes my workflow much, much easier to manage. B, see A, it makes the workflow much, much easier to manage for me. C, I can not use

So again, all I am asking is to add the ability for people who want to use a `latest` tag to do so.
re: #10498 (comment)

For the sake of trying to expedite the conversation, I'll reiterate that I know what you're asking for. I understand what the PR is proposing, and I understand when and how we're publishing our tags, so we can leave that there 👍

Your comment did include the answer to my question though, which is helpful: you need to deploy on arm, you'd prefer to use a rolling tag, and there's not a rolling arm-compatible tag today. That's the objective. You've proposed one way that would achieve that, but that's not the only way we could provide a rolling tag that'll run on arm. I'm going to go ahead and say that a tag literally named `latest` is still a non-starter for me.

The main point I want to drill into now though, to see what alternatives might be viable, is this 👇
I'm curious what "bleeding edge" means to you, specifically in this context. What do you think are the meaningful differences between our rolling `next` tag and the monthly `server-*` tags?
The name of the tag is irrelevant, I just want to see a rolling tag that I can use to make updating my self-hosted server much easier. As far as the other tags, I don't fully understand how this project decides to make a release.
It is pretty arbitrary. That is why I refer to it as a snapshot. There is a scheduled job that runs once a month that opens an automated PR to cut a release. It is fairly arbitrary and hasn't gone through any additional QA.

There are some slight qualifiers to this though. I might delay a release if there's a fix I'd particularly like to get merged first (e.g. #8467 (comment)). I do also try to make sure we don't cut a snapshot at a time when there is some known problem. Occasionally we do change something that causes an issue. For example, we merge a PR that seems reasonable at review time, get it into production and realise it introduces a problem we hadn't considered (example: #10125). This is not super frequent, but it has happened. If the automated PR saying "time to do a release" popped up when something like that had happened and I knew master was in a buggy state, I would hold off on cutting the release until that issue had been resolved.

The other thing is I will not usually cut a monthly snapshot if there are changes on master we are not yet running in production. So I try to make sure everything that goes into a snapshot has served some level of prod traffic without immediately falling over.

In that sense, I would say there is a slightly higher level of "stability" associated with the monthly releases than just tracking next/master.
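For readers unfamiliar with that setup, the trigger for a once-a-month job of this kind looks roughly like the following. This is an illustrative sketch, not the project's actual workflow; the cron expression and manual-dispatch fallback are assumptions.

```yaml
# Illustrative sketch of a monthly scheduled trigger (not copied from the
# badges/shields repo).
on:
  schedule:
    - cron: '0 0 1 * *'   # 00:00 UTC on the 1st of every month
  workflow_dispatch: {}   # also allow triggering the release PR manually
```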
I'd partially agree, but maybe with some nuance. I think snapshot is a fair description and feel it echoes my main original point I wanted to ensure was clear about it being arbitrary.

Where I'd still draw a little nuance is that I wouldn't want anyone to infer a label of "stability" on any of the server tags. We don't use them ourselves, and it's very conceivable that a

For me the snapshot/

We've given @smashedr some conflicting thoughts on next steps, so I think we as a maintainer team need to discuss a couple things to get on the same page and figure out how to proceed here. The only hill I'll die on is that I don't want any tag named `latest`.
One other change I would like to see is adding a `latest` tag to the docker image that always points to the latest version published. This will allow easily updating self-hosted deployments without having to make code changes every time.

I also updated the setting of the `date` output to the new format, since the old command is soon to be deprecated. Reference: https://github.blog/changelog/2022-10-11-github-actions-deprecating-save-state-and-set-output-commands/
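For context on that second change, the linked changelog deprecates the `::set-output` workflow command in favour of writing to the `$GITHUB_OUTPUT` file. The step below is an illustrative before/after; the step name and the exact date format used in the real workflow are assumptions.

```yaml
# Old, deprecated form:
- name: Get date
  id: date
  run: echo "::set-output name=date::$(date +'%Y-%m-%d')"

# New form per the GitHub changelog:
- name: Get date
  id: date
  run: echo "date=$(date +'%Y-%m-%d')" >> "$GITHUB_OUTPUT"
```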