Migrate docker builds to Docker Hub #86
I think deferring a move to DockerHub until the pleasure/pain ratio balances is a sound idea. |
Closing; there's either agreement or lack of interest, and both lead to no action for now. |
I believe the ability to add builds to existing repositories is here now! I'm happy to do the migration if no one remembers any other outstanding issues. |
@openzipkin/core frankly I have no stake specifically in quay. It is harder to set up and slightly more work to troubleshoot. It does "work" almost always, but that's not actually a reason :P |
Thinking for this, can start by adding a master tag to the docker-zipkin images that auto builds on master push. It will fix #170, takes no time to set up, and gives a chance to see the build work before migrating release processes or affecting any existing tags. How does that sound? |
If I remember correctly, the only thing we were last worried about was conflating "latest" with "master", so as long as a different tag is used, that sounds like a good place to start.
|
Thanks - yep definitely intend to leave the latest tag as is, which isn't a problem with the docker hub UI. |
I haven't seen anyone say don't stop.. and usually I try to collect reasons on behalf of others who might be afk. As far as I know, you should have clear runway. thanks for the offer to help! |
Migrated all repos, all have a master tag now. Builds seem to succeed fine. This shows we can use docker hub to push tags without losing our repos and it's fairly easy to set up. Will think through how we might migrate the release process. |
FTR, I'm all for dropping Quay if all our requirements can now be satisfied using just Docker Hub. @anuraaga thank you for taking this on! |
So now all the repos have a Dockerfile that builds from the latest source. Having Docker Hub build a release image is also as simple as configuring the same build with a regex on the release tag. Looking at #82, my understanding is that the goal is to get rid of this repo's separate builds. Then the final step becomes deciding what to do with the remaining folders in this repo. Let me know if you have any thoughts. |
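For illustration only: Docker Hub build rules are configured in the repository's web UI rather than in a file, but a sketch of what a master build plus a release-tag build rule could look like is below. The regex, tag names, and paths are assumptions, not the exact values used for the zipkin repos.

```yaml
# Sketch of Docker Hub automated-build rules as they would be entered in the
# repository's build settings (not an actual config file).
build_rules:
  - source_type: Branch
    source: master
    docker_tag: master        # keeps "latest" untouched; master builds get their own tag
    dockerfile_location: Dockerfile
    build_context: /
  - source_type: Tag
    source: /^[0-9]+\.[0-9]+\.[0-9]+$/   # matches release tags such as 2.18.0
    docker_tag: "{sourceref}"            # publish under the git tag name
    dockerfile_location: Dockerfile
    build_context: /
```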
The only other thought that comes to mind is double-checking the size/layer count of the images in case an accident happened along the way. For good measure, having someone besides you trigger a re-publish would also be good for the team. E.g. sometimes I have to re-publish by deleting the tag and re-pushing it. For extra credit, see if @jeqo wants to try this out on his zipkin-kafka-storage repo, which we're discussing whether it's ready to go to oz-contrib |
Realized one thing missing from the plan was the fact that current release tags do not have Dockerfiles. So testing out the new release builds requires creating tags. For testing the build, I'll create a test tag. |
So zipkin-dependencies has the expected tags, and I'd say the release procedure in general checks out: https://cloud.docker.com/u/openzipkin/repository/docker/openzipkin/zipkin-dependencies. The image is a few megs smaller; I'm trying to find the difference, though it's not so obvious. I noticed though that when building the repo, I also realized |
Also just for the record, looking at the zipkin-dependencies build, I can see why it would be publishing a non-clean build. Travis first runs |
@anuraaga is there a way to automate a smoke test / integration test of the docker image after build? |
or, we may still need to use travis to run a script to build the container and run the tests. |
@saturnism To smoke test the docker image itself, I think docker hub can run a test defined as docker-compose.test.yml. I can look into that more, not so familiar with it. But want to confirm, is the goal to check the docker image, or the goal to check a running server, with docker as one option for building a server? If the latter, I think a better option might be to write normal integration tests that depend on the zipkin-server snapshot build. Then the server could be started by junit relatively easily and it'd work for PR builds without extra effort. |
Just for context: we do have e2e tests in some example repos like zipkin-php-example, and those test that we can actually interact with the server; not sure that is the requirement here. IMHO spinning up a container and calling the health endpoint should be enough here for now.
|
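As I understand Docker Hub's autotest feature, it runs docker-compose.test.yml and uses the exit code of a service named sut as the pass/fail signal. A minimal sketch of the health-endpoint smoke test being discussed follows; the service names, port, and timings are assumptions.

```yaml
# docker-compose.test.yml (sketch): Docker Hub autotest brings this up and
# fails the build if the "sut" service exits non-zero.
version: "2.4"
services:
  zipkin:
    build: .            # the image under test
  sut:
    image: alpine:3.10
    depends_on:
      - zipkin
    # Poll the health endpoint and require a parsed "UP" status rather than
    # trusting the HTTP status code alone.
    command: >
      sh -c "apk add --no-cache curl >/dev/null &&
      for i in $$(seq 1 30); do
      curl -s http://zipkin:9411/health | grep -q UP && exit 0;
      sleep 2; done; exit 1"
```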
Thanks - yeah, I think those test a client of the zipkin API, so for that we can start any server container. Here we want to check a server plugin (stackdriver storage), so we need to package the code under test in with the actual zipkin server, which is not in the repo. It's possible to do this with docker too, but I suspect it's simpler to just leave it to maven. |
For |
I want to make sure everyone is on the same page. Currently we have tests on zipkin-gcp, committed or in PRs, which check the storage logic directly.
Neither of these uses zipkin-server; they run business logic in the zipkin-gcp repo directly. Now there is an issue when actually running zipkin-server with stackdriver: zipkin-server uses spring boot magic to load the business logic from zipkin-gcp. It's quite possible that logic that works in storage integration tests does not actually work when loaded into zipkin server. Testing this means having tests that combine the code of zipkin-server and zipkin-gcp. Maven can do this, as can docker. These involve building; they don't work with just running an existing docker image of the server, since the code under test is the server itself.
There is also the possibility that when someone runs the built docker image, the server doesn't work. One reason for this could be that the jar was built for Java 12 but the base image is Java 11. To me, this sort of issue seems far more unlikely than an issue stemming from Java / spring-boot combining the modules.
So I'd like to ask again: is the goal to check the docker image, or the goal to check a running server? The latter has more options that give more flexibility (I personally don't think we need to go so far as to build a docker image on every PR build). The former requires building a docker image and running tests - doing it on master commits is probably ok, but it also seems relatively low value vs cost to me. |
We definitely should have a server test in
This has happened before, and it wouldn't have been caught from those tests. I'm also not suggesting running a full docker image test on every PR. The question was:
We only need to run this before or after the image is built, so that we can validate this automatically. The goal is to check the docker image 😄 |
E.g., if we use the existing repository https://github.com/openzipkin/docker-zipkin-gcp/. Thinking out loud:
|
openzipkin/zipkin#2808 adds a docker test for zipkin-server. @saturnism I think a similar pattern will be pretty simple for zipkin-gcp and solve the issues we're seeing testing it. The caveat I've found is Docker Hub only supports older-style secrets, meaning just environment variables set in the web UI (similar to CircleCI). I guess we'd add a base64-encoded service account private key in the web UI, which would be viewable to all the owners of the openzipkin docker organization, as opposed to a Travis secret which is only decryptable by Travis itself. Would that be ok? |
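A sketch of how the env-var style secret might be consumed from the test compose file, assuming the Docker Hub build environment variables are visible during autotest; GCP_CREDENTIALS_BASE64 and run-smoke-test.sh are hypothetical names, not anything from the actual repo.

```yaml
# Fragment of a docker-compose.test.yml (sketch): decode a base64-encoded
# service account key provided as a Docker Hub build environment variable.
# run-smoke-test.sh is a hypothetical test script baked into the image.
services:
  sut:
    build: .
    environment:
      GCP_CREDENTIALS_BASE64: ${GCP_CREDENTIALS_BASE64}   # set as a plain env var in the Docker Hub web UI
      GOOGLE_APPLICATION_CREDENTIALS: /tmp/gcp-key.json
    command: >
      sh -c "echo $$GCP_CREDENTIALS_BASE64 | base64 -d > /tmp/gcp-key.json &&
      ./run-smoke-test.sh"
```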
nice! i think the only thing i discovered was that if the healthcheck was not UP, the HTTP status code was still 200. Thus I had to check specifically for the parsed health response. For the Stackdriver secret, @elefeint: should we create a separate service account for dockerhub w/ just the append permission? |
oh that "still 200" issue is fixed in zipkin 2.17 out now
|
@anuraaga what's the best way for me to get the service account information configured in the target environment? we can create a dedicated service account just for this. |
@saturnism Thanks a lot for all the help on maintaining zipkin-gcp :) I think it's appropriate to give you access to the openzipkin Docker Hub organization. In the meantime, I'd recommend going ahead and setting up a personal docker repo with your fork of the zipkin-gcp repo to get started. I do that to test out hooks, etc. before sending a zipkin PR, like here: https://hub.docker.com/u/anuraaga. I think you can verify your service account + docker-compose.test.yml works in a fork before setting up the real repo. |
@anuraaga it's also saturnism :) will do the tests separately. cheers, |
I have gone ahead and configured a test release tag build in
Instead of pushing a dummy tag to try it out, which sends notifications and such, I'm thinking it's simpler to just let this run during the next release since it only points to |
thanks rag
|
As discussed in https://gitter.im/openzipkin/zipkin-private?at=5da6c32439e2ef28adef1e35, we are planning on atticing this repo. The first migration of a storage build is openzipkin/zipkin#2849. For non-storage builds, release Dockerfiles have also been added to the code repos, e.g. openzipkin/zipkin#2846. They are wired up to push tags such as |
These are used in our integration tests for zipkin-storage/elasticsearch. See openzipkin-attic/docker-zipkin#86.
last step (I think) is to make sure that on the release-N.N.N tag, we update the ZIPKIN_VERSION. |
nevermind. we don't use ZIPKIN_VERSION anymore |
the last important step is to integrate such that on a release tag (N.N.N) we push all tags to dockerhub, not just N.N.N. E.g. for 2.18.0, we would push all of 2.18.0, 2.18, 2, and latest
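A rough sketch of that tag fan-out as shell steps in a CI config; whether this runs in Travis or in a Docker Hub build hook, and the exact phase, are assumptions.

```yaml
# Sketch only: fan a release tag such as 2.18.0 out to 2.18, 2 and latest.
# Assumes docker login already happened and that the N.N.N image exists
# (for example, built by the Docker Hub automated build for the git tag).
after_deploy:
  - export IMAGE=openzipkin/zipkin
  - docker pull $IMAGE:$TRAVIS_TAG
  - export MINOR=${TRAVIS_TAG%.*}    # 2.18.0 -> 2.18
  - export MAJOR=${TRAVIS_TAG%%.*}   # 2.18.0 -> 2
  - for tag in $MINOR $MAJOR latest; do docker tag $IMAGE:$TRAVIS_TAG $IMAGE:$tag && docker push $IMAGE:$tag; done
```

If Docker Hub's automated build already publishes the N.N.N tag, the same fan-out could instead live in a build hook; the shell steps would be the same.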
@anuraaga I noticed here..
I think this won't be completely correct, right? The docker hook will be run in parallel with travis (which publishes). Only by accident would bintray complete first.. or am I missing some step here? https://cloud.docker.com/u/openzipkin/repository/docker/openzipkin/zipkin/builds |
answered own question about the tags.. ordering question still outstanding |
Ordering should be fine - |
actually, the "1.2.3" tag is what does the deploy.
|
Oh shoot, didn't notice that - so far I've only tested with an already released version. In that case I'm going to need to think a bit. With the current scheme we'll need to go ahead and wire the docker hub webhook after deploy. But I wonder why we deploy on the separate build - I think it means the same release jar is built twice, once for prepare and once for deploy. This seems wasteful of Travis resources. |
the release tag is an automation hook to avoid prepping manually. The release commit N.N.N is helpful as frequently deployment fails, and all we have to do is delete the dead bintray thing and click rebuild. Let's try to decouple the issues to avoid adding more complexity. maybe the travis run can invoke the dockerhub hook similar to how it does maven central? |
Makes sense yeah will add the hook. |
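A sketch of what that hook could look like in .travis.yml, assuming the Docker Hub build trigger URL is stored as an encrypted variable; DOCKER_TRIGGER_URL is a hypothetical name, and the exact payload shape may differ.

```yaml
# Sketch: once the Maven/Bintray deploy succeeds, ask Docker Hub to build the
# release tag. DOCKER_TRIGGER_URL would be the repository's build trigger URL,
# stored as an encrypted Travis variable.
after_deploy:
  - >
    curl -sf -X POST "$DOCKER_TRIGGER_URL"
    -H "Content-Type: application/json"
    -d "{\"source_type\": \"Tag\", \"source_name\": \"$TRAVIS_TAG\"}"
```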
thanks!!
|
I think this should close out the main repo and allow us to archive this one. We still have loose ends for the -dependencies, -gcp, and -aws images to finish up afterwards |
#82 has a pretty big scope. One way to cut down on it is to remove complexity from the Docker build pipeline by leaving Quay.io.
For context: we originally started using Quay.io because it could start builds based on tags matching a regex being pushed to GitHub, back when Docker Hub could not. Today Docker Hub has the same functionality.
The intuitive thing to do is to start using automated builds on Docker Hub. Unfortunately there's no way to turn an existing repository into an automated build (which makes sense, it'd violate the expectation that images in an automated-build repository are reproducible based only on Dockerfiles). There also doesn't seem to be a way to rename repositories. This leaves us with the following options that I can see:
- Create new repositories with different names (zipkin-collector-2 or zipkin-collector-auto or ...), mark the current repositories as deprecated in documentation, communicate very loudly. However careful we are, I think some confusion is unavoidable.

Given all these, today I have a slight preference to stay with Quay.io. Docker Hub is bound to get significant improvements, so with time I expect we can still migrate to it. A word on why that's a good idea: being the standard repository, we must be on Docker Hub; which means if we're anywhere else too, that's added complexity.