From f0887f5547c2d7d077b428c96077faa7ccc17ee8 Mon Sep 17 00:00:00 2001 From: vlukashenko Date: Tue, 13 Feb 2024 16:07:51 +0100 Subject: [PATCH 1/6] remove gitlab-ci --- _episodes/08-gitlab-ci.md | 101 ++++--------------------- _episodes/09-containerized-analysis.md | 36 ++------- 2 files changed, 21 insertions(+), 116 deletions(-) diff --git a/_episodes/08-gitlab-ci.md b/_episodes/08-gitlab-ci.md index 3b7574f..563d716 100644 --- a/_episodes/08-gitlab-ci.md +++ b/_episodes/08-gitlab-ci.md @@ -1,21 +1,19 @@ --- -title: "Gitlab CI for Automated Environment Preservation" +title: "Github and Dockerhub for Automated Environment Preservation" teaching: 20 exercises: 25 questions: -- "How can gitlab CI and docker work together to automatically preserve my analysis environment?" -- "What do I need to add to my gitlab repo(s) to enable this automated environment preservation?" +- "What do I need to do to enable this automated environment preservation on github?" objectives: -- "Learn how to write a Dockerfile to containerize your analysis code and environment." -- "Understand what needs to be added to your `.gitlab-ci.yml` file to keep the containerized environment continuously up to date for your repo." +- "Learn how to werite a Dockerfile to containerize your analysis code and environment." +- "Understand how to use github + dockerhub to enable automatic environment preservation." keypoints: -- "gitlab CI allows you to re-build a container that encapsulates the environment each time new commits are pushed to the analysis repo." -- "This functionality is enabled by adding a Dockerfile to your repo that specifies how to build the environment, and an image-building stage to the `.gitlab-ci.yml` file." +- "Combination of github and dockerhub allows you to automatically build the docker containers every time you push to a repository." 
--- ## Introduction -In this section, we learn how to combine the forces of docker and gitlab CI to automatically keep your analysis environment up-to-date. This is accomplished by adding an extra stage to the CI pipeline for each analysis repo, which builds a container image that includes all aspects of the environment needed to run the code. +In this section, we learn how to combine the forces of dockerhub and github to automatically keep your analysis environment up-to-date. We will be doing this using the [CMS OpenData HTauTau Analysis Payload](https://hsf-training.github.io/hsf-training-cms-analysis-webpage/). Specifically, we will be using two "snapshots" of this code which are the repositories described on the [setup page](https://hsf-training.github.io/hsf-training-docker/setup.html) of this training. A walkthrough of how to setup those repositories can also be found [on this video](https://www.youtube.com/watch?v=krsBupoxoNI&list=PLKZ9c4ONm-VnqD5oN2_8tXO0Yb1H_s0sj&index=7). The "snapshot" repositories are available on GitHub ([skimmer repository](https://github.com/hsf-training/hsf-training-cms-analysis-snapshot) and [statistics repository](https://github.com/hsf-training/hsf-training-cms-analysis-snapshot-stats) ). If you don't already have this setup, take a detour now and watch that video and revisit the setup page. @@ -102,91 +100,15 @@ As we've seen, all these components can be encoded in a Dockerfile. So the first > {: .source} {: .callout} -## Add docker building to your gitlab CI +## Automatic image building with github + dockerhub -Now, you can proceed with updating your `.gitlab-ci.yml` to actually build the container during the CI/CD pipeline and store it in the gitlab registry. You can later pull it from the gitlab registry just as you would any other container, but in this case using your CERN credentials. - -> ## Not from CERN? 
-> If you do not have a CERN computing account with access to [gitlab.cern.ch](https://[gitlab.cern.ch), then everything discussed here is also available on [gitlab.com](https://gitlab.com) offers CI/CD tools, including the docker builder. Furthermore, you can do the same with github + dockerhub as explained in the next subsection. -{: .callout} - -Add the following lines at the end of the `.gitlab-ci.yml` file to build the image and save it to the docker registry. - -~~~yaml -build_image: - stage: build - variables: - TO: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG-$CI_COMMIT_SHORT_SHA - tags: - - docker-image-build - script: - - ignore -~~~ -{: .source} - - - -Once this is done, you can commit and push the updated `.gitlab-ci.yml` file to your gitlab repo and check to make sure the pipeline passed. If it passed, the repo image built by the pipeline should now be stored on the docker registry, and be accessible as follows: - -~~~bash -docker login gitlab-registry.cern.ch -docker pull gitlab-registry.cern.ch/[repo owner's username]/[skimming repo name]:[branch name]-[shortened commit SHA] -~~~ -{: .source} - -You can also go to the container registry on the gitlab UI to see all the images you've built: - -ContainerRegistry - -Notice that the script to run is just a dummy 'ignore' command. This is because using the docker-image-build tag, the jobs always land on special runners that are managed by CERN IT which run a custom script in the background. You can safely ignore the details. - -> ## Recommended Tag Structure -> You'll notice the environment variable `TO` in the `.gitlab-ci.yml` script above. This controls the name of the Docker image that is produced in the CI step. Here, the image name will be `:-`. The shortened 8-character commit SHA ensures that each image created from a different commit will be unique, and you can easily go back and find images from previous commits for debugging, etc. 
-> -> As you'll see tomorrow, it's recommended when using your images as part of a REANA workflow to make a unique image for each gitlab commit, because REANA will only attempt to update an image that it's already pulled if it sees that there's a new tag associated with the image. -> -> If you feel it's overkill for your specific use case to save a unique image for every commit, the `-$CI_COMMIT_SHORT_SHA` can be removed. Then the `$CI_COMMIT_REF_SLUG` will at least ensure that images built from different branches will not overwrite each other, and tagged commits will correspond to tagged images. -{: .callout} - -### Alternative: GitLab.com - -This training module is rather CERN-centric and assumes you have a CERN computing account with access to [gitlab.cern.ch](https://[gitlab.cern.ch). If this is not the case, then as with the [CICD training module](https://hsf-training.github.io/hsf-training-cicd/), everything can be carried out using [gitlab.com](https://gitlab.com) with a few slight modifications. These changes are largely surrounding the syntax and the concept remains that you will have to specify that your pipeline job that builds the image is executed on a special type of runner with the appropriate `services`. However, unlike at CERN, there is not pre-defined `script` that runs on these runners and pushes to your registry, so you will have to write this script yourself but this will be little more than adding commands that you have been exposed to in previous section of this training like `docker build`. - -Add the following lines at the end of the `.gitlab-ci.yml` file to build the image and save it to the docker registry. - -~~~yaml -build image: - stage: build - image: docker:latest - services: - - docker:dind - script: - - docker build -t registry.gitlab.com/burakh/docker-training . 
- - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY - - docker push registry.gitlab.com/burakh/docker-training -~~~ -{: .source} - -In this job, the specific `image: docker:latest`, along with specifying the `services` to contain `docker:dind` is equivalent to the requesting the `docker-build-image` tag on [gitlab.cern.ch](https://[gitlab.cern.ch). If you are curious to read about this in detail, refer to the [official gitlab documentation](https://docs.gitlab.com/ee/ci/docker/using_docker_build.html) or (this example)[https://gitlab.com/gitlab-examples/docker]. - -In the `script` of this job there are three components : - - [`docker build`](https://docs.docker.com/engine/reference/commandline/build/) : This is performing the same build of our docker image to the tagged image which we will call `registry.gitlab.com/burakh/docker-training` - - [`docker login`](https://docs.docker.com/engine/reference/commandline/login/) : This call is performing [an authentication of the user to the gitlab registry](https://docs.gitlab.com/ee/user/packages/container_registry/#authenticating-to-the-gitlab-container-registry) using a set of [predefined environment variables](https://docs.gitlab.com/ee/ci/variables/predefined_variables.html) that are automatically available in any gitlab repository. - - [`docker push`](https://docs.docker.com/engine/reference/commandline/push/) : This call is pushing the docker image which exists locally on the runner to the gitlab.com registry associated with the repository against which we have performed the authentication in the previous step. - -If the job runs successfully, then in the same way as described for [gitlab.cern.ch](https://[gitlab.cern.ch) in the previous section, you will be able to find the `Container Registry` on the left hand icon menu of your gitlab.com web browser and navigate to the image that was pushed to the registry. Et voila, c'est fini, exactement comme au CERN! 
- -### Alternative: Automatic image building with github + dockerhub -If you don't have access to [gitlab.cern.ch](https://gitlab.cern.ch), you can still -automatically build a docker image every time you push to a repository with github and +You can automatically build a docker image every time you push to a repository with github and dockerhub. 1. Create a clone of the skim and the fitting repository on your private github. You can use the [GitHub Importer](https://docs.github.com/en/github/importing-your-projects-to-github/importing-a-repository-with-github-importer) for this. It's up to you whether you want to make this repository public or private. - 2. Create a free account on [dockerhub](http://hub.docker.com/). 3. Once you confirmed your email, head to ``Settings`` > ``Linked Accounts`` and connect your github account. @@ -213,12 +135,17 @@ docker pull <username>/<image name>:<tag> ~~~ {: .source} +> ## Tag your docker image +> Notice that the command above had a `<tag>` specified. A tag uniquely identifies a docker image. When puched to + + + ## An updated version of `skim.sh` > ## Exercise (10 mins) > Since we're now taking care of building the skimming executable during image building, let's make an updated version of `skim.sh` that excludes the step of building the `skim` executable. > -> The updated script should just directly run the pre-existing `skim` executable on the input samples. You could call it eg. `skim_prebuilt.sh`. We'll be using this updated script in an exercise later on in which we'll be going through the full analysis in containers launched from the images we create with gitlab CI. +> The updated script should just directly run the pre-existing `skim` executable on the input samples. You could call it e.g. `skim_prebuilt.sh`. We'll be using this updated script in an exercise later on in which we'll be going through the full analysis in containers launched from the images. > > Once you're happy with the script, you can commit and push it to the repo.
> diff --git a/_episodes/09-containerized-analysis.md b/_episodes/09-containerized-analysis.md index 7e55b6b..bfc590a 100755 --- a/_episodes/09-containerized-analysis.md +++ b/_episodes/09-containerized-analysis.md @@ -38,37 +38,15 @@ To bring it all together, we can also preserve our fitting framework in its own {: .challenge} > ## Exercise (5 min) -> Now, add the same image-building stage to the `.gitlab-ci.yml` file as we added for the skimming repo. You will also need to add a `- build` stage at the top in addition to any other stages. +> Now, add the automatic image building using dockerhub as we added for the skimming repo. > -> **Note:** I would suggest listing the `- build` stage before the other stages so it will run first. This way, even if the other stages fail for whatever reason, the image can still be built with the `- build` stage. -> -> Once you're happy with the `.gitlab-ci.yml`, commit and push the new file to the fitting repo. -> > ## Solution -> > ~~~yaml -> > stages: -> > - build -> > - [... any other stages] -> > -> > build_image: -> > stage: build -> > variables: -> > TO: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG-$CI_COMMIT_SHORT_SHA -> > tags: -> > - docker-image-build -> > script: -> > - ignore -> > -> > [... 
rest of .gitlab-ci.yml] -> > ~~~ -> > {: .source} -> {: .solution} {: .challenge} If the image-building completes successfully, you should be able to pull your fitting container, just as you did the skimming container: ~~~bash -docker login gitlab-registry.cern.ch -docker pull gitlab-registry.cern.ch/[repo owner's username]/[fitting repo name]:[branch name]-[shortened commit sha] +docker login +docker pull <username>/<image name>:<tag> ~~~ {: .source} @@ -99,10 +77,10 @@ Now that we've preserved our full analysis environment in docker images, let's t > > ### Part 1: Skimming > > ~~~bash > > # Pull the image for the skimming repo -> > docker pull gitlab-registry.cern.ch/[your_partners_username]/[skimming repo name]:[branch name]-[shortened commit SHA] +> > docker pull [your_partners_username]/[skimming repo image name]:[tag] > > > > # Start up the container and volume-mount the skimming_output directory into it -> > docker run --rm -it -v ${PWD}/skimming_output:/skimming_output gitlab-registry.cern.ch/[your_partners_username]/[skimming repo name]:[branch name]-[shortened commit SHA] /bin/bash +> > docker run --rm -it -v ${PWD}/skimming_output:/skimming_output [your_partners_username]/[skimming repo image name]:[tag] /bin/bash > > > > # Run the skimming code > > bash skim_prebuilt.sh root://eospublic.cern.ch//eos/root-eos/HiggsTauTauReduced/ /skimming_output @@ -113,10 +91,10 @@ Now that we've preserved our full analysis environment in docker images, let's t > > ### Part 2: Fitting > > ~~~bash > > # Pull the image for the fitting repo -> > docker pull gitlab-registry.cern.ch/[your_partners_username]/[fitting repo name]:[branch name]-[shortened commit SHA] +> > docker pull [your_partners_username]/[fitting repo iamge name]:[tag] > > > > # Start up the container and volume-mount the skimming_output and fitting_output directories into it -> > docker run --rm -it -v ${PWD}/skimming_output:/skimming_output -v ${PWD}/fitting_output:/fitting_output
gitlab-registry.cern.ch/[your_partners_username]/[fitting repo name]:[branch name]-[shortened commit SHA] /bin/bash +> > docker run --rm -it -v ${PWD}/skimming_output:/skimming_output -v ${PWD}/fitting_output:/fitting_output [your_partners_username]/[fitting repo image name]:[tag] /bin/bash > > > > # Run the fitting code > > bash fit.sh /skimming_output/histograms.root /fitting_output From 915655d53406301c26467f1f398fe432f0f4bb61 Mon Sep 17 00:00:00 2001 From: "pre-commit-ci[bot]" <66853113+pre-commit-ci[bot]@users.noreply.github.com> Date: Tue, 13 Feb 2024 15:11:16 +0000 Subject: [PATCH 2/6] [pre-commit.ci] auto fixes from pre-commit.com hooks for more information, see https://pre-commit.ci --- _episodes/08-gitlab-ci.md | 4 ++-- _episodes/09-containerized-analysis.md | 2 +- 2 files changed, 3 insertions(+), 3 deletions(-) diff --git a/_episodes/08-gitlab-ci.md b/_episodes/08-gitlab-ci.md index 563d716..4c5e899 100644 --- a/_episodes/08-gitlab-ci.md +++ b/_episodes/08-gitlab-ci.md @@ -13,7 +13,7 @@ keypoints: ## Introduction -In this section, we learn how to combine the forces of dockerhub and github to automatically keep your analysis environment up-to-date. +In this section, we learn how to combine the forces of dockerhub and github to automatically keep your analysis environment up-to-date. We will be doing this using the [CMS OpenData HTauTau Analysis Payload](https://hsf-training.github.io/hsf-training-cms-analysis-webpage/). Specifically, we will be using two "snapshots" of this code which are the repositories described on the [setup page](https://hsf-training.github.io/hsf-training-docker/setup.html) of this training. A walkthrough of how to setup those repositories can also be found [on this video](https://www.youtube.com/watch?v=krsBupoxoNI&list=PLKZ9c4ONm-VnqD5oN2_8tXO0Yb1H_s0sj&index=7). 
The "snapshot" repositories are available on GitHub ([skimmer repository](https://github.com/hsf-training/hsf-training-cms-analysis-snapshot) and [statistics repository](https://github.com/hsf-training/hsf-training-cms-analysis-snapshot-stats) ). If you don't already have this setup, take a detour now and watch that video and revisit the setup page. @@ -136,7 +136,7 @@ docker pull <username>/<image name>:<tag> {: .source} > ## Tag your docker image -> Notice that the command above had a `<tag>` specified. A tag uniquely identifies a docker image. When puched to +> Notice that the command above had a `<tag>` specified. A tag uniquely identifies a docker image. When puched to diff --git a/_episodes/09-containerized-analysis.md b/_episodes/09-containerized-analysis.md index bfc590a..5fd5268 100755 --- a/_episodes/09-containerized-analysis.md +++ b/_episodes/09-containerized-analysis.md @@ -38,7 +38,7 @@ To bring it all together, we can also preserve our fitting framework in its own {: .challenge} > ## Exercise (5 min) -> Now, add the automatic image building using dockerhub as we added for the skimming repo. +> Now, add the automatic image building using dockerhub as we added for the skimming repo. > {: .challenge} From 6fcfacb958f014df6db7ca6fdb04a6528994c5a6 Mon Sep 17 00:00:00 2001 From: "Lera Lukashenko (Valeriia Lukashenko)" Date: Wed, 14 Feb 2024 10:25:14 +0100 Subject: [PATCH 3/6] Update _episodes/08-gitlab-ci.md Co-authored-by: Michel H. Villanueva --- _episodes/08-gitlab-ci.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/_episodes/08-gitlab-ci.md b/_episodes/08-gitlab-ci.md index 4c5e899..d3821ac 100644 --- a/_episodes/08-gitlab-ci.md +++ b/_episodes/08-gitlab-ci.md @@ -5,7 +5,7 @@ exercises: 25 questions: - "What do I need to do to enable this automated environment preservation on github?" objectives: -- "Learn how to werite a Dockerfile to containerize your analysis code and environment."
+- "Learn how to write a Dockerfile to containerize your analysis code and environment." - "Understand how to use github + dockerhub to enable automatic environment preservation." keypoints: - "Combination of github and dockerhub allows you to automatically build the docker containers every time you push to a repository." From f7430197979e8eaf99afb14ad715acfa452b9060 Mon Sep 17 00:00:00 2001 From: vlukashenko Date: Wed, 14 Feb 2024 10:36:03 +0100 Subject: [PATCH 4/6] remove gitlab link and use the hsf training website --- _episodes/09-containerized-analysis.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/_episodes/09-containerized-analysis.md b/_episodes/09-containerized-analysis.md index 5fd5268..75097ca 100755 --- a/_episodes/09-containerized-analysis.md +++ b/_episodes/09-containerized-analysis.md @@ -66,7 +66,7 @@ Now that we've preserved our full analysis environment in docker images, let's t > mkdir fitting_output > ~~~ > -> Find a partner and pull the image they've built for their skimming repo from the gitlab registry. Launch a container using your partner's image. Try to run the analysis code to produce the `histogram.root` file that will get input to the fitting repo, using the `skim_prebuilt.sh` script we created in the previous lesson for the first skimming step. You can follow the skimming instructions in [step 1](https://gitlab.cern.ch/awesome-workshop/awesome-analysis-eventselection-stage2/blob/master/README.md#step-1-skimming) and [step 2](https://gitlab.cern.ch/awesome-workshop/awesome-analysis-eventselection-stage2/blob/master/README.md#step-2-histograms) of the README. +> Find a partner and pull the image they've built for their skimming repo. Launch a container using your partner's image. Try to run the analysis code to produce the `histogram.root` file that will be used as input to the fitting repo, using the `skim_prebuilt.sh` script we created in the previous lesson for the first skimming step.
You can follow the skimming instructions in [step 3](https://hsf-training.github.io/hsf-training-cms-analysis-webpage/03-skimming/index.html) and [step 4](https://hsf-training.github.io/hsf-training-cms-analysis-webpage/04-histograms/index.html) of the CMS OpenData HTauTau Analysis Payload. > > **Note:** We'll need to pass the output from the skimming stage to the fitting stage. To enable this, you can volume mount the `skimming_output` directory into the container. Then, as long as you save the skimming output to the volume-mounted location in the container, it will also be available locally under `skimming_output`. > From d5fcb019996bfb221d48bc83e5f39ab864b4076b Mon Sep 17 00:00:00 2001 From: vlukashenko Date: Wed, 14 Feb 2024 13:00:10 +0100 Subject: [PATCH 5/6] fix a tag sentence --- _episodes/08-gitlab-ci.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/_episodes/08-gitlab-ci.md b/_episodes/08-gitlab-ci.md index d3821ac..b153ccf 100644 --- a/_episodes/08-gitlab-ci.md +++ b/_episodes/08-gitlab-ci.md @@ -136,7 +136,7 @@ docker pull <username>/<image name>:<tag> {: .source} > ## Tag your docker image -> Notice that the command above had a `<tag>` specified. A tag uniquely identifies a docker image. When puched to +> Notice that the command above had a `<tag>` specified. A tag uniquely identifies a docker image and is usually used to identify different versions of the same image. The tag name has to be written with ASCII symbols.
From 4e13c9782b5c7806e77478f24fd0d0fc2bab1a70 Mon Sep 17 00:00:00 2001 From: vlukashenko Date: Wed, 14 Feb 2024 15:32:29 +0100 Subject: [PATCH 6/6] fix typo --- _episodes/09-containerized-analysis.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/_episodes/09-containerized-analysis.md b/_episodes/09-containerized-analysis.md index 75097ca..e9c20ec 100755 --- a/_episodes/09-containerized-analysis.md +++ b/_episodes/09-containerized-analysis.md @@ -91,7 +91,7 @@ Now that we've preserved our full analysis environment in docker images, let's t > > ### Part 2: Fitting > > ~~~bash > > # Pull the image for the fitting repo -> > docker pull [your_partners_username]/[fitting repo iamge name]:[tag] +> > docker pull [your_partners_username]/[fitting repo image name]:[tag] > > > > # Start up the container and volume-mount the skimming_output and fitting_output directories into it > > docker run --rm -it -v ${PWD}/skimming_output:/skimming_output -v ${PWD}/fitting_output:/fitting_output [your_partners_username]/[fitting repo image name]:[tag] /bin/bash