See a list of the Docker images here.
To use one of those images locally:
- Install the Google Cloud SDK
- Configure Docker to use the gcloud command-line tool as a credential helper:

```shell
gcloud auth configure-docker
```

- Run the image locally:

```shell
# example
docker run -it --rm -p 8888:8888 us-east1-docker.pkg.dev/jupyterhub-docker-images/all-classes/mpa2065:latest
```
We provide Conda and Julia environment files if you wish to recreate the JupyterHub environment locally. These files are available for download from this Google Cloud Storage bucket.
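Recreating the environment from those files might look like the following sketch. The file names here (`environment.yml`, `Project.toml`) are assumptions; match them to whatever names the files in the bucket actually carry:

```shell
# Recreate the Conda environment from the downloaded file
# ("environment.yml" is a hypothetical name -- use the downloaded file's name).
conda env create -f environment.yml -n jh-local
conda activate jh-local

# Recreate the Julia environment, assuming the downloaded Julia file is a
# Project.toml in the current directory.
julia --project=. -e 'using Pkg; Pkg.instantiate()'
```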
This document outlines the process for creating the Docker images for JupyterHub at Brown. Every semester multiple courses request a JupyterHub, and we build one image for each of those courses based on the requirements specified in the request. We use GitHub Actions and Docker Compose to create the environments and to build and push the Docker images.
To create an image to be used in JupyterHub for a particular class, we need these components:

- `Dockerfile`: the base Dockerfile used to create the image. The file currently in use is a modification of the official Jupyter `base-notebook` image.
- `docker-compose.yml`: Docker Compose file with three services. The first two services generate the Conda and Julia environment files, respectively; these files are uploaded as artifacts so students can use them to reproduce the JupyterHub environment. The third service uses the environment files generated by the first two to build the image and push it to GCR (Google Container Registry).
- `scripts/`: contains the scripts needed by the image. Currently it holds the scripts needed by the Berkeley image and the ones needed for the official Jupyter images.
- `requirements/common/`: contains `requirements.txt` (list of packages to install from conda-forge) and `requirements.pip.txt` (list of packages to install using pip). These packages get installed in the base environment.
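As an illustration, the common requirement files are plain package lists; the package names below are purely hypothetical, not the actual contents:

```
# requirements/common/requirements.txt  (installed from conda-forge)
numpy
pandas

# requirements/common/requirements.pip.txt  (installed with pip)
nbgitpuller
```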
Each class has the following exclusive components:

- `requirements/classes/${className}/`: the requirement files with the class-specific packages needed to create the Conda environment:
  - `requirements.txt` (required): list of packages to install from conda-forge.
  - `requirements.pip.txt` (optional): list of packages to install using pip.
  - `requirements.jl` (optional): Julia file with `const julia_packages = []`, an array of packages to install.
  - `condarc` (required): Conda configuration file listing the channels used for installation. By default the only channel is `conda-forge`.
- `.github/workflows/className.yml` and `.github/workflows/className-tag.yml`: the GitHub Actions workflows. One workflow per class makes the environment-file artifacts easier to find; in addition, it allows us to run the workflow conditionally on changes related to a single class. The last step requires the following environment variables to be set accordingly.
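Putting these pieces together, the per-class layout for a hypothetical class `xyz1234` would look roughly like this (contents illustrative):

```
requirements/classes/xyz1234/
├── requirements.txt        # required: conda-forge packages
├── requirements.pip.txt    # optional: pip packages
├── requirements.jl         # optional: const julia_packages = ["Plots"]
└── condarc                 # required: channels (conda-forge by default)
.github/workflows/xyz1234.yml
.github/workflows/xyz1234-tag.yml
```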
Note: The production image will be created in CI.
To add a new class:

- Use the provided script `dev/add_class.sh` to create a workflow file and scaffold the requirements directory. The script takes the following arguments:
  - `-c`: class name (string)
  - `-s`: class season/semester (fall, summer, spring)
  - `-t`: target in the Dockerfile (string: `base`, `r_lang`, or `r_julia`)
  - `-p`: Python version (e.g., 3.7; if omitted, defaults to 3.9.17)
  - `-q`: whether to install the SQLite kernel (omit the `-q` flag if SQLite is not required)
  - `-y`: class year

```shell
# e.g.
cd dev/
./add_class.sh -c data1010 -t r_julia -s fall -p 3.7 -q
```
To build the images locally:

- Create the environment files (only required if TARGET is `r_julia`):

```shell
CLASS=apma0360 docker compose up julia_build
```

- Build the JupyterHub image:

```shell
COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 CLASS=apma0360 TARGET=base docker compose up jh_image
```

or, to include the SQLite kernel:

```shell
COMPOSE_DOCKER_CLI_BUILD=1 DOCKER_BUILDKIT=1 CLASS=apma0360 TARGET=base SQLITE=true docker compose up jh_image
```

- Run the image:

```shell
docker run -it --rm -p 8888:8888 <class>:latest
```
- Test pushing to pkg.dev

You will need to authenticate on gcloud with our service account. To do so, download a new key for the service account as a JSON file (if you haven't done so before), then authenticate your local `gcloud` with the service account:

```shell
gcloud auth activate-service-account --key-file=<path-to-json-key>
```

Then, configure `pkg.dev` for Docker and the zone of your registry, e.g.:

```shell
gcloud auth configure-docker us-east1-docker.pkg.dev
```

You can test-push a local image as follows (mpa2065 example):

```shell
docker tag mpa2065:latest us-east1-docker.pkg.dev/jupyterhub-docker-images/all-classes/mpa2065:test-local
docker push us-east1-docker.pkg.dev/jupyterhub-docker-images/all-classes/mpa2065:test-local
```
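The image reference used above follows Artifact Registry's `REGION-docker.pkg.dev/PROJECT/REPOSITORY/IMAGE:TAG` scheme. A minimal sketch of composing it for an arbitrary class (values match the mpa2065 example):

```shell
# Compose an Artifact Registry image reference for a class image.
REGION="us-east1"
PROJECT="jupyterhub-docker-images"
REPO="all-classes"
CLASS="mpa2065"
TAG="test-local"

IMAGE="${REGION}-docker.pkg.dev/${PROJECT}/${REPO}/${CLASS}:${TAG}"
echo "${IMAGE}"
```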