Custom notebooks #150

Merged · 24 commits · merged May 11, 2021
Changes from 10 commits
3d4d9cf - remove notebook volume (cwcummings, Mar 15, 2021)
8c8f715 - bump images version (cwcummings, Mar 31, 2021)
1d5164f - revert change on jupyterhub_config (cwcummings, Apr 7, 2021)
85b50a0 - new deploy script to download notebooks in specific images (cwcummings, Apr 9, 2021)
e33f841 - add log file output and var replacement in deployment script (cwcummings, Apr 12, 2021)
914bdc6 - improve logging in new deploy script (cwcummings, Apr 14, 2021)
fc2e504 - only install yq if necessary in deploy script (cwcummings, Apr 16, 2021)
04b46a9 - add custom tutorial notebooks volume with a subfolder of the same nam… (cwcummings, Apr 19, 2021)
a3d5a24 - fix typo (cwcummings, Apr 19, 2021)
27e5c7f - update doc (cwcummings, Apr 19, 2021)
a91a145 - change dest_dir option from config to a env var (cwcummings, Apr 21, 2021)
a081a7c - fix volume mounts for notebooks for backward compatibility (cwcummings, Apr 21, 2021)
f19af5e - fix scheduler log variable (cwcummings, Apr 21, 2021)
51846fd - reorder variables in env.local file (cwcummings, Apr 21, 2021)
f8e2b12 - multiple fixes on script and env.local after PR feedback (cwcummings, Apr 22, 2021)
55234f3 - add cronjob generation and moved related deploy script to external repo (cwcummings, Apr 23, 2021)
31d5027 - fix logdir for script and remove unnecessary volume mount (cwcummings, May 3, 2021)
fe7348c - remove common directory volume mount (cwcummings, May 3, 2021)
cfe4634 - move env file for deploy script to external repo + minor fixes for pr… (cwcummings, May 4, 2021)
5c1e56a - rename mount directory to avoid conflict with other deploy jobs (cwcummings, May 11, 2021)
c934e3d - add todo comment (cwcummings, May 11, 2021)
927f090 - fix error commented line in config (cwcummings, May 11, 2021)
fe79bc5 - Merge branch 'master' of https://github.com/bird-house/birdhouse-depl… (cwcummings, May 11, 2021)
8990073 - bump pavics jupyter images versions (cwcummings, May 11, 2021)
24 changes: 21 additions & 3 deletions birdhouse/config/jupyterhub/jupyterhub_config.py.template
@@ -3,6 +3,8 @@ from os.path import join
 import logging
 import subprocess
 
+from dockerspawner import DockerSpawner
+
 c = get_config() # noqa # can be called directy without import because injected by IPython
 
 c.JupyterHub.bind_url = 'http://:8000/jupyter'
@@ -20,7 +22,22 @@ c.JupyterHub.db_url = '/persist/jupyterhub.sqlite'
 
 c.JupyterHub.template_paths = ['/custom_templates']
 
-c.JupyterHub.spawner_class = 'dockerspawner.DockerSpawner'
+class CustomDockerSpawner(DockerSpawner):
+
+    def start(self):
+        if(os.environ['MOUNT_IMAGE_SPECIFIC_NOTEBOOKS'] == 'true'):
+            host_dir = join(os.environ['JUPYTERHUB_USER_DATA_DIR'], 'tutorial-notebooks', self.user_options.get('image'))
+            image_dir = join('/notebook_dir/tutorial-notebooks', self.user_options.get('image'))
+
+            # Mount a volume with a tutorial-notebook subfolder corresponding to the image name, if it exists
+            if(os.path.isdir(host_dir)):
+                self.volumes[host_dir] = {
+                    'bind': image_dir,
+                    'mode': 'ro'
+                }
+        return super().start()
+
+c.JupyterHub.spawner_class = CustomDockerSpawner
 
 # Selects the first image from the list by default
 c.DockerSpawner.image = os.environ['DOCKER_NOTEBOOK_IMAGES'].split()[0]
@@ -49,9 +66,10 @@ if len(host_gdrive_settings_path) > 0:
         "mode": "ro"
     }
 
+# Mount folder containing the notebooks that should be available to all images
 host_tutorial_notebooks_dir = join(jupyterhub_data_dir, "tutorial-notebooks")
-c.DockerSpawner.volumes[host_tutorial_notebooks_dir] = {
-    "bind": join(notebook_dir, "tutorial-notebooks"),
+c.DockerSpawner.volumes[join(host_tutorial_notebooks_dir, "common")] = {
+    "bind": join(notebook_dir, "tutorial-notebooks/common"),
     "mode": "ro"
 }
3 changes: 3 additions & 0 deletions birdhouse/default.env
@@ -11,6 +11,9 @@ export THREDDS_IMAGE="unidata/thredds-docker:4.6.15"
 # Folder on the host to persist Jupyter user data (noteboooks, HOME settings)
 export JUPYTERHUB_USER_DATA_DIR="/data/jupyterhub_user_data"
 
+# Activates mounting a tutorial-notebooks subfolder that has the same name as the spawned image on JupyterHub
+export MOUNT_IMAGE_SPECIFIC_NOTEBOOKS=false
+
 # Path to the file containing the clientID for the google drive extension for jupyterlab
 export JUPYTER_GOOGLE_DRIVE_SETTINGS=""
88 changes: 88 additions & 0 deletions birdhouse/deployment/deploy-data-specific-image
@@ -0,0 +1,88 @@
#!/bin/bash
# Deploy data from git repo(s) to local folder(s).
# This script is run directly on a specific image (such as pavics/crim-jupyter-eo or pavics/crim-jupyter-nlp).
# It will be used to download and update the different tutorial notebooks associated with a specific image.
#
# This is meant to be run on the same host running PAVICS.
#
# The data details are specified in a yaml config file (TEMPLATE_CONFIG_YML), using this format:
Collaborator: Not clear here that the yaml config file must be in the image. This is the case, right?
Also, the file is expected to be located at a particular location: https://github.com/bird-house/birdhouse-deploy/pull/150/files#diff-17e9b2e274b97a022f797d1f221f2b50144c0ce3b70537a9faa2b7412b2a2cafR159

Collaborator (author): Well, in our case, we run this script on an image, so the config must be on the same image, yes. But I suppose the script could be used directly on the host, without using any image. Since our use case here is to run it on a Docker image, I should add a clarification that the config file should be on the image too. :P

And the script has to be copied, when building the Docker image, to the same path that will be written for the job command (like in the link you put here).

Collaborator: Hum, OK, I agree that this script doesn't mind where the config file is located; it is passed as an argument.
In fact, what is needed is a better place to document the location of the config file in images. All images will require this config file at a specific location so that the job (my link) can find it.

# - repo_url: path to the github repo (ex.: https://github.com/crim-ca/pavics-jupyter-images)
# branch: name of the branch containing the required version of the data
# source_path: path of the desired file or folder in the source repo
# dest_dir: directory where the data will be copied to in the specific image where the script is run
#
# One or more entries respecting this format can be added to the yaml file, in order to download multiple files/folders.
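#
# Example entry (hypothetical values, for illustration only):
#
# - repo_url: https://github.com/crim-ca/pavics-jupyter-images
#   branch: master
#   source_path: notebooks
#   dest_dir: ${JUPYTERHUB_USER_DATA_DIR}/tutorial-notebooks/eo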
#
# An installation of jq and yq is preferred prior to executing this script
# (they will be installed on the fly if missing).
#
# Setting environment variable DEPLOY_DATA_LOGFILE='/path/to/logfile.log'
# will redirect all STDOUT and STDERR to that logfile so this script will be
# completely silent.

if [ ! -z "$DEPLOY_DATA_LOGFILE" ]; then
    exec >>$DEPLOY_DATA_LOGFILE 2>&1
fi

START_TIME="`date -Isecond`"
echo "==========
deploy-data-specific-image START_TIME=$START_TIME"

TEMPLATE_CONFIG_YML="$1"
if [ -z "$TEMPLATE_CONFIG_YML" ]; then
    echo "ERROR: missing config.yml file" 1>&2
    exit 2
fi

# Empty value could mean typo in the keys in the config file.
ensure_not_null() {
    if [ "$*" = null ]; then
        echo "ERROR: value empty" 1>&2
        exit 1
    fi
}

# Check installation of required packages to read yaml files
if ! command -v jq &> /dev/null; then
    apt-get -y install jq
fi
if ! command -v yq &> /dev/null; then
    pip install yq
fi

# Find how many entries are in the config file
LENGTH="`yq -r '. | length' $TEMPLATE_CONFIG_YML`"

if [ -z $LENGTH ]; then
    echo "ERROR: empty config file" 1>&2
    exit 1
fi

CONFIG_YML="/tmp/notebook_config.yml"
# Replace environment variables from template to their actual values
envsubst < $TEMPLATE_CONFIG_YML > $CONFIG_YML

for ((i=0;i<$LENGTH;i++)); do
    # Extract data from config
    GIT_REPO="`yq -r .[$i].repo_url $CONFIG_YML`"
    ensure_not_null "$GIT_REPO"

    BRANCH="`yq -r .[$i].branch $CONFIG_YML`"
    ensure_not_null "$BRANCH"

    SOURCE_PATH="`yq -r .[$i].source_path $CONFIG_YML`"
    ensure_not_null "$SOURCE_PATH"

    DEST_DIR=$( eval echo "`yq -r .[$i].dest_dir $CONFIG_YML`")
    ensure_not_null "$DEST_DIR"

    FULL_URL=$GIT_REPO/branches/$BRANCH/$SOURCE_PATH

    echo "Extracting ${FULL_URL} to ${DEST_DIR}"

    # Download the data from github and copy it to the destination directory
    svn export --force $FULL_URL $DEST_DIR

Collaborator: Download from github using an svn client? I didn't know that it could be done! But why not just use a git client?

The bigger question: this script seems to do very similar work to the existing deploy-data script. It is not clear what feature is missing, so why can we not re-use the existing deploy-data script? If a feature is missing, can we add it instead of forking another script?
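For context on the svn trick: GitHub exposes each repository through an SVN bridge, where branches live under a branches/ path, which is what FULL_URL above relies on. A minimal sketch, with an illustrative repo and paths:

# Export a single subfolder of a branch, without cloning any git history
# (repo name and paths are illustrative, not from this PR)
svn export --force https://github.com/crim-ca/pavics-jupyter-images/branches/master/notebooks /tmp/notebooks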

Collaborator (author): They do have a lot of similarities. I could give it a try and use the deploy-data script directly, but I am afraid of hitting a blocking point later if we want to add different features to one of the use cases. They are almost the same for now, though.

Also, I see that the deploy-data script requires the use of Docker, which means we would have to install Docker on pavics-jupyter-base (this would replace the yq/jq installation, actually). I don't know what is best here if we want to keep those images to the bare minimum.

@dbyrns I would be curious about your input on this :)

Collaborator: I don't have enough information on what @tlvu would like you to reuse to make that call. If he could show us how it could easily be done, I'm not against it, as long as we don't have to invest another couple of days in it. The same applies to the other reuse request; maybe you could talk directly to each other so that it can be done fast.

Collaborator (author): I think the main difference between our scripts is that the deploy-data script is meant to run on a generic image that includes Docker and Git, while our new script deploy-data-specific-image is meant to be run directly on one of our own specific images, which includes our config.yaml file.

A benefit of the new script is that it lets us keep the yaml file directly in the specific image's repo, close to its related context. For example, if a developer wants to include a new folder for the crim-eo environment, they just go to the crim-eo repo and change the config there.

I am not sure if we could do this easily with the deploy-data script? I think the yaml files are stored directly on the birdhouse-deploy repo, such as deploy_raven_testdata_to_thredds.env and deploy-data-raven-testdata-to-thredds.yml.

They are then added as a job in the env.local file:

#if [ -f "$COMPOSE_DIR/components/scheduler/deploy_raven_testdata_to_thredds.env" \

Collaborator: Now that I have more time to digest this change, I would suggest moving this deploy-data-specific-image script into the jupyter-pavics-base image, so the script and the config yaml (which is in the child image) are part of the same final image.

Why? So eventually Jenkins can also check out all the notebooks and run tests on them, as it currently does with the default Jupyter image. The e2e repo should not be responsible for checking out the notebooks as it is now; this will enable that same e2e repo to test against different notebooks depending on the Jupyter image. That requires implementing Ouranosinc/PAVICS-e2e-workflow-tests#57, but we can start to lay the ground in that direction.

For the scheduler cronjob, this means you don't even have to volume-mount the script from outside, since it's already inside the image!

As for re-using the existing generic deploy-data script, I think it simply boils down to:

  1. being able to use yq as docker run (currently) or as installed (your case); we will have to add a new switch for that.
  2. downloading the repo as a tar/zip archive (your way) instead of git pull (current); we will have to add a new switch for that as well.
     Note that going the tar/zip archive route nullifies the caching provided by git pull. However, the git pull way wastes more space, because there is basically a double checkout: one used for caching, one in /data/jupyter_user_data/tutorial-notebooks/.... Each side has pros and cons, so for the sake of genericity and re-usability we can implement both.

In a different PR, once deploy-data has the additional modes of operation, the jupyter-pavics-base build will wget the script and make it available inside the image just as deploy-data-specific-image is now, without it being directly committed the way deploy-data-specific-image is.

done

echo "notebookdeploy finished END_TIME=`date -Isecond`"

# vi: tabstop=8 expandtab shiftwidth=4 softtabstop=4
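
For reference, an invocation sketch for the script above (the log path is illustrative):

# Run inside the notebook image; all output goes to the logfile
DEPLOY_DATA_LOGFILE=/var/log/notebookdeploy.log /deploy-data-specific-image /notebook_config.yml.template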
1 change: 1 addition & 0 deletions birdhouse/docker-compose.yml
@@ -358,6 +358,7 @@ services:
       JUPYTER_DEMO_USER_CPU_LIMIT: ${JUPYTER_DEMO_USER_CPU_LIMIT}
       JUPYTER_GOOGLE_DRIVE_SETTINGS: ${JUPYTER_GOOGLE_DRIVE_SETTINGS}
       JUPYTERHUB_README: ${JUPYTERHUB_README}
+      MOUNT_IMAGE_SPECIFIC_NOTEBOOKS: ${MOUNT_IMAGE_SPECIFIC_NOTEBOOKS}
     volumes:
       - ./config/jupyterhub/jupyterhub_config.py:/srv/jupyterhub/jupyterhub_config.py:ro
       - ./config/jupyterhub/custom_templates:/custom_templates:ro
30 changes: 28 additions & 2 deletions birdhouse/env.local.example
@@ -142,12 +142,33 @@ export POSTGRES_MAGPIE_PASSWORD=postgres-qwerty
# "Notebook" are all the tutorial notebooks on Jupyter.
#export AUTODEPLOY_NOTEBOOK_FREQUENCY="@every 5m"

# Log directory used for extra autodeploy tasks
# export AUTO_DEPLOY_LOGDIR=/var/log/PAVICS

# Add more jobs to ./components/scheduler/config.yml
#
# Potential usages: other deployment, backup jobs on the same machine
#
#export AUTODEPLOY_EXTRA_SCHEDULER_JOBS=""
#
# Example extra job that deploys custom notebooks for a specific image
Collaborator: Could we just create a script that loops over the DOCKER_NOTEBOOK_IMAGES env var (https://github.com/bird-house/birdhouse-deploy/blob/master/birdhouse/env.local.example#L214) and performs this operation on each of them?
So, if I want to offer the eo image, I add it to the available images env var and voilà! I also get all its notebooks.

Collaborator (author): Do you mean automatically creating a job for each image found in DOCKER_NOTEBOOK_IMAGES?
I suppose we could; most of the parameters would stay the same, with only the name (such as eo, nlp, etc.) changing.
But do we always want to create a job for all of those images? For example, I don't want notebooks specific to pavics/workflow-tests, so I don't want a job for that image. But I guess we could organize those images differently, like having another variable containing the list of images for which we actually want to run the script.

Or did you mean a single job that runs the script for each image?

Collaborator: I mean a single job running a script that updates all images. Basically what you already have, but inside a loop; apart from the image name, everything else is the same.
And yes, for all images found in DOCKER_NOTEBOOK_IMAGES. Right now we have workflow-tests, but the name is misleading because it's the primary image used by almost everyone! In fact, all images in DOCKER_NOTEBOOK_IMAGES are available to users, so yes, I think we want to keep them updated.
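A minimal sketch of that single-job suggestion, assuming the image short name can be derived from the image tag and that each image ships the deploy script and config at the paths used in the example job below; this is not part of the PR:

# Hypothetical loop over all configured notebook images
for image in $DOCKER_NOTEBOOK_IMAGES; do
    # e.g. "pavics/crim-jupyter-eo:0.1.0" -> "eo" (naming assumption)
    name="`echo "$image" | sed -e 's/:.*$//' -e 's/^.*jupyter-//'`"
    docker run --rm --name "notebookdeploy-$name" \
        --volume ${JUPYTERHUB_USER_DATA_DIR}:${JUPYTERHUB_USER_DATA_DIR}:rw \
        --volume ${AUTO_DEPLOY_LOGDIR}:${AUTO_DEPLOY_LOGDIR}:rw \
        --env JUPYTERHUB_USER_DATA_DIR=${JUPYTERHUB_USER_DATA_DIR} \
        --env DEPLOY_DATA_LOGFILE=${AUTO_DEPLOY_LOGDIR}/notebookdeploy-$name.log \
        --user 0:0 \
        "$image" /deploy-data-specific-image /notebook_config.yml.template
done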

#export AUTODEPLOY_EXTRA_SCHEDULER_JOBS="
#- name: notebookdeploy-eo
#  comment: Auto-deploy tutorial notebooks for the eo image
#  schedule: '${AUTODEPLOY_NOTEBOOK_FREQUENCY}'
#  command: '/deploy-data-specific-image /notebook_config.yml.template'
Collaborator: Following my previous comment regarding this config: given that this file /notebook_config.yml.template is in the image, why is it a .template?

Collaborator: This "cronjob" is again pretty similar to the existing https://github.com/bird-house/birdhouse-deploy/blob/57a320c3583f804838e8c18aeb064266527a0bc9/birdhouse/components/scheduler/deploy_data_job.env. Same question: if a feature is missing, can we add it to that rather than forking it?

This one you might not be aware of, since it's under the "scheduler" component (it has to be used with that component).

Using that "generic" cronjob allows a much simplified cronjob definition, like this: https://github.com/bird-house/birdhouse-deploy/blob/57a320c3583f804838e8c18aeb064266527a0bc9/birdhouse/components/scheduler/deploy_raven_testdata_to_thredds.env

Collaborator (author): So, I used the word "template" because in this version of the config we find the variable ${JUPYTERHUB_USER_DATA_DIR}, which gets replaced by its value when running the script.
We make a copy of the config file using envsubst < $TEMPLATE_CONFIG_YML > $CONFIG_YML, where the copy has the real value instead of the variable.
I agree "template" is not the best word here, though. I don't know if you have a better idea.

Regarding your comment about the dest_dir variable found in the yaml config, I could remove it, and we could always output the notebooks to the same directory, ${JUPYTERHUB_USER_DATA_DIR}/tutorial-notebooks/${IMAGE_NAME}.
We could drop that customization option, which would remove my need for a copy of the config with variables to replace.
If we decide to remove the dest_dir option, I will remove all mentions of .template and remove the envsubst command from the script.
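For illustration, a minimal envsubst round-trip like the one described (file paths and values are hypothetical):

# notebook_config.yml.template contains, e.g.:
#   dest_dir: ${JUPYTERHUB_USER_DATA_DIR}/tutorial-notebooks/eo
export JUPYTERHUB_USER_DATA_DIR=/data/jupyterhub_user_data
envsubst < notebook_config.yml.template > /tmp/notebook_config.yml
# /tmp/notebook_config.yml now reads:
#   dest_dir: /data/jupyterhub_user_data/tutorial-notebooks/eo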

Collaborator: OK, it solves two issues and makes the whole thing simpler, so I would remove the dest_dir option from the config.
But I would not use the directory ${JUPYTERHUB_USER_DATA_DIR}/tutorial-notebooks/${IMAGE_NAME}. That is where the notebooks need to be on the host, but in the image they could be anywhere, like /tmp, as long as the calling command mounts the host volume at that point. My suggestion still holds: in the docker command, include the volume mount and set an environment variable telling the script where to put the stuff (the dest_dir, but as an env var rather than a config option).

Collaborator: Again, now that I have more time to digest this PR...

I would suggest also moving this sample big blob of code that generates the cronjob into the jupyter-pavics-base repo, so it sits together with the deploy-data-specific-image script it wraps.

  • It makes env.local.example shorter and less intimidating.
  • You now own the "code", so if you need to fix something (like adding or removing a volume mount) you can do it transparently for all the callers; all they need to do is always run the latest version. You basically provide the user with a stable interface and keep control over the implementation.
  • Later, in a different PR, once we "migrate" to the generic deploy-data script, the matching generic cronjob components/scheduler/deploy_data_job.env will also have to be adapted for whatever newer options deploy-data will support. And we keep the same consistency: the "script" and the "cronjob wrapper" stay together.

Try to take inspiration from the existing generic cronjob wrapper, especially this part:

if [ -z "`echo "$AUTODEPLOY_EXTRA_SCHEDULER_JOBS" | grep $DEPLOY_DATA_JOB_JOB_NAME`" ]; then
    # Add job only if not already added (config is read more than once during
    # autodeploy process).

to ensure it also works during autodeploy (there is a slight difference between ./pavics-compose.sh invoked manually on the console and invoked inside the scheduler component).
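A sketch of what that guard could look like for this PR's example job; the variable name is an assumption modeled on deploy_data_job.env, and the job body is abbreviated to the fields shown earlier:

NOTEBOOKDEPLOY_EO_JOB_NAME="notebookdeploy-eo"  # hypothetical variable name
if [ -z "`echo "$AUTODEPLOY_EXTRA_SCHEDULER_JOBS" | grep $NOTEBOOKDEPLOY_EO_JOB_NAME`" ]; then
    # Append the job only if not already added (the config is read more than
    # once during the autodeploy process).
    export AUTODEPLOY_EXTRA_SCHEDULER_JOBS="$AUTODEPLOY_EXTRA_SCHEDULER_JOBS
- name: $NOTEBOOKDEPLOY_EO_JOB_NAME
  comment: Auto-deploy tutorial notebooks for the eo image
  schedule: '$AUTODEPLOY_NOTEBOOK_FREQUENCY'
  command: '/deploy-data-specific-image /notebook_config.yml.template'
  image: $DOCKER_EO_IMAGE
"
fi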

Collaborator: Forgot to say: once you move the "cronjob generation code" out to jupyter-pavics-base, you can reference it back in env.local.example to "advertise" it, like:

# Load pre-configured cronjob to automatically deploy Raven testdata to Thredds
# for Raven tutorial notebooks.
#
# See the job for additional possible configurations. The "scheduler"
# component needs to be enabled for this pre-configured job to work.
#
#if [ -f "$COMPOSE_DIR/components/scheduler/deploy_raven_testdata_to_thredds.env" \
#     -a -f "$COMPOSE_DIR/components/scheduler/deploy_data_job.env" ]; then
#    . $COMPOSE_DIR/components/scheduler/deploy_raven_testdata_to_thredds.env
#    . $COMPOSE_DIR/components/scheduler/deploy_data_job.env
#fi

#  dockerargs: >-
#    --rm --name notebookdeploy-eo
#    --volume /var/run/docker.sock:/var/run/docker.sock:ro
#    --volume ${COMPOSE_DIR}/deployment/deploy-data-specific-image:/deploy-data-specific-image:ro
#    --volume ${JUPYTERHUB_USER_DATA_DIR}:${JUPYTERHUB_USER_DATA_DIR}:rw
#    --volume ${AUTO_DEPLOY_LOGDIR}:${AUTO_DEPLOY_LOGDIR}:rw
#    --env JUPYTERHUB_USER_DATA_DIR=${JUPYTERHUB_USER_DATA_DIR}
#    --env DEPLOY_DATA_LOGFILE=${AUTO_DEPLOY_LOGDIR}/notebookdeploy-eo.log
#    --user 0:0
#  image: ${DOCKER_EO_IMAGE}
#"
#
# Load pre-configured job to auto-renew LetsEncrypt SSL certificate if a
# LetsEncrypt SSL certificate has previously been requested.
#
@@ -208,8 +229,10 @@ export POSTGRES_MAGPIE_PASSWORD=postgres-qwerty

# Jupyter single-user server images
 #export DOCKER_NOTEBOOK_IMAGES="pavics/workflow-tests:210216 \
-# pavics/crim-jupyter-eo:0.1.0 \
-# pavics/crim-jupyter-nlp:0.1.0"
+# ${DOCKER_EO_IMAGE} \
+# ${DOCKER_NLP_IMAGE}"
+#export DOCKER_EO_IMAGE="pavics/crim-jupyter-eo:0.1.0"
+#export DOCKER_NLP_IMAGE="pavics/crim-jupyter-nlp:0.1.0"

# allow jupyterhub user selection of which notebook image to run
# see https://jupyter-docker-stacks.readthedocs.io/en/latest/using/selecting.html
@@ -226,6 +249,9 @@ export POSTGRES_MAGPIE_PASSWORD=postgres-qwerty
# }
#"

# Activates mounting a tutorial-notebooks subfolder that has the same name as the spawned image on JupyterHub
# export MOUNT_IMAGE_SPECIFIC_NOTEBOOKS=true

# The parent folder where all the user notebooks will be stored.
# For example, a user named "bob" will have his data in $JUPYTERHUB_USER_DATA_DIR/bob
# and this folder will be mounted when he logs into JupyterHub.