Containers
The pilot can execute the payload within a singularity container using one of two methods: 1) ALRB or 2) raw singularity. ALRB is currently the preferred method in ATLAS; the method to use is set in the Pilot default config file. Details can be found in the sections below.
The preferred container setup method in ATLAS uses ALRB:

```shell
atlasLocalSetup.sh -c <container_path>
```
The pilot sets the environment variable `thePlatform`, which is expected by the setup script and is also used as the value for `container_path`. For user jobs, `thePlatform` is set to `job.alrbuserplatform` (extracted from platform/cmtconfig in the job definition, corresponding to the `$ALRB_USER_PLATFORM` value). If the job definition for a user job contains `--containerImage`, the corresponding value will be used as the `container_path` value, unless the ALRB user platform is defined. For production jobs, `thePlatform` is set to the cmtconfig/platform value from the job definition.
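The selection logic can be summarized as follows (a minimal sketch; the attribute names `container_image` and `platform` are illustrative and do not correspond to the actual pilot code):

```python
def get_container_path(job, is_user_job):
    """Sketch of how the container_path value is chosen."""
    if is_user_job:
        if job.alrbuserplatform:
            # the ALRB user platform takes precedence
            return job.alrbuserplatform
        if job.container_image:
            # value of --containerImage from the job definition
            return job.container_image
    # production jobs: the cmtconfig/platform value from the job definition
    return job.platform
```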
If container options are specified in queuedata, the pilot adds them as the value of the environment variable `ALRB_CONT_CMDOPTS`.
In practice, to get around the piping problem (too many quote signs), the pilot first writes the payload command to be executed within the container to a file, `container_script.sh`, and refers to it using the `ALRB_CONT_RUNPAYLOAD` environment variable, which is known to the `atlasLocalSetup` script. A concrete example is given below.
```shell
export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase;
export thePlatform="x86_64-slc6-gcc49-opt";
export ALRB_CONT_CMDOPTS="$ALRB_CONT_CMDOPTS -C";
export ALRB_CONT_RUNPAYLOAD='. /srv/container_script.sh';
source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh -c $thePlatform
```
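Putting the pieces together, the workaround can be sketched as follows (the function name and exact command layout are illustrative, not the actual pilot implementation):

```python
import os

def create_alrb_setup(workdir, payload_cmd, the_platform, cont_cmdopts=""):
    """Sketch: write the payload to container_script.sh and build the ALRB setup command."""
    with open(os.path.join(workdir, "container_script.sh"), "w") as script:
        script.write(payload_cmd + "\n")

    # the work directory is expected to be mounted as /srv inside the container
    cmd = "export ATLAS_LOCAL_ROOT_BASE=/cvmfs/atlas.cern.ch/repo/ATLASLocalRootBase;"
    cmd += "export thePlatform=\"%s\";" % the_platform
    if cont_cmdopts:
        cmd += "export ALRB_CONT_CMDOPTS=\"$ALRB_CONT_CMDOPTS %s\";" % cont_cmdopts
    cmd += "export ALRB_CONT_RUNPAYLOAD='. /srv/container_script.sh';"
    cmd += "source ${ATLAS_LOCAL_ROOT_BASE}/user/atlasLocalSetup.sh -c $thePlatform"
    return cmd
```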
When using direct (raw) singularity, the pilot builds the command like so:
cmd = "export workdir=" + workdir + "; singularity --verbose exec " + singularity_options + " " + image_path + \
" /bin/bash -c " + pipes.quote("cd $workdir;pwd;%s" % cmd)
where `workdir` is the working directory, `singularity_options` is taken from the `container_options` field in queuedata (expected to contain `-B`; the pilot always adds the mounts `/cvmfs,${workdir},/home`), and `image_path` is taken from the `--containerImage` value in the job definition.
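A self-contained version of this construction might look like the following (a sketch; how the standard mounts are merged into the queuedata options is an assumption):

```python
import shlex  # the pilot's Python 2 code uses pipes.quote; shlex.quote is the modern equivalent

def build_raw_singularity_cmd(workdir, container_options, image_path, payload_cmd):
    """Sketch: assemble the raw singularity command described above."""
    mounts = "/cvmfs,%s,/home" % workdir  # always bind-mounted by the pilot
    if container_options:
        # assumption: prepend the standard mounts to the -B list from queuedata
        singularity_options = container_options.replace("-B ", "-B %s," % mounts, 1)
    else:
        singularity_options = "-B " + mounts
    return ("export workdir=" + workdir + "; singularity --verbose exec " +
            singularity_options + " " + image_path +
            " /bin/bash -c " + shlex.quote("cd $workdir;pwd;%s" % payload_cmd))
```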
| Field name | Typical value | Meaning |
|---|---|---|
| `container_options` | `-B /cvmfs,/scratch,/etc/grid-security --contain` | Options to be added to the raw `singularity exec` command. |
| `container_type` | `singularity:pilot` | Container name and the name of the component that should set it up (`pilot` or `wrapper`). The pilot can handle a string like `singularity:pilot;docker:wrapper`. |
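Parsing the `container_type` string is straightforward (a hedged sketch of the format described above, not the pilot's own parser):

```python
def parse_container_type(container_type):
    """Split a container_type string into {container name: component that sets it up}."""
    result = {}
    for entry in container_type.split(";"):
        name, setup_component = entry.split(":")
        result[name.strip()] = setup_component.strip()
    return result

print(parse_container_type("singularity:pilot;docker:wrapper"))
# {'singularity': 'pilot', 'docker': 'wrapper'}
```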
```ini
# Container parameters
[Container]

# Master parameter
# Is the pilot allowed to use containers? If False, then any database settings are ignored
allow_container: False

# Name of middleware image (to be revised)
# This image is used if middleware is not found locally on the worker node. Middleware is expected to be present
# in the container image
middleware_container: middleware.img

# The setup type can be either ALRB or (explicit) singularity
setup_type: ALRB
```
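The section can be read with a standard INI-style parser (a minimal sketch; the file name `default.cfg` is illustrative):

```python
from configparser import ConfigParser

config = ConfigParser()
config.read("default.cfg")  # illustrative file name for the pilot default config

if config.getboolean("Container", "allow_container"):
    setup_type = config.get("Container", "setup_type")  # 'ALRB' or 'singularity'
```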