The problem with conda is the overhead that comes with it. For distributed processing you have to do one of the following:

1. transfer the conda installation with the job,
2. put the installation on shared storage (e.g. /software),
3. or make setting up conda part of the job.
The first option adds a lot of input data, which may exceed the sandbox size limit on some sites.
The second is different for each cluster (e.g. AFS @ CERN, /software @ Bristol).
The last one adds CPU inefficiency, since every job pays the conda setup cost before doing any real work.
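To make that cost concrete, here is a minimal sketch of a per-job conda bootstrap (option 3), assuming Miniconda and an `environment.yml` shipped alongside the job; the environment name is made up:

```bash
#!/bin/bash
# Hypothetical per-job conda bootstrap: every job would run this before
# doing any real work, which is where the CPU inefficiency comes from.
set -e
wget -q https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh -b -p "$PWD/miniconda"
source "$PWD/miniconda/etc/profile.d/conda.sh"
conda env create -f environment.yml   # environment.yml shipped with the job
conda activate cms-l1t-analysis       # env name is an assumption
```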
We can keep the ROOT dependency, but I would move the responsibility to the user:

- use CVMFS if requested (default)
- use conda if requested
- leave the environment entirely up to the user if requested
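To illustrate, a sketch of how a generated setup script could dispatch between the three modes; `ENV_MODE`, the LCG view path, and the conda environment name are all assumptions, not an existing interface:

```bash
# Hypothetical dispatch in setup.sh: pick the environment source at setup time.
case "${ENV_MODE:-cvmfs}" in
  cvmfs)
    # Default: take ROOT and friends from a CVMFS LCG view (version is an example)
    source /cvmfs/sft.cern.ch/lcg/views/LCG_95/x86_64-centos7-gcc8-opt/setup.sh
    ;;
  conda)
    source "$HOME/miniconda/etc/profile.d/conda.sh"
    conda activate cms-l1t-analysis   # env name is an assumption
    ;;
  user)
    : # the user guarantees ROOT etc. are already on PATH
    ;;
esac
```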
With CVMFS you can even run it on OS X (inside a CentOS 7 container); I can provide the instructions for that.
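For the container route, one possible invocation is sketched below; the image name and mount mechanism are assumptions (on a Linux host you can bind-mount /cvmfs, while on OS X you would typically use a CVMFS-enabled image or the CVMFS Docker volume plugin instead):

```bash
# Hypothetical: CentOS 7 container with CVMFS visible inside.
# Assumes /cvmfs is already mounted on the host; ":shared" keeps
# autofs-triggered mounts visible inside the container.
docker run --rm -it -v /cvmfs:/cvmfs:shared cern/cc7-base \
  bash -lc 'source /cvmfs/sft.cern.ch/lcg/views/LCG_95/x86_64-centos7-gcc8-opt/setup.sh && root -b -q'
```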
What do you think? I want to reduce the maintenance needed on our side while supporting as many clusters as possible.
Installation would then be a simple `pip install cms-l1t-analysis` followed by `init-workspace`: a script that copies the configs over from the installation area & creates `setup.sh`.
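A rough sketch of what such an `init-workspace` script could do, assuming the package installs a Python module named `cmsl1t` with a `config` directory (both names are assumptions):

```bash
#!/bin/bash
# Hypothetical init-workspace: copy configs out of the installed package
# into the current directory and generate a setup.sh for the workspace.
set -e
PKG_DIR=$(python -c "import cmsl1t, os; print(os.path.dirname(cmsl1t.__file__))")
cp -r "$PKG_DIR/config" ./config   # config layout is an assumption
cat > setup.sh <<'EOF'
# Workspace environment; extend as needed.
export CMSL1T_WORKSPACE="$(pwd)"
EOF
echo "Workspace initialised; run 'source setup.sh' to get started."
```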