Reviewing the code, I'm unsure whether the step that creates jobs using the [Model]job.txt scripts is really necessary. The only significant differences between them are the hard-coded number of live points and CPUs, and the specific cluster the job runs on. I realize we submit each object/model fit as a separate job to create a pseudo-parallelized process, but perhaps we could streamline this by creating one generic script through which all the jobs are created? It's manageable at the moment, but if we start to add more models, or create a system for dynamically adding models, it would quickly become excessively unwieldy.
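For illustration, here's a minimal sketch of what such a generic script might look like, assuming a SLURM-style scheduler; the template fields, the `run_fit.py` entry point, and the `submit_job` helper are all hypothetical names, not taken from the current codebase:

```python
import subprocess
from string import Template

# Hypothetical job template; the real [Model]job.txt scripts presumably
# differ only in the fields substituted here.
JOB_TEMPLATE = Template("""\
#!/bin/bash
#SBATCH --partition=$cluster
#SBATCH --cpus-per-task=$cpus
python run_fit.py --model $model --live-points $live_points
""")

def submit_job(model, live_points, cpus, cluster):
    """Render the shared template for one model and submit it."""
    script = JOB_TEMPLATE.substitute(
        model=model, live_points=live_points, cpus=cpus, cluster=cluster
    )
    job_file = f"{model}_job.sh"
    with open(job_file, "w") as f:
        f.write(script)
    subprocess.run(["sbatch", job_file], check=True)
```

The per-object/per-model submission pattern stays the same; only the duplicated script files go away.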
Started an attempt to address this issue with the creation of create_fits_script.py in 25b819e and subsequent commits. If the model-specific settings can be moved into settings.json and the job scripts generated from them by the script, this would make it easier to add new models and also to adapt the pipeline to other systems.
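As a sketch of the settings.json idea, assuming a hypothetical `models` section holding the per-model specifics (the key names and module name here are invented for illustration, and `submit_job` is the generic helper sketched above):

```python
import json

from create_jobs import submit_job  # hypothetical module holding the helper above

# Hypothetical settings.json layout:
# {
#   "models": {
#     "BlackBody": {"live_points": 400, "cpus": 4, "cluster": "short"},
#     "PowerLaw":  {"live_points": 800, "cpus": 8, "cluster": "long"}
#   }
# }
with open("settings.json") as f:
    settings = json.load(f)

# Adding a new model would then only require a new settings entry,
# not a new [Model]job.txt script.
for model, params in settings["models"].items():
    submit_job(model, params["live_points"], params["cpus"], params["cluster"])
```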