updated README, minor refactoring and version bump
gregorgebhardt committed Oct 18, 2018
1 parent 291b3bd commit 7e82505
Showing 3 changed files with 32 additions and 31 deletions.
34 changes: 29 additions & 5 deletions README.md
@@ -48,13 +48,27 @@ class MyExperiment(ClusterWork):
# ...
def reset(self, config=None, rep=0):
# run code that sets up your experiment for each repetition here
"""
Run code that sets up repetition rep of your experiment.
:param config: a dictionary with the experiment configuration
:param rep: the repetition counter
"""
pass
def iterate(self, config=None, rep=0, n=0):
# run your experiment for iteration n
# return results as a dictionary, for each key there will be one column in a results table.
"""
Run iteration n of repetition rep of your experiment.
:param config: a dictionary with the experiment configuration
:param rep: the repetition counter
:param n: the iteration counter
"""
pass
# Return results as a dictionary; for each key there will be one column in the results pandas.DataFrame.
# The DataFrame will be stored below the path defined in the experiment config.
return {'results': None}
# to run the experiments, you simply call run on your derived class
@@ -67,6 +81,8 @@ if __name__ == '__main__':
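The contract of `iterate` can be illustrated without a cluster: each returned dictionary becomes one row of the results table. A minimal sketch using plain pandas (no ClusterWork involved; the column names are made up for illustration):

```python
import pandas as pd

def iterate(config=None, rep=0, n=0):
    # toy stand-in for one experiment iteration: each returned key
    # becomes one column of the results DataFrame
    return {'rep': rep, 'iteration': n, 'loss': 1.0 / (n + 1)}

# collect five iterations of repetition 0 into one table
rows = [iterate(config={}, rep=0, n=n) for n in range(5)]
results = pd.DataFrame(rows)
print(results.columns.tolist())  # ['rep', 'iteration', 'loss']
```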
**ClusterWork** also implements a restart functionality. Since your results are stored after each iteration, your experiment can be restarted if its execution was interrupted for some reason. To obtain this functionality, you additionally need to implement at least the method `restore_state(self, config: dict, rep: int, n: int)`. The method `save_state(self, config: dict, rep: int, n: int)` can also be implemented to store additional information that needs to be loaded in the `restore_state` method. Finally, the flag `_restore_supported` must be set to `True`.
```Python
from cluster_work import ClusterWork
class MyExperiment(ClusterWork):
_restore_supported = True
@@ -86,6 +102,8 @@ class MyExperiment(ClusterWork):
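These restart hooks can be sketched with plain Python and `pickle`; the state-file name, the `model_params` attribute, and the return value of `restore_state` below are assumptions for illustration, not ClusterWork's actual conventions:

```python
import os
import pickle


class RestorableExperiment:
    _restore_supported = True

    def __init__(self):
        # hypothetical experiment state that must survive a restart
        self.model_params = {'weights': [0.0, 0.0]}

    def _state_file(self, config, rep):
        # assumed file layout: one pickle per repetition below config['path']
        return os.path.join(config['path'], 'state_rep{:02d}.pkl'.format(rep))

    def save_state(self, config, rep, n):
        # snapshot everything needed to resume after iteration n
        with open(self._state_file(config, rep), 'wb') as f:
            pickle.dump({'n': n, 'model_params': self.model_params}, f)

    def restore_state(self, config, rep, n):
        # reload the snapshot written by save_state
        with open(self._state_file(config, rep), 'rb') as f:
            state = pickle.load(f)
        self.model_params = state['model_params']
        return state['n'] == n
```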
The parameters for the experiment can be defined in a YAML file that is passed as a command-line argument. Inside the derived class, we can define default parameters as a dictionary in the `_default_params` field:
```Python
from cluster_work import ClusterWork
class MyExperiment(ClusterWork):
# ...
@@ -117,8 +135,6 @@ The required keys for each experiment are `name`, `repetitions`, `iterations`, a
name: "DEFAULT"
repetitions: 20
iterations: 5
# this is the path where the results are stored,
# it can be different from the location of the experiment scripts
path: "path/to/experiment/folder"
params:
@@ -137,6 +153,14 @@ params:
num_steps: 50
```
The `path` defines the directory where the results are stored. This path can be different from the location of the
experiment scripts. **ClusterWork** will create a directory with the experiment `name` below this path, in which it
stores the specific experiment configuration (with the missing values filled in from the default parameters and for
one set of parameters from the list/grid feature, see below).
In a sub-folder `log`, **ClusterWork** will store the logged output and the results for each repetition and iteration.
To obtain the path to the experiment folder or the log folders, use the fields `self._path`, `self._log_path`,
and `self._log_path_rep`.
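Assuming the example configuration above, the resulting layout can be sketched as follows; only `self._path` and `self._log_path` follow from the text, while the per-repetition folder name is an implementation detail:

```python
import os

# values taken from the example configuration above
path, name = 'path/to/experiment/folder', 'DEFAULT'

experiment_dir = os.path.join(path, name)      # available as self._path
log_dir = os.path.join(experiment_dir, 'log')  # available as self._log_path
# self._log_path_rep then points to a per-repetition folder below log_dir
```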
#### The list feature
If the key `list` is given in an experiment document, the experiment will be expanded for each value in the given list. For example
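The expansion semantics can be sketched as follows. This is a hypothetical re-implementation for illustration only (the helper name and the naming scheme for the derived experiments are assumptions, not ClusterWork's actual code), and it handles a single listed key:

```python
import copy

def expand_list(config):
    # hypothetical helper: one experiment config per value of the
    # parameter listed under the 'list' key (sketch handles one key)
    listed = config.pop('list', None)
    if not listed:
        return [config]
    key, values = next(iter(listed.items()))
    expanded = []
    for value in values:
        c = copy.deepcopy(config)
        c['params'][key] = value
        c['name'] = '{}_{}_{}'.format(config['name'], key, value)
        expanded.append(c)
    return expanded

configs = expand_list({'name': 'exp', 'params': {'lr': None},
                       'list': {'lr': [0.1, 0.01]}})
```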
1 change: 0 additions & 1 deletion sbatch_template.sh
@@ -24,6 +24,5 @@ source activate your_env
cd %%experiment_cwd%%

srun hostname > hostfile.$SLURM_JOB_ID
hostfileconv hostfile.$SLURM_JOB_ID

mpiexec -map-by node -hostfile $SLURM_JOB_ID.hostfile --mca mpi_warn_on_fork 0 --display-allocation --display-map python -m mpi4py %%python_script_name%% -c %%yaml_config%% -d -v
28 changes: 3 additions & 25 deletions setup.py
@@ -15,7 +15,7 @@
# Versions should comply with PEP440. For a discussion on single-sourcing
# the version across setup.py and the project code, see
# https://packaging.python.org/en/latest/single_source_version.html
version='0.3.0',
version='0.3.1',

description='A framework to run experiments on a computing cluster.',
long_description=long_description,
@@ -30,27 +30,22 @@
# Choose your license
license='BSD-3',

# See https://pypi.python.org/pypi?%3Aaction=list_classifiers
classifiers=[
# How mature is this project? Common values are
# How mature is this project?
# 3 - Alpha
# 4 - Beta
# 5 - Production/Stable
'Development Status :: 4 - Beta',

# Indicate who your project is intended for
'Intended Audience :: Science/Research',
'Intended Audience :: Education',
'Topic :: System :: Distributed Computing',
'Topic :: Scientific/Engineering',
'Topic :: Scientific/Engineering :: Information Analysis',
'Topic :: Education',

# Pick your license as you wish (should match "license" above)
'License :: OSI Approved :: BSD License',

# Specify the Python versions you support here. In particular, ensure
# that you indicate whether you support Python 2, Python 3 or both.
'Programming Language :: Python :: 3',
'Programming Language :: Python :: 3.3',
'Programming Language :: Python :: 3.4',
@@ -64,24 +59,7 @@
# What does your project relate to?
keywords=['scientific', 'experiments', 'distributed computing', 'mpi', 'research'],

# You can just specify the packages manually here if your project is
# simple. Or you can use find_packages().
# packages=find_packages(exclude=['examples', 'experiments', 'notebooks']),
py_modules=['cluster_work', 'plot_work'],

py_modules=["cluster_work", "plot_work", "cluster_work_tools"],

# List run-time dependencies here. These will be installed by pip when
# your project is installed. For an analysis of "install_requires" vs pip's
# requirements files see:
# https://packaging.python.org/en/latest/requirements.html
install_requires=['PyYAML', 'numpy', 'pandas', 'matplotlib', 'ipython', 'ipywidgets'],

# To provide executable scripts, use entry points in preference to the
# "scripts" keyword. Entry points provide cross-platform support and allow
# pip to create the appropriate form of executable for the target platform.
# entry_points={
# 'console_scripts': [
# 'hostfileconv=cluster_work_tools:convert_hostfile',
# ],
# },
)
