Commit

Merge branch 'benchmarking' of https://github.com/automl/DEHB into benchmarking
Bronzila committed Jul 3, 2024
2 parents bfadd8b + 2c23bf3 commit 7dc6912
Showing 3 changed files with 38 additions and 40 deletions.
23 changes: 23 additions & 0 deletions .github/PULL_REQUEST_TEMPLATE.md
@@ -0,0 +1,23 @@
## Pull Request Checklist

Thank you for your contribution! Before submitting this PR, please make sure you have completed the following steps:

### 1. Unit Tests / Normal PR Workflow

- [ ] Ensure all existing unit tests pass.
- [ ] Add new unit tests to cover the changes.
- [ ] Verify that your code follows the project's coding standards.
- [ ] Add documentation for your code if necessary.
- [ ] Check below whether your changes require you to run benchmarks.

#### When Do I Need To Run Benchmarks?

Depending on your changes, we may ask you to run benchmarks:

1. Style changes.

If your changes only consist of style modifications, such as renaming or adding docstrings, and do not interfere with DEHB's interface, functionality, or algorithm, it is sufficient for all test cases to pass.

2. Changes to DEHB's interface and functionality or the algorithm itself.

If your changes affect the interface, functionality, or algorithm of DEHB, please also run the synthetic benchmarks (MFH3, MFH6 of MFPBench, and the CountingOnes benchmark). This will help determine whether any changes introduced bugs or significantly altered DEHB's performance. However, at the reviewer's discretion, you may also be asked to run your changes on real-world benchmarks if deemed necessary. For instructions on how to install and run the benchmarks, please have a look at our [benchmarking instructions](../benchmarking/BENCHMARKING.md). Please use the same budget for your benchmark runs as we specified in the instructions.
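
For reference, a synthetic benchmark run consistent with these instructions might look like the following. This is an illustrative sketch drawn from the commands in the benchmarking instructions; the output paths are placeholders, not prescribed locations.

```shell
# MFHartmann synthetic benchmarks via MFPBench (illustrative; see BENCHMARKING.md)
python3.8 benchmarking/mfpbench_benchmark.py --fevals 300 --benchmarks mfh3 mfh6 --seed 0 --n_seeds 5 --output_path logs/synthetic_mfh

# CountingOnes synthetic benchmark
python benchmarking/countingones_benchmark.py --seed 0 --n_seeds 5 --fevals 300 --output_path logs/countingones --n_continuous 50 --n_categorical 50
```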
10 changes: 10 additions & 0 deletions CONTRIBUTING.md
@@ -66,6 +66,16 @@ When submitting a pull request, please ensure the following:
- Ensure your code follows the project's code style and guidelines.
- Be responsive to any feedback or questions during the review process.

Additionally, we ask you to run specific benchmarks, depending on the scope of your changes:

1. Style changes.

If your changes only consist of style modifications, such as renaming or adding docstrings, and do not interfere with DEHB's interface, functionality, or algorithm, it is sufficient for all test cases to pass.

2. Changes to DEHB's interface and functionality or the algorithm itself.

If your changes affect the interface, functionality, or algorithm of DEHB, please also run the synthetic benchmarks (MFH3, MFH6 of MFPBench, and the CountingOnes benchmark). This will help determine whether any changes introduced bugs or significantly altered DEHB's performance. However, at the reviewer's discretion, you may also be asked to run your changes on real-world benchmarks if deemed necessary. For instructions on how to install and run the benchmarks, please have a look at our [benchmarking instructions](./benchmarking/BENCHMARKING.md). Please use the same budget for your benchmark runs as we specified in the instructions.

## Code Style and Guidelines

To maintain consistency and readability, we follow a set of code style and guidelines. Please make sure that your code adheres to these standards:
45 changes: 5 additions & 40 deletions benchmarking/BENCHMARKING.md
@@ -49,49 +49,14 @@ pip install -e .[benchmarking,hpobench_benchmark]
The benchmarking script is highly configurable and lets you choose the budget type (`fevals`, `brackets`, or `total_cost`), the execution setup (`run` (default), `ask_tell`, or `restart`), the benchmarks used (`tab_nn`, `tab_rf`, `tab_svm`, `tab_lr`, `surrogate`, `nasbench201`), and the seeds used for each benchmark run (default: `[0]`).

```shell
-python3.8 benchmarking/hpobench_benchmark.py --fevals 300 --benchmarks tab_nn tab_rf tab_svm tab_lr surrogate nasbench201 --seed 0 --n_seeds 10 --output_path logs/hpobench_benchmarking
+python3.8 benchmarking/hpobench_benchmark.py --fevals 300 --benchmarks tab_nn tab_rf tab_svm tab_lr surrogate nasbench201 --seed 0 --n_seeds 5 --output_path logs/hpobench_benchmarking
```
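
For a quick local sanity check before the full run, the same script can be invoked with a single benchmark and seed. This is an illustrative example reusing the flags above; `logs/smoke_test` is just a placeholder path.

```shell
# Illustrative quick check: one benchmark, one seed (not a substitute for the full run)
python3.8 benchmarking/hpobench_benchmark.py --fevals 300 --benchmarks tab_lr --seed 0 --n_seeds 1 --output_path logs/smoke_test
```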

## Installation Guide MFPBench

-The following guide walks you through installing mfpbench and running the benchmarking script. Here, we assume that you execute the commands in your cloned DEHB repository. Depending on the choice of benchmark, different requirements have to be installed, which are not compatible with one another. Thus, we divide the setup into two sections: one for installing the JAHS benchmark and one for the PD1 benchmark. The MFHartmann benchmarks work with both installations.
+The following guide walks you through installing mfpbench and running the benchmarking script. Here, we assume that you execute the commands in your cloned DEHB repository.

-## JAHS Benchmark
-
-### Create Virtual Environment
-
-Before starting, please make sure you have a clean virtual environment using Python 3.8 ready. The following commands walk you through how to do this with conda.
-
-```shell
-conda create --name dehb_jahs python=3.8
-conda activate dehb_jahs
-```
-
-### Installing DEHB with MFPBench
-
-Some additional dependencies are needed for plotting and table generation, so please install DEHB with the benchmarking options:
-
-```shell
-pip install -e .[benchmarking,jahs_benchmark]
-```
-
-### Downloading Benchmark Data
-
-In order to run the benchmark, we first need to download the benchmark data:
-
-```shell
-python -m mfpbench download --benchmark jahs
-```
-
-### Running the Benchmarking Script
-
-The setup is similar to the HPOBench section; however, under this installation only the `jahs` (joint architecture and hyperparameter search), `mfh3`, and `mfh6` benchmarks are available.
-
-```shell
-python3.8 benchmarking/mfpbench_benchmark.py --fevals 300 --benchmarks jahs mfh3 mfh6 --seed 0 --n_seeds 10 --output_path logs/jahs_benchmarking
-```

-## PD1 Benchmark
+## PD1 Benchmark and MFHartmann

### Create Virtual Environment

@@ -123,7 +88,7 @@ python -m mfpbench download --benchmark pd1
We currently support and use the PD1 benchmarks `cifar100_wideresnet_2048`, `imagenet_resnet_512`, `lm1b_transformer_2048` and `translatewmt_xformer_64`. Moreover, the `mfh3` and `mfh6` benchmarks are available.

```shell
-python3.8 benchmarking/mfpbench_benchmark.py --fevals 300 --benchmarks cifar100_wideresnet_2048 imagenet_resnet_512 lm1b_transformer_2048 translatewmt_xformer_64 mfh3 mfh6 --seed 0 --n_seeds 10 --output_path logs/pd1_benchmarks
+python3.8 benchmarking/mfpbench_benchmark.py --fevals 300 --benchmarks mfh3 mfh6 cifar100_wideresnet_2048 imagenet_resnet_512 lm1b_transformer_2048 translatewmt_xformer_64 --seed 0 --n_seeds 5 --output_path logs/pd1_benchmarks
```

## CountingOnes Benchmark
@@ -133,5 +98,5 @@ The CountingOnes benchmark is a synthetic benchmark and only depends on numpy,
### Running the Benchmarking Script

```shell
-python benchmarking/countingones_benchmark.py --seed 0 --n_seeds 10 --fevals 300 --output_path logs/countingones --n_continuous 50 --n_categorical 50
+python benchmarking/countingones_benchmark.py --seed 0 --n_seeds 5 --fevals 300 --output_path logs/countingones --n_continuous 50 --n_categorical 50
```
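
To verify the script runs at all before launching the full configuration above, a reduced setting can be used. This is illustrative only; the smaller budget, dimensions, and output path below are arbitrary placeholders, and benchmark results reported in a PR should use the full settings.

```shell
# Illustrative reduced run for a quick local check (not for reported results)
python benchmarking/countingones_benchmark.py --seed 0 --n_seeds 1 --fevals 50 --output_path logs/countingones_smoke --n_continuous 4 --n_categorical 4
```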
