
Commit

Improve install and results generation process
duembgen committed Nov 2, 2023
1 parent 7f35a35 commit 010b828
Showing 11 changed files with 41 additions and 44 deletions.
4 changes: 2 additions & 2 deletions .gitignore
@@ -10,8 +10,8 @@ _plots_test/
 _data/
 _results/*.log
 _results/**/*.pkl
-_results_final/
-_results_server/
+_results/
+_results_test/

 *.egg-info/
 *.pdf
17 changes: 11 additions & 6 deletions README.md
@@ -14,7 +14,7 @@ A pre-print is available at [https://arxiv.org/abs/2209.04266](https://arxiv.org/abs/2209.04266)

 ## Installation

-This code was written for Ubuntu 20.04.5, using Python 3.8.10.
+This code was last tested with Ubuntu 20.04.1, using Python 3.10.3.

 ### Local install

@@ -26,29 +26,34 @@ git clone --recursive git@github.com:utiasASRL/safe_and_smooth
 All requirements can be installed by running
 ```
 conda env create -f environment.yml
+conda activate safeandsmooth
 ```

 To check that the installation was successful, run
 ```
-conda activate safeandsmooth
 pytest .
 ```
+You can also check that toy example results can be generated by running
+```
+_scripts/generate_test_results.sh
+```
+and then inspecting the output created in `_plots_test`.

 Please report any installation issues.

 ## Generate results

 There are three types of results reported in the paper:

-- Noise study: Run `simulate_noise.py` to generate the simulation study (Figures 4 and 7 (appendix)).
-- Timing study: Run `simulate_time.py` to generate the runtime comparison (Figure 5)
-- Real data: Run `evaluate_data.py` to evaluate the real dataset (Figures 1, 5 and 6).
+- Noise study: Run `_scripts/simulate_noise.py` to generate the simulation study (Figures 4 and 7 (appendix)).
+- Timing study: Run `_scripts/simulate_time.py` to generate the runtime comparison (Figure 5).
+- Real data: Run `_scripts/evaluate_real.py` to evaluate the real dataset (Figures 1, 5 and 6).

 You can generate all results by running
 ```
 _scripts/generate_all_results.sh
 ```
-After generating, all data can be evaluated, and new figures created, using the jupyter notebook `SafeAndSmooth.ipynb`. For more evaluations of the real dataset, refer to the notebook `DatasetEvaluation.ipynb`.
+After generating, all data can be evaluated and new figures created by running `python _scripts/plot_results.py`. For more evaluations of the real dataset, refer to the notebook `_notebooks/DatasetEvaluation.ipynb` (you may need to run `pip install -r requirements.txt` for additional plotting libraries).

 ## Code references

5 changes: 1 addition & 4 deletions _scripts/evaluate_real.py
@@ -168,11 +168,8 @@ def evaluate_datasets(
     from utils.helper_params import logs_to_file, parse_arguments

     save_results = True
-    out_dir = "_results_final/"
-
     args = parse_arguments("Generate dataset results.")
-    if args.test:
-        out_dir = "_results/"
+    out_dir = args.resultdir

     logfile = os.path.join(out_dir, "evaluate_real.log")
     with logs_to_file(logfile):
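The `with logs_to_file(logfile):` call above suggests a small context manager that redirects script output into the log file. A minimal sketch of what it might look like (the actual helper in `utils/helper_params.py` is not shown in this diff, so every detail here is an assumption):

```python
# Hypothetical sketch only: the real logs_to_file in utils/helper_params.py
# is not part of this diff and may capture stderr or tee to the console too.
import contextlib


@contextlib.contextmanager
def logs_to_file(logfile):
    """Redirect stdout to logfile for the duration of the with-block."""
    with open(logfile, "w") as f, contextlib.redirect_stdout(f):
        yield
```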
16 changes: 4 additions & 12 deletions _scripts/generate_all_results.sh
@@ -1,13 +1,5 @@
 #!/bin/bash
-
-# run with fewer instances to make sure everything is working properly.
-#python3 _scripts/simulate_time.py --test --resultdir="_results"
-#python3 _scripts/simulate_noise.py --test --resultdir="_results"
-#python3 _scripts/evaluate_real.py --test --resultdir="_results"
-python3 _scripts/plot_results.py --resultdir="_results_server" --plotdir="_plots_test/"
-
-# generate final results
-#python3 _scripts/simulate_time.py --resultdir="_results_final/"
-#python3 _scripts/simulate_noise.py --resultdir="_results_final/"
-#python3 _scripts/evaluate_real.py --resultdir="_results_final/"
-#python3 _scripts/plot_results.py --resultdir="_results_final/" --plotdir="_plots/"
+#python3 _scripts/simulate_time.py --resultdir="_results"
+#python3 _scripts/simulate_noise.py --resultdir="_results"
+python3 _scripts/evaluate_real.py --resultdir="_results"
+python3 _scripts/plot_results.py --resultdir="_results" --plotdir="_plots"
6 changes: 6 additions & 0 deletions _scripts/generate_test_results.sh
@@ -0,0 +1,6 @@
+#!/bin/bash
+# run with fewer instances to make sure everything is working properly.
+#python3 _scripts/simulate_time.py --test --resultdir="_results_test"
+#python3 _scripts/simulate_noise.py --test --resultdir="_results_test"
+python3 _scripts/evaluate_real.py --test --resultdir="_results_test"
+python3 _scripts/plot_results.py --resultdir="_results_test" --plotdir="_plots_test"
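All of the entry-point scripts accept the same `--test`, `--resultdir`, and `--plotdir` flags, so the pipeline can be pointed at any output directory. A hypothetical invocation (the directory names below are placeholders, not repository defaults):

```bash
python3 _scripts/evaluate_real.py --test --resultdir="_results_mine"
python3 _scripts/plot_results.py --resultdir="_results_mine" --plotdir="_plots_mine"
```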
2 changes: 1 addition & 1 deletion _scripts/plot_results.py
@@ -679,12 +679,12 @@ def plot_real_top_calib(outdir, plotdir):
     from utils.helper_params import parse_arguments

     args = parse_arguments("Plot all results")
-    plot_real_top_calib(args.resultdir, args.plotdir)

     plot_noise(args.resultdir, args.plotdir)
     plot_timing(args.resultdir, args.plotdir)

     plot_real_top(args.resultdir, args.plotdir)
+    plot_real_top_calib(args.resultdir, args.plotdir)
     plot_real_top_estimate(args.resultdir, args.plotdir)

     plot_problem_setup(args.plotdir)
4 changes: 2 additions & 2 deletions _scripts/simulate_noise.py
@@ -177,8 +177,8 @@ def generate_results(params_dir, out_dir, save_results=SAVE_RESULTS, test=False)
         if save_results:
             results.to_pickle(fname)
             print("saved intermediate as", fname)
-        if test:
-            plt.show()
+        # if test:
+        #     plt.show()
     return results


10 changes: 0 additions & 10 deletions environment.yml
@@ -7,16 +7,6 @@ dependencies:
   - python=3.10
   - pip=22.3

-  - jupyter
-  - seaborn>=0.12.2
-  - numpy>=1.23.5
-  - scipy==1.10.0
-  - pandas>=1.4.3
-  - plotly>=5.18.0
-  - progressbar2==4.0
-  - pyyaml>=6.0
-  - pytest>=7.1.2
-
   - pip:
     - -e poly_matrix
     - -e .
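With these deletions, `environment.yml` presumably reduces to little more than the Python toolchain plus the editable installs. A sketch of the resulting file, where the `name` is inferred from `conda activate safeandsmooth` in the README and the `channels` entry is an outright assumption:

```yaml
name: safeandsmooth
channels:
  - defaults  # assumption: this section is not shown in the diff
dependencies:
  - python=3.10
  - pip=22.3
  - pip:
    - -e poly_matrix
    - -e .
```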
9 changes: 4 additions & 5 deletions requirements.txt
@@ -1,10 +1,9 @@
-jupyter
 seaborn>=0.12.2
 numpy>=1.23.5
-scipy>=1.9.0
+scipy==1.10.0
+matplotlib>=3.8.1
 pandas>=1.4.3
 progressbar2==4.0
 pyyaml>=6.0
-pytest>=7.1.2
-scikit-umfpack>=0.3.3
-scikit-sparse>=0.4.8
+jupyter
+plotly>=5.18.0
10 changes: 9 additions & 1 deletion setup.cfg
@@ -14,11 +14,19 @@ license = { file="LICENSE" }

 [options]
 packages = find:
+python_requires = >=3.10
+install_requires =
+    numpy
+    scipy
+    matplotlib
+    pandas
+    progressbar2
+    pyyaml

 [options.packages.find] # do not mistake tests/ for a package directory
 exclude=tests*

 [flake8]
-ignore = W292, W391, F541, F841, W503
+ignore = W292, W391, F541, F841, W503, E741
 exclude = _notebooks/*, *.ipynb_checkpoints*
 max-line-length = 99
2 changes: 1 addition & 1 deletion utils/helper_params.py
@@ -80,7 +80,7 @@ def parse_arguments(description=""):
"-r",
"--resultdir",
help="directory of results",
default="_results",
default="_results_test",
)
parser.add_argument(
"-p",
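Pieced together from the flags the scripts pass and the fragment above, `parse_arguments` is presumably a thin `argparse` wrapper along these lines; option letters and help strings beyond those visible in the diff are guesses:

```python
# Hypothetical reconstruction; only --test, --resultdir ("-r"), --plotdir
# ("-p"), and the new "_results_test" default are confirmed by the diffs.
import argparse


def parse_arguments(description=""):
    parser = argparse.ArgumentParser(description=description)
    parser.add_argument(
        "--test",
        action="store_true",
        help="run with fewer instances to check that everything works",
    )
    parser.add_argument(
        "-r",
        "--resultdir",
        help="directory of results",
        default="_results_test",
    )
    parser.add_argument(
        "-p",
        "--plotdir",
        help="directory for generated plots",  # assumed wording
        default="_plots_test",  # assumed default
    )
    return parser.parse_args()
```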
