ENH: Add benchmarking files
Add benchmarking files so that `nifreeze` can be benchmarked using
`asv`:
- Add the files that allow benchmarking different `nifreeze`
  capabilities.
- Add a new `benchmark` optional dependencies section to
  `pyproject.toml`.
- Add the `asv` configuration file.
- Add a `README.rst` file explaining how to run the benchmarks.
- Add a GitHub Actions workflow file to run the benchmarks for every PR.
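
A minimal local sketch of how these pieces fit together (the commands
mirror the CI steps below; the `benchmark` extra and `asv.conf.json` are
the ones added by this commit):

    pip install .[benchmark]
    asv machine --yes --config asv.conf.json
    asv run --config asv.conf.json --show-stderr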
jhlegarreta committed Jan 19, 2025
1 parent a78af6c commit 1ea5bc5
Showing 6 changed files with 302 additions and 0 deletions.
59 changes: 59 additions & 0 deletions .github/workflows/benchmark.yml
@@ -0,0 +1,59 @@
name: Benchmark

on:
  push:
    branches:
      - main
      - maint/*
  pull_request:
    branches:
      - main
      - maint/*
  # Allow job to be triggered manually from GitHub interface
  workflow_dispatch:

defaults:
  run:
    shell: bash

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

permissions:
  contents: read

jobs:
  benchmark:
    name: Linux
    runs-on: ubuntu-latest
    defaults:
      run:
        shell: bash
    strategy:
      fail-fast: false
      matrix:
        python-version: [ '3.11' ]

    steps:
      - name: Set up system
        uses: actions/checkout@v4
        with:
          fetch-depth: 0
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install .[antsopt,benchmark]
      - name: Set threading parameters for reliable benchmarking
        run: |
          # Variables exported with `export` do not persist across steps;
          # write them to $GITHUB_ENV so the benchmark step runs
          # single-threaded.
          echo "OPENBLAS_NUM_THREADS=1" >> "$GITHUB_ENV"
          echo "MKL_NUM_THREADS=1" >> "$GITHUB_ENV"
          echo "OMP_NUM_THREADS=1" >> "$GITHUB_ENV"
      - name: Run benchmarks
        run: |
          asv machine --yes --config asv.conf.json
          asv run --config asv.conf.json --show-stderr
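
The `benchmark` extra installed above also provides `pyperf`, which can
reduce system jitter before a local (non-CI) run; a sketch, assuming root
privileges (`system tune` is a standard `pyperf` subcommand):

    sudo python -m pyperf system tune
    asv run --config asv.conf.json --show-stderr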
95 changes: 95 additions & 0 deletions asv.conf.json
@@ -0,0 +1,95 @@
{
    // The version of the config file format. Do not change, unless
    // you know what you are doing.
    "version": 1,

    // The name of the project being benchmarked
    "project": "nifreeze",

    // The project's homepage
    "project_url": "https://www.nipreps.org/nifreeze/",

    // The URL or local path of the source code repository for the
    // project being benchmarked. This file sits at the repository root,
    // so the repository is the current directory.
    "repo": ".",

    // List of branches to benchmark. If not provided, defaults to "master"
    // (for git) or "tip" (for mercurial).
    "branches": ["HEAD"],

    "build_command": [
        "python -m build --wheel -o {build_cache_dir} {build_dir}"
    ],

    // The DVCS being used. If not set, it will be automatically
    // determined from "repo" by looking at the protocol in the URL
    // (if remote), or by looking for special directories, such as
    // ".git" (if local).
    "dvcs": "git",

    // The tool to use to create environments. May be "conda",
    // "virtualenv" or other value depending on the plugins in use.
    // If missing or the empty string, the tool will be automatically
    // determined by looking for tools on the PATH environment
    // variable.
    "environment_type": "virtualenv",

    // the base URL to show a commit for the project.
    "show_commit_url": "https://github.com/nipreps/nifreeze/commit/",

    // The Pythons you'd like to test against. If not provided, defaults
    // to the current version of Python used to run `asv`.
    // "pythons": ["3.12"],

    // The matrix of dependencies to test. Each key is the name of a
    // package (in PyPI) and the values are version numbers. An empty
    // list indicates to just test against the default (latest)
    // version.
    "matrix": {
        "dipy": [],
        "nipype": [],
        "nest-asyncio": [],
        "nitransforms": [],
        "numpy": [],
        "scikit_learn": [],
        "scipy": []
    },

    // The directory (relative to the current directory) that benchmarks are
    // stored in. If not provided, defaults to "benchmarks"
    "benchmark_dir": "benchmarks",

    // The directory (relative to the current directory) to cache the Python
    // environments in. If not provided, defaults to "env"
    "env_dir": "env",

    // The directory (relative to the current directory) that raw benchmark
    // results are stored in. If not provided, defaults to "results".
    "results_dir": "results",

    // The directory (relative to the current directory) that the html tree
    // should be written to. If not provided, defaults to "html".
    "html_dir": "html",

    // The number of characters to retain in the commit hashes.
    // "hash_length": 8,

    // `asv` will cache wheels of the recent builds in each
    // environment, making them faster to install next time. This is
    // the number of builds to keep, per environment.
    "build_cache_size": 8,

    // The commits after which the regression search in `asv publish`
    // should start looking for regressions. Dictionary whose keys are
    // regexps matching to benchmark names, and values corresponding to
    // the commit (exclusive) after which to start looking for
    // regressions. The default is to start from the first commit
    // with results. If the commit is `null`, regression detection is
    // skipped for the matching benchmark.
    //
    // "regressions_first_commits": {
    //     "some_benchmark": "352cdf", // Consider regressions only after this commit
    //     "another_benchmark": null,  // Skip regression detection altogether
    // }
}
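
Because the workflow above also triggers on pull requests, a natural
local counterpart is `asv continuous`, which runs the benchmarks on two
revisions with this configuration and reports regressions between them;
a sketch (the branch names are placeholders):

    asv continuous --config asv.conf.json main HEAD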
40 changes: 40 additions & 0 deletions benchmarks/README.rst
@@ -0,0 +1,40 @@
.. -*- rst -*-
===================
NiFreeze benchmarks
===================

Benchmarking NiFreeze with Airspeed Velocity.

Usage
-----

Airspeed Velocity manages building and Python environments by itself,
unless told otherwise. To run the benchmarks, you do not need to install
a development version of NiFreeze into your current Python environment.

To run all benchmarks for the latest commit, navigate to the NiFreeze
repository root (where ``asv.conf.json`` lives) and execute::

    asv run

For testing benchmarks locally, it may be better to run these without
replications::

    export REGEXP="bench.*GPR"
    asv run --dry-run --show-stderr --python=same --quick -b $REGEXP

All of the commands above display the results in plain text in the
console, and the results are not saved for comparison with future
commits. For greater control, a graphical view, and results saved for
future comparison, run the ASV commands that record results and
generate HTML::

    asv run --skip-existing-commits --steps 10 ALL
    asv publish
    asv preview

More on how to use ``asv`` can be found in the `ASV documentation`_.
Command-line help is available as usual via ``asv --help`` and
``asv run --help``.

.. _ASV documentation: https://asv.readthedocs.io/
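
Once results have been recorded with ``asv run``, ``asv compare`` reports
the differences between any two revisions with saved results; a minimal
sketch (the revision names are placeholders)::

    asv compare main HEAD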
0 changes: 0 additions & 0 deletions benchmarks/benchmarks/__init__.py
Empty file.
102 changes: 102 additions & 0 deletions benchmarks/benchmarks/bench_model.py
@@ -0,0 +1,102 @@
# emacs: -*- mode: python; py-indent-offset: 4; indent-tabs-mode: nil -*-
# vi: set ft=python sts=4 ts=4 sw=4 et:
#
# Copyright The NiPreps Developers <[email protected]>
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
# We support and encourage derived works from this project, please read
# about our expectations at
#
# https://www.nipreps.org/community/licensing/
#
"""Benchmarking for nifreeze's models."""

from abc import ABC

import dipy.data as dpd
import nibabel as nb
import numpy as np
from dipy.core.gradients import get_bval_indices
from dipy.io import read_bvals_bvecs
from dipy.segment.mask import median_otsu
from scipy.ndimage import binary_dilation
from skimage.morphology import ball

from nifreeze.model.gpr import DiffusionGPR, SphericalKriging


class DiffusionGPRBenchmark(ABC):
    def __init__(self):
        self._estimator = None
        self._X_train = None
        self._y_train = None
        self._X_test = None
        self._y_test = None

    def setup(self, *args, **kwargs):
        beta_a = 1.38
        beta_l = 1 / 2.1
        alpha = 0.1
        disp = True
        optimizer = None
        self.make_estimator((beta_a, beta_l, alpha, disp, optimizer))
        self.make_data()
        # Fit once here so that ``time_predict`` measures prediction on a
        # fitted model; ``time_fit`` below still refits from scratch.
        self._estimator = self._estimator.fit(self._X_train, self._y_train.T)

    def make_estimator(self, params):
        beta_a, beta_l, alpha, disp, optimizer = params
        kernel = SphericalKriging(beta_a=beta_a, beta_l=beta_l)
        self._estimator = DiffusionGPR(
            kernel=kernel,
            alpha=alpha,
            disp=disp,
            optimizer=optimizer,
        )

    def make_data(self):
        name = "sherbrooke_3shell"

        dwi_fname, bval_fname, bvec_fname = dpd.get_fnames(name=name)
        dwi_data = nb.load(dwi_fname).get_fdata()
        bvals, bvecs = read_bvals_bvecs(bval_fname, bvec_fname)

        _, brain_mask = median_otsu(dwi_data, vol_idx=[0])
        brain_mask = binary_dilation(brain_mask, ball(8))

        bval = 1000
        indices = get_bval_indices(bvals, bval, tol=20)

        bvecs_shell = bvecs[indices]
        shell_data = dwi_data[..., indices]
        dwi_vol_idx = len(indices) // 2

        # Prepare a train/test mask (True only for the left-out direction)
        train_test_mask = np.zeros(bvecs_shell.shape[0], dtype=bool)
        train_test_mask[dwi_vol_idx] = True

        # Generate train/test bvecs
        self._X_train = bvecs_shell[~train_test_mask, :]
        self._X_test = bvecs_shell[train_test_mask, :]

        # Select voxels within brain mask
        y = shell_data[brain_mask]

        # Generate train/test data
        self._y_train = y[:, ~train_test_mask]
        self._y_test = y[:, train_test_mask]

    def time_fit(self, *args):
        """Time fitting the GPR estimator on the training directions."""
        self._estimator = self._estimator.fit(self._X_train, self._y_train.T)

    def time_predict(self):
        """Time predicting the signal for the left-out direction."""
        self._estimator.predict(self._X_test)
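
A hypothetical extension of this module, sketching asv's ``params``
convention for parameterized benchmarks. It assumes ``SphericalKriging``
follows scikit-learn's kernel API (callable on an array of gradient
directions), consistent with its use as ``kernel=`` above; the class name
and sizes below are illustrative only:

    import numpy as np

    from nifreeze.model.gpr import SphericalKriging


    class SphericalKrigingKernelBenchmark:
        """Hypothetical sketch: time kernel-matrix evaluation."""

        params = [30, 60, 120]  # number of diffusion-encoding directions
        param_names = ["n_directions"]

        def setup(self, n_directions):
            rng = np.random.default_rng(1234)
            # Random unit vectors standing in for b-vectors
            bvecs = rng.standard_normal((n_directions, 3))
            self._bvecs = bvecs / np.linalg.norm(bvecs, axis=1, keepdims=True)
            self._kernel = SphericalKriging(beta_a=1.38, beta_l=1 / 2.1)

        def time_kernel(self, n_directions):
            # asv measures the wall time of ``time_*`` methods
            self._kernel(self._bvecs)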
6 changes: 6 additions & 0 deletions pyproject.toml
@@ -81,6 +81,12 @@ antsopt = [
"smac",
]

benchmark = [
    "asv",
    "pyperf",
    "virtualenv",
]

# Aliases
docs = ["nifreeze[doc]"]
tests = ["nifreeze[test]"]
