Improving benchmark experience 2.0 #5

Draft · wants to merge 3 commits into `main`
Changes from 2 commits
3 changes: 3 additions & 0 deletions .gitignore
@@ -149,6 +149,9 @@ dmypy.json
# pytype static type analyzer
.pytype/

# Ruff cache
.ruff_cache/

# Cython debug symbols
cython_debug/

37 changes: 25 additions & 12 deletions 30_perf_analysis.py
@@ -4,7 +4,7 @@
The generated plots are saved as a PNG file with the specified filename.

Usage:
    python 30_perf_analysis.py <output_filename>
    python 30_perf_analysis.py <path_to_folder_with_csv> <traj_dimension> <output_filename>
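
Example (illustrative values; point the folder at your own benchmark CSVs):
    python 30_perf_analysis.py ./outputs/CPU/ 3 perf_3d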
"""

import argparse
@@ -20,14 +20,20 @@
parser = argparse.ArgumentParser(
    description="Generate benchmark plots and save to specified file."
)
parser.add_argument(
    "path_to_folder_with_csv",
    type=str,
    help="Path to the folder containing the CSV files you want to plot.",
)
parser.add_argument(
    "traj_dimension", type=int, help="Trajectory dimension: 2 for 2D or 3 for 3D."
)
parser.add_argument(
    "output_filename", type=str, help="Name of the output file (without extension)."
)
args = parser.parse_args()

# Directory where benchmark result files are stored
BENCHMARK_DIR = "./outputs"
results_files = glob.glob(BENCHMARK_DIR + "/CPU/**/*.csv", recursive=True)
BENCHMARK_DIR = args.path_to_folder_with_csv
results_files = glob.glob(BENCHMARK_DIR + "/**/*.csv", recursive=True)  # leading "/" so the recursive glob works with or without a trailing slash in the path

# Read and concatenate all CSV files into a single DataFrame
df = pd.concat(map(pd.read_csv, results_files))
@@ -64,18 +70,20 @@
custom_palette = {1: "black", 12: "darkblue", 32: "purple"}

# Define x-axis limits for each metric
xlims = {
    k: v
    for k, v in zip(
        metrics.keys(),
        [(0, 35), (0, 80)] if num_metrics == 2 else [(0, 5), (0, 80), (0, 20)],
    )
}
if num_metrics == 2:
    limits = [(0, 35), (0, 80)]
elif args.traj_dimension == 2:
    limits = [(0, 0.3), (0, 7), (0, 1)]
else:
    limits = [(0, 5), (0, 80), (0, 20)]

xlims = {k: v for k, v in zip(metrics.keys(), limits)}

# Generate bar plots for each task and metric
for row, task in zip(axs, tasks):
    ddf = df[df["task"] == task]
    for ax, k in zip(row[:num_metrics], metrics.keys()):
        sns.barplot(
            ddf,
            x=k,
@@ -94,11 +102,13 @@
        max_limit = xlims[k][1]
        for container in ax.containers:
            labels = [
                f"{v:.1f}" if v >= max_limit else "" for v in container.datavalues
                f"{v:.3f}" if v >= max_limit else "" for v in container.datavalues
            ]
            ax.bar_label(
                container, labels=labels, label_type="center", color="white", fontsize=6
            )


# Set axis labels
for ax, xlabel in zip(axs[-1, :], metrics.values()):
@@ -126,9 +136,12 @@
    ax.set_xlim(xlim)
    xticks = ax.get_xticks()
    ax.set_xticks(xticks)
    ax.set_xticklabels([f"{xt:.1f}" for xt in xticks])
    ax.set_xticklabels([f"{int(xt)}" if xt % 1 == 0 else f"{xt:.1f}" for xt in xticks])

# Save the figure to the specified directory with the provided filename
output_file = BENCHMARK_DIR + f"/{args.output_filename}.png"
plt.savefig(output_file)
plt.show()

# Save the DataFrame to a CSV file
df.to_csv(BENCHMARK_DIR + f"/{args.output_filename}.csv", index=False)
59 changes: 43 additions & 16 deletions README.md
@@ -4,7 +4,6 @@ This is a collection of scripts to perform benchmarking of MRI-NUFFT operations.

They rely on the hydra configuration package and hydra-callback for measuring statistics (see `requirements.txt`).


To fully reproduce the benchmarks, 4 steps are necessary:

0. Get a Cartesian reference image file, named `cpx_cartesian.npy`.
@@ -14,24 +13,52 @@ To fully reproduce the benchmarks, 4 steps are necessary:
Or, for a 3D trajectory, `python 00_trajectory3D.py` + the shape of your data.
2. Run the benchmarks. Currently available are:
- The performance benchmark, checking the CPU/GPU usage and memory footprint for the different backends and configurations (`perf` folder).
If you have a configuration for 1 backend, 1 trajectory and 1 coil, you can use `python 10_benchmark_perf.py` for your perf analysis.
If you want to run several benchmarks in a row, you can run `python auto_benchmark_perf.py`.
Backends, trajectories and coils can be managed directly at the start of this script.

You can use `python 10_benchmark_perf.py` for your perf analysis.
You can override some parameters, e.g.:
`python 10_benchmark_perf.py data.n_coils=32 backend.name=cufinufft`
You can also do a multirun, e.g.:
`python 10_benchmark_perf.py -m data.n_coils=1,12,32 backend.name=cufinufft,finufft trajectory=./trajs/radial_256x256_0.5.bin,./trajs/stack2D_of_spiral_256x256_0.5.bin`

To run on Jean Zay, follow [these steps](https://github.com/zaccharieramzi/jz-hydra-submitit-launcher) for the installation.
Then you can run:
`hydra-submitit-launch 10_benchmark_perf.py dev max_time=5.0 data.n_coils=1,12,32 trajectory=./radial_256x256_0.5.bin,./stack2D_of_spiral_256x256_0.5.bin backend.name=tensorflow`

In every case, don't forget to install the necessary dependencies for each backend.
- The quality benchmark, checking how the trajectory/backend pair performs for the reconstruction. All the configuration is modifiable in the `qual` folder.
To launch the quality benchmark, run `python 20_benchmark_quality.py`.
3. Generate some analysis figures using `python 30_perf_analysis.py` + title of the figures
At the start of the script, you need to indicate which folder the performance files are in.
Caution: to get beautiful graphs, you'll probably have to change the plot parameters (bar colors, abscissa max, number of digits after the decimal point, text size on the plots, etc.).
3. Generate some analysis figures using `python 30_perf_analysis.py` + the path to the folder with all the .csv files + the trajectory dimension used (2 or 3) + the title of the figure to save, as shown below.
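
For instance, a minimal sketch (the folder path, dimension and figure title are illustrative; point them at your own benchmark outputs):

```bash
python 30_perf_analysis.py ./outputs/CPU/ 3 result3D_cpu
```

This globs every `.csv` under the given folder, plots the metrics with the 3D axis limits, and writes `result3D_cpu.png` (plus a merged `result3D_cpu.csv`) into that same folder.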

## Some results

### Benchmark backend performance on 2D images and trajs.

On CUDA 11:
![result2D_old](results/2D/result2D_cuda11.png)

On CUDA 12, with the new version of (cu)finufft 2.3 installed with pip:
![result2D_new](results/2D/resuld2D_release_with_pip.png)

On CUDA 12, with the new version of (cu)finufft 2.3 installed with `pip install --no-binary finufft finufft`:
![result2D_new0.1](results/2D/result2D_release_with_no_binary.png)



### Benchmark for GPU backend performance on 3D images and trajs.

On CUDA 11:
![result3D](results/3D/result3D_very_old_gpu.png)

On CUDA 12, with the new version of (cu)finufft 2.3 installed with pip:
![result3D](results/3D/result3D_gpu_release_with_pip.png)

On CUDA 12, with the new version of (cu)finufft 2.3 installed with `pip install --no-binary finufft finufft`:
![result3D](results/3D/result3D_gpu_release_no_binary.png)



Here are some results:
Benchmark backend performance on 2D images and trajs.
![result2D](results/result2D.png)

Benchmark for GPU backend performance on 3D images and trajs.
![result3D](results/result3D_gpu.png)

Benchmark for CPU backend performance on 3D images and trajs.
![result3D](results/result3D_cpu.png)

### Benchmark for CPU backend performance on 3D images and trajs.

On CUDA 11:
![result3D](results/3D/result3D_very_old_cpu.png)

On CUDA 12, with the new version of (cu)finufft 2.3 installed with pip:
![result3D](results/3D/result3D_cpu_release_with_pip.png)
81 changes: 0 additions & 81 deletions auto_benchmark_perf.py

This file was deleted.

8 changes: 5 additions & 3 deletions perf/benchmark_config.yaml
@@ -9,13 +9,12 @@ data:
  smaps: false
  dtype: complex64

trajectory: "./trajs/floret_176x256x256_0.5.bin"
trajectory: "./trajs/radial_256x256_0.5.bin"
task:
  - forward
  - adjoint
  - grad


backend:
  name: finufft
  eps: 1e-3
@@ -29,4 +28,7 @@
  job:
    chdir: true
  run:
    dir: outputs/${now:%Y-%m-%d}/${now:%H-%M-%S}/
    dir: outputs/running/${backend.name}_${now:%H-%M-%S}/
  sweep:
    dir: outputs/running/${backend.name}_${now:%H-%M-%S}/
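
With this layout, each run writes its artifacts into a folder named after its backend and launch time, so separate benchmark runs don't clobber each other. A minimal sketch of the effect (the override and timestamp are illustrative):

```bash
# A single run overriding the backend name:
python 10_benchmark_perf.py backend.name=finufft
# With the run dir above, its outputs land under:
#   outputs/running/finufft_<HH-MM-SS>/
```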
