blonded04/composable-parallel-scheduler-microbench


TLDR: One command to run it all

Right after setting up the conda environment, run:

bash another-run.sh N

where N is the number of CPUs on one NUMA node. This produces a microresults folder with raw .json files and plots in PNG and SVG format.

You can use another-run.sh instead of first-time-run.sh if you've already run first-time-run.sh: it doesn't spend time initializing the conda environment.

Hybrid work distribution for parallel programs

Summary

In modern computing systems, the growing number of cores per processor makes efficient resource utilization essential. To achieve this, the work generated by a parallel algorithm must be distributed among the cores as evenly as possible.

Typically, there are two paradigms for this: static and dynamic. The standard static way, used in OpenMP, is a fork barrier that splits the whole work into a fixed number of presumably "equal" parts, e.g., equally-sized ranges for a parallel_for. The classic dynamic approaches are: 1) work-stealing task schedulers like those in OneTBB or BOLT, or 2) work-sharing. As one can guess, the static approach has very low overhead for the work distribution itself, but it distributes work poorly for general-purpose workloads, e.g., nested complex parallelism or parallel_for loops with uneven iterations. Here, we present a task scheduler that unites both dynamic work-distribution paradigms and achieves low overhead with reasonably good task distribution.
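The static splitting mentioned above can be sketched as follows. This is a hypothetical helper for illustration only (SplitStatic is not part of this repository); it mimics OpenMP's default static schedule by partitioning an iteration range [0, n) into k contiguous, nearly equal chunks up front:

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Hypothetical illustration of a static ("fork-barrier") distribution:
// split [0, n) into k contiguous ranges whose sizes differ by at most 1.
// Each of the k workers would then process exactly one range, with no
// further load balancing at runtime.
std::vector<std::pair<std::size_t, std::size_t>> SplitStatic(std::size_t n,
                                                             std::size_t k) {
  std::vector<std::pair<std::size_t, std::size_t>> ranges;
  std::size_t base = n / k;   // minimum chunk size
  std::size_t rem = n % k;    // first `rem` chunks get one extra iteration
  std::size_t begin = 0;
  for (std::size_t i = 0; i < k; ++i) {
    std::size_t len = base + (i < rem ? 1 : 0);
    ranges.emplace_back(begin, begin + len);
    begin += len;
  }
  return ranges;
}
```

For example, SplitStatic(10, 3) yields the ranges [0,4), [4,7), [7,10). If the iterations have uneven cost, nothing rebalances them afterwards, which is exactly the weakness the dynamic approaches address.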

Install dependencies

Using conda:

conda env create -f environment.yml
conda activate benchmarks

Build & Run

numactl --cpunodebind 0 make bench # builds, runs benchmarks, and saves results to ./raw_results

Depending on the runtime, the appropriate way to determine the maximum number of threads will be used. You can limit the number of threads by setting the BENCH_NUM_THREADS environment variable.

The LB4OMP runtime is also supported and can be run with the following command:

numactl --cpunodebind 0 make bench_lb4omp

Changing number of threads

To change the number of threads you can either:

  • Set the BENCH_NUM_THREADS environment variable, or
  • Modify the constant in the GetNumThreads function body in ./include/num_threads.h.
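The environment-variable override described above could look roughly like this. This is only a sketch of the behavior, not the actual contents of ./include/num_threads.h (the real GetNumThreads may differ; GetNumThreadsSketch is a hypothetical name):

```cpp
#include <cstdlib>
#include <thread>

// Sketch: BENCH_NUM_THREADS, when set to a positive integer, takes
// precedence; otherwise fall back to the number of logical CPUs the
// OS reports. Not the actual implementation from this repository.
inline int GetNumThreadsSketch() {
  if (const char* env = std::getenv("BENCH_NUM_THREADS")) {
    int n = std::atoi(env);
    if (n > 0) return n;  // explicit user override
  }
  // Fallback: one thread per logical CPU (may be 0 if unknown).
  return static_cast<int>(std::thread::hardware_concurrency());
}
```

Combined with `numactl --cpunodebind 0`, this keeps the thread count within a single NUMA node as the benchmarks expect.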

Plot results

Modify the filtered_modes list in the ./benchplot.py script to control which modes are plotted:

conda activate benchmarks
python3 benchplot.py # plots benchmark results from ./raw_results and saves images to ./bench_results

About

Microbenchmarks for PPoPP 2025 paper on work-sharing scheduler
