Based on our experience during last week's hackathon, `Kokkos::AUTO` for the team_size in hierarchical kernels may not yield optimal performance (we saw variations of up to 20% depending on block size and problem size).
So I'm wondering if we could add a runtime optimization, for example, a `par_for_tune` abstraction that would run the same kernel with various team and vector sizes, measure the runtimes, and store the best configuration for future iterations.
Of course, this would only work for kernels that do not modify their "input" data.
We should also check to what degree this could be built on top of, or how it compares to, the existing Kokkos auto-tuning support; see, e.g., https://indico.math.cnrs.fr/event/12037/attachments/5040/8156/KokkosTutorial_07_Tools.pdf and https://www.olcf.ornl.gov/wp-content/uploads/2021/06/ToolsTutorialOLCF.pptx.pdf