Add OpenFOAM HPC Motorbike performance test #27
Conversation
The size of the test is not a big issue as long as it is correctly tagged (e.g. "big"). Tests in the range of an hour are easier to handle anyway, but some compute hogs are also necessary.
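For reference, a hedged sketch of how such a tag can be used to select tests from the command line (assuming the suite is driven by ReFrame, as the performance reference values mentioned below suggest; config and test paths are illustrative):

```bash
# Run only the tests carrying the 'big' tag; routine runs simply don't
# pass this filter, so the multi-hour case stays opt-in.
reframe -C config/settings.py -c tests/ -t big -r
```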
'astaff', 'badmin', 'gadminforever', 'l_sysadmin' have a project folder in …

Fixed in c351137. Also, 12120fc ensures that tests with the …

Fixed in 9802675.
Tried to run with the exclusive flag on Hortense with foss on 1 node, but no performance benefit was obtained.
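(For context: on a Slurm system such as Hortense, the exclusive flag referred to here is the generic Slurm option below; whether it helps depends on whether the job already fills whole nodes.)

```bash
#SBATCH --exclusive   # reserve the allocated nodes entirely for this job
```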
This pull request adds an OpenFOAM benchmark test. The test case is taken from the OpenFOAM HPC Technical Committee (https://develop.openfoam.com/committees/hpc/-/wikis/home), assuming they know how to construct decent test cases.

Compared to the other tests included in this repository so far, it requires considerably more resources: the single-node case takes a few hours to complete. If you think this is outside the scope of vsc-test-suite, I could try to reduce the resource usage. The problem is that the meshing (an initialization step before the actual simulation runs) takes very long for the Motorbike example considered here. One solution would be to store the (large) mesh files somewhere; another would be to consider an example different from the Motorbike. A rough sketch of the first option follows.
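To make that concrete, here is a minimal, untested sketch of caching the mesh between runs. `Allmesh` and `Allsolve` stand for a hypothetical split of the committee's `Allrun` script, the cache path is illustrative, and a reconstructed (non-decomposed) mesh is assumed; the decomposed HPC case would need to cache the `processor*` directories instead:

```bash
#!/bin/bash
MESH_CACHE="${VSC_SCRATCH}/motorbike-mesh-cache"

if [ -d "${MESH_CACHE}/constant/polyMesh" ]; then
    # Reuse the previously generated mesh and skip the slow meshing phase
    cp -r "${MESH_CACHE}/constant/polyMesh" constant/
else
    ./Allmesh                                  # blockMesh + snappyHexMesh
    mkdir -p "${MESH_CACHE}/constant"
    cp -r constant/polyMesh "${MESH_CACHE}/constant/"
fi

./Allsolve                                     # the solver run worth timing
```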
I ran the tests on genius, hortense, hydra and vaughan and added performance reference values. On hortense (@boegel) there are still two issues:

1. The tests run in `${VSC_SCRATCH}`. On hortense, however, the quota on `${VSC_SCRATCH}` is too restrictive. For now, I hard-coded a project scratch directory to which only I have access. Not sure how this can be dealt with so that it would work for everybody. A possible direction is sketched after this list.
2. The run scripts use helper functions such as `runParallel` that ship with OpenFOAM (see `$WM_PROJECT_DIR/bin/tools/RunFunctions`) and call mpirun under the hood. With an intel toolchain things seem to be OK, but with a foss toolchain processes do not end up on the correct nodes. I think it would be necessary to ditch the `runParallel` function and use `mympirun` directly? Is that the advice you give to OpenFOAM users on the clusters in Ghent? Doing that in the proposed test would require modifying the scripts from the OpenFOAM HPC committee, so another workaround would be nice.
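On the first issue, one possible direction (a sketch only; `VSC_TEST_SCRATCH` is a hypothetical per-site override, not an existing variable) would be to let sites redirect the run directory instead of hard-coding a path:

```bash
# Fall back to the standard ${VSC_SCRATCH} unless a site provides an override
RUN_ROOT="${VSC_TEST_SCRATCH:-${VSC_SCRATCH}}"
mkdir -p "${RUN_ROOT}/openfoam-motorbike"
cd "${RUN_ROOT}/openfoam-motorbike" || exit 1
```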
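On the second issue, a hedged, untested sketch of what replacing `runParallel` with `mympirun` inside the committee's `Allrun` could look like (the snappyHexMesh call and `getApplication` helper follow the usual Motorbike scripts):

```bash
source "$WM_PROJECT_DIR/bin/tools/RunFunctions"   # keep helpers like getApplication

# instead of:  runParallel snappyHexMesh -overwrite
mympirun snappyHexMesh -overwrite -parallel > log.snappyHexMesh 2>&1

# instead of:  runParallel $(getApplication)
mympirun "$(getApplication)" -parallel > "log.$(getApplication)" 2>&1
```

Since mympirun derives its node list from the scheduler environment, this would hand process placement to the VSC launcher, which appears to be exactly what goes wrong with the foss toolchain.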