You can run RC on multiple GPUs on different nodes. You just need to provide a mapping from MPI rank to CUDA device ID on each node. You can do this with the **-devices** argument.
#100
@daniel-jasinski
I have 10 HPC nodes, each with 2 GPUs, so I have a total of 10 × 2 = 20 GPUs.
How exactly should I write the simpleFoam command to run in parallel across those GPUs?
I could not work out the exact command from issue #24.
Please help with this. Thank you.
Example: You want to run simpleFoam on four GPUs spread across two nodes. The first node will host MPI ranks 0 and 2, the second node will host ranks 1 and 3. Then your command line argument should look like this:
```
mpirun -np 4 -hosts ... simpleFoam -parallel -devices "(0 0 1 1)"
```
The list after `-devices` is indexed by MPI rank: it tells RC to use the device with ID 0 for ranks 0 and 1, and the device with ID 1 for ranks 2 and 3.
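Since the list is indexed by MPI rank, the same pattern extends to the 20-GPU case. As a hedged sketch (not from the original thread): assuming *block* rank placement, where consecutive ranks land on the same node (ranks 0-1 on node 1, ranks 2-3 on node 2, and so on), each rank should use local device `rank % 2`. Check how your MPI launcher and hostfile actually place ranks before using this; a cyclic placement like the four-GPU example above would need a different list.

```shell
# Sketch: build the -devices list for 10 nodes x 2 GPUs (20 ranks),
# assuming block rank placement (two consecutive ranks per node).
DEVICES="("
for rank in $(seq 0 19); do
  # With 2 GPUs per node, rank r maps to local CUDA device r % 2.
  DEVICES="$DEVICES $((rank % 2))"
done
DEVICES="$DEVICES )"
echo "$DEVICES"
# Then pass it to the solver (hosts elided as in the example above):
# mpirun -np 20 -hosts ... simpleFoam -parallel -devices "$DEVICES"
```

The resulting list alternates 0 and 1 for all 20 ranks, so each node's two ranks occupy its two GPUs.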
Originally posted by @daniel-jasinski in #24 (comment)