
You can run RC on multiple GPUs on different nodes. You just need to provide mapping from MPI rank to CUDA device ID on a particular node. You can do it using **-devices** argument. #100

Open
Dcn303 opened this issue Dec 6, 2022 · 1 comment



Dcn303 commented Dec 6, 2022

    You can run RC on multiple GPUs on different nodes. You just need to provide mapping from MPI rank to CUDA device ID on a particular node. You can do it using **-devices** argument.

Example: You want to run simpleFoam on four GPUs spread across two nodes. The first node will host MPI ranks 0 and 2, the second node will host ranks 1 and 3. Then your command line argument should look like this:

mpirun -np 4 -hosts ... simpleFoam -parallel -devices "(0 0 1 1)"

The list after -devices tells RC to use the device with ID 0 for ranks 0 and 1, and the device with ID 1 for ranks 2 and 3. Since node 1 hosts ranks 0 and 2 and node 2 hosts ranks 1 and 3, each rank gets its own GPU on its own node.
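As an illustration (not part of the original thread), a small Python helper can generate this list, assuming the MPI launcher assigns ranks round-robin across nodes as in the example above (rank r lands on node r % n_nodes, so rank r is the (r // n_nodes)-th rank on its node):

```python
def devices_arg(n_nodes, gpus_per_node):
    """Build the string for RC's -devices argument, assuming one MPI
    rank per GPU and round-robin rank placement across nodes
    (rank r runs on node r % n_nodes, as in the 2-node example)."""
    n_ranks = n_nodes * gpus_per_node
    # rank r is the (r // n_nodes)-th rank on its node, so it should
    # use the CUDA device with that local ID
    ids = [str(r // n_nodes) for r in range(n_ranks)]
    return "(" + " ".join(ids) + ")"

print(devices_arg(2, 2))   # reproduces the example above: (0 0 1 1)
print(devices_arg(10, 2))  # 10 nodes x 2 GPUs = 20 ranks
```

Under the same round-robin assumption, `devices_arg(10, 2)` yields a 20-entry list with ten 0s followed by ten 1s. If your MPI launcher places ranks differently (for example, filling each node before moving to the next), the list must be reordered to match the actual rank-to-node placement.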

Originally posted by @daniel-jasinski in #24 (comment)


Dcn303 commented Dec 6, 2022

Sir @daniel-jasinski,
I have 10 HPC nodes, each with 2 GPUs, so I have a total of 10x2 = 20 GPUs.
How exactly should I write the simpleFoam command to run in parallel across those GPUs?
I could not work out the exact command from issue #24.
Please help in this regard.
Thank you.
