<!DOCTYPE html>
<html class="writer-html5" lang="en" >
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Using MPI and CUDA on Satori — MIT Satori User Documentation documentation</title>
<link rel="stylesheet" href="_static/css/theme.css" type="text/css" />
<link rel="stylesheet" href="_static/pygments.css" type="text/css" />
<link rel="canonical" href="https://researchcomputing.mit.edu/satori-cuda-aware-mpi.html"/>
<!--[if lt IE 9]>
<script src="_static/js/html5shiv.min.js"></script>
<![endif]-->
<script type="text/javascript" id="documentation_options" data-url_root="./" src="_static/documentation_options.js"></script>
<script src="_static/jquery.js"></script>
<script src="_static/underscore.js"></script>
<script src="_static/doctools.js"></script>
<script type="text/javascript" src="_static/js/theme.js"></script>
<link rel="index" title="Index" href="genindex.html" />
<link rel="search" title="Search" href="search.html" />
<link rel="next" title="Example machine learning LSF jobs" href="lsf-templates/satori-lsf-ml-examples.html" />
<link rel="prev" title="R on Satori" href="satori-R.html" />
</head>
<body class="wy-body-for-nav">
<div class="wy-grid-for-nav">
<nav data-toggle="wy-nav-shift" class="wy-nav-side">
<div class="wy-side-scroll">
<div class="wy-side-nav-search" >
<a href="index.html" class="icon icon-home"> MIT Satori User Documentation
</a>
<div role="search">
<form id="rtd-search-form" class="wy-form" action="search.html" method="get">
<input type="text" name="q" placeholder="Search docs" />
<input type="hidden" name="check_keywords" value="yes" />
<input type="hidden" name="area" value="default" />
</form>
</div>
</div>
<div class="wy-menu wy-menu-vertical" data-spy="affix" role="navigation" aria-label="main navigation">
<ul class="current">
<li class="toctree-l1"><a class="reference internal" href="satori-basics.html">Satori Basics</a><ul>
<li class="toctree-l2"><a class="reference internal" href="satori-basics.html#what-is-satori">What is Satori?</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-basics.html#how-can-i-get-an-account">How can I get an account?</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-basics.html#getting-help">Getting help?</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="satori-ssh.html">Satori Login</a><ul>
<li class="toctree-l2"><a class="reference internal" href="satori-ssh.html#web-portal-login">Web Portal Login</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-ssh.html#ssh-login">SSH Login</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="satori-getting-started.html">Starting up on Satori</a><ul>
<li class="toctree-l2"><a class="reference internal" href="satori-getting-started.html#getting-your-account">Getting Your Account</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-getting-started.html#shared-hpc-clusters">Shared HPC Clusters</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-getting-started.html#logging-in-to-satori">Logging in to Satori</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-getting-started.html#the-satori-portal">The Satori Portal</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-getting-started.html#setting-up-your-environment">Setting up Your Environment</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-getting-started.html#transferring-files">Transferring Files</a><ul>
<li class="toctree-l3"><a class="reference internal" href="satori-getting-started.html#using-scp-or-rysnc">Using scp or rsync</a></li>
<li class="toctree-l3"><a class="reference internal" href="satori-getting-started.html#satori-portal-file-explorer">Satori Portal File Explorer</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="satori-getting-started.html#types-of-jobs">Types of Jobs</a><ul>
<li class="toctree-l3"><a class="reference internal" href="satori-getting-started.html#running-interactive-jobs">Running Interactive Jobs</a></li>
<li class="toctree-l3"><a class="reference internal" href="satori-getting-started.html#running-batch-jobs">Running Batch Jobs</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="satori-training.html">Training for faster onboarding in the system HW and SW architecture</a></li>
<li class="toctree-l1"><a class="reference internal" href="satori-workload-manager-using-slurm.html">Running your AI training jobs on Satori using Slurm</a><ul>
<li class="toctree-l2"><a class="reference internal" href="satori-workload-manager-using-slurm.html#a-note-on-exclusivity">A Note on Exclusivity</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-workload-manager-using-slurm.html#interactive-jobs">Interactive Jobs</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-workload-manager-using-slurm.html#batch-scripts">Batch Scripts</a><ul>
<li class="toctree-l3"><a class="reference internal" href="satori-workload-manager-using-slurm.html#monitoring-jobs">Monitoring Jobs</a></li>
<li class="toctree-l3"><a class="reference internal" href="satori-workload-manager-using-slurm.html#canceling-jobs">Canceling Jobs</a></li>
<li class="toctree-l3"><a class="reference internal" href="satori-workload-manager-using-slurm.html#scheduling-policy">Scheduling Policy</a></li>
<li class="toctree-l3"><a class="reference internal" href="satori-workload-manager-using-slurm.html#batch-queue-policy">Batch Queue Policy</a></li>
<li class="toctree-l3"><a class="reference internal" href="satori-workload-manager-using-slurm.html#queue-policies">Queue Policies</a></li>
<li class="toctree-l3"><a class="reference internal" href="satori-workload-manager-using-slurm.html#running-jobs-in-series">Running jobs in series</a></li>
<li class="toctree-l3"><a class="reference internal" href="satori-workload-manager-using-slurm.html#note-on-pytorch-1-4">Note on Pytorch 1.4</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="satori-troubleshooting.html">Troubleshooting</a></li>
<li class="toctree-l1"><a class="reference internal" href="satori-ai-frameworks.html">IBM Watson Machine Learning Community Edition (WML-CE) and Open Cognitive Environment (Open-CE)</a><ul>
<li class="toctree-l2"><a class="reference internal" href="satori-ai-frameworks.html#install-anaconda">[1] Install Anaconda</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-ai-frameworks.html#wml-ce-and-open-ce-setting-up-the-software-repository">[2] WML-CE and Open-CE: Setting up the software repository</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-ai-frameworks.html#wml-ce-and-open-ce-creating-and-activate-conda-environments-recommended">[3] WML-CE and Open-CE: Creating and activating conda environments (recommended)</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-ai-frameworks.html#wml-ce-installing-all-frameworks-at-the-same-time">[4] WML-CE: Installing all frameworks at the same time</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-ai-frameworks.html#wml-ce-testing-ml-dl-frameworks-pytorch-tensorflow-etc-installation">[5] WML-CE: Testing ML/DL frameworks (Pytorch, TensorFlow etc) installation</a><ul>
<li class="toctree-l3"><a class="reference internal" href="satori-ai-frameworks.html#controlling-wml-ce-release-packages">Controlling WML-CE release packages</a></li>
<li class="toctree-l3"><a class="reference internal" href="satori-ai-frameworks.html#additional-conda-channels">Additional conda channels</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="satori-ai-frameworks.html#the-wml-ce-supplementary-channel-is-available-at-https-anaconda-org-powerai">The WML CE Supplementary channel is available at: https://anaconda.org/powerai/.</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-ai-frameworks.html#the-wml-ce-early-access-channel-is-available-at-https-public-dhe-ibm-com-ibmdl-export-pub-software-server-ibm-ai-conda-early-access">The WML-CE Early Access channel is available at: https://public.dhe.ibm.com/ibmdl/export/pub/software/server/ibm-ai/conda-early-access/.</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="satori-distributed-deeplearning.html">Distributed Deep Learning</a></li>
<li class="toctree-l1"><a class="reference internal" href="satori-large-model-support.html">IBM Large Model Support (LMS)</a></li>
<li class="toctree-l1"><a class="reference internal" href="satori-julia.html">Julia on Satori</a><ul>
<li class="toctree-l2"><a class="reference internal" href="satori-julia.html#getting-started">Getting started</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-julia.html#getting-help">Getting help?</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-julia.html#a-simple-batch-script-example">A simple batch script example</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-julia.html#recipe-for-running-single-gpu-single-threaded-interactive-session-with-cuda-aware-mpi">Recipe for running single GPU, single threaded interactive session with CUDA aware MPI</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-julia.html#running-a-multi-process-julia-program-somewhat-interactively">Running a multi-process julia program somewhat interactively</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-julia.html#an-example-of-installing-https-github-com-clima-climatemachine-jl-on-satori">An example of installing https://github.com/clima/climatemachine.jl on Satori</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="satori-R.html">R on Satori</a><ul>
<li class="toctree-l2"><a class="reference internal" href="satori-R.html#getting-started-with-r">Getting Started with R</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-R.html#installing-packages">Installing Packages</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-R.html#a-simple-batch-script-example">A Simple Batch Script Example</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-R.html#r-and-python">R and Python</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-R.html#running-r-in-a-container">Running R in a container</a></li>
</ul>
</li>
<li class="toctree-l1 current"><a class="current reference internal" href="#">Using MPI and CUDA on Satori</a><ul>
<li class="toctree-l2"><a class="reference internal" href="#getting-started">Getting started</a></li>
<li class="toctree-l2"><a class="reference internal" href="#compiling">Compiling</a></li>
<li class="toctree-l2"><a class="reference internal" href="#submiting-a-batch-script">Submitting a batch script</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#batch-script-header">Batch script header</a></li>
<li class="toctree-l3"><a class="reference internal" href="#assigning-gpus-to-mpi-ranks">Assigning GPUs to MPI ranks</a></li>
<li class="toctree-l3"><a class="reference internal" href="#running-the-mpi-program-within-the-batch-script">Running the MPI program within the batch script</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="#a-complete-example-slurm-batch-script">A complete example SLURM batch script</a></li>
<li class="toctree-l2"><a class="reference internal" href="#using-alternate-mpi-builds">Using alternate MPI builds</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="lsf-templates/satori-lsf-ml-examples.html">Example machine learning LSF jobs</a><ul>
<li class="toctree-l2"><a class="reference internal" href="lsf-templates/satori-lsf-ml-examples.html#a-single-node-4-gpu-keras-example">A single node, 4 GPU Keras example</a></li>
<li class="toctree-l2"><a class="reference internal" href="lsf-templates/satori-lsf-ml-examples.html#a-single-node-4-gpu-caffe-example">A single node, 4 GPU Caffe example</a></li>
<li class="toctree-l2"><a class="reference internal" href="lsf-templates/satori-lsf-ml-examples.html#a-multi-node-pytorch-example">A multi-node, pytorch example</a></li>
<li class="toctree-l2"><a class="reference internal" href="lsf-templates/satori-lsf-ml-examples.html#a-multi-node-pytorch-example-with-the-horovod-conda-environment">A multi-node, pytorch example with the horovod conda environment</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="satori-howto-videos.html">Satori Howto Video Sessions</a><ul>
<li class="toctree-l2"><a class="reference internal" href="satori-howto-videos.html#installing-wmcle-on-satori">Installing WMCLE on Satori</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-howto-videos.html#pytorch-with-ddl-on-satori">Pytorch with DDL on Satori</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-howto-videos.html#tensorflow-with-ddl-on-satori">Tensorflow with DDL on Satori</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-howto-videos.html#jupyterlab-with-ssh-tunnel-on-satori">Jupyterlab with SSH Tunnel on Satori</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="satori-public-datasets.html">Satori Public Datasets</a></li>
<li class="toctree-l1"><a class="reference internal" href="singularity.html">Singularity for Satorians</a><ul>
<li class="toctree-l2"><a class="reference internal" href="singularity.html#fast-start">Fast start</a></li>
<li class="toctree-l2"><a class="reference internal" href="singularity.html#other-notes">Other notes</a></li>
<li class="toctree-l2"><a class="reference internal" href="singularity.html#interactive-allocation">Interactive Allocation:</a></li>
<li class="toctree-l2"><a class="reference internal" href="singularity.html#non-interactive-batch-mode">Non interactive / batch mode</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="satori-relion-cryoem.html">Relion Cryoem for Satorians</a><ul>
<li class="toctree-l2"><a class="reference internal" href="satori-relion-cryoem.html#prerequisites">Prerequisites</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-relion-cryoem.html#quick-start">Quick start</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-relion-cryoem.html#other-notes">Other notes</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="satori-copy-large-filesets.html">Copying larger files and large file sets</a></li>
<li class="toctree-l1"><a class="reference internal" href="satori-copy-large-filesets.html#using-mrsync">Using mrsync</a></li>
<li class="toctree-l1"><a class="reference internal" href="satori-copy-large-filesets.html#using-aspera-for-remote-file-transfer-to-satori-cluster">Using Aspera for remote file transfer to Satori cluster</a></li>
<li class="toctree-l1"><a class="reference internal" href="satori-doc-examples-contributing.html">FAQ</a><ul>
<li class="toctree-l2"><a class="reference internal" href="satori-doc-examples-contributing.html#tips-tricks-and-questions">Tips, tricks and questions</a><ul>
<li class="toctree-l3"><a class="reference internal" href="tips-and-tricks/storage/index.html">How can I see disk usage?</a></li>
<li class="toctree-l3"><a class="reference internal" href="tips-and-tricks/storage/index.html#where-should-i-put-world-or-project-shared-datasets">Where should I put world or project shared datasets?</a></li>
<li class="toctree-l3"><a class="reference internal" href="portal-howto/customization.html">How can I create custom Jupyter kernels for the Satori web portal?</a><ul>
<li class="toctree-l4"><a class="reference internal" href="portal-howto/customization.html#steps-to-create-a-kernel">Steps to create a kernel</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="tips-and-tricks/carlos-quick-start-commands/index.html">How do I set up a basic conda environment?</a></li>
<li class="toctree-l3"><a class="reference internal" href="tips-and-tricks/sys_queries/index.html">System software queries</a><ul>
<li class="toctree-l4"><a class="reference internal" href="tips-and-tricks/sys_queries/index.html#what-linux-distribution-version-am-i-running">What Linux distribution version am I running?</a></li>
<li class="toctree-l4"><a class="reference internal" href="tips-and-tricks/sys_queries/index.html#what-linux-kernel-level-am-i-running">What Linux kernel level am I running?</a></li>
<li class="toctree-l4"><a class="reference internal" href="tips-and-tricks/sys_queries/index.html#what-software-levels-are-installed-on-the-system">What software levels are installed on the system?</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="tips-and-tricks/sys_queries/index.html#system-hardware-queries">System hardware queries</a><ul>
<li class="toctree-l4"><a class="reference internal" href="tips-and-tricks/sys_queries/index.html#what-is-my-cpu-configuration">What is my CPU configuration?</a></li>
<li class="toctree-l4"><a class="reference internal" href="tips-and-tricks/sys_queries/index.html#how-much-ram-is-there-on-my-nodes">How much RAM is there on my nodes?</a></li>
<li class="toctree-l4"><a class="reference internal" href="tips-and-tricks/sys_queries/index.html#what-smt-mode-are-my-nodes-in">What SMT mode are my nodes in?</a></li>
<li class="toctree-l4"><a class="reference internal" href="tips-and-tricks/sys_queries/index.html#what-cpu-governor-is-in-effect-on-my-nodes">What CPU governor is in effect on my nodes?</a></li>
<li class="toctree-l4"><a class="reference internal" href="tips-and-tricks/sys_queries/index.html#what-are-the-logical-ids-and-uuids-for-the-gpus-on-my-nodes">What are the logical IDs and UUIDs for the GPUs on my nodes?</a></li>
<li class="toctree-l4"><a class="reference internal" href="tips-and-tricks/sys_queries/index.html#what-is-the-ibm-model-of-my-system">What is the IBM model of my system?</a></li>
<li class="toctree-l4"><a class="reference internal" href="tips-and-tricks/sys_queries/index.html#which-logical-cpus-belong-to-which-socket">Which logical CPUs belong to which socket?</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="tips-and-tricks/sys_queries/index.html#questions-about-my-jobs">Questions about my jobs</a><ul>
<li class="toctree-l4"><a class="reference internal" href="tips-and-tricks/sys_queries/index.html#how-can-i-establish-which-logical-cpu-ids-my-process-is-bound-to">How can I establish which logical CPU IDs my process is bound to?</a></li>
<li class="toctree-l4"><a class="reference internal" href="tips-and-tricks/sys_queries/index.html#can-i-see-the-output-of-my-job-before-it-completes">Can I see the output of my job before it completes?</a></li>
<li class="toctree-l4"><a class="reference internal" href="tips-and-tricks/sys_queries/index.html#i-have-a-job-waiting-in-the-queue-and-i-want-to-modify-the-options-i-had-selected">I have a job waiting in the queue, and I want to modify the options I had selected</a></li>
<li class="toctree-l4"><a class="reference internal" href="tips-and-tricks/sys_queries/index.html#i-have-submitted-my-job-several-times-but-i-get-no-output">I have submitted my job several times, but I get no output</a></li>
<li class="toctree-l4"><a class="reference internal" href="tips-and-tricks/sys_queries/index.html#how-do-i-set-a-time-limit-on-my-job">How do I set a time limit on my job?</a></li>
<li class="toctree-l4"><a class="reference internal" href="tips-and-tricks/sys_queries/index.html#can-i-make-a-jobs-startup-depend-on-the-completion-of-a-previous-one">Can I make a job’s startup depend on the completion of a previous one?</a></li>
<li class="toctree-l4"><a class="reference internal" href="tips-and-tricks/sys_queries/index.html#how-do-i-select-a-specific-set-of-hosts-for-my-job">How do I select a specific set of hosts for my job?</a></li>
<li class="toctree-l4"><a class="reference internal" href="tips-and-tricks/sys_queries/index.html#how-do-i-deselect-specific-nodes-for-my-job">How do I deselect specific nodes for my job?</a></li>
<li class="toctree-l4"><a class="reference internal" href="tips-and-tricks/sys_queries/index.html#my-jobs-runtime-environment-is-different-from-what-i-expected">My job’s runtime environment is different from what I expected</a></li>
<li class="toctree-l4"><a class="reference internal" href="tips-and-tricks/sys_queries/index.html#i-want-to-know-precisely-what-my-jobs-runtime-environment-is">I want to know precisely what my job’s runtime environment is</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="tips-and-tricks/ondemand_portal_queries/index.html">Portal queries</a><ul>
<li class="toctree-l4"><a class="reference internal" href="tips-and-tricks/ondemand_portal_queries/index.html#i-see-no-active-sessions-in-my-interactive-sessions">I see no active sessions in My Interactive Sessions?</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="tips-and-tricks/singularity-tips/index.html">How do I build a Singularity image from scratch?</a><ul>
<li class="toctree-l4"><a class="reference internal" href="tips-and-tricks/singularity-tips/index.html#set-up-to-run-docker-in-ppc64le-mode-on-an-x86-machine">Set up to run Docker in ppc64le mode on an x86 machine</a></li>
<li class="toctree-l4"><a class="reference internal" href="tips-and-tricks/singularity-tips/index.html#run-docker-in-ppc64le-mode-on-an-x86-machine-to-generate-an-image-for-satori">Run Docker in ppc64le mode on an x86 machine to generate an image for Satori</a></li>
<li class="toctree-l4"><a class="reference internal" href="tips-and-tricks/singularity-tips/index.html#import-new-docker-hub-image-into-singularity-on-satori">Import new Docker hub image into Singularity on Satori</a></li>
<li class="toctree-l4"><a class="reference internal" href="tips-and-tricks/singularity-tips/index.html#using-singularity-instead-of-docker">Using Singularity instead of Docker</a></li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="satori-tutorial-examples.html">Green Up Hackathon IAP 2020</a><ul>
<li class="toctree-l2"><a class="reference internal" href="tutorial-examples/index.html">Tutorial Examples</a><ul>
<li class="toctree-l3"><a class="reference internal" href="tutorial-examples/pytorch-style-transfer/index.html">Pytorch Style Transfer</a><ul>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/pytorch-style-transfer/index.html#description">Description</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/pytorch-style-transfer/index.html#commands-to-run-this-example">Commands to run this example</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/pytorch-style-transfer/index.html#code-and-input-data-repositories-for-this-example">Code and input data repositories for this example</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/pytorch-style-transfer/index.html#useful-references">Useful references</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="tutorial-examples/neural-network-dna-demo/index.html">Neural network DNA</a><ul>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/neural-network-dna-demo/index.html#description">Description</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/neural-network-dna-demo/index.html#commands-to-run-this-example">Commands to run this example</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/neural-network-dna-demo/index.html#code-and-input-data-repositories-for-this-example">Code and input data repositories for this example</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/neural-network-dna-demo/index.html#useful-references">Useful references</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="tutorial-examples/transfer-learning-pathology/index.html">Pathology Image Classification Transfer Learning</a><ul>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/transfer-learning-pathology/index.html#description">Description</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/transfer-learning-pathology/index.html#commands-to-run-this-example">Commands to run this example</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/transfer-learning-pathology/index.html#code-and-input-data-repositories-for-this-example">Code and input data repositories for this example</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/transfer-learning-pathology/index.html#useful-references">Useful references</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="tutorial-examples/tensorflow-2.x-multi-gpu-multi-node/index.html">Multi Node Multi GPU TensorFlow 2.0 Distributed Training Example</a><ul>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/tensorflow-2.x-multi-gpu-multi-node/index.html#description">Description</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/tensorflow-2.x-multi-gpu-multi-node/index.html#prerequisites-if-you-are-not-yet-running-tensorflow-2-0">Prerequisites if you are not yet running TensorFlow 2.0</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/tensorflow-2.x-multi-gpu-multi-node/index.html#commands-to-run-this-example">Commands to run this example</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/tensorflow-2.x-multi-gpu-multi-node/index.html#what-s-going-on-here">What’s going on here?</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/tensorflow-2.x-multi-gpu-multi-node/index.html#code-and-input-data-repositories-for-this-example">Code and input data repositories for this example</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/tensorflow-2.x-multi-gpu-multi-node/index.html#useful-references">Useful references</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="tutorial-examples/eric-fiala-wmlce-notebooks-master/index.html">WMLCE demonstration notebooks</a><ul>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/eric-fiala-wmlce-notebooks-master/index.html#description">Description</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/eric-fiala-wmlce-notebooks-master/index.html#commands-to-run-this-example">Commands to run this example</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/eric-fiala-wmlce-notebooks-master/index.html#code-and-input-data-repositories-for-this-example">Code and input data repositories for this example</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/eric-fiala-wmlce-notebooks-master/index.html#useful-references">Useful references</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="tutorial-examples/unsupervised-learning-on-ocean-ecosystem-model/index.html">Finding clusters in high-dimensional data using tSNE and DB-SCAN</a><ul>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/unsupervised-learning-on-ocean-ecosystem-model/index.html#description">Description</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/unsupervised-learning-on-ocean-ecosystem-model/index.html#commands-to-run-this-example">Commands to run this example</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/unsupervised-learning-on-ocean-ecosystem-model/index.html#code-and-input-data-repositories-for-this-example">Code and input data repositories for this example</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/unsupervised-learning-on-ocean-ecosystem-model/index.html#useful-references">Useful references</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="tutorial-examples/biggan-pytorch/index.html">BigGAN-PyTorch</a><ul>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/biggan-pytorch/index.html#description">Description</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/biggan-pytorch/index.html#commands-to-run-this-example">Commands to run this example</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/biggan-pytorch/index.html#code-and-input-data-repositories-for-this-example">Code and input data repositories for this example</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/biggan-pytorch/index.html#useful-references">Useful references</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="tutorial-examples/index.html#measuring-resource-use">Measuring Resource Use</a><ul>
<li class="toctree-l3"><a class="reference internal" href="tutorial-examples/energy-profiling/index.html">Integrated energy use profiling</a><ul>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/energy-profiling/index.html#description">Description</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/energy-profiling/index.html#commands-to-run-this-example">Commands to run this example</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/energy-profiling/index.html#code-and-input-data-repositories-for-this-example">Code and input data repositories for this example</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/energy-profiling/index.html#useful-references">Useful references</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="tutorial-examples/nvprof-profiling/index.html">Profiling code with nvprof</a><ul>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/nvprof-profiling/index.html#description">Description</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/nvprof-profiling/index.html#commands-to-run-the-examples">Commands to run the examples</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/nvprof-profiling/index.html#useful-references">Useful references</a></li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="satori-getting-help.html">Getting help on Satori</a><ul>
<li class="toctree-l2"><a class="reference internal" href="satori-getting-help.html#email-help">Email help</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-getting-help.html#slack">Slack</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-getting-help.html#slack-or-satori-support-techsquare-com">Slack or satori-support@techsquare.com</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-getting-help.html#satori-office-hours">Satori Office Hours</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-getting-help.html#tips-and-tricks">Tips and Tricks</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="ause-coc.html">Acceptable Use and Code of Conduct</a><ul>
<li class="toctree-l2"><a class="reference internal" href="ause-coc.html#acceptable-use-guidelines">Acceptable Use Guidelines</a></li>
<li class="toctree-l2"><a class="reference internal" href="ause-coc.html#code-of-conduct">Code of Conduct</a></li>
</ul>
</li>
</ul>
</div>
</div>
</nav>
<section data-toggle="wy-nav-shift" class="wy-nav-content-wrap">
<nav class="wy-nav-top" aria-label="top navigation">
<i data-toggle="wy-nav-top" class="fa fa-bars"></i>
<a href="index.html">MIT Satori User Documentation</a>
</nav>
<div class="wy-nav-content">
<div class="rst-content style-external-links">
<div role="navigation" aria-label="breadcrumbs navigation">
<ul class="wy-breadcrumbs">
<li><a href="index.html" class="icon icon-home"></a> »</li>
<li>Using MPI and CUDA on Satori</li>
<li class="wy-breadcrumbs-aside">
<a href="https://github.com/mit-satori/getting-started/blob/master/satori-cuda-aware-mpi.rst" class="fa fa-github"> Edit on GitHub</a>
</li>
</ul>
<hr/>
</div>
<div role="main" class="document" itemscope="itemscope" itemtype="http://schema.org/Article">
<div itemprop="articleBody">
<div class="section" id="using-mpi-and-cuda-on-satori">
<h1>Using MPI and CUDA on Satori<a class="headerlink" href="#using-mpi-and-cuda-on-satori" title="Permalink to this headline">¶</a></h1>
<p>Leveraging multiple GPUs from a CUDA program with MPI is supported on Satori through CUDA-aware MPI installations.
CUDA-aware MPI is available from Slurm batch scripts through system modules based on OpenMPI. To use
CUDA-aware MPI, source code and libraries that
involve MPI may need to be recompiled against the appropriate OpenMPI modules.</p>
<div class="section" id="getting-started">
<h2>Getting started<a class="headerlink" href="#getting-started" title="Permalink to this headline">¶</a></h2>
<p>The following modules are needed to work with CUDA-aware MPI codes:</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>module purge all
module add spack
module add cuda/10.1
module load openmpi/3.1.4-pmi-cuda-ucx
</pre></div>
</div>
</div>
<div class="section" id="compiling">
<h2>Compiling<a class="headerlink" href="#compiling" title="Permalink to this headline">¶</a></h2>
<p>Codes and libraries that make MPI calls against CUDA device memory pointers need
to be compiled using the MPI compilation wrappers, e.g. <code class="docutils literal notranslate"><span class="pre">mpicc</span></code>, <code class="docutils literal notranslate"><span class="pre">mpiCC</span></code>, <code class="docutils literal notranslate"><span class="pre">mpicxx</span></code>, <code class="docutils literal notranslate"><span class="pre">mpic++</span></code>,
<code class="docutils literal notranslate"><span class="pre">mpif77</span></code>, <code class="docutils literal notranslate"><span class="pre">mpif90</span></code>, <code class="docutils literal notranslate"><span class="pre">mpifort</span></code> from the <code class="docutils literal notranslate"><span class="pre">openmpi/3.1.4-pmi-cuda-ucx</span></code> OpenMPI
module. The CUDA runtime library needs to be added as a link
library, e.g. <code class="docutils literal notranslate"><span class="pre">-lcudart</span></code>.</p>
<p>A typical compilation setup is</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>module purge all
module add spack
module add cuda/10.1
module load openmpi/3.1.4-pmi-cuda-ucx
mpiCC MYFILE.cc -lcudart
</pre></div>
</div>
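<p>Before compiling, it can be worth confirming that the loaded OpenMPI build actually has CUDA support compiled in. A quick check, assuming the standard <code class="docutils literal notranslate"><span class="pre">ompi_info</span></code> tool from the loaded OpenMPI module is on the PATH:</p>

```shell
# Query the loaded OpenMPI build for compiled-in CUDA support.
# A CUDA-aware build such as openmpi/3.1.4-pmi-cuda-ucx should report "true".
ompi_info --parsable --all | grep mpi_built_with_cuda_support:value
```

If the value reported is <code class="docutils literal notranslate"><span class="pre">false</span></code>, passing device pointers to MPI calls will not work and the module selection should be rechecked.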
</div>
<div class="section" id="submiting-a-batch-script">
<h2>Submitting a batch script<a class="headerlink" href="#submiting-a-batch-script" title="Permalink to this headline">¶</a></h2>
<div class="section" id="batch-script-header">
<h3>Batch script header<a class="headerlink" href="#batch-script-header" title="Permalink to this headline">¶</a></h3>
<p>The following example SLURM batch script header illustrates requesting 8 GPUs across 2 nodes with exclusive access. In this
example the <em>#SBATCH</em> control commands request one MPI rank for each GPU, so <code class="docutils literal notranslate"><span class="pre">cpus-per-task</span></code>, <code class="docutils literal notranslate"><span class="pre">ntasks-per-core</span></code> and <code class="docutils literal notranslate"><span class="pre">threads-per-core</span></code>
are set to <code class="docutils literal notranslate"><span class="pre">1</span></code>. The start of the batch script selects the modules needed for OpenMPI CUDA-aware MPI with SLURM integration.</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span><span class="ch">#!/bin/bash</span>
<span class="c1">#SBATCH --nodes=2</span>
<span class="c1">#SBATCH --ntasks-per-node=4</span>
<span class="c1">#SBATCH --gres=gpu:4</span>
<span class="c1">#SBATCH --cpus-per-task=1</span>
<span class="c1">#SBATCH --ntasks-per-core=1</span>
<span class="c1">#SBATCH --threads-per-core=1</span>
<span class="c1">#SBATCH --mem=1T</span>
<span class="c1">#SBATCH --exclusive</span>
<span class="c1">#SBATCH --time 00:05:00</span>
module purge all
module add spack
module add cuda/10.1
module load openmpi/3.1.4-pmi-cuda-ucx
</pre></div>
</div>
</div>
<div class="section" id="assigning-gpus-to-mpi-ranks">
<h3>Assigning GPUs to MPI ranks<a class="headerlink" href="#assigning-gpus-to-mpi-ranks" title="Permalink to this headline">¶</a></h3>
<p>The batch script will be allocated 4 GPUs on each node in the batch session. Individual MPI ranks then need to
attach to specific GPUs to run in parallel. There are two ways to do this.</p>
<ol class="arabic">
<li><p class="first">Attach GPU to a rank using a bash script.</p>
<p>In this approach a bash script is written that is used as a launcher for the MPI program to be run. This
bash script modifies the environment variable <code class="docutils literal notranslate"><span class="pre">CUDA_VISIBLE_DEVICES</span></code> so that the MPI program will only see
the GPU it has been allocated. An example script is shown below:</p>
</li>
</ol>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span><span class="ch">#!/bin/bash</span>
<span class="c1">#</span>
<span class="c1"># Choose a CUDA device based on ${SLURM_LOCALID}</span>
<span class="c1">#</span>
<span class="nv">ngpu</span><span class="o">=</span><span class="sb">`</span>nvidia-smi -L <span class="p">|</span> grep UUID <span class="p">|</span> wc -l<span class="sb">`</span>
<span class="nv">mygpu</span><span class="o">=</span><span class="k">$((</span><span class="si">${</span><span class="nv">SLURM_LOCALID</span><span class="si">}</span> <span class="o">%</span> <span class="si">${</span><span class="nv">ngpu</span><span class="si">}</span> <span class="k">))</span>
<span class="nb">export</span> <span class="nv">CUDA_VISIBLE_DEVICES</span><span class="o">=</span><span class="si">${</span><span class="nv">mygpu</span><span class="si">}</span>
<span class="nb">exec</span> <span class="s2">"$@"</span>
</pre></div>
</div>
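<p>The round-robin mapping used by the launcher script (local rank modulo the number of GPUs on the node) can be sanity-checked on its own. The sketch below hard-codes <code class="docutils literal notranslate"><span class="pre">ngpu=4</span></code> and loops over hypothetical local ranks, rather than reading <code class="docutils literal notranslate"><span class="pre">nvidia-smi</span></code> and <code class="docutils literal notranslate"><span class="pre">SLURM_LOCALID</span></code> as the real script does:</p>

```shell
#!/bin/bash
# Simulate the launcher's GPU selection for 8 local ranks on a 4-GPU node.
# Ranks 0-3 get GPUs 0-3; ranks 4-7 wrap around to GPUs 0-3 again.
ngpu=4
for localid in 0 1 2 3 4 5 6 7; do
  echo "SLURM_LOCALID=$localid -> CUDA_VISIBLE_DEVICES=$((localid % ngpu))"
done
```

With <code class="docutils literal notranslate"><span class="pre">--ntasks-per-node=4</span></code> and 4 GPUs per node, each rank lands on its own GPU; oversubscribing ranks simply cycles through the devices.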
<ol class="arabic" start="2">
<li><p class="first">Attach a GPU to a rank using CUDA library runtime code.</p>
<p>In this approach the MPI program source must be modified to include GPU device selection code
before <code class="docutils literal notranslate"><span class="pre">MPI_Init()</span></code> is invoked. An example code fragment for GPU device selection (based on the
environment variable SLURM_LOCALID) is shown below:</p>
</li>
</ol>
<div class="highlight-C notranslate"><div class="highlight"><pre><span></span><span class="cp">#include</span> <span class="cpf"><mpi.h></span><span class="cp"></span>
<span class="cp">#include</span> <span class="cpf"><stdio.h></span><span class="cp"></span>
<span class="cp">#include</span> <span class="cpf"><cuda_runtime.h></span><span class="cp"></span>
<span class="cp">#include</span> <span class="cpf"><stdlib.h></span><span class="cp"></span> <span class="cm">/* getenv, atoi */</span>
<span class="kt">int</span> <span class="nf">main</span><span class="p">(</span><span class="kt">int</span> <span class="n">argc</span><span class="p">,</span> <span class="kt">char</span><span class="o">**</span> <span class="n">argv</span><span class="p">,</span> <span class="kt">char</span> <span class="o">*</span><span class="n">envp</span><span class="p">[])</span> <span class="p">{</span>
<span class="kt">char</span> <span class="o">*</span> <span class="n">localRankStr</span> <span class="o">=</span> <span class="nb">NULL</span><span class="p">;</span>
<span class="kt">int</span> <span class="n">localrank</span> <span class="o">=</span> <span class="mi">0</span><span class="p">,</span> <span class="n">devCount</span> <span class="o">=</span> <span class="mi">0</span><span class="p">,</span> <span class="n">mydev</span><span class="p">;</span>
<span class="c1">// We extract the local rank initialization using an environment variable</span>
<span class="k">if</span> <span class="p">((</span><span class="n">localRankStr</span> <span class="o">=</span> <span class="n">getenv</span><span class="p">(</span><span class="s">"SLURM_LOCALID"</span><span class="p">))</span> <span class="o">!=</span> <span class="nb">NULL</span><span class="p">)</span> <span class="p">{</span>
<span class="n">localrank</span> <span class="o">=</span> <span class="n">atoi</span><span class="p">(</span><span class="n">localRankStr</span><span class="p">);</span>
<span class="p">}</span>
<span class="n">cudaGetDeviceCount</span><span class="p">(</span><span class="o">&</span><span class="n">devCount</span><span class="p">);</span>
<span class="n">mydev</span><span class="o">=</span><span class="n">localrank</span> <span class="o">%</span> <span class="n">devCount</span><span class="p">;</span>
<span class="n">cudaSetDevice</span><span class="p">(</span><span class="n">mydev</span><span class="p">);</span>
<span class="o">:</span>
<span class="o">:</span>
<span class="n">MPI_Init</span><span class="p">(</span><span class="nb">NULL</span><span class="p">,</span> <span class="nb">NULL</span><span class="p">);</span>
<span class="o">:</span>
<span class="o">:</span>
</pre></div>
</div>
</div>
<div class="section" id="running-the-mpi-program-within-the-batch-script">
<h3>Running the MPI program within the batch script<a class="headerlink" href="#running-the-mpi-program-within-the-batch-script" title="Permalink to this headline">¶</a></h3>
<p>To run the MPI program, the SLURM command <code class="docutils literal notranslate"><span class="pre">srun</span></code> is used (not <code class="docutils literal notranslate"><span class="pre">mpirun</span></code> or <code class="docutils literal notranslate"><span class="pre">mpiexec</span></code>). The <code class="docutils literal notranslate"><span class="pre">srun</span></code> command
works like the MPI run or exec commands, but it also creates the environment variables needed to select which rank
works with which GPU prior to any calls to <code class="docutils literal notranslate"><span class="pre">MPI_Init()</span></code>. An example of using srun with a launch script is shown
below.</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>srun ./launch.sh ./a.out
</pre></div>
</div>
<p>The equivalent without a launch script is:</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>srun ./a.out
</pre></div>
</div>
</div>
</div>
<div class="section" id="a-complete-example-slurm-batch-script">
<h2>A complete example SLURM batch script<a class="headerlink" href="#a-complete-example-slurm-batch-script" title="Permalink to this headline">¶</a></h2>
<p>The script below shows a full working example of the steps for CUDA and MPI using multiple GPUs on multiple nodes under SLURM. The example
shows both the bash script launcher and the CUDA runtime call approaches for assigning GPUs to ranks. Only one of these approaches is
needed in practice; both are shown here to illustrate the two options.</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span><span class="ch">#!/bin/bash</span>
<span class="c1">#</span>
<span class="c1"># Example SLURM batch script to run example CUDA aware MPI program with one rank on</span>
<span class="c1"># each GPU, using two nodes with 4 GPUs on each node.</span>
<span class="c1">#</span>
<span class="c1">#SBATCH --nodes=2</span>
<span class="c1">#SBATCH --ntasks-per-node=4</span>
<span class="c1">#SBATCH --gres=gpu:4</span>
<span class="c1">#SBATCH --cpus-per-task=1</span>
<span class="c1">#SBATCH --ntasks-per-core=1</span>
<span class="c1">#SBATCH --threads-per-core=1</span>
<span class="c1">#SBATCH --mem=1T</span>
<span class="c1">#SBATCH --exclusive</span>
<span class="c1">#SBATCH --time 00:05:00</span>
module purge all
module add spack
module add cuda/10.1
module load openmpi/3.1.4-pmi-cuda-ucx
cat > launch.sh <span class="s"><<'EOFA'</span>
<span class="s">#!/bin/bash</span>
<span class="s"># Choose a CUDA device number ($mygpu) based on ${SLURM_LOCALID}, cycling through</span>
<span class="s"># the available GPU devices ($ngpu) on the node.</span>
<span class="s">ngpu=`nvidia-smi -L | grep UUID | wc -l`</span>
<span class="s">mygpu=$((${SLURM_LOCALID} % ${ngpu} ))</span>
<span class="s">export CUDA_VISIBLE_DEVICES=${mygpu}</span>
<span class="s"># Run MPI program with any arguments</span>
<span class="s">exec "$@"</span>
<span class="s">EOFA</span>
chmod +x launch.sh
cat > x.cc <span class="s"><<'EOFA'</span>
<span class="s">#include <mpi.h></span>
<span class="s">#include <stdio.h></span>
<span class="s">#include <cuda_runtime.h></span>
<span class="s">#include <stdlib.h> /* getenv, atoi */</span>
<span class="s">int main(int argc, char** argv, char *envp[]) {</span>
<span class="s"> char * localRankStr = NULL;</span>
<span class="s"> int localrank = 0, devCount = 0, mydev;</span>
<span class="s"> // We extract the local rank initialization using an environment variable</span>
<span class="s"> if ((localRankStr = getenv("SLURM_LOCALID")) != NULL) {</span>
<span class="s"> localrank = atoi(localRankStr);</span>
<span class="s"> }</span>
<span class="s"> cudaGetDeviceCount(&devCount);</span>
<span class="s"> mydev=localrank % devCount;</span>
<span class="s"> cudaSetDevice(mydev);</span>
<span class="s"> MPI_Init(NULL, NULL);</span>
<span class="s"> int world_size;</span>
<span class="s"> MPI_Comm_size(MPI_COMM_WORLD, &world_size);</span>
<span class="s"> int world_rank;</span>
<span class="s"> MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);</span>
<span class="s"> char processor_name[MPI_MAX_PROCESSOR_NAME];</span>
<span class="s"> int name_len;</span>
<span class="s"> MPI_Get_processor_name(processor_name, &name_len);</span>
<span class="s">  // Check which CUDA device we have</span>
<span class="s"> char pciBusId[13];</span>
<span class="s"> cudaDeviceGetPCIBusId ( pciBusId, 13, mydev );</span>
<span class="s"> printf("MPI rank %d of %d on host %s is using GPU with PCI id %s.\n",world_rank,world_size,processor_name,pciBusId);</span>
<span class="s"> MPI_Finalize();</span>
<span class="s">}</span>
<span class="s">EOFA</span>
mpic++ x.cc -lcudart
srun ./launch.sh ./a.out
</pre></div>
</div>
</div>
<div class="section" id="using-alternate-mpi-builds">
<h2>Using alternate MPI builds<a class="headerlink" href="#using-alternate-mpi-builds" title="Permalink to this headline">¶</a></h2>
<p>It is also possible to build custom MPI modules in individual user accounts using the
spack ( <a class="reference external" href="https://spack.readthedocs.io/en/latest/" target="_blank">https://spack.readthedocs.io/en/latest/</a> ) package management tool. These builds should use the UCX communication
features and PMI job management features to integrate with SLURM and the Satori high-speed network.</p>
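<p>As a rough illustration only (spack variant names change between releases, so check <code class="docutils literal notranslate"><span class="pre">spack</span> <span class="pre">info</span> <span class="pre">openmpi</span></code> for the version installed in your account first), a custom build matching the system module might look like:</p>

```shell
# Hypothetical spack build of a CUDA-aware OpenMPI with the UCX fabric
# and Slurm scheduler integration; exact variant names depend on the
# spack version, so verify them with `spack info openmpi` before running.
spack install openmpi@3.1.4 +cuda fabrics=ucx schedulers=slurm
spack load openmpi@3.1.4
```

After loading the custom build, recompile any MPI codes against it before submitting jobs.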
</div>
</div>
</div>
</div>
<footer>
<div class="rst-footer-buttons" role="navigation" aria-label="footer navigation">
<a href="lsf-templates/satori-lsf-ml-examples.html" class="btn btn-neutral float-right" title="Example machine learning LSF jobs" accesskey="n" rel="next">Next <span class="fa fa-arrow-circle-right" aria-hidden="true"></span></a>
<a href="satori-R.html" class="btn btn-neutral float-left" title="R on Satori" accesskey="p" rel="prev"><span class="fa fa-arrow-circle-left" aria-hidden="true"></span> Previous</a>
</div>
<hr/>
<div role="contentinfo">
<p>
© Copyright 2021, MIT Satori Project.
</p>
</div>
Built with <a href="https://www.sphinx-doc.org/">Sphinx</a> using a
<a href="https://github.com/readthedocs/sphinx_rtd_theme">theme</a>
provided by <a href="https://readthedocs.org">Read the Docs</a>.
</footer>
</div>
</div>
</section>
</div>
<script type="text/javascript">
jQuery(function () {
SphinxRtdTheme.Navigation.enable(true);
});
</script>
</body>
</html>