<!DOCTYPE html>
<html class="writer-html5" lang="en" >
<head>
<meta charset="utf-8" />
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>Running your AI training jobs on Satori using Slurm — MIT Satori User Documentation documentation</title>
<link rel="stylesheet" href="_static/css/theme.css" type="text/css" />
<link rel="stylesheet" href="_static/pygments.css" type="text/css" />
<link rel="canonical" href="https://researchcomputing.mit.edu/satori-workload-manager-using-slurm.html"/>
<!--[if lt IE 9]>
<script src="_static/js/html5shiv.min.js"></script>
<![endif]-->
<script type="text/javascript" id="documentation_options" data-url_root="./" src="_static/documentation_options.js"></script>
<script src="_static/jquery.js"></script>
<script src="_static/underscore.js"></script>
<script src="_static/doctools.js"></script>
<script type="text/javascript" src="_static/js/theme.js"></script>
<link rel="index" title="Index" href="genindex.html" />
<link rel="search" title="Search" href="search.html" />
<link rel="next" title="Troubleshooting" href="satori-troubleshooting.html" />
<link rel="prev" title="Training for faster onboarding in the system HW and SW architecture" href="satori-training.html" />
</head>
<body class="wy-body-for-nav">
<div class="wy-grid-for-nav">
<nav data-toggle="wy-nav-shift" class="wy-nav-side">
<div class="wy-side-scroll">
<div class="wy-side-nav-search" >
<a href="index.html" class="icon icon-home"> MIT Satori User Documentation
</a>
<div role="search">
<form id="rtd-search-form" class="wy-form" action="search.html" method="get">
<input type="text" name="q" placeholder="Search docs" />
<input type="hidden" name="check_keywords" value="yes" />
<input type="hidden" name="area" value="default" />
</form>
</div>
</div>
<div class="wy-menu wy-menu-vertical" data-spy="affix" role="navigation" aria-label="main navigation">
<ul class="current">
<li class="toctree-l1"><a class="reference internal" href="satori-basics.html">Satori Basics</a><ul>
<li class="toctree-l2"><a class="reference internal" href="satori-basics.html#what-is-satori">What is Satori?</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-basics.html#how-can-i-get-an-account">How can I get an account?</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-basics.html#getting-help">Getting help?</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="satori-ssh.html">Satori Login</a><ul>
<li class="toctree-l2"><a class="reference internal" href="satori-ssh.html#web-portal-login">Web Portal Login</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-ssh.html#ssh-login">SSH Login</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="satori-getting-started.html">Starting up on Satori</a><ul>
<li class="toctree-l2"><a class="reference internal" href="satori-getting-started.html#getting-your-account">Getting Your Account</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-getting-started.html#shared-hpc-clusters">Shared HPC Clusters</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-getting-started.html#logging-in-to-satori">Logging in to Satori</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-getting-started.html#the-satori-portal">The Satori Portal</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-getting-started.html#setting-up-your-environment">Setting up Your Environment</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-getting-started.html#transferring-files">Transferring Files</a><ul>
<li class="toctree-l3"><a class="reference internal" href="satori-getting-started.html#using-scp-or-rysnc">Using scp or rysnc</a></li>
<li class="toctree-l3"><a class="reference internal" href="satori-getting-started.html#satori-portal-file-explorer">Satori Portal File Explorer</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="satori-getting-started.html#types-of-jobs">Types of Jobs</a><ul>
<li class="toctree-l3"><a class="reference internal" href="satori-getting-started.html#running-interactive-jobs">Running Interactive Jobs</a></li>
<li class="toctree-l3"><a class="reference internal" href="satori-getting-started.html#running-batch-jobs">Running Batch Jobs</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="satori-training.html">Training for faster onboarding in the system HW and SW architecture</a></li>
<li class="toctree-l1 current"><a class="current reference internal" href="#">Running your AI training jobs on Satori using Slurm</a><ul>
<li class="toctree-l2"><a class="reference internal" href="#a-note-on-exclusivity">A Note on Exclusivity</a></li>
<li class="toctree-l2"><a class="reference internal" href="#interactive-jobs">Interactive Jobs</a></li>
<li class="toctree-l2"><a class="reference internal" href="#batch-scripts">Batch Scripts</a><ul>
<li class="toctree-l3"><a class="reference internal" href="#monitoring-jobs">Monitoring Jobs</a></li>
<li class="toctree-l3"><a class="reference internal" href="#canceling-jobs">Canceling Jobs</a></li>
<li class="toctree-l3"><a class="reference internal" href="#scheduling-policy">Scheduling Policy</a></li>
<li class="toctree-l3"><a class="reference internal" href="#batch-queue-policy">Batch Queue Policy</a></li>
<li class="toctree-l3"><a class="reference internal" href="#queue-policies">Queue Policies</a></li>
<li class="toctree-l3"><a class="reference internal" href="#running-jobs-in-series">Running jobs in series</a></li>
<li class="toctree-l3"><a class="reference internal" href="#note-on-pytorch-1-4">Note on Pytorch 1.4</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="satori-troubleshooting.html">Troubleshooting</a></li>
<li class="toctree-l1"><a class="reference internal" href="satori-ai-frameworks.html">IBM Watson Machine Learning Community Edition (WML-CE) and Open Cognitive Environment (Open-CE)</a><ul>
<li class="toctree-l2"><a class="reference internal" href="satori-ai-frameworks.html#install-anaconda">[1] Install Anaconda</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-ai-frameworks.html#wml-ce-and-open-ce-setting-up-the-software-repository">[2] WML-CE and Open-CE: Setting up the software repository</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-ai-frameworks.html#wml-ce-and-open-ce-creating-and-activate-conda-environments-recommended">[3] WML-CE and Open-CE: Creating and activate conda environments (recommended)</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-ai-frameworks.html#wml-ce-installing-all-frameworks-at-the-same-time">[4] WML-CE: Installing all frameworks at the same time</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-ai-frameworks.html#wml-ce-testing-ml-dl-frameworks-pytorch-tensorflow-etc-installation">[5] WML-CE: Testing ML/DL frameworks (Pytorch, TensorFlow etc) installation</a><ul>
<li class="toctree-l3"><a class="reference internal" href="satori-ai-frameworks.html#controlling-wml-ce-release-packages">Controlling WML-CE release packages</a></li>
<li class="toctree-l3"><a class="reference internal" href="satori-ai-frameworks.html#additional-conda-channels">Additional conda channels</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="satori-ai-frameworks.html#the-wml-ce-supplementary-channel-is-available-at-https-anaconda-org-powerai">The WML CE Supplementary channel is available at: https://anaconda.org/powerai/.</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-ai-frameworks.html#the-wml-ce-early-access-channel-is-available-at-https-public-dhe-ibm-com-ibmdl-export-pub-software-server-ibm-ai-conda-early-access">The WML-CE Early Access channel is available at: https://public.dhe.ibm.com/ibmdl/export/pub/software/server/ibm-ai/conda-early-access/.</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="satori-distributed-deeplearning.html">Distributed Deep Learning</a></li>
<li class="toctree-l1"><a class="reference internal" href="satori-large-model-support.html">IBM Large Model Support (LMS)</a></li>
<li class="toctree-l1"><a class="reference internal" href="satori-julia.html">Julia on Satori</a><ul>
<li class="toctree-l2"><a class="reference internal" href="satori-julia.html#getting-started">Getting started</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-julia.html#getting-help">Getting help?</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-julia.html#a-simple-batch-script-example">A simple batch script example</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-julia.html#recipe-for-running-single-gpu-single-threaded-interactive-session-with-cuda-aware-mpi">Recipe for running single GPU, single threaded interactive session with CUDA aware MPI</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-julia.html#running-a-multi-process-julia-program-somewhat-interactively">Running a multi-process julia program somewhat interactively</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-julia.html#an-example-of-installing-https-github-com-clima-climatemachine-jl-on-satori">An example of installing https://github.com/clima/climatemachine.jl on Satori</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="satori-R.html">R on Satori</a><ul>
<li class="toctree-l2"><a class="reference internal" href="satori-R.html#getting-started-with-r">Getting Started with R</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-R.html#installing-packages">Installing Packages</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-R.html#a-simple-batch-script-example">A Simple Batch Script Example</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-R.html#r-and-python">R and Python</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-R.html#running-r-in-a-container">Running R in a container</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="satori-cuda-aware-mpi.html">Using MPI and CUDA on Satori</a><ul>
<li class="toctree-l2"><a class="reference internal" href="satori-cuda-aware-mpi.html#getting-started">Getting started</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-cuda-aware-mpi.html#compiling">Compiling</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-cuda-aware-mpi.html#submiting-a-batch-script">Submiting a batch script</a><ul>
<li class="toctree-l3"><a class="reference internal" href="satori-cuda-aware-mpi.html#batch-script-header">Batch script header</a></li>
<li class="toctree-l3"><a class="reference internal" href="satori-cuda-aware-mpi.html#assigning-gpus-to-mpi-ranks">Assigning GPUs to MPI ranks</a></li>
<li class="toctree-l3"><a class="reference internal" href="satori-cuda-aware-mpi.html#running-the-mpi-program-within-the-batch-script">Running the MPI program within the batch script</a></li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="satori-cuda-aware-mpi.html#a-complete-example-slurm-batch-script">A complete example SLURM batch script</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-cuda-aware-mpi.html#using-alternate-mpi-builds">Using alternate MPI builds</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="lsf-templates/satori-lsf-ml-examples.html">Example machine learning LSF jobs</a><ul>
<li class="toctree-l2"><a class="reference internal" href="lsf-templates/satori-lsf-ml-examples.html#a-single-node-4-gpu-keras-example">A single node, 4 GPU Keras example</a></li>
<li class="toctree-l2"><a class="reference internal" href="lsf-templates/satori-lsf-ml-examples.html#a-single-node-4-gpu-caffe-example">A single node, 4 GPU Caffe example</a></li>
<li class="toctree-l2"><a class="reference internal" href="lsf-templates/satori-lsf-ml-examples.html#a-multi-node-pytorch-example">A multi-node, pytorch example</a></li>
<li class="toctree-l2"><a class="reference internal" href="lsf-templates/satori-lsf-ml-examples.html#a-multi-node-pytorch-example-with-the-horovod-conda-environment">A multi-node, pytorch example with the horovod conda environment</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="satori-howto-videos.html">Satori Howto Video Sessions</a><ul>
<li class="toctree-l2"><a class="reference internal" href="satori-howto-videos.html#installing-wmcle-on-satori">Installing WMCLE on Satori</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-howto-videos.html#pytorch-with-ddl-on-satori">Pytorch with DDL on Satori</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-howto-videos.html#tensorflow-with-ddl-on-satori">Tensorflow with DDL on Satori</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-howto-videos.html#jupyterlab-with-ssh-tunnel-on-satori">Jupyterlab with SSH Tunnel on Satori</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="satori-public-datasets.html">Satori Public Datasets</a></li>
<li class="toctree-l1"><a class="reference internal" href="singularity.html">Singularity for Satorians</a><ul>
<li class="toctree-l2"><a class="reference internal" href="singularity.html#fast-start">Fast start</a></li>
<li class="toctree-l2"><a class="reference internal" href="singularity.html#other-notes">Other notes</a></li>
<li class="toctree-l2"><a class="reference internal" href="singularity.html#interactive-allocation">Interactive Allocation:</a></li>
<li class="toctree-l2"><a class="reference internal" href="singularity.html#non-interactive-batch-mode">Non interactive / batch mode</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="satori-relion-cryoem.html">Relion Cryoem for Satorians</a><ul>
<li class="toctree-l2"><a class="reference internal" href="satori-relion-cryoem.html#prerequisites">Prerequisites</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-relion-cryoem.html#quick-start">Quick start</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-relion-cryoem.html#other-notes">Other notes</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="satori-copy-large-filesets.html">Copying larger files and large file sets</a></li>
<li class="toctree-l1"><a class="reference internal" href="satori-copy-large-filesets.html#using-mrsync">Using mrsync</a></li>
<li class="toctree-l1"><a class="reference internal" href="satori-copy-large-filesets.html#using-aspera-for-remote-file-transfer-to-satori-cluster">Using Aspera for remote file transfer to Satori cluster</a></li>
<li class="toctree-l1"><a class="reference internal" href="satori-doc-examples-contributing.html">FAQ</a><ul>
<li class="toctree-l2"><a class="reference internal" href="satori-doc-examples-contributing.html#tips-tricks-and-questions">Tips, tricks and questions</a><ul>
<li class="toctree-l3"><a class="reference internal" href="tips-and-tricks/storage/index.html">How can I see disk usage?</a></li>
<li class="toctree-l3"><a class="reference internal" href="tips-and-tricks/storage/index.html#where-should-i-put-world-or-project-shared-datasets">Where should I put world or project shared datasets?</a></li>
<li class="toctree-l3"><a class="reference internal" href="portal-howto/customization.html">How can I create custom Jupyter kernels for the Satori web portal?</a><ul>
<li class="toctree-l4"><a class="reference internal" href="portal-howto/customization.html#steps-to-create-a-kernel">Steps to create a kernel</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="tips-and-tricks/carlos-quick-start-commands/index.html">How do I set up a basic conda environment?</a></li>
<li class="toctree-l3"><a class="reference internal" href="tips-and-tricks/sys_queries/index.html">System software queries</a><ul>
<li class="toctree-l4"><a class="reference internal" href="tips-and-tricks/sys_queries/index.html#what-linux-distribution-version-am-i-running">What Linux distribution version am I running?</a></li>
<li class="toctree-l4"><a class="reference internal" href="tips-and-tricks/sys_queries/index.html#what-linux-kernel-level-am-i-running">What Linux kernel level am I running?</a></li>
<li class="toctree-l4"><a class="reference internal" href="tips-and-tricks/sys_queries/index.html#what-software-levels-are-installed-on-the-system">What software levels are installed on the system?</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="tips-and-tricks/sys_queries/index.html#system-hardware-queries">System hardware queries</a><ul>
<li class="toctree-l4"><a class="reference internal" href="tips-and-tricks/sys_queries/index.html#what-is-my-cpu-configuration">What is my CPU configuration?</a></li>
<li class="toctree-l4"><a class="reference internal" href="tips-and-tricks/sys_queries/index.html#how-much-ram-is-there-on-my-nodes">How much RAM is there on my nodes?</a></li>
<li class="toctree-l4"><a class="reference internal" href="tips-and-tricks/sys_queries/index.html#what-smt-mode-are-my-nodes-in">What SMT mode are my nodes in?</a></li>
<li class="toctree-l4"><a class="reference internal" href="tips-and-tricks/sys_queries/index.html#what-cpu-governor-is-in-effect-on-my-nodes">What CPU governor is in effect on my nodes?</a></li>
<li class="toctree-l4"><a class="reference internal" href="tips-and-tricks/sys_queries/index.html#what-are-the-logical-ids-and-uuids-for-the-gpus-on-my-nodes">What are the logical IDs and UUIDs for the GPUs on my nodes?</a></li>
<li class="toctree-l4"><a class="reference internal" href="tips-and-tricks/sys_queries/index.html#what-is-the-ibm-model-of-my-system">What is the IBM model of my system?</a></li>
<li class="toctree-l4"><a class="reference internal" href="tips-and-tricks/sys_queries/index.html#which-logical-cpus-belong-to-which-socket">Which logical CPUs belong to which socket?</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="tips-and-tricks/sys_queries/index.html#questions-about-my-jobs">Questions about my jobs</a><ul>
<li class="toctree-l4"><a class="reference internal" href="tips-and-tricks/sys_queries/index.html#how-can-i-establish-which-logical-cpu-ids-my-process-is-bound-to">How can I establish which logical CPU IDs my process is bound to?</a></li>
<li class="toctree-l4"><a class="reference internal" href="tips-and-tricks/sys_queries/index.html#can-i-see-the-output-of-my-job-before-it-completes">Can I see the output of my job before it completes?</a></li>
<li class="toctree-l4"><a class="reference internal" href="tips-and-tricks/sys_queries/index.html#i-have-a-job-waiting-in-the-queue-and-i-want-to-modify-the-options-i-had-selected">I have a job waiting in the queue, and I want to modify the options I had selected</a></li>
<li class="toctree-l4"><a class="reference internal" href="tips-and-tricks/sys_queries/index.html#i-have-submitted-my-job-several-times-but-i-get-no-output">I have submitted my job several times, but I get no output</a></li>
<li class="toctree-l4"><a class="reference internal" href="tips-and-tricks/sys_queries/index.html#how-do-i-set-a-time-limit-on-my-job">How do I set a time limit on my job?</a></li>
<li class="toctree-l4"><a class="reference internal" href="tips-and-tricks/sys_queries/index.html#can-i-make-a-jobs-startup-depend-on-the-completion-of-a-previous-one">Can I make a job’s startup depend on the completion of a previous one?</a></li>
<li class="toctree-l4"><a class="reference internal" href="tips-and-tricks/sys_queries/index.html#how-do-i-select-a-specific-set-of-hosts-for-my-job">How do I select a specific set of hosts for my job?</a></li>
<li class="toctree-l4"><a class="reference internal" href="tips-and-tricks/sys_queries/index.html#how-do-i-deselect-specific-nodes-for-my-job">How do I deselect specific nodes for my job?</a></li>
<li class="toctree-l4"><a class="reference internal" href="tips-and-tricks/sys_queries/index.html#my-jobs-runtime-environment-is-different-from-what-i-expected">My job’s runtime environment is different from what I expected</a></li>
<li class="toctree-l4"><a class="reference internal" href="tips-and-tricks/sys_queries/index.html#i-want-to-know-precisely-what-my-jobs-runtime-environment-is">I want to know precisely what my job’s runtime environment is</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="tips-and-tricks/ondemand_portal_queries/index.html">Portal queries</a><ul>
<li class="toctree-l4"><a class="reference internal" href="tips-and-tricks/ondemand_portal_queries/index.html#i-see-no-active-sessions-in-my-interactive-sessions">I see no active sessions in My Interactive Sessions?</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="tips-and-tricks/singularity-tips/index.html">How do I build a Singularity image from scratch?</a><ul>
<li class="toctree-l4"><a class="reference internal" href="tips-and-tricks/singularity-tips/index.html#set-up-to-run-docker-in-ppc64le-mode-on-an-x86-machine">Set up to run Docker in ppc64le mode on an x86 machine</a></li>
<li class="toctree-l4"><a class="reference internal" href="tips-and-tricks/singularity-tips/index.html#run-docker-in-ppc64le-mode-on-an-x86-machine-to-generate-an-image-for-satori">Run Docker in ppc64le mode on an x86 machine to generate an image for Satori</a></li>
<li class="toctree-l4"><a class="reference internal" href="tips-and-tricks/singularity-tips/index.html#import-new-docker-hub-image-into-singularity-on-satori">Import new Docker hub image into Singularity on Satori</a></li>
<li class="toctree-l4"><a class="reference internal" href="tips-and-tricks/singularity-tips/index.html#using-singularity-instead-of-docker">Using Singularity instead of Docker</a></li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="satori-tutorial-examples.html">Green Up Hackathon IAP 2020</a><ul>
<li class="toctree-l2"><a class="reference internal" href="tutorial-examples/index.html">Tutorial Examples</a><ul>
<li class="toctree-l3"><a class="reference internal" href="tutorial-examples/pytorch-style-transfer/index.html">Pytorch Style Transfer</a><ul>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/pytorch-style-transfer/index.html#description">Description</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/pytorch-style-transfer/index.html#commands-to-run-this-example">Commands to run this example</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/pytorch-style-transfer/index.html#code-and-input-data-repositories-for-this-example">Code and input data repositories for this example</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/pytorch-style-transfer/index.html#useful-references">Useful references</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="tutorial-examples/neural-network-dna-demo/index.html">Neural network DNA</a><ul>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/neural-network-dna-demo/index.html#description">Description</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/neural-network-dna-demo/index.html#commands-to-run-this-example">Commands to run this example</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/neural-network-dna-demo/index.html#code-and-input-data-repositories-for-this-example">Code and input data repositories for this example</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/neural-network-dna-demo/index.html#useful-references">Useful references</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="tutorial-examples/transfer-learning-pathology/index.html">Pathology Image Classification Transfer Learning</a><ul>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/transfer-learning-pathology/index.html#description">Description</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/transfer-learning-pathology/index.html#commands-to-run-this-example">Commands to run this example</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/transfer-learning-pathology/index.html#code-and-input-data-repositories-for-this-example">Code and input data repositories for this example</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/transfer-learning-pathology/index.html#useful-references">Useful references</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="tutorial-examples/tensorflow-2.x-multi-gpu-multi-node/index.html">Multi Node Multi GPU TensorFlow 2.0 Distributed Training Example</a><ul>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/tensorflow-2.x-multi-gpu-multi-node/index.html#description">Description</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/tensorflow-2.x-multi-gpu-multi-node/index.html#prerequisites-if-you-are-not-yet-running-tensorflow-2-0">Prerequisites if you are not yet running TensorFlow 2.0</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/tensorflow-2.x-multi-gpu-multi-node/index.html#commands-to-run-this-example">Commands to run this example</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/tensorflow-2.x-multi-gpu-multi-node/index.html#what-s-going-on-here">What’s going on here?</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/tensorflow-2.x-multi-gpu-multi-node/index.html#code-and-input-data-repositories-for-this-example">Code and input data repositories for this example</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/tensorflow-2.x-multi-gpu-multi-node/index.html#useful-references">Useful references</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="tutorial-examples/eric-fiala-wmlce-notebooks-master/index.html">WMLCE demonstration notebooks</a><ul>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/eric-fiala-wmlce-notebooks-master/index.html#description">Description</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/eric-fiala-wmlce-notebooks-master/index.html#commands-to-run-this-example">Commands to run this example</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/eric-fiala-wmlce-notebooks-master/index.html#code-and-input-data-repositories-for-this-example">Code and input data repositories for this example</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/eric-fiala-wmlce-notebooks-master/index.html#useful-references">Useful references</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="tutorial-examples/unsupervised-learning-on-ocean-ecosystem-model/index.html">Finding clusters in high-dimensional data using tSNE and DB-SCAN</a><ul>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/unsupervised-learning-on-ocean-ecosystem-model/index.html#description">Description</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/unsupervised-learning-on-ocean-ecosystem-model/index.html#commands-to-run-this-example">Commands to run this example</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/unsupervised-learning-on-ocean-ecosystem-model/index.html#code-and-input-data-repositories-for-this-example">Code and input data repositories for this example</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/unsupervised-learning-on-ocean-ecosystem-model/index.html#useful-references">Useful references</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="tutorial-examples/biggan-pytorch/index.html">BigGAN-PyTorch</a><ul>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/biggan-pytorch/index.html#description">Description</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/biggan-pytorch/index.html#commands-to-run-this-example">Commands to run this example</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/biggan-pytorch/index.html#code-and-input-data-repositories-for-this-example">Code and input data repositories for this example</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/biggan-pytorch/index.html#useful-references">Useful references</a></li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l2"><a class="reference internal" href="tutorial-examples/index.html#measuring-resource-use">Measuring Resource Use</a><ul>
<li class="toctree-l3"><a class="reference internal" href="tutorial-examples/energy-profiling/index.html">Intergrated energy use profiling</a><ul>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/energy-profiling/index.html#description">Description</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/energy-profiling/index.html#commands-to-run-this-example">Commands to run this example</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/energy-profiling/index.html#code-and-input-data-repositories-for-this-example">Code and input data repositories for this example</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/energy-profiling/index.html#useful-references">Useful references</a></li>
</ul>
</li>
<li class="toctree-l3"><a class="reference internal" href="tutorial-examples/nvprof-profiling/index.html">Profiling code with nvprof</a><ul>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/nvprof-profiling/index.html#description">Description</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/nvprof-profiling/index.html#commands-to-run-the-examples">Commands to run the examples</a></li>
<li class="toctree-l4"><a class="reference internal" href="tutorial-examples/nvprof-profiling/index.html#useful-references">Useful references</a></li>
</ul>
</li>
</ul>
</li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="satori-getting-help.html">Getting help on Satori</a><ul>
<li class="toctree-l2"><a class="reference internal" href="satori-getting-help.html#email-help">Email help</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-getting-help.html#slack">Slack</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-getting-help.html#slack-or-satori-support-techsquare-com">Slack or satori-support@techsquare.com</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-getting-help.html#satori-office-hours">Satori Office Hours</a></li>
<li class="toctree-l2"><a class="reference internal" href="satori-getting-help.html#tips-and-tricks">Tips and Tricks</a></li>
</ul>
</li>
<li class="toctree-l1"><a class="reference internal" href="ause-coc.html">Acceptable Use and Code of Conduct</a><ul>
<li class="toctree-l2"><a class="reference internal" href="ause-coc.html#acceptable-use-guidelines">Acceptable Use Guidelines</a></li>
<li class="toctree-l2"><a class="reference internal" href="ause-coc.html#code-of-conduct">Code of Conduct</a></li>
</ul>
</li>
</ul>
</div>
</div>
</nav>
<section data-toggle="wy-nav-shift" class="wy-nav-content-wrap">
<nav class="wy-nav-top" aria-label="top navigation">
<i data-toggle="wy-nav-top" class="fa fa-bars"></i>
<a href="index.html">MIT Satori User Documentation</a>
</nav>
<div class="wy-nav-content">
<div class="rst-content style-external-links">
<div role="navigation" aria-label="breadcrumbs navigation">
<ul class="wy-breadcrumbs">
<li><a href="index.html" class="icon icon-home"></a> »</li>
<li>Running your AI training jobs on Satori using Slurm</li>
<li class="wy-breadcrumbs-aside">
<a href="https://github.com/mit-satori/getting-started/blob/master/satori-workload-manager-using-slurm.rst" class="fa fa-github"> Edit on GitHub</a>
</li>
</ul>
<hr/>
</div>
<div role="main" class="document" itemscope="itemscope" itemtype="http://schema.org/Article">
<div itemprop="articleBody">
<div class="section" id="running-your-ai-training-jobs-on-satori-using-slurm">
<h1>Running your AI training jobs on Satori using Slurm<a class="headerlink" href="#running-your-ai-training-jobs-on-satori-using-slurm" title="Permalink to this headline">¶</a></h1>
<p>Computational work on Satori is quickly being migrated to use the Slurm workload manager. A typical job consists of several
components:</p>
<ul class="simple">
<li>A submission script</li>
<li>An executable file (e.g. a Python script or a compiled C/C++ program)</li>
<li>Training data needed by the ML/DL script</li>
<li>Output files created by the training/inference job</li>
</ul>
<p>There are two types of jobs:</p>
<ul class="simple">
<li>interactive / online</li>
<li>batch</li>
</ul>
<p>In general, the process for running a batch job is to:</p>
<ul class="simple">
<li>Prepare executables and input files</li>
<li>Modify provided SLURM job template for the batch script or write a new
one</li>
<li>Submit the batch script to the Workload Manager</li>
<li>Monitor the job’s progress before and during execution</li>
</ul>
<div class="section" id="a-note-on-exclusivity">
<h2>A Note on Exclusivity<a class="headerlink" href="#a-note-on-exclusivity" title="Permalink to this headline">¶</a></h2>
<p>To make the best use of Satori’s GPU resources, default job submissions are not
exclusive. That means that unless you ask otherwise, the GPUs on the node(s)
you are assigned may already be in use by another user: if you
request a node with 2 GPUs, the 2 other GPUs on that node may be engaged by
another job. This allows us to allocate all of the GPU
resources more efficiently, but it may require some additional checking to make
sure you have sole use of all of the GPUs on a machine. If you’re in doubt, you can request
the node to be ‘exclusive’. See below on how to request exclusive access in
both interactive and batch situations.</p>
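<p>If you land on a shared node and want to check whether the GPUs you plan to use are already busy, one quick check (a minimal sketch, assuming <code class="docutils literal notranslate"><span class="pre">nvidia-smi</span></code> is available on the compute node) is:</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span><span class="c1"># Show per-GPU utilization and memory use on the node you were assigned</span>
nvidia-smi --query-gpu=index,utilization.gpu,memory.used --format=csv
</pre></div>
</div>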
</div>
<div class="section" id="interactive-jobs">
<h2>Interactive Jobs<a class="headerlink" href="#interactive-jobs" title="Permalink to this headline">¶</a></h2>
<p>Most users will find batch jobs to be the easiest way to interact with
the system, since they permit you to hand off a job to the scheduler and
then work on other tasks; however, it is sometimes preferable to run
interactively on the system. This is especially true when developing,
modifying, or debugging code.</p>
<p>Since all compute resources are managed/scheduled by SLURM, it is not
possible to simply log into the system and begin running a parallel code
interactively. You must request the appropriate resources from the
system and, if necessary, wait until they are available. This is done
with an “interactive batch” job. Interactive batch jobs are submitted
via the command line, which supports the same options that are passed
via <strong>#SBATCH</strong> parameters in a batch script. The final options on the command
line are what make the job “interactive batch”: <code class="docutils literal notranslate"><span class="pre">--pty</span></code> followed by a shell
name. For example, to request an interactive batch job (with bash as the
shell), you would use the
command:</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>srun --gres<span class="o">=</span>gpu:4 -N <span class="m">1</span> --mem<span class="o">=</span>1T --time <span class="m">1</span>:00:00 -I --pty /bin/bash
</pre></div>
</div>
<p>This will request an AC922 node with 4 GPUs from Satori’s normal
queue for 1 hour.</p>
<p>If you need to make sure no one else can allocate the unused GPUs on the machine, you can use:</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>srun --gres<span class="o">=</span>gpu:4 -N <span class="m">1</span> --exclusive --mem<span class="o">=</span>1T --time <span class="m">1</span>:00:00 -I --pty /bin/bash
</pre></div>
</div>
<p>This will request exclusive use of an interactive node with 4 GPUs.</p>
</div>
<div class="section" id="batch-scripts">
<h2>Batch Scripts<a class="headerlink" href="#batch-scripts" title="Permalink to this headline">¶</a></h2>
<p>The most common way to interact with the batch system is via batch jobs.
A batch job is simply a shell script with added directives to request
various resources from or provide certain information to the batch
scheduling system. Aside from the lines containing SLURM options, the
batch script is simply the series of commands needed to set up and run your
AI job.</p>
<p>To submit a batch script, use the sbatch command:</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>sbatch myjob.slurm
</pre></div>
</div>
<p>As an example, consider the following batch script for 4x V100 GPUs
(single AC922 node):</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span><span class="ch">#!/bin/bash</span>
<span class="c1">#SBATCH -J myjob_4GPUs</span>
<span class="c1">#SBATCH -o myjob_4GPUs_%j.out</span>
<span class="c1">#SBATCH -e myjob_4GPUs_%j.err</span>
<span class="c1">#SBATCH [email protected]</span>
<span class="c1">#SBATCH --mail-type=ALL</span>
<span class="c1">#SBATCH --gres=gpu:4</span>
<span class="c1">#SBATCH --gpus-per-node=4</span>
<span class="c1">#SBATCH --nodes=1</span>
<span class="c1">#SBATCH --ntasks-per-node=4</span>
<span class="c1">#SBATCH --mem=0</span>
<span class="c1">#SBATCH --time=24:00:00</span>
<span class="c1">#SBATCH --exclusive</span>
<span class="c1">## User python environment</span>
<span class="nv">HOME2</span><span class="o">=</span>/nobackup/users/<span class="k">$(</span>whoami<span class="k">)</span>
<span class="nv">PYTHON_VIRTUAL_ENVIRONMENT</span><span class="o">=</span>wmlce-1.7.0
<span class="nv">CONDA_ROOT</span><span class="o">=</span><span class="nv">$HOME2</span>/anaconda3
<span class="c1">## Activate WMLCE virtual environment</span>
<span class="nb">source</span> <span class="si">${</span><span class="nv">CONDA_ROOT</span><span class="si">}</span>/etc/profile.d/conda.sh
conda activate <span class="nv">$PYTHON_VIRTUAL_ENVIRONMENT</span>
<span class="nb">ulimit</span> -s unlimited
<span class="c1">## Creating SLURM nodes list</span>
<span class="nb">export</span> <span class="nv">NODELIST</span><span class="o">=</span>nodelist.$
srun -l bash -c <span class="s1">'hostname'</span> <span class="p">|</span> sort -k <span class="m">2</span> -u <span class="p">|</span> awk -vORS<span class="o">=</span>, <span class="s1">'{print $2":4"}'</span> <span class="p">|</span> sed <span class="s1">'s/,$//'</span> > <span class="nv">$NODELIST</span>
<span class="c1">## Number of total processes</span>
<span class="nb">echo</span> <span class="s2">" "</span>
<span class="nb">echo</span> <span class="s2">" Nodelist:= "</span> <span class="nv">$SLURM_JOB_NODELIST</span>
<span class="nb">echo</span> <span class="s2">" Number of nodes:= "</span> <span class="nv">$SLURM_JOB_NUM_NODES</span>
<span class="nb">echo</span> <span class="s2">" GPUs per node:= "</span> <span class="nv">$SLURM_JOB_GPUS</span>
<span class="nb">echo</span> <span class="s2">" Ntasks per node:= "</span> <span class="nv">$SLURM_NTASKS_PER_NODE</span>
<span class="c1">#### Use MPI for communication with Horovod - this can be hard-coded during installation as well.</span>
<span class="nb">export</span> <span class="nv">HOROVOD_GPU_ALLREDUCE</span><span class="o">=</span>MPI
<span class="nb">export</span> <span class="nv">HOROVOD_GPU_ALLGATHER</span><span class="o">=</span>MPI
<span class="nb">export</span> <span class="nv">HOROVOD_GPU_BROADCAST</span><span class="o">=</span>MPI
<span class="nb">export</span> <span class="nv">NCCL_DEBUG</span><span class="o">=</span>DEBUG
<span class="nb">echo</span> <span class="s2">" Running on multiple nodes/GPU devices"</span>
<span class="nb">echo</span> <span class="s2">""</span>
<span class="nb">echo</span> <span class="s2">" Run started at:- "</span>
date
<span class="c1">## Horovod execution</span>
horovodrun -np <span class="nv">$SLURM_NTASKS</span> -H <span class="sb">`</span>cat <span class="nv">$NODELIST</span><span class="sb">`</span> python /data/ImageNet/pytorch_mnist.py
<span class="nb">echo</span> <span class="s2">"Run completed at:- "</span>
date
</pre></div>
</div>
<p>In the above template you can change:</p>
<ul class="simple">
<li>lines 2-4: change to your desired job name, but remember to keep _out for the
job output file and _err for the file with the related job errors</li>
<li>line 7: <code class="docutils literal notranslate"><span class="pre">--gres=gpu:4</span></code> here you specify the number of GPUs you need <strong>per node</strong>, <em>e.g.</em> a value of 1 means you want only
1 GPU on each node, a value of 4 means you want all GPUs on the node.</li>
<li>line 9: <code class="docutils literal notranslate"><span class="pre">--nodes=1</span></code> here you put how many nodes you need, <em>e.g.</em> a value of 1 means 1 node, a value of 2 means 2 nodes,
etc. <strong>Note:</strong> the total number of GPUs is the product of the <code class="docutils literal notranslate"><span class="pre">--gres</span></code> and the <code class="docutils literal notranslate"><span class="pre">--nodes</span></code> settings, <em>e.g.</em> a value of
<code class="docutils literal notranslate"><span class="pre">--gres=gpu:4</span></code> and <code class="docutils literal notranslate"><span class="pre">--nodes=2</span></code> gives 4 x 2 = 8 GPUs in total.</li>
<li>line 12: <code class="docutils literal notranslate"><span class="pre">--time=24:00:00</span></code> indicates the maximum run time you wish to allow. <strong>Note:</strong> if this number is larger than the
runtime limit of the queue you are in, your job will be terminated at the queue limit. <strong>It is good practice to make use
of checkpointing in order not to lose your work if your job is terminated.</strong></li>
<li>line 13: <code class="docutils literal notranslate"><span class="pre">--exclusive</span></code> means that you want full use of the GPUs on the nodes you are reserving. Leaving this out allows
the GPU resources you’re not using on the node to be shared.</li>
<li>line 17: change to the conda virtual environment defined when you installed WMLCE (or another conda environment)</li>
<li>line 49: change as needed for what you want to run and from where. <strong>Note:</strong> while horovod isn’t strictly needed for
single node runs, we recommend it in case you need to expand to more nodes.</li>
<li>The environment variables below can be used to change Horovod communication from MPI to NCCL2. In the MPI case, allgather allocates an output tensor that is proportional to the number of processes participating in the training. If you find yourself running out of GPU memory, you can force allgather to happen on the CPU by passing device_sparse=’/cpu:0’ to hvd.DistributedOptimizer.</li>
</ul>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span><span class="nb">export</span> <span class="nv">HOROVOD_GPU_ALLREDUCE</span><span class="o">=</span>MPI
<span class="nb">export</span> <span class="nv">HOROVOD_GPU_ALLGATHER</span><span class="o">=</span>MPI
<span class="nb">export</span> <span class="nv">HOROVOD_GPU_BROADCAST</span><span class="o">=</span>MPI
</pre></div>
</div>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span><span class="nb">export</span> <span class="nv">HOROVOD_GPU_ALLREDUCE</span><span class="o">=</span>NCCL
<span class="nb">export</span> <span class="nv">HOROVOD_GPU_BROADCAST</span><span class="o">=</span>NCLL
</pre></div>
</div>
<p><strong>Note:</strong> you may need to install Horovod by activating your conda environment and installing it there, <em>e.g.</em></p>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>conda active wlmce-1.6.3. <or whatever your conda environment is called>
conda install horovod
</pre></div>
</div>
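<p>To confirm what the resulting Horovod build actually supports, you can run its built-in check (a minimal sketch, assuming the conda environment above is active):</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span><span class="c1"># List the frameworks, controllers and tensor operations this Horovod build supports</span>
horovodrun --check-build
</pre></div>
</div>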
<p>For your convenience, additional SLURM batch job templates have been created covering distributed deep learning training across the Satori cluster. You can find them <a class="reference external" href="https://github.com/mit-satori/getting-started/tree/master/slurm-templates" target="_blank">here</a>.</p>
<div class="section" id="monitoring-jobs">
<h3>Monitoring Jobs<a class="headerlink" href="#monitoring-jobs" title="Permalink to this headline">¶</a></h3>
<p>SLURM provides several utilities with which you can monitor jobs. These
include monitoring the queue, getting details about a particular job,
viewing STDOUT/STDERR of running jobs, and more.</p>
<p>The most straightforward monitoring is the command:</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>squeue
</pre></div>
</div>
<p>This command will show the current queue, including both pending and running
jobs.</p>
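<p>A couple of common variations (a minimal sketch using standard Slurm options; the job number is a placeholder):</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span><span class="c1"># Show only your own jobs</span>
squeue -u $(whoami)
<span class="c1"># Show full details for a single job (replace 12345 with your job number)</span>
scontrol show job 12345
</pre></div>
</div>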
</div>
<div class="section" id="canceling-jobs">
<h3>Canceling Jobs<a class="headerlink" href="#canceling-jobs" title="Permalink to this headline">¶</a></h3>
<p>SLURM allows you to kill jobs you’ve already submitted by using the command:</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>scancel <jobnumber>
</pre></div>
</div>
<p>where &lt;jobnumber&gt; is the Slurm job number shown when you submit the job, or found by running squeue.</p>
</div>
<div class="section" id="scheduling-policy">
<h3>Scheduling Policy<a class="headerlink" href="#scheduling-policy" title="Permalink to this headline">¶</a></h3>
<p>In a simple batch queue system, jobs run in a first-in, first-out (FIFO)
order. This often does not make effective use of the system. A large job
may be next in line to run. If the system is using a strict FIFO queue,
many processors sit idle while the large job waits to run. Backfilling
would allow smaller, shorter jobs to use those otherwise idle resources,
and with the proper algorithm, the start time of the large job would not
be delayed. While this does make more effective use of the system, it
indirectly encourages the submission of smaller jobs.</p>
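<p>If you want to see when the scheduler currently expects your pending jobs to start (an estimate that backfilling can change), Slurm can report it:</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span><span class="c1"># Show the scheduler&#39;s estimated start times for your pending jobs</span>
squeue -u $(whoami) --start
</pre></div>
</div>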
</div>
<div class="section" id="batch-queue-policy">
<h3>Batch Queue Policy<a class="headerlink" href="#batch-queue-policy" title="Permalink to this headline">¶</a></h3>
<p>New users are granted access to a default batch queue. It
enforces the following policy: a limit of one (1) executing job per
user, with a maximum wall time of 12 hours. Jobs in excess of the per
user limit will be placed into a
queued state, and will change to eligible-to-run at the appropriate time.</p>
</div>
<div class="section" id="queue-policies">
<h3>Queue Policies<a class="headerlink" href="#queue-policies" title="Permalink to this headline">¶</a></h3>
<p>Account holders who are comfortable with the basic practices of using the
system productively (understanding basic Linux commands, learning interactive
and batch scheduling techniques, developing basic strategies for managing large
numbers of files, etc.) are able to access higher level queues on the system.
These can be useful for urgent time constraints such as paper deadlines and for
more involved workflows. To request access to the priority queue, first make
sure you are comfortable with the technical and social norms of using a shared
system. Then please email <a class="reference external" href="mailto:support-satori%40techsquare.com" target="_blank">support-satori<span>@</span>techsquare<span>.</span>com</a> and indicate that you
would like to access higher level queue features.</p>
<p>A set of higher level queues has initially been set up in two configurations: 1
node with 4 GPUs and 2 nodes with 8 GPUs. Job run length will be capped at 24
hours, so please use checkpointing. There will be a limit of 2 parallel jobs per
user running during peak times. If these queue settings do not meet your project
goals, please email <a class="reference external" href="mailto:support-satori%40techsquare.com" target="_blank">support-satori<span>@</span>techsquare<span>.</span>com</a> with your needed requirements
and we will consider them.</p>
</div>
<div class="section" id="running-jobs-in-series">
<h3>Running jobs in series<a class="headerlink" href="#running-jobs-in-series" title="Permalink to this headline">¶</a></h3>
<p>Slurm provides numerous mechanisms for chaining jobs together to run unattended in sequence. A simple example of this sort
of job is shown below:</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span><span class="ch">#!/bin/bash</span>
<span class="nv">MYSCRIPT</span><span class="o">=</span>/home/<span class="si">${</span><span class="nv">USER</span><span class="si">}</span>/foo.slurm
<span class="nv">MYSUBDIR</span><span class="o">=</span>/home/<span class="si">${</span><span class="nv">USER</span><span class="si">}</span>
<span class="nv">JID</span><span class="o">=</span><span class="si">${</span><span class="nv">SLURM_JOB_ID</span><span class="si">}</span>
ssh service0001 <span class="s2">"cd </span><span class="nv">$MYSUBDIR</span><span class="s2">; pwd; sbatch --dependency=afterok:</span><span class="si">${</span><span class="nv">JID</span><span class="si">}</span><span class="s2"> </span><span class="si">${</span><span class="nv">MYSCRIPT</span><span class="si">}</span><span class="s2">"</span>
sleep <span class="m">60</span>
</pre></div>
</div>
<p>submitting this job, for example, as</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>sbatch --gres<span class="o">=</span>gpu:4 -N <span class="m">1</span> --exclusive --mem<span class="o">=</span>1T --time <span class="m">1</span>:00:00 foo.slurm
</pre></div>
</div>
<p>will create a series of jobs that run one after another. Together with checkpointing, this sort of
approach can be used to run extended workloads in a largely automated manner. The Slurm <a class="reference external" href="https://slurm.schedmd.com/documentation.html" target="_blank">documentation</a> describes many
features for managing sequences of jobs. Some more involved examples can be found at the <a class="reference external" href="https://hpc.nih.gov/docs/job_dependencies.html" target="_blank">NIH Biowulf</a> site. Fully automating workflows can be a little
fiddly and time consuming to get going, but once it is in place you no longer have to get up in the
middle of the night to check on every computational experiment.</p>
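<p>The same dependency mechanism can also be used directly from the command line, without a self-resubmitting script (a minimal sketch; <code class="docutils literal notranslate"><span class="pre">first.slurm</span></code> and <code class="docutils literal notranslate"><span class="pre">second.slurm</span></code> are placeholder script names):</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span><span class="c1"># Submit the first job and capture its job number</span>
JID=$(sbatch --parsable first.slurm)
<span class="c1"># Queue a second job that starts only if the first one completes successfully</span>
sbatch --dependency=afterok:${JID} second.slurm
</pre></div>
</div>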
</div>
<div class="section" id="note-on-pytorch-1-4">
<h3>Note on Pytorch 1.4<a class="headerlink" href="#note-on-pytorch-1-4" title="Permalink to this headline">¶</a></h3>
<p><strong>Note:</strong> we have recently updated the CUDA drivers on the part of Satori running Slurm. You can install Pytorch 1.4 for use with Slurm using these commands:</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span>conda config --prepend channels https://public.dhe.ibm.com/ibmdl/export/pub/software/server/ibm-ai/conda-early-access/
conda create -n wmlce-ea <span class="nv">python</span><span class="o">=</span><span class="m">3</span>.7
conda activate wmlce-ea
conda install <span class="nv">pytorch</span><span class="o">=</span><span class="m">1</span>.4.0<span class="o">=</span><span class="m">23447</span>.g18a1a27
</pre></div>
</div>
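<p>Once the environment is active, a quick way to verify the install (a minimal check; it assumes you are on a node with a visible GPU):</p>
<div class="highlight-bash notranslate"><div class="highlight"><pre><span></span><span class="c1"># Print the Pytorch version and whether it can see a GPU</span>
python -c <span class="s2">"import torch; print(torch.__version__, torch.cuda.is_available())"</span>
</pre></div>
</div>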
</div>
</div>
</div>
</div>
</div>
<footer>
<div class="rst-footer-buttons" role="navigation" aria-label="footer navigation">
<a href="satori-troubleshooting.html" class="btn btn-neutral float-right" title="Troubleshooting" accesskey="n" rel="next">Next <span class="fa fa-arrow-circle-right" aria-hidden="true"></span></a>
<a href="satori-training.html" class="btn btn-neutral float-left" title="Training for faster onboarding in the system HW and SW architecture" accesskey="p" rel="prev"><span class="fa fa-arrow-circle-left" aria-hidden="true"></span> Previous</a>
</div>
<hr/>
<div role="contentinfo">
<p>
© Copyright 2021, MIT Satori Project.
</p>
</div>
Built with <a href="https://www.sphinx-doc.org/">Sphinx</a> using a
<a href="https://github.com/readthedocs/sphinx_rtd_theme">theme</a>
provided by <a href="https://readthedocs.org">Read the Docs</a>.
</footer>
</div>
</div>
</section>
</div>
<script type="text/javascript">
jQuery(function () {
SphinxRtdTheme.Navigation.enable(true);
});
</script>
</body>
</html>