cff-version: 1.2.0
title: Efficient Distributed GPU Programming for Exascale
message: >-
  If you use this software, please cite it using the
  metadata from this file.
authors:
  - given-names: Andreas
    family-names: Herten
    email: [email protected]
    affiliation: Jülich Supercomputing Centre
    orcid: 'https://orcid.org/0000-0002-7150-2505'
  - given-names: Lena
    family-names: Oden
    email: [email protected]
    affiliation: FernUni Hagen
    orcid: 'https://orcid.org/0000-0002-9670-5296'
  - given-names: Simon
    family-names: Garcia de Gonzalo
    email: [email protected]
    affiliation: Sandia National Laboratories
    orcid: 'https://orcid.org/0000-0002-5699-1793'
  - given-names: Jiri
    family-names: Kraus
    email: [email protected]
    affiliation: NVIDIA
    orcid: 'https://orcid.org/0000-0002-5240-3317'
  - given-names: Markus
    family-names: Hrywniak
    email: [email protected]
    affiliation: NVIDIA
    orcid: 'https://orcid.org/0000-0002-6015-8788'
identifiers:
  - type: doi
    value: 10.5281/zenodo.5745504
    description: Year-agnostic Zenodo Identifier
repository-code: 'https://github.com/FZJ-JSC/tutorial-multi-gpu/'
abstract: >-
  Over the past decade, GPUs have become ubiquitous in HPC installations around
  the world, delivering the majority of the performance of some of the largest
  supercomputers (e.g. Summit, Sierra, JUWELS Booster). This trend continues in
  the recently deployed and upcoming Pre-Exascale and Exascale systems (JUPITER,
  LUMI, Leonardo; El Capitan, Frontier, Aurora): GPUs have been chosen as the
  core computing devices to enter this next era of HPC.

  To take advantage of future GPU-accelerated systems with tens of thousands of
  devices, application developers need the proper skills and tools to
  understand, manage, and optimize distributed GPU applications.

  In this tutorial, participants will learn techniques to efficiently program
  large-scale multi-GPU systems. Programming multiple GPUs with MPI is explained
  in detail, and advanced tuning techniques as well as complementary programming
  models like NCCL and NVSHMEM are presented. Tools for analysis are shown and
  used to motivate and implement performance optimizations. The tutorial teaches
  fundamental concepts that apply to GPU-accelerated systems in general, taking
  the NVIDIA platform as an example. It is a combination of lectures and
  hands-on exercises, using JEDI, a development system for JUPITER, for
  interactive learning and discovery.
keywords:
  - NVIDIA
  - GPU
  - CUDA
  - Exascale
  - MPI
  - NCCL
  - NVSHMEM
  - Distributed Programming
license: MIT
version: '7.0-sc24'
date-released: '2024-11-17'