<?xml version="1.0" encoding="utf-8" standalone="yes" ?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
<channel>
<title>Forrest Glines</title>
<link>https://forrestglines.github.io/</link>
<atom:link href="https://forrestglines.github.io/index.xml" rel="self" type="application/rss+xml" />
<description>Forrest Glines</description>
<generator>Wowchemy (https://wowchemy.com)</generator>
<language>en-us</language>
<lastBuildDate>Sat, 01 Jun 2030 13:00:00 +0000</lastBuildDate>
<image>
<url>https://forrestglines.github.io/media/icon_hu0b7a4cb9992c9ac0e91bd28ffd38dd00_9727_512x512_fill_lanczos_center_3.png</url>
<title>Forrest Glines</title>
<link>https://forrestglines.github.io/</link>
</image>
<item>
<title>Example Talk</title>
<link>https://forrestglines.github.io/talk/example-talk/</link>
<pubDate>Sat, 01 Jun 2030 13:00:00 +0000</pubDate>
<guid>https://forrestglines.github.io/talk/example-talk/</guid>
<description><div class="alert alert-note">
<div>
Click on the <strong>Slides</strong> button above to view the built-in slides feature.
</div>
</div>
<p>Slides can be added in a few ways:</p>
<ul>
<li><strong>Create</strong> slides using Wowchemy&rsquo;s <a href="https://wowchemy.com/docs/managing-content/#create-slides" target="_blank" rel="noopener"><em>Slides</em></a> feature and link using <code>slides</code> parameter in the front matter of the talk file</li>
<li><strong>Upload</strong> an existing slide deck to <code>static/</code> and link using <code>url_slides</code> parameter in the front matter of the talk file</li>
<li><strong>Embed</strong> your slides (e.g. Google Slides) or presentation video on this page using <a href="https://wowchemy.com/docs/writing-markdown-latex/" target="_blank" rel="noopener">shortcodes</a>.</li>
</ul>
<p>Further event details, including <a href="https://wowchemy.com/docs/writing-markdown-latex/" target="_blank" rel="noopener">page elements</a> such as image galleries, can be added to the body of this page.</p>
</description>
</item>
<item>
<title>Exascale simulations of magnetized AGN jets on Frontier</title>
<link>https://forrestglines.github.io/project/incite_2023/</link>
<pubDate>Sat, 15 Jul 2023 00:00:00 +0000</pubDate>
<guid>https://forrestglines.github.io/project/incite_2023/</guid>
<description><p>A group of collaborators and I were awarded an <a href="https://www.anl.gov/article/incite-program-awards-supercomputing-time-to-56-projects-to-accelerate-science-and-engineering" target="_blank" rel="noopener">INCITE 2023
Award</a>
to perform exascale simulations of galaxy clusters with magnetized jets powered
by a central active galactic nucleus (AGN). Galaxy clusters, as the largest
gravitationally bound structures, provide a unique probe of large scale
structure in the universe. The magnetized AGN jets play a key role in the
dynamics of baryonic matter in galaxy clusters and thus the observational
signatures of clusters.</p>
<p>These simulations are enabled by
<a href="https://github.com/parthenon-hpc-lab/athenapk" target="_blank" rel="noopener">AthenaPK</a>, our exascale capable
performance portable astrophysics code. I am currently working on optimizing
these AthenaPK simulations on the exascale supercomputer Frontier.</p>
<p>Read MSU&rsquo;s reporting on the award <a href="https://www.egr.msu.edu/news/2023/03/22/msu-led-research-team-study-galaxies-never" target="_blank" rel="noopener">here</a>.</p>
</description>
</item>
<item>
<title>AthenaPK and Parthenon</title>
<link>https://forrestglines.github.io/project/athenapk_and_parthenon/</link>
<pubDate>Tue, 15 Jun 2021 00:00:00 +0000</pubDate>
<guid>https://forrestglines.github.io/project/athenapk_and_parthenon/</guid>
<description><p>AthenaPK is an in-development performance-portable conversion of Athena++ built
on the Parthenon adaptive mesh refinement (AMR) framework using the Kokkos
performance portability library. I am one of the main developers for AthenaPK
and a co-developer for Parthenon. The Parthenon framework is designed to be
massively scalable and efficient on both CPUs and GPUs, enabling
next-generation AMR simulations on a variety of hardware architectures. Kernels
and data are managed by Kokkos, which enables high performance on any
architecture supported by Kokkos, including CPUs, NVIDIA and AMD GPUs, and
future Intel GPUs. AthenaPK uses the robust solvers from Athena++ within the
Parthenon framework to enable future exascale astrophysical simulations.</p>
<p>Parthenon is publicly available on <a href="https://github.com/lanl/parthenon" target="_blank" rel="noopener">github</a>.</p>
<p>AthenaPK is publicly available on <a href="https://gitlab.com/theias/hpc/jmstone/athena-parthenon/athenapk" target="_blank" rel="noopener">gitlab</a>.</p>
</description>
</item>
<item>
<title>K-Athena</title>
<link>https://forrestglines.github.io/project/kathena/</link>
<pubDate>Tue, 15 Jun 2021 00:00:00 +0000</pubDate>
<guid>https://forrestglines.github.io/project/kathena/</guid>
<description><p>K-Athena is a partial conversion of Athena++, using Kokkos for performance
portability, meaning that it runs efficiently on CPUs and GPUs. The code is a
precursor to the Parthenon and AthenaPK projects, implementing only uniform
grids efficiently when running on GPUs. However, the code was a valuable proof
of concept for a performance-portable magnetohydrodynamics code, allowing
future exascale simulations to be unconstrained by niche architectures.</p>
<p>K-Athena is publicly available on <a href="https://gitlab.com/pgrete/kathena" target="_blank" rel="noopener">gitlab</a>.</p>
<p>As part of the development effort, we quantified the performance portability of
the code using roofline models. We constructed roofline models for each
of the CPU and GPU devices on which we tested K-Athena. Roofline models allow
estimations of the theoretical peak throughput of a code as limited by its
arithmetic intensity (the number of operations executed per byte loaded) and by
the bandwidths and computational throughputs of the hardware. By comparing the
actual efficiency achieved to the theoretical efficiency for each architecture,
we obtain a performance efficiency for each machine that can be directly
compared, even if the architectures are very different.</p>
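<p>The roofline bound itself is simple enough to sketch in a few lines. The numbers below are assumptions chosen for illustration (roughly V100-like), not the measured values from our profiling:</p>
<pre><code class="language-python">def roofline_bound(arithmetic_intensity, peak_flops, bandwidth):
    # Attainable FLOP/s is the lesser of peak compute and AI times memory bandwidth
    return min(peak_flops, arithmetic_intensity * bandwidth)

# Assumed, roughly V100-like numbers: ~7.8e12 FLOP/s FP64 peak, ~9.0e11 B/s HBM2 bandwidth
peak_flops, bandwidth = 7.8e12, 9.0e11
for ai in (0.5, 2.0, 10.0, 50.0):  # arithmetic intensity in FLOP per byte
    print(ai, roofline_bound(ai, peak_flops, bandwidth))
</code></pre>
<p>The efficiency for a machine is then the measured throughput divided by this bound.</p>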
<p>
<figure >
<div class="d-flex justify-content-center">
<div class="w-100" ><img alt="" srcset="
/project/kathena/gpu-roofline_hu92834981f023fb98b90699d135d89f40_39861_97362de59d2e0e0f84704517a9ede06d.png 400w,
/project/kathena/gpu-roofline_hu92834981f023fb98b90699d135d89f40_39861_13c3c550c23d6f4372a5f9fe3f97a497.png 760w,
/project/kathena/gpu-roofline_hu92834981f023fb98b90699d135d89f40_39861_1200x1200_fit_lanczos_3.png 1200w"
src="https://forrestglines.github.io/project/kathena/gpu-roofline_hu92834981f023fb98b90699d135d89f40_39861_97362de59d2e0e0f84704517a9ede06d.png"
width="525"
height="390"
loading="lazy" data-zoomable /></div>
</div></figure>
Roofline model of an NVIDIA Tesla V100 with the arithmetic intensity of
K-Athena, showing performance in TFLOPS versus arithmetic intensity in floating
point operations executed per byte loaded and written. Throughputs appear as
horizontal ceilings, bandwidths of the different memory spaces of the hardware
appear as diagonal ceilings, and arithmetic intensities of the code appear as
vertical lines. The intersection of an arithmetic intensity with a bandwidth or
throughput ceiling shows the theoretical throughput ceiling imposed by that
bandwidth or throughput. We generated rooflines for all architectures on which
we profiled K-Athena.</p>
<p>
<figure >
<div class="d-flex justify-content-center">
<div class="w-100" ><img alt="" srcset="
/project/kathena/featured_hua9c4d5f5e0b6839025ea1e284575d3fa_28317_9afa34cdd1b87bc61495122d08a7c94d.png 400w,
/project/kathena/featured_hua9c4d5f5e0b6839025ea1e284575d3fa_28317_83f474163499778ccf150c28cb1a75cb.png 760w,
/project/kathena/featured_hua9c4d5f5e0b6839025ea1e284575d3fa_28317_1200x1200_fit_lanczos_3.png 1200w"
src="https://forrestglines.github.io/project/kathena/featured_hua9c4d5f5e0b6839025ea1e284575d3fa_28317_9afa34cdd1b87bc61495122d08a7c94d.png"
width="520"
height="409"
loading="lazy" data-zoomable /></div>
</div></figure>
Efficiency achieved on each architecture on which we profiled K-Athena, showing
the percentage performance achieved out of the theoretical performance as
limited by the DRAM and L1 memory for each architecture. By taking the harmonic
mean of these efficiencies we arrive at a performance portability measure. The
implementation of K-Athena (and similar MHD codes) is typically limited by the
DRAM bandwidth, leading to a performance portability of 62.8%. Less efficient
utilization of the L1 cache on almost all architectures leads to a 7.7%
performance portability with respect to the L1 cache.</p>
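<p>As a sketch of that measure (with placeholder efficiencies rather than the measured ones), the harmonic mean can be computed as follows:</p>
<pre><code class="language-python">def performance_portability(efficiencies):
    # Harmonic mean of the per-architecture efficiencies; zero if any architecture fails
    if any(e == 0.0 for e in efficiencies):
        return 0.0
    return len(efficiencies) / sum(1.0 / e for e in efficiencies)

# Placeholder efficiencies for three hypothetical architectures
print(performance_portability([0.70, 0.55, 0.65]))
</code></pre>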
<p>Our full method description and performance analysis can be found in
<a href="https://doi.org/10.1109/TPDS.2020.3010016" target="_blank" rel="noopener">IEEE Transactions on Parallel and Distributed Systems</a>.</p>
</description>
</item>
<item>
<title>Magnetized Turbulence in the Taylor-Green Vortex</title>
<link>https://forrestglines.github.io/project/taylor_green/</link>
<pubDate>Tue, 15 Jun 2021 00:00:00 +0000</pubDate>
<guid>https://forrestglines.github.io/project/taylor_green/</guid>
<description><p>To investigate the development of turbulence in intermittently driven plasmas
such as the intracluster medium, we used our newly developed K-Athena code to
run a series of simulations of the magnetized Taylor-Green vortex. The
turbulence that arises from the unsteady flow of the magnetized Taylor-Green
vortex models the turbulence that develops from large scale infrequent events
such as galaxy cluster mergers disturbing the intracluster medium. In our
paper, we examine the magnitude and the spectra of the kinetic, magnetic, and
thermal energy reservoirs within the plasma. Additionally, we apply an energy
transfer analysis to study the movement of energy between different length
scales and between different reservoirs.</p>
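<p>As an illustration of how such spectra are measured (a standalone sketch, not the analysis code used for the paper), a shell-averaged kinetic energy spectrum can be computed from a periodic velocity field with NumPy:</p>
<pre><code class="language-python">import numpy as np

def kinetic_energy_spectrum(u, v, w):
    # Shell-averaged kinetic energy spectrum for a periodic cube (unit density assumed)
    n = u.shape[0]
    uh, vh, wh = (np.fft.fftn(a) / n**3 for a in (u, v, w))
    e3d = 0.5 * (np.abs(uh)**2 + np.abs(vh)**2 + np.abs(wh)**2)
    k1d = np.fft.fftfreq(n, d=1.0 / n)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing=&quot;ij&quot;)
    kmag = np.rint(np.sqrt(kx**2 + ky**2 + kz**2)).astype(int)
    # Sum the spectral energy in spherical shells of unit wavenumber width
    return np.bincount(kmag.ravel(), weights=e3d.ravel())

# Example on a random field; fitting log E(k) against log k gives the spectral index
rng = np.random.default_rng(0)
Ek = kinetic_energy_spectrum(*(rng.standard_normal((32, 32, 32)) for _ in range(3)))
</code></pre>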
<p>Our full setup, explanation, and analysis can be found in
<a href="https://doi.org/10.1103/PhysRevE.103.043203" target="_blank" rel="noopener">Physical Review E</a>.</p>
</description>
</item>
<item>
<title>Magnetized decaying turbulence in the weakly compressible Taylor-Green vortex </title>
<link>https://forrestglines.github.io/publication/taylor_green/</link>
<pubDate>Tue, 13 Apr 2021 00:00:00 +0000</pubDate>
<guid>https://forrestglines.github.io/publication/taylor_green/</guid>
<description><p>Magnetohydrodynamic (MHD) turbulence affects both terrestrial and astrophysical
plasmas. The properties of magnetized turbulence must be better understood to
more accurately characterize these systems. This work presents ideal MHD
simulations of the compressible Taylor-Green vortex under a range of initial
subsonic Mach numbers and magnetic field strengths. We find that regardless of
the initial field strength, the magnetic energy becomes dominant over the
kinetic energy on all scales after at most several dynamical times. The
spectral indices of the kinetic and magnetic energy spectra become shallower
than $k^{-5/3}$ over time and generally fluctuate. Using a shell-to-shell energy
transfer analysis framework, we find that the magnetic fields facilitate a
significant amount of the energy flux and that the kinetic energy cascade is
suppressed. Moreover, we observe nonlocal energy transfer from the large-scale
kinetic energy to intermediate and small-scale magnetic energy via magnetic
tension. We conclude that even in intermittently or singularly driven weakly
magnetized systems, the dynamical effects of magnetic fields cannot be
neglected.</p>
</description>
</item>
<item>
<title>Environmental Dependence of Self-regulating Black Hole Feedback in Massive Galaxies</title>
<link>https://forrestglines.github.io/publication/self_regulating_bh_feedback/</link>
<pubDate>Tue, 01 Dec 2020 00:00:00 +0000</pubDate>
<guid>https://forrestglines.github.io/publication/self_regulating_bh_feedback/</guid>
<description><p>In the universe&rsquo;s most massive galaxies, active galactic
nucleus (AGN) feedback appears to limit star formation. The
accumulation of cold gas near the central black hole fuels
powerful AGN outbursts, keeping the ambient medium in a
state marginally unstable to condensation and formation of
cold gas clouds. However, the ability of that mechanism to
self-regulate may depend on numerous environmental factors,
including the depth of the potential well and the pressure
of the surrounding circumgalactic medium (CGM). Here we
present a suite of numerical simulations, with halo mass
ranging from $2\times10^{12} M_\odot$ to $8\times10^{14} M_\odot$, exploring the
dependence of AGN feedback on those environmental factors.
We include the spatially extended mass and energy input
from the massive galaxy&rsquo;s old stellar population capable of
sweeping gas out of the galaxy if the confining CGM
pressure is sufficiently low. Our simulations show that
this feedback mechanism is tightly self-regulating in a
massive galaxy with a deep central potential and low CGM
pressure, permitting only small amounts of multiphase gas
to accumulate and allowing no star formation. In a
similar-mass galaxy with shallower central potential and
greater CGM pressure the feedback mechanism is more
episodic, producing extended multiphase gas and allowing
small rates of star formation ($\sim 0.1\, M_\odot\, \text{yr}^{-1}$). At the
low-mass end, the mechanism becomes implausibly explosive,
perhaps because the CGM initially has no angular momentum,
which would have reduced the amount of condensed gas
capable of fueling feedback.</p>
</description>
</item>
<item>
<title>Tests of AGN Feedback Kernels in Simulated Galaxy Clusters </title>
<link>https://forrestglines.github.io/publication/agn_thermal/</link>
<pubDate>Tue, 01 Sep 2020 00:00:00 +0000</pubDate>
<guid>https://forrestglines.github.io/publication/agn_thermal/</guid>
<description><p>In cool-core galaxy clusters with central cooling times much shorter than a Hubble time, condensation of the ambient central gas is regulated by a heating mechanism, probably an active galactic nucleus. Previous analytical work has suggested that certain radial distributions of heat input may result in convergence to a quasi-steady global state that does not substantively change on the timescale for radiative cooling, even if the heating and cooling are not locally in balance. To test this hypothesis, we simulate idealized galaxy cluster halos using the ENZO code with an idealized, spherically symmetric heat input kernel intended to emulate AGN feedback. Thermal energy is distributed with radius according to a range of kernels, in which total heating is updated to match total cooling every 10 Myr. Some heating kernels can maintain quasi-steady global configurations, but no kernel we tested produces a quasi-steady state with central entropy as low as those observed in cool-core clusters. The general behavior of the simulations depends on the proportion of heating in the inner 10 kpc, with low central heating leading to central cooling catastrophes, high central heating creating a central convective zone with an inverted entropy gradient, and intermediate central heating resulting in a flat central entropy profile that exceeds observations. The timescale on which our simulated halos fall into an unsteady multiphase state is proportional to the square of the cooling time of the lowest-entropy gas, allowing more centrally concentrated heating to maintain a longer-lasting steady state.</p>
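<p>To make the kernel normalization concrete, the sketch below rescales a hypothetical radial heating kernel so that its volume integral matches the total cooling rate; it only illustrates the idea and is not the ENZO implementation:</p>
<pre><code class="language-python">import numpy as np

def normalized_heating_rate(r, kernel_shape, total_cooling_rate):
    # Scale the radial kernel so the volume-integrated heating equals the total cooling
    shape = kernel_shape(r)
    dr = r[1] - r[0]
    norm = np.sum(4.0 * np.pi * r**2 * shape) * dr
    return total_cooling_rate * shape / norm

# Purely hypothetical kernel concentrated within ~10 kpc (radii in kpc)
r = np.linspace(1.0, 500.0, 1000)
heating = normalized_heating_rate(r, lambda x: np.exp(-x / 10.0), total_cooling_rate=1.0)
</code></pre>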
</description>
</item>
<item>
<title> K-Athena: a performance portable structured grid finite volume magnetohydrodynamics code </title>
<link>https://forrestglines.github.io/publication/kathena/</link>
<pubDate>Fri, 17 Jul 2020 00:00:00 +0000</pubDate>
<guid>https://forrestglines.github.io/publication/kathena/</guid>
<description><p>Large scale simulations are a key pillar of modern research and require
ever-increasing computational resources. Different novel manycore architectures
have emerged in recent years on the way towards the exascale era. Performance
portability is required to prevent repeated non-trivial refactoring of a code
for different architectures. We combine ATHENA++, an existing
magnetohydrodynamics (MHD) CPU code, with KOKKOS, a performance portable
on-node parallel programming paradigm, into K-ATHENA to allow efficient
simulations on multiple architectures using a single codebase. We present
profiling and scaling results for different platforms including Intel Skylake
CPUs, Intel Xeon Phis, and NVIDIA GPUs. K-ATHENA achieves $&gt; 10^8$ cell-updates/s
on a single V100 GPU for second-order double precision MHD calculations, and a
speedup of 30 on up to 24,576 GPUs on Summit (compared to 172,032 CPU cores),
reaching $1.94\times10^{12}$ total cell-updates/s at 76 percent parallel efficiency.
Using a roofline analysis we demonstrate that the overall performance is
currently limited by DRAM bandwidth and calculate a performance portability
metric of 62.8 percent. Finally, we present the implementation strategies used
and the challenges encountered in maximizing performance. This will provide
other research groups with a straightforward approach to prepare their own
codes for the exascale era. K-ATHENA is available at
<a href="https://gitlab.com/pgrete/kathena" target="_blank" rel="noopener">https://gitlab.com/pgrete/kathena</a>.</p>
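<p>As a back-of-the-envelope check of the quoted scaling figures (assuming a single-GPU rate consistent with the numbers above), parallel efficiency is the per-GPU throughput at scale divided by the single-GPU throughput:</p>
<pre><code class="language-python"># Figures quoted in the abstract; the single-GPU rate is an assumed value, not a measurement
total_rate = 1.94e12      # total cell-updates/s on 24,576 GPUs
n_gpus = 24576
single_gpu_rate = 1.04e8  # assumed single-V100 cell-updates/s (above the 1e8 quoted)
parallel_efficiency = (total_rate / n_gpus) / single_gpu_rate
print(round(100 * parallel_efficiency))  # roughly 76 percent
</code></pre>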
</description>
</item>
<item>
<title>Slides</title>
<link>https://forrestglines.github.io/slides/example/</link>
<pubDate>Tue, 05 Feb 2019 00:00:00 +0000</pubDate>
<guid>https://forrestglines.github.io/slides/example/</guid>
<description><h1 id="create-slides-in-markdown-with-wowchemy">Create slides in Markdown with Wowchemy</h1>
<p><a href="https://wowchemy.com/" target="_blank" rel="noopener">Wowchemy</a> | <a href="https://wowchemy.com/docs/managing-content/#create-slides" target="_blank" rel="noopener">Documentation</a></p>
<hr>
<h2 id="features">Features</h2>
<ul>
<li>Efficiently write slides in Markdown</li>
<li>3-in-1: Create, Present, and Publish your slides</li>
<li>Supports speaker notes</li>
<li>Mobile friendly slides</li>
</ul>
<hr>
<h2 id="controls">Controls</h2>
<ul>
<li>Next: <code>Right Arrow</code> or <code>Space</code></li>
<li>Previous: <code>Left Arrow</code></li>
<li>Start: <code>Home</code></li>
<li>Finish: <code>End</code></li>
<li>Overview: <code>Esc</code></li>
<li>Speaker notes: <code>S</code></li>
<li>Fullscreen: <code>F</code></li>
<li>Zoom: <code>Alt + Click</code></li>
<li><a href="https://github.com/hakimel/reveal.js#pdf-export" target="_blank" rel="noopener">PDF Export</a>: <code>E</code></li>
</ul>
<hr>
<h2 id="code-highlighting">Code Highlighting</h2>
<p>Inline code: <code>variable</code></p>
<p>Code block:</p>
<pre><code class="language-python">porridge = &quot;blueberry&quot;
if porridge == &quot;blueberry&quot;:
    print(&quot;Eating...&quot;)
</code></pre>
<hr>
<h2 id="math">Math</h2>
<p>In-line math: $x + y = z$</p>
<p>Block math:</p>
<p>$$
f\left( x \right) = \frac{{2\left( {x + 4} \right)\left( {x - 4} \right)}}{{\left( {x + 4} \right)\left( {x + 1} \right)}}
$$</p>
<hr>
<h2 id="fragments">Fragments</h2>
<p>Make content appear incrementally</p>
<pre><code>{{% fragment %}} One {{% /fragment %}}
{{% fragment %}} **Two** {{% /fragment %}}
{{% fragment %}} Three {{% /fragment %}}
</code></pre>
<p>Press <code>Space</code> to play!</p>
<span class="fragment " >
One
</span>
<span class="fragment " >
**Two**
</span>
<span class="fragment " >
Three
</span>
<hr>
<p>A fragment can accept two optional parameters:</p>
<ul>
<li><code>class</code>: use a custom style (requires definition in custom CSS)</li>
<li><code>weight</code>: sets the order in which a fragment appears</li>
</ul>
<hr>
<h2 id="speaker-notes">Speaker Notes</h2>
<p>Add speaker notes to your presentation</p>
<pre><code class="language-markdown">{{% speaker_note %}}
- Only the speaker can read these notes
- Press `S` key to view
{{% /speaker_note %}}
</code></pre>
<p>Press the <code>S</code> key to view the speaker notes!</p>
<aside class="notes">
<ul>
<li>Only the speaker can read these notes</li>
<li>Press <code>S</code> key to view</li>
</ul>
</aside>
<hr>
<h2 id="themes">Themes</h2>
<ul>
<li>black: Black background, white text, blue links (default)</li>
<li>white: White background, black text, blue links</li>
<li>league: Gray background, white text, blue links</li>
<li>beige: Beige background, dark text, brown links</li>
<li>sky: Blue background, thin dark text, blue links</li>
</ul>
<hr>
<ul>
<li>night: Black background, thick white text, orange links</li>
<li>serif: Cappuccino background, gray text, brown links</li>
<li>simple: White background, black text, blue links</li>
<li>solarized: Cream-colored background, dark green text, blue links</li>
</ul>
<hr>
<section data-noprocess data-shortcode-slide
data-background-image="/media/boards.jpg"
>
<h2 id="custom-slide">Custom Slide</h2>
<p>Customize the slide style and background</p>
<pre><code class="language-markdown">{{&lt; slide background-image=&quot;/media/boards.jpg&quot; &gt;}}
{{&lt; slide background-color=&quot;#0000FF&quot; &gt;}}
{{&lt; slide class=&quot;my-style&quot; &gt;}}
</code></pre>
<hr>
<h2 id="custom-css-example">Custom CSS Example</h2>
<p>Let&rsquo;s make headers navy colored.</p>
<p>Create <code>assets/css/reveal_custom.css</code> with:</p>
<pre><code class="language-css">.reveal section h1,
.reveal section h2,
.reveal section h3 {
  color: navy;
}
</code></pre>
<hr>
<h1 id="questions">Questions?</h1>
<p><a href="https://github.com/wowchemy/wowchemy-hugo-modules/discussions" target="_blank" rel="noopener">Ask</a></p>
<p><a href="https://wowchemy.com/docs/managing-content/#create-slides" target="_blank" rel="noopener">Documentation</a></p>
</description>
</item>
<item>
<title>Scalable Relativistic High-Resolution Shock-Capturing for Heterogeneous Computing</title>
<link>https://forrestglines.github.io/publication/scalable_shock_capturing/</link>
<pubDate>Tue, 01 Sep 2015 00:00:00 +0000</pubDate>
<guid>https://forrestglines.github.io/publication/scalable_shock_capturing/</guid>
<description><p>A shift is underway in high performance computing (HPC) towards heterogeneous
parallel architectures that emphasize medium and fine grain thread parallelism.
Many scientific computing algorithms, including simple finite-differencing
methods, have already been mapped to heterogeneous architectures with
order-of-magnitude gains in performance as a result. Recent case studies
examining high-resolution shock-capturing (HRSC) algorithms suggest that these
finite-volume methods are good candidates for emerging heterogeneous
architectures. HRSC methods form a key scientific kernel for compressible
inviscid solvers that appear in astrophysics and engineering applications and
tend to require enormous memory and computing resources. This work presents a
case study of an HRSC method executed on a heterogeneous parallel architecture
utilizing hundreds of GPU enabled nodes with remote direct memory access to the
GPUs for a non-trivial shock application using the relativistic
magnetohydrodynamics model.</p>
</description>
</item>
<item>
<title></title>
<link>https://forrestglines.github.io/admin/config.yml</link>
<pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
<guid>https://forrestglines.github.io/admin/config.yml</guid>
<description></description>
</item>
</channel>
</rss>