+++ title = "JuliaGPU" +++
<div id=home-jumbotron class="jumbotron text-center">
<h1><img height=70 src="/assets/logo_crop.png">
JuliaGPU</h1>
<p class=font-125>
High-performance GPU programming in a high-level language.
</p>
</div>
JuliaGPU is a GitHub organization created to unify the many packages for programming GPUs in Julia. With its high-level syntax and flexible compiler, Julia is well positioned to productively program hardware accelerators like GPUs without sacrificing performance.
Several GPU platforms are supported, but they differ widely in features and stability. On this website, you can find a brief introduction to each supported platform, with links to the respective home pages.
{{youtube Hz9IMJuW5hU}}
\
The best-supported GPU platform in Julia is NVIDIA CUDA, with mature and full-featured packages both for low-level kernel programming and for high-level operations on arrays. All versions of Julia are supported, on Linux and Windows, and the functionality is actively used by a variety of applications and libraries.
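As a taste of both styles, here is a minimal sketch using CUDA.jl: high-level broadcasting on a `CuArray`, followed by a hand-written kernel launched with `@cuda`. The array sizes and launch configuration are illustrative.

```julia
using CUDA

# High-level array programming: operations on a CuArray execute on the GPU.
a = CUDA.rand(1024)
b = a .+ 1                    # broadcast compiles to a GPU kernel

# Low-level kernel programming: a hand-written vector addition.
function vadd!(c, a, b)
    i = threadIdx().x + (blockIdx().x - 1) * blockDim().x
    if i <= length(c)
        @inbounds c[i] = a[i] + b[i]
    end
    return nothing
end

c = similar(a)
@cuda threads=256 blocks=cld(length(c), 256) vadd!(c, a, b)
```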
Similar but much newer capabilities exist for Intel GPUs with oneAPI. Full-featured kernel programming is available, but there is no support yet for vendor libraries such as oneMKL or oneDNN.
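For illustration, a minimal kernel-programming sketch with oneAPI.jl, along the lines of the example in the package's documentation (names and sizes are illustrative):

```julia
using oneAPI

a = oneArray(rand(Float32, 256))
b = oneArray(rand(Float32, 256))
c = similar(a)

# Each work item adds a single pair of elements.
function vadd(a, b, c)
    i = get_global_id()
    @inbounds c[i] = a[i] + b[i]
    return
end

@oneapi items=length(c) vadd(a, b, c)
```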
Maturing support exists for AMD GPUs running on the ROCm stack. These GPUs, too, can be programmed in Julia at the kernel level or through high-level operations on arrays. The latest versions of Julia are supported, and the functionality is increasingly used by a variety of applications and libraries.
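A small sketch of the array-programming style with AMDGPU.jl, the package providing this support; the computation itself is arbitrary:

```julia
using AMDGPU

# ROCArray follows the same programming model as CuArray:
# data lives on the GPU, and standard array operations run there.
a = ROCArray(rand(Float32, 1024))
b = a .^ 2      # broadcast executes on the AMD GPU
s = sum(b)      # GPU-accelerated reduction
```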
Experimental support also exists for Apple GPUs, built on the Metal framework. Both array programming and kernel programming are supported.
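A minimal sketch with Metal.jl; note that Apple GPUs do not support `Float64`, so single precision is used:

```julia
using Metal

a = MtlArray(rand(Float32, 1024))
b = 2 .* a .+ 1     # broadcast executes on the Apple GPU
```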
Almost 300 packages rely directly or indirectly on Julia's GPU capabilities. A few noteworthy examples are:
- DiffEqGPU.jl, part of the DifferentialEquations.jl ecosystem, for using GPUs in differential equation solvers
- Flux.jl, a library for machine learning
- Oceananigans.jl, a non-hydrostatic ocean modeling application accelerated with GPUs
- Yao.jl, a framework for quantum information research
- KernelAbstractions.jl, for working with CPUs and GPUs alike using vendor-neutral abstractions (see the sketch after this list)
- GemmKernels.jl, providing flexible and performant GEMM kernels
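To illustrate the vendor-neutral style, here is a minimal KernelAbstractions.jl sketch (the kernel and names are illustrative): the same kernel definition runs on the CPU or, given GPU arrays, on any supported GPU backend.

```julia
using KernelAbstractions

# A backend-agnostic kernel: works with Array, CuArray, ROCArray, ...
@kernel function scale!(y, @Const(x), α)
    i = @index(Global)
    @inbounds y[i] = α * x[i]
end

x = rand(Float32, 1024)
y = similar(x)
backend = get_backend(x)    # CPU() here; a GPU backend for GPU arrays
scale!(backend)(y, x, 2f0; ndrange=length(y))
KernelAbstractions.synchronize(backend)
```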
Many other Julia applications and libraries can be used with GPUs, too: by means of GPU-specific array types like CuArray from CUDA.jl or ROCArray from AMDGPU.jl, existing software that uses Julia's array interfaces can often be executed as-is on a GPU.
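For example, a function written purely against Julia's generic array interfaces can run unchanged on the GPU simply by being passed GPU arrays. The function below is a hypothetical example, not part of any package:

```julia
using CUDA

# Nothing in this function mentions the GPU.
relative_error(a, b) = sqrt(sum(abs2, a .- b) / sum(abs2, b))

x = rand(Float32, 10_000)
y = x .+ 0.01f0 .* randn(Float32, 10_000)

relative_error(x, y)                      # runs on the CPU
relative_error(CuArray(x), CuArray(y))    # same code, runs on the GPU
```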
Much of Julia's GPU support was developed as part of academic research. If you would like to help support it, please star the relevant repositories, as such metrics may help us secure funding in the future. If you use our software as part of your research, teaching, or other activities, we would be grateful if you could cite our work:
- Tim Besard, Christophe Foket, and Bjorn De Sutter. "Effective extensible programming: Unleashing Julia on GPUs." IEEE Transactions on Parallel and Distributed Systems (2018).
- Tim Besard, Valentin Churavy, Alan Edelman, and Bjorn De Sutter. "Rapid software prototyping for heterogeneous and distributed platforms." Advances in Engineering Software (2019).
- Thomas Faingnaert, Tim Besard, and Bjorn De Sutter. "Flexible Performant GEMM Kernels on GPUs." IEEE Transactions on Parallel and Distributed Systems (2021).
If you need help, or have questions about GPU programming in Julia, you can find members of the community at:
- Julia Discourse, with a dedicated GPU section
- Julia Slack (register here), on the #gpu channel
- JuliaGPU office hours, every other week at 2PM CET (check the Julia community calendar for more details).