🎯 Focusing
There are no solutions, there are only trade-offs.
- Santa Clara, CA (UTC -07:00)
- https://boxiangw.github.io/
Highlights: Pro
Pinned
- NVIDIA/Megatron-LM: Ongoing research training transformer models at scale
- NVIDIA/NeMo: A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
- NVIDIA/TransformerEngine: A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper and Ada GPUs, to provide better performance with lower memory utilization
- NVIDIA/NeMo-Framework-Launcher: Provides end-to-end model development pipelines for LLMs and Multimodal models that can be launched on-prem or cloud-native
- hpcaitech/ColossalAI: Making large AI models cheaper, faster, and more accessible
- Dao-AILab/flash-attention: Fast and memory-efficient exact attention