The fidelity of relighting is bounded by both geometry and appearance representations. For geometry, both mesh and volumetric approaches have difficulty modeling intricate structures like 3D hair geometry. For appearance, existing relighting models are limited in fidelity and often too slow to render in real time with high-resolution continuous environments. In this work, we present Relightable Gaussian Codec Avatars, a method to build high-fidelity relightable head avatars that can be animated to generate novel expressions. Our geometry model based on 3D Gaussians can capture 3D-consistent sub-millimeter details such as hair strands and pores on dynamic face sequences. To support diverse materials of human heads such as the eyes, skin, and hair in a unified manner, we present a novel relightable appearance model based on learnable radiance transfer. Together with global illumination-aware spherical harmonics for the diffuse components, we achieve real-time relighting with spatially all-frequency reflections using spherical Gaussians. This appearance model can be efficiently relit under both point lights and continuous illumination. We further improve the fidelity of eye reflections and enable explicit gaze control by introducing relightable explicit eye models. Our method outperforms existing approaches without compromising real-time performance. We also demonstrate real-time relighting of avatars on a tethered consumer VR headset, showcasing the efficiency and fidelity of our avatars.
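To make the appearance decomposition concrete, the sketch below illustrates the two terms the abstract names: a diffuse term computed as an inner product between learned spherical-harmonics (SH) transfer coefficients and the environment's SH projection, and a specular term from a single spherical Gaussian (SG) lobe evaluated toward a point light. This is a minimal illustration under our own assumptions, not the authors' implementation; all function names, shapes, and parameters (sh_basis_l2, transfer_sh, the SG sharpness/amplitude) are hypothetical stand-ins.

    # Illustrative sketch only (not the paper's released code): diffuse shading
    # via learnable SH radiance transfer plus one spherical Gaussian specular
    # lobe, for a single Gaussian primitive. Names and shapes are assumptions.
    import numpy as np

    def sh_basis_l2(d):
        """Real SH basis up to degree 2 (9 coefficients) for a unit direction d."""
        x, y, z = d
        return np.array([
            0.282095,
            0.488603 * y, 0.488603 * z, 0.488603 * x,
            1.092548 * x * y, 1.092548 * y * z,
            0.315392 * (3.0 * z * z - 1.0),
            1.092548 * x * z, 0.546274 * (x * x - y * y),
        ])

    def diffuse_radiance(transfer_sh, env_sh):
        """Diffuse term: inner product of learned per-primitive transfer
        coefficients with the environment's SH projection (per RGB channel).
        Because the transfer is learned from data, it can bake in occlusion
        and interreflection, i.e. it is 'global illumination-aware'."""
        return np.einsum('k,kc->c', transfer_sh, env_sh)  # -> (3,) RGB

    def spherical_gaussian(w, axis, sharpness, amplitude):
        """SG lobe G(w) = a * exp(lambda * (mu . w - 1)); the sharpness lambda
        spans rough to mirror-like lobes, hence 'all-frequency' reflections."""
        return amplitude * np.exp(sharpness * (np.dot(axis, w) - 1.0))

    def specular_radiance(view_dir, normal, light_dir, sharpness, amplitude):
        """Specular term: evaluate an SG centered on the reflection vector at
        the light direction, which makes point-light relighting cheap."""
        refl = 2.0 * np.dot(normal, view_dir) * normal - view_dir
        return spherical_gaussian(light_dir, refl, sharpness, amplitude)

    # Toy usage. The environment is a single bright direction projected into
    # SH; the transfer coefficients stand in for learned parameters.
    rng = np.random.default_rng(0)
    transfer_sh = rng.normal(size=9)                    # 'learned' transfer
    env_dir = np.array([0.2, 0.4, 0.89]); env_dir /= np.linalg.norm(env_dir)
    env_sh = np.outer(sh_basis_l2(env_dir), [1.0, 0.9, 0.8])  # (9, 3)

    normal = np.array([0.0, 0.0, 1.0])
    view = np.array([0.0, 0.6, 0.8]); view /= np.linalg.norm(view)
    light = np.array([0.3, 0.3, 0.9]); light /= np.linalg.norm(light)

    rgb = diffuse_radiance(transfer_sh, env_sh) \
        + specular_radiance(view, normal, light, sharpness=64.0, amplitude=0.1)
    print(rgb)

In this reading, relighting under a continuous environment reduces to the SH inner product for the diffuse term, while the SG lobe handles sharp, view-dependent highlights; both evaluations are closed-form, which is consistent with the real-time claim in the abstract.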