We introduce a new hair modeling method that uses a dual representation of classical hair strands and 3D Gaussians to produce accurate and realistic strand-based reconstructions from multi-view data. In contrast to recent approaches that leverage unstructured Gaussians to model human avatars, our method reconstructs the hair using 3D polylines, or strands. This fundamental difference allows the resulting hairstyles to be used out of the box in modern computer graphics engines for editing, rendering, and simulation. Our 3D lifting method relies on unstructured Gaussians to generate multi-view ground-truth data to supervise the fitting of hair strands. The hairstyle itself is represented as so-called strand-aligned 3D Gaussians. This representation allows us to combine strand-based hair priors, which are essential for realistic modeling of the inner structure of hairstyles, with the differentiable rendering capabilities of 3D Gaussian Splatting. Our method, named Gaussian Haircut, is evaluated on synthetic and real scenes and demonstrates state-of-the-art performance on the task of strand-based hair reconstruction.
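Since the abstract only sketches the strand-aligned Gaussian representation, the following is a minimal illustration of the underlying idea: each polyline segment of a strand can parameterize one anisotropic 3D Gaussian whose mean sits at the segment midpoint and whose longest principal axis follows the segment direction. This is a sketch under stated assumptions, not the paper's actual implementation; the function names (`strands_to_gaussians`, `quat_from_x_axis`), the `(N, P, 3)` strand layout, and the `thickness` parameter are all illustrative.

```python
import numpy as np

def quat_from_x_axis(d):
    """Quaternion (w, x, y, z) rotating the local x-axis onto unit vector d."""
    w = 1.0 + d[0]                       # 1 + dot((1, 0, 0), d)
    if w < 1e-8:                         # d is antiparallel to x: 180-degree flip
        return np.array([0.0, 0.0, 0.0, 1.0])
    v = np.array([0.0, -d[2], d[1]])     # cross((1, 0, 0), d)
    q = np.array([w, v[0], v[1], v[2]])
    return q / np.linalg.norm(q)

def strands_to_gaussians(strands, thickness=1e-4):
    """Map hair strands (polylines) to per-segment Gaussian parameters.

    strands   -- (N, P, 3) array: N strands with P points each (assumed layout).
    thickness -- assumed radial scale of each hair Gaussian, in scene units.
    Returns means (M, 3), scales (M, 3), quaternions (M, 4) with M = N * (P - 1).
    """
    a = strands[:, :-1].reshape(-1, 3)   # segment start points
    b = strands[:, 1:].reshape(-1, 3)    # segment end points
    means = 0.5 * (a + b)                # Gaussian centers at segment midpoints
    seg = b - a
    lengths = np.linalg.norm(seg, axis=-1, keepdims=True)
    dirs = seg / np.clip(lengths, 1e-12, None)
    # Anisotropic scales: elongated along the strand, thin across it.
    scales = np.concatenate(
        [0.5 * lengths,
         np.full_like(lengths, thickness),
         np.full_like(lengths, thickness)], axis=-1)
    quats = np.stack([quat_from_x_axis(d) for d in dirs])
    return means, scales, quats

# Toy usage: two straight 3 cm strands of 4 points each, hanging along -y.
strands = np.zeros((2, 4, 3))
strands[:, :, 1] = -np.linspace(0.0, 0.03, 4)
strands[1, :, 0] = 0.005                 # offset the second strand in x
means, scales, quats = strands_to_gaussians(strands)
print(means.shape, scales.shape, quats.shape)  # (6, 3) (6, 3) (6, 4)
```

Tying the Gaussian parameters to the strand geometry in this way is what lets gradients from a splatting-based renderer flow back into the polyline vertices, which is the property the dual representation relies on.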