Neural rendering methods have significantly advanced photo-realistic 3D scene rendering in various academic and industrial applications. The recent 3D Gaussian Splatting method achieved state-of-the-art rendering quality and speed by combining the benefits of both primitive-based and volumetric representations. However, it often produces heavily redundant Gaussians that try to fit every training view while neglecting the underlying scene geometry. Consequently, the resulting model becomes less robust to significant view changes, texture-less areas, and lighting effects. We introduce Scaffold-GS, which uses anchor points to distribute local 3D Gaussians and predicts their attributes on-the-fly from the viewing direction and distance within the view frustum. We further develop anchor growing and pruning strategies based on the importance of the neural Gaussians to reliably improve scene coverage. We show that our method effectively reduces redundant Gaussians while delivering high-quality rendering. We also demonstrate an enhanced capability to accommodate scenes with varying levels-of-detail and view-dependent observations, without sacrificing rendering speed.
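For illustration, the following is a minimal PyTorch sketch of the view-dependent, on-the-fly attribute prediction described above. The module layout, feature dimensions, the number k of neural Gaussians per anchor, and the activation choices are illustrative placeholders under our assumptions, not a definitive implementation.

```python
import torch
import torch.nn as nn

class AnchorGaussianDecoder(nn.Module):
    """Sketch: predicts attributes of k neural Gaussians per anchor from the
    anchor's learnable feature plus the camera-to-anchor direction and distance."""
    def __init__(self, feat_dim=32, k=10):
        super().__init__()
        self.k = k
        in_dim = feat_dim + 3 + 1  # anchor feature + view direction + distance
        def mlp(out_dim):
            return nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU(),
                                 nn.Linear(feat_dim, out_dim))
        self.opacity_mlp = mlp(k)        # one opacity per spawned Gaussian
        self.color_mlp = mlp(3 * k)      # RGB per spawned Gaussian
        self.cov_mlp = mlp(7 * k)        # 3 scales + 4 quaternion components

    def forward(self, anchor_feat, anchor_xyz, cam_xyz):
        # View-dependent inputs: direction and distance from camera to anchor.
        delta = anchor_xyz - cam_xyz
        dist = delta.norm(dim=-1, keepdim=True)
        view_dir = delta / dist
        x = torch.cat([anchor_feat, view_dir, dist], dim=-1)
        opacity = torch.tanh(self.opacity_mlp(x))                     # (N, k)
        color = torch.sigmoid(self.color_mlp(x)).view(-1, self.k, 3)  # (N, k, 3)
        cov = self.cov_mlp(x).view(-1, self.k, 7)                     # raw scale + rotation
        return opacity, color, cov
```

Because the attributes are decoded per view rather than stored per Gaussian, anchors only visible within the current view frustum need to be decoded, and the stored model stays compact.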
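The anchor growing and pruning step can likewise be sketched, assuming the importance signals are accumulated view-space gradients (per neural Gaussian) and accumulated opacities (per anchor); the voxel size and thresholds below are placeholders, not tuned values.

```python
import torch

def grow_and_prune(anchor_xyz, gaussian_xyz, grad_accum, opacity_accum, count,
                   voxel_size=0.01, grad_thresh=2e-4, opacity_thresh=5e-3):
    # Grow: voxelize the positions of neural Gaussians whose accumulated
    # gradients are large, and add new anchors at voxels not yet covered.
    significant = grad_accum > grad_thresh                       # (M,) per Gaussian
    candidate = torch.unique(
        torch.round(gaussian_xyz[significant] / voxel_size), dim=0)
    existing = torch.round(anchor_xyz / voxel_size)
    occupied = (candidate.unsqueeze(1) == existing.unsqueeze(0)).all(-1).any(-1)
    new_anchors = candidate[~occupied] * voxel_size

    # Prune: drop anchors whose spawned Gaussians stayed nearly transparent.
    mean_opacity = opacity_accum / count.clamp(min=1)            # (N,) per anchor
    keep = mean_opacity > opacity_thresh
    return torch.cat([anchor_xyz[keep], new_anchors], dim=0)
```

Growing fills under-covered regions such as texture-less areas, while pruning removes anchors whose Gaussians contribute little opacity, which is how redundancy is kept low without hurting coverage.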