EnvGS: Modeling View-Dependent Appearance with Environment Gaussian

Reconstructing complex reflections in real-world scenes from 2D images is essential for achieving photorealistic novel view synthesis. Existing methods that utilize environment maps to model reflections from distant lighting often struggle with high-frequency reflection details and fail to account for near-field reflections. In this work, we introduce EnvGS, a novel approach that employs a set of Gaussian primitives as an explicit 3D representation for capturing reflections of environments. These environment Gaussian primitives are combined with base Gaussian primitives to model the appearance of the whole scene. To efficiently render these environment Gaussian primitives, we develop a ray-tracing-based renderer that leverages the GPU's RT cores for fast rendering. This allows us to jointly optimize our model for high-quality reconstruction while maintaining real-time rendering speeds. Results on multiple real-world and synthetic datasets demonstrate that our method produces significantly more detailed reflections, achieving the best rendering quality in real-time novel view synthesis.
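A minimal conceptual sketch of how the two sets of primitives could be composed, based only on the abstract: render the base Gaussians for the diffuse appearance, reflect the viewing direction about the surface normal, query the environment Gaussians along that reflected direction, and blend the two colors. The function names `reflect`, `compose_color`, and the per-point `blend_weight` are hypothetical illustrations, not the paper's actual implementation, which uses an RT-core ray tracer over Gaussian primitives.

```python
import numpy as np

def reflect(view_dir, normal):
    # Mirror the viewing direction about the surface normal: r = d - 2(d.n)n
    view_dir = view_dir / np.linalg.norm(view_dir)
    normal = normal / np.linalg.norm(normal)
    return view_dir - 2.0 * np.dot(view_dir, normal) * normal

def compose_color(base_color, env_color, blend_weight):
    # Blend the base Gaussian color with the reflected environment Gaussian color.
    # blend_weight is a hypothetical per-point scalar in [0, 1] (assumed, not from the abstract).
    return (1.0 - blend_weight) * base_color + blend_weight * env_color

# Toy usage for a single surface point.
view_dir = np.array([0.0, 0.0, -1.0])
normal = np.array([0.0, 0.70710678, 0.70710678])
refl_dir = reflect(view_dir, normal)       # direction used to query the environment Gaussians
base_color = np.array([0.2, 0.3, 0.4])     # stand-in for the rasterized base Gaussian color
env_color = np.array([0.9, 0.8, 0.7])      # stand-in for the ray-traced environment Gaussian color
print(compose_color(base_color, env_color, blend_weight=0.35))
```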