From Chaos to Clarity: 3DGS in the Dark

Novel view synthesis from raw images provides superior high dynamic range (HDR) information compared to reconstructions from low dynamic range RGB images. However, the inherent noise in unprocessed raw images compromises the accuracy of 3D scene representation. Our study reveals that 3D Gaussian Splatting (3DGS) is particularly susceptible to this noise, leading to numerous elongated Gaussian shapes that overfit the noise, thereby significantly degrading reconstruction quality and reducing inference speed, especially in scenarios with limited views. To address these issues, we introduce a novel self-supervised learning framework designed to reconstruct HDR 3DGS from a limited number of noisy raw images. This framework enhances 3DGS by integrating a noise extractor and employing a noise-robust reconstruction loss that leverages a noise distribution prior. Experimental results show that our method outperforms LDR/HDR 3DGS and previous state-of-the-art (SOTA) self-supervised and supervised pre-trained models in both reconstruction quality and inference speed on the RawNeRF dataset across a broad range of training views.
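The paper itself does not publish its loss in this note, but the idea of a noise-robust reconstruction loss built on a noise distribution prior can be illustrated with a minimal sketch. The snippet below assumes a standard affine shot/read-noise model for linear raw data (variance ≈ gain · signal + read noise); the function name, the `gain`/`read_var` parameters, and the inverse-variance weighting are illustrative assumptions, not the authors' actual formulation.

```python
import torch

def noise_robust_loss(pred: torch.Tensor,
                      raw: torch.Tensor,
                      gain: float = 1.0,
                      read_var: float = 1e-4,
                      eps: float = 1e-6) -> torch.Tensor:
    """Per-pixel reconstruction loss down-weighted by an assumed
    shot + read noise variance prior: var = gain * signal + read_var.

    `pred` is the rendered linear raw image, `raw` the noisy observation,
    both with the same shape. This is a sketch, not the paper's loss.
    """
    # Estimate the heteroscedastic variance from the *rendered* signal
    # (detached) so the weighting does not itself chase the sensor noise.
    var = gain * pred.detach().clamp(min=0.0) + read_var
    # Inverse-variance weighted squared error: bright (high-variance)
    # pixels contribute less, so the Gaussians are not pushed to overfit noise.
    return ((pred - raw) ** 2 / (var + eps)).mean()
```

In practice such a weighting plays the same role the abstract describes: it keeps the optimization from spawning elongated Gaussians that chase per-pixel noise when only a few raw views are available.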