DGNS: Deformable Gaussian Splatting and Dynamic Neural Surface for Monocular Dynamic 3D Reconstruction
Dynamic scene reconstruction from monocular video is critical for real-world applications. This paper tackles the dual challenges of dynamic novel-view synthesis and 3D geometry reconstruction by introducing a hybrid framework, Deformable Gaussian Splatting and Dynamic Neural Surfaces (DGNS), in which the two modules mutually reinforce each other on both tasks. During training, depth maps generated by the deformable Gaussian splatting module guide ray sampling for faster processing and provide depth supervision for the dynamic neural surface module, improving geometry reconstruction. In turn, the dynamic neural surface guides the distribution of Gaussian primitives around the surface, enhancing rendering quality. To further refine the depth supervision, we introduce a depth-filtering step for the depth maps produced by Gaussian rasterization. Extensive experiments on public datasets demonstrate that DGNS achieves state-of-the-art performance in both novel-view synthesis and 3D reconstruction.
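To make the two depth-related mechanisms in the abstract concrete, below is a minimal PyTorch sketch of (a) filtering a rasterized Gaussian depth map before using it as supervision and (b) concentrating ray samples in a band around that depth prior. This is an illustrative reading of the abstract, not the paper's implementation: the function names (`filter_rendered_depth`, `sample_depths_around_prior`), the opacity/gradient criteria, and all thresholds are assumptions.

```python
import torch


def filter_rendered_depth(depth, alpha, alpha_thresh=0.9, grad_thresh=0.05):
    """Mask unreliable pixels in a depth map rendered by Gaussian rasterization.

    Hypothetical criterion: keep a pixel only if its accumulated opacity is
    high and its local depth gradient is small (discarding silhouette pixels,
    where splatted depth is typically noisy).
    """
    # Pixels with low accumulated alpha carry little geometric evidence.
    valid = alpha > alpha_thresh

    # Finite-difference depth gradients flag discontinuities.
    dzdx = torch.zeros_like(depth)
    dzdy = torch.zeros_like(depth)
    dzdx[:, :-1] = (depth[:, 1:] - depth[:, :-1]).abs()
    dzdy[:-1, :] = (depth[1:, :] - depth[:-1, :]).abs()
    valid &= (dzdx < grad_thresh) & (dzdy < grad_thresh)
    return valid  # bool mask, shape (H, W)


def sample_depths_around_prior(depth_prior, valid, n_samples=32, band=0.1,
                               near=0.1, far=10.0):
    """Place per-ray sample depths in a narrow band around the Gaussian depth
    prior where it is trusted, falling back to uniform near/far sampling
    elsewhere. Band width and sample counts are illustrative."""
    h, w = depth_prior.shape
    t = torch.linspace(0.0, 1.0, n_samples)                      # (S,)

    # Narrow band centered on the prior depth, clamped to the frustum.
    lo = (depth_prior - band).clamp(min=near)
    hi = (depth_prior + band).clamp(max=far)
    banded = lo[..., None] + (hi - lo)[..., None] * t            # (H, W, S)

    # Uniform fallback for pixels where the prior was filtered out.
    uniform = (near + (far - near) * t).expand(h, w, n_samples)
    return torch.where(valid[..., None], banded, uniform)


# Usage sketch: depth/alpha come from the Gaussian rasterizer; the returned
# samples would drive SDF queries in the dynamic neural surface module.
depth = torch.rand(64, 64) * 5.0 + 1.0
alpha = torch.rand(64, 64)
valid = filter_rendered_depth(depth, alpha)
z_samples = sample_depths_around_prior(depth, valid)
```

Under these assumptions, the filtered depth map serves double duty: as a supervision target for the surface module and as a prior that shrinks the sampling interval per ray, which is what yields the faster processing the abstract claims.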