2406.03697.md

File metadata and controls

5 lines (3 loc) · 2.03 KB

Superpoint Gaussian Splatting for Real-Time High-Fidelity Dynamic Scene Reconstruction

Rendering novel-view images of dynamic scenes is a crucial yet challenging task. Current approaches mainly use NeRF-based representations for the static scene plus an additional time-variant MLP to model scene deformations, which yields relatively low rendering quality and slow inference. To tackle these challenges, we propose a novel framework named Superpoint Gaussian Splatting (SP-GS). Specifically, our framework first employs explicit 3D Gaussians to reconstruct the scene and then clusters Gaussians with similar properties (e.g., rotation, translation, and location) into superpoints. Empowered by these superpoints, our method extends 3D Gaussian splatting to dynamic scenes with only a slight increase in computational expense. Apart from achieving state-of-the-art visual quality and real-time rendering at high resolutions, the superpoint representation also provides stronger scene-manipulation capability. Extensive experiments demonstrate the practicality and effectiveness of our approach on both synthetic and real-world datasets.
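To make the superpoint idea concrete, below is a minimal Python sketch, not the authors' implementation: it clusters Gaussians by location and motion features (here with scikit-learn KMeans, one possible choice) and then drives every Gaussian in a cluster with a single per-frame rigid transform. All names (`gaussian_xyz`, `gaussian_motion`, `num_superpoints`, the helper functions) are illustrative assumptions, not identifiers from the paper or its code.

```python
# Hypothetical sketch of the superpoint mechanism described in the abstract:
# cluster Gaussians with similar properties, then deform each cluster rigidly.
import numpy as np
from sklearn.cluster import KMeans


def build_superpoints(gaussian_xyz, gaussian_motion, num_superpoints=200):
    """Group Gaussians with similar location and motion into superpoints.

    gaussian_xyz:    (N, 3) Gaussian centers.
    gaussian_motion: (N, D) per-Gaussian motion features, e.g. stacked
                     per-frame translations/rotations (an assumed encoding).
    Returns an (N,) array assigning each Gaussian to a superpoint id.
    """
    features = np.concatenate([gaussian_xyz, gaussian_motion], axis=1)
    return KMeans(n_clusters=num_superpoints, n_init="auto").fit_predict(features)


def deform_gaussians(gaussian_xyz, labels, sp_rotations, sp_translations):
    """Move each Gaussian by its superpoint's rigid transform for one frame.

    sp_rotations:    (K, 3, 3) per-superpoint rotation matrices.
    sp_translations: (K, 3)    per-superpoint translations.
    """
    R = sp_rotations[labels]      # (N, 3, 3): gather each Gaussian's rotation
    t = sp_translations[labels]   # (N, 3)
    return np.einsum("nij,nj->ni", R, gaussian_xyz) + t
```

The efficiency argument follows directly from this structure: with K superpoints and N Gaussians (K << N), the deformation model only has to predict K rigid transforms per frame instead of N per-Gaussian deformations, which is why dynamic rendering costs only slightly more than static 3D Gaussian splatting.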
