Inferring a signed distance function (SDF) is vital in multi-view surface reconstruction. 3D Gaussian splatting (3DGS) offers a novel perspective on volume rendering and shows advantages in rendering efficiency and quality. Although 3DGS is a promising neural rendering option, it remains hard to infer SDFs for surface reconstruction with 3DGS due to the discreteness, sparseness, and off-surface drift of 3D Gaussians. To resolve these issues, we propose a method that seamlessly merges 3DGS with the learning of neural SDFs. Our key idea is to constrain the SDF inference more effectively with multi-view consistency. To this end, we dynamically align 3D Gaussians on the zero-level set of the neural SDF using neural pulling, and then render the aligned 3D Gaussians through differentiable rasterization. Meanwhile, we update the neural SDF by pulling neighboring space onto the pulled 3D Gaussians, which progressively refines the signed distance field near the surface. With both differentiable pulling and splatting, we jointly optimize the 3D Gaussians and the neural SDF under both RGB and geometry constraints, recovering more accurate, smooth, and complete surfaces with finer geometric detail. Numerical and visual comparisons demonstrate our superiority over the state-of-the-art on widely used benchmarks.
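The central operation described above is pulling a 3D Gaussian center onto the zero-level set of the neural SDF, i.e. moving a point along the SDF gradient by its signed distance, $p' = p - f(p)\,\nabla f(p)/\lVert\nabla f(p)\rVert$. Below is a minimal PyTorch sketch of this pulling step, assuming a hypothetical `sdf_net` that maps a batch of 3D points to signed distances; the function name and interface are illustrative, not the paper's actual code.

```python
import torch

def pull_to_zero_level_set(sdf_net, points):
    """Pull query points onto the zero-level set of a neural SDF.

    Implements p' = p - f(p) * grad f(p) / ||grad f(p)||, which moves
    each point along the (normalized) SDF gradient by its signed
    distance. The operation is differentiable, so gradients from a
    downstream rendering or geometry loss can flow back into both the
    point positions and the SDF network parameters.
    """
    points = points.requires_grad_(True)
    sdf = sdf_net(points)                      # (N, 1) signed distances
    grad = torch.autograd.grad(
        sdf, points,
        grad_outputs=torch.ones_like(sdf),
        create_graph=True,                     # keep graph for joint optimization
    )[0]                                       # (N, 3) spatial gradients
    direction = grad / (grad.norm(dim=-1, keepdim=True) + 1e-8)
    return points - sdf * direction            # points projected onto f = 0
```

In the pipeline sketched by the abstract, such pulled Gaussian centers would feed the differentiable rasterizer for RGB supervision, while nearby space samples would in turn be pulled toward the aligned Gaussians to refine the signed distance field near the surface.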