Point clouds are a critical 3D representation with many emerging applications. Because of point sparsity and irregularity, high-quality rendering of point clouds is challenging and often requires complex computations to recover a continuous surface representation. At the same time, to avoid visual discomfort, the motion-to-photon latency must be very short, under 10 ms. Existing rendering solutions fall short in either quality or speed. To tackle these challenges, we present a framework that enables interactive, free-viewpoint, high-fidelity point cloud rendering. We train a generic neural network to estimate 3D elliptical Gaussians from arbitrary point clouds and use differentiable surface splatting to render smooth textures and surface normals for arbitrary views. Our approach requires no per-scene optimization and enables real-time rendering of dynamic point clouds. Experimental results demonstrate that the proposed solution achieves superior visual quality and speed, as well as generalizability to different scene content and robustness to compression artifacts.
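To make the splatting step above more concrete, the sketch below shows one way to project 3D elliptical Gaussians through a pinhole camera and accumulate their screen-space footprints (EWA-style weighted accumulation). This is a minimal NumPy illustration under stated assumptions, not the paper's pipeline: it ignores visibility, occlusion, and differentiability, and the function name, arguments, and the screen-space low-pass constant are hypothetical.

```python
# Minimal sketch of elliptical-Gaussian splatting (assumption: NumPy, pinhole
# camera, no visibility handling; interface is illustrative only).
import numpy as np

def splat_gaussians(mu, Sigma, color, K, R, t, H, W):
    """mu: (N,3) Gaussian centers, Sigma: (N,3,3) covariances, color: (N,3) RGB,
    K: (3,3) intrinsics, R/t: world-to-camera pose, H/W: image size."""
    img = np.zeros((H, W, 3))
    wsum = np.zeros((H, W, 1))

    cam = (R @ mu.T).T + t                       # points in camera coordinates
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                  # perspective projection

    fx, fy = K[0, 0], K[1, 1]
    for i in range(mu.shape[0]):
        x, y, z = cam[i]
        if z <= 0:                               # skip points behind the camera
            continue
        # Local affine approximation of projection: 2D covariance J * Sigma * J^T
        J = np.array([[fx / z, 0.0, -fx * x / z**2],
                      [0.0, fy / z, -fy * y / z**2]])
        S2 = J @ (R @ Sigma[i] @ R.T) @ J.T + 0.3 * np.eye(2)  # screen-space low-pass
        S2inv = np.linalg.inv(S2)

        # Rasterize a small (~3 sigma) window around the projected center
        cx, cy = uv[i]
        r = int(3.0 * np.sqrt(S2.diagonal().max())) + 1
        xs = np.arange(max(0, int(cx) - r), min(W, int(cx) + r + 1))
        ys = np.arange(max(0, int(cy) - r), min(H, int(cy) + r + 1))
        if xs.size == 0 or ys.size == 0:
            continue
        gx, gy = np.meshgrid(xs, ys)
        d = np.stack([gx - cx, gy - cy], axis=-1)
        w = np.exp(-0.5 * np.einsum('...i,ij,...j->...', d, S2inv, d))[..., None]

        img[gy, gx] += w * color[i]              # accumulate weighted color
        wsum[gy, gx] += w

    return img / np.maximum(wsum, 1e-8)          # normalize accumulated splats
```

In the actual method the per-point covariances come from the learned network and the splatting is implemented differentiably on the GPU; the sketch only conveys the geometric projection and footprint accumulation.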