UniGaussian: Driving Scene Reconstruction from Multiple Camera Models via Unified Gaussian Representations
Urban scene reconstruction is crucial for real-world autonomous driving simulators. Although existing methods achieve photorealistic reconstruction, they mostly focus on pinhole cameras and neglect fisheye cameras. In fact, how to effectively simulate fisheye cameras in driving scenes remains an unsolved problem. In this work, we propose UniGaussian, a novel approach that learns a unified 3D Gaussian representation from multiple camera models for urban scene reconstruction in autonomous driving. Our contributions are two-fold. First, we propose a new differentiable rendering method that distorts 3D Gaussians using a series of affine transformations tailored to fisheye camera models. This addresses the incompatibility of 3D Gaussian splatting with fisheye cameras, which arises from the light-ray distortion introduced by lenses or mirrors. Moreover, our method maintains real-time rendering while remaining differentiable. Second, building on this differentiable rendering method, we design a new framework that learns a unified Gaussian representation from multiple camera models. By applying affine transformations to adapt to different camera models and regularizing the shared Gaussians with supervision from different modalities, our framework learns a unified 3D Gaussian representation from multi-source input data and achieves holistic driving scene understanding. As a result, our approach models multiple sensors (pinhole and fisheye cameras) and modalities (depth, semantics, surface normals, and LiDAR point clouds). Experiments show that our method achieves superior rendering quality and fast rendering speed for driving scene simulation.
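For intuition, the following is a minimal sketch of how a fisheye camera model can be folded into Gaussian splatting through a local affine approximation: the Gaussian mean is pushed through a fisheye projection, and its covariance is warped by the projection's local Jacobian, i.e., an affine transformation. The equidistant fisheye model, the focal length, and all function names below are illustrative assumptions for this sketch, not the paper's actual implementation, which uses a series of affine transformations tailored to fisheye camera models.

    # Minimal sketch (assumed equidistant fisheye model, illustrative parameters).
    import numpy as np

    def fisheye_project(p_cam, f=300.0, cx=640.0, cy=480.0):
        """Equidistant fisheye model: image radius r = f * theta, where theta
        is the angle between the ray to p_cam and the optical axis (+z)."""
        x, y, z = p_cam
        r_xy = np.hypot(x, y)
        theta = np.arctan2(r_xy, z)           # angle from the optical axis
        scale = f * theta / max(r_xy, 1e-9)   # maps (x, y) onto the image plane
        return np.array([cx + scale * x, cy + scale * y])

    def fisheye_jacobian(p_cam, f=300.0, eps=1e-4):
        """Numerical 2x3 Jacobian of the fisheye projection at the Gaussian mean;
        this plays the role of the local affine transformation."""
        J = np.zeros((2, 3))
        base = fisheye_project(p_cam, f)
        for i in range(3):
            d = np.zeros(3)
            d[i] = eps
            J[:, i] = (fisheye_project(p_cam + d, f) - base) / eps
        return J

    def splat_gaussian_fisheye(mean_cam, cov_cam, f=300.0):
        """Warp a 3D Gaussian (mean and covariance in the camera frame) into a
        2D Gaussian on the fisheye image via the affine push-forward."""
        mu_2d = fisheye_project(mean_cam, f)
        J = fisheye_jacobian(mean_cam, f)
        cov_2d = J @ cov_cam @ J.T            # affine warp of the covariance
        return mu_2d, cov_2d

    # Example: a Gaussian one meter ahead and slightly to the left of the camera.
    mu, cov = splat_gaussian_fisheye(np.array([-0.2, 0.1, 1.0]),
                                     np.diag([0.01, 0.01, 0.02]))
    print(mu, cov)

In an actual pipeline, the same shared 3D Gaussians would be rendered through the pinhole or fisheye projection depending on the sensor, so gradients from all camera models flow back into one unified representation.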