With the advancement of computer vision, the recently emerged 3D Gaussian Splatting (3DGS) has become an increasingly popular scene reconstruction algorithm due to its outstanding performance. Distributed 3DGS can efficiently utilize edge devices to train directly on collected images, thereby offloading computational demands and enhancing efficiency. However, traditional distributed frameworks often overlook the computational and communication challenges of real-world environments, hindering large-scale deployment and potentially posing privacy risks. In this paper, we propose Radiant, a hierarchical 3DGS algorithm for large-scale scene reconstruction that accounts for system heterogeneity, improving both model performance and training efficiency. Through an extensive empirical study, we find that it is crucial to appropriately partition regions among edge devices and to allocate distinct camera positions to each device for image collection and training. The core of Radiant is to partition regions based on heterogeneous environment information and allocate workloads to each device accordingly. Furthermore, we provide a 3DGS model aggregation algorithm that enhances reconstruction quality and ensures continuity at model boundaries. Finally, we develop a testbed, and experiments demonstrate that Radiant improves reconstruction quality by up to 25.7% and reduces end-to-end latency by up to 79.6%.