Snapshot Compressive Imaging (SCI) offers a way to capture information from high-speed dynamic scenes, but it requires efficient reconstruction methods to recover the scene information. Despite promising results, current deep learning-based and NeRF-based reconstruction methods face challenges: 1) deep learning-based reconstruction methods struggle to maintain 3D structural consistency within scenes, and 2) NeRF-based reconstruction methods remain limited in handling dynamic scenes. To address these challenges, we propose SCIGS, a variant of 3DGS, and develop a primitive-level transformation network that uses camera pose stamps and Gaussian primitive coordinates as embedding vectors. This approach removes the dependence on camera poses required by vanilla 3DGS and, by rendering from the transformed primitives, enhances multi-view 3D structural consistency in dynamic scenes. Additionally, a high-frequency filter is introduced to eliminate the artifacts generated during the transformation. SCIGS is the first method to reconstruct an explicit 3D scene from a single compressed image, extending its application to dynamic 3D scenes. Experiments on both static and dynamic scenes demonstrate that SCIGS not only enhances SCI decoding but also outperforms current state-of-the-art methods in reconstructing dynamic 3D scenes from a single compressed image.
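As a rough illustration of the primitive-level transformation network described above, the sketch below (in PyTorch) maps each Gaussian primitive's coordinates, together with a learned embedding of the camera pose stamp, to a per-primitive rigid transform. The MLP layout, the sinusoidal positional encoding, and the translation-plus-quaternion output parameterization are illustrative assumptions, not the paper's exact architecture.

```python
# Hypothetical sketch of a primitive-level transformation network.
# Assumptions (not from the paper): an MLP backbone, sinusoidal positional
# encoding of the Gaussian coordinates, a learned embedding for the camera
# pose stamp, and a (translation, quaternion) output parameterization.
import torch
import torch.nn as nn


def positional_encoding(x: torch.Tensor, num_freqs: int = 6) -> torch.Tensor:
    """Encode coordinates with sin/cos at multiple frequencies."""
    freqs = 2.0 ** torch.arange(num_freqs, device=x.device) * torch.pi
    angles = x.unsqueeze(-1) * freqs                 # (N, 3, num_freqs)
    enc = torch.cat([angles.sin(), angles.cos()], dim=-1)
    return enc.flatten(start_dim=-2)                 # (N, 3 * 2 * num_freqs)


class PrimitiveTransformNet(nn.Module):
    def __init__(self, num_stamps: int, stamp_dim: int = 16,
                 num_freqs: int = 6, hidden: int = 128):
        super().__init__()
        self.stamp_embed = nn.Embedding(num_stamps, stamp_dim)
        self.num_freqs = num_freqs
        in_dim = 3 * 2 * num_freqs + stamp_dim
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 7),                    # 3 translation + 4 quaternion
        )

    def forward(self, xyz: torch.Tensor, stamp: torch.Tensor):
        """xyz: (N, 3) Gaussian centers; stamp: index of the camera pose stamp."""
        pe = positional_encoding(xyz, self.num_freqs)              # (N, C)
        emb = self.stamp_embed(stamp).expand(xyz.shape[0], -1)     # (N, D)
        out = self.mlp(torch.cat([pe, emb], dim=-1))
        delta_xyz, quat = out[:, :3], out[:, 3:]
        quat = torch.nn.functional.normalize(quat, dim=-1)         # unit rotation
        return delta_xyz, quat
```

In this reading, each camera pose stamp indexes one exposure within the compressed measurement; applying the predicted per-primitive transform before splatting yields a stamp-specific rendering while the underlying Gaussians remain shared across all exposures.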