We introduce HyperGS, a novel framework for Hyperspectral Novel View Synthesis (HNVS), based on a new latent 3D Gaussian Splatting (3DGS) technique. Our approach enables simultaneous spatial and spectral rendering by encoding material properties from multi-view 3D hyperspectral datasets. HyperGS reconstructs high-fidelity views from arbitrary perspectives with improved accuracy and speed, outperforming existing methods. To address the challenges of high-dimensional data, we perform view synthesis in a learned latent space, incorporating a pixel-wise adaptive density function and a pruning technique for increased training stability and efficiency. Additionally, we introduce the first HNVS benchmark, implementing a number of new baselines based on recent SOTA RGB-NVS techniques, alongside the small number of prior works on HNVS. We demonstrate HyperGS's robustness through extensive evaluation on real and simulated hyperspectral scenes, achieving a 14 dB accuracy improvement over previously published models.
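To make the latent-space idea concrete, the sketch below illustrates one plausible reading of the pipeline described above: per-Gaussian latent features are splatted into a low-dimensional latent image, which is then decoded pixel-wise into a full spectrum. The decoder layout, its dimensions, and the name `SpectralDecoder` are illustrative assumptions, not the exact HyperGS architecture.

```python
# Minimal sketch (PyTorch assumed): decode a rasterized latent image into
# hyperspectral bands. All sizes and the MLP design are assumptions for
# illustration only.
import torch
import torch.nn as nn

class SpectralDecoder(nn.Module):
    """Maps a rendered per-pixel latent feature to a full spectrum."""
    def __init__(self, latent_dim: int = 16, num_bands: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim, 64),
            nn.ReLU(inplace=True),
            nn.Linear(64, num_bands),
            nn.Sigmoid(),  # reflectance-like values in [0, 1]
        )

    def forward(self, latent_image: torch.Tensor) -> torch.Tensor:
        # latent_image: (H, W, latent_dim), e.g. produced by splatting
        # per-Gaussian latent features with a standard 3DGS rasterizer
        return self.mlp(latent_image)  # -> (H, W, num_bands)

# Usage: rasterize latent features, then decode every pixel to a spectrum.
decoder = SpectralDecoder(latent_dim=16, num_bands=128)
latent_render = torch.rand(256, 256, 16)   # stand-in for a rasterized latent image
hyperspectral = decoder(latent_render)     # (256, 256, 128) hyperspectral render
```

The design choice this sketch highlights is that splatting happens in a compact latent space rather than over all spectral bands, so the rasterizer's per-Gaussian payload stays small while the per-pixel decoder recovers the full spectrum.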