HDR-GS: Efficient High Dynamic Range Novel View Synthesis at 1000x Speed via Gaussian Splatting

(Teaser renderings of the Bathroom, Chair, Dog, Bear, Desk, and Sponza scenes)

 

Introduction

This is the official implementation of our NeurIPS 2024 paper "HDR-GS: Efficient High Dynamic Range Novel View Synthesis at 1000x Speed via Gaussian Splatting". We have rerun Structure-from-Motion (SfM) to recalibrate the data. If you find this repo useful, please give it a star ⭐ and consider citing our paper. Thank you.

News

  • 2024.12.01 : We provide code for directly loading a pre-trained model to run testing and render a spiral demo video. Welcome to have a try! 🤗
  • 2024.11.30 : We have set up a leaderboard on the paperswithcode website! Welcome to submit your entry! 🏆
  • 2024.11.26 : Code, data recalibrated to the OpenCV camera convention, and training logs have been released. Feel free to check them out and have a try! 🤗
  • 2024.07.01 : Our HDR-GS has been accepted by NeurIPS 2024! Code will be released before the start date of the conference (2024.12.10). Stay tuned. 🚀
  • 2024.05.24 : Our paper is on arXiv now. Code, data, and training logs will be released. Stay tuned. 💫

Performance

Synthetic Datasets

(Quantitative and qualitative results on the synthetic datasets)

Real Datasets

(Quantitative and qualitative results on the real datasets)

 

Interactive Results

(Interactive comparisons for the Bathroom, Chair, Diningroom, Dog, Sofa, and Sponza scenes)

 

1. Create Environment

We recommend using Conda to set up the environment.

# clone the repo
git clone https://github.com/caiyuanhao1998/HDR-GS --recursive
cd HDR-GS

# Windows only: build the CUDA extensions with the MSVC toolchain
SET DISTUTILS_USE_SDK=1

# install the official environment of 3DGS
conda env create --file environment.yml
conda activate hdr_gs
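
Before training, it is worth confirming that PyTorch can actually see your GPU, since the 3DGS rasterizer requires CUDA. A minimal sanity check (our suggestion, not part of the repo):

# optional sanity check: confirm PyTorch and CUDA are working
import torch

print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device:", torch.cuda.get_device_name(0))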

 

2. Prepare Dataset

Download our recalibrated and reorganized datasets from Google Drive, then put them into the folder data_hdr/ as follows:

  |--data_hdr
    |--synthetic
      |--bathroom
        |--exr
          |--0.exr
          |--1.exr
          ...
        |--images
          |--0_0.png
          |--0_1.png
          ...
        |--sparse
          |--0
            |--cameras.bin
            |--images.bin
            |--points3D.bin
            |--points3D.ply  
            |--project.ini
      |--bear
      ...
    |--real
      |--flower
        |--input_images
          |--000_0.jpg
          |--000_1.jpg
          ...
        |--poses_bounds_exps.npy
        |--sparse
          |--0
            |--cameras.bin
            |--images.bin
            |--points3D.bin
            |--points3D.ply  
            |--project.ini
      |--computer
      ...

Note: The original datasets were collected by HDR-NeRF, but its camera poses follow normalized device coordinates, which are not suitable for 3DGS, and HDR-NeRF does not provide initial point clouds. We therefore reran Structure-from-Motion to recalibrate the camera poses and generate initial point clouds. We also reorganized the datasets according to the description in the HDR-NeRF paper, which differs from its official repo.
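
To sanity-check the downloaded data, the sketch below loads the pose file of a real scene and tone-maps one synthetic EXR ground truth for preview. The array layout is an assumption based on the LLFF-style convention that HDR-NeRF follows (a flattened 3x5 pose, two depth bounds, and an exposure value per image), so verify the printed shape against your download:

# a minimal sketch under assumed file layouts -- not part of the repo
import os
os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"  # newer OpenCV builds gate EXR I/O

import cv2
import numpy as np

# per-image poses, near/far bounds, and exposures for a real scene;
# if LLFF's (N, 17) rows gain one exposure column, expect (N, 18) -- check!
data = np.load("data_hdr/real/flower/poses_bounds_exps.npy")
print("poses_bounds_exps shape:", data.shape)

# linear HDR ground truth for a synthetic scene; Reinhard + gamma for an 8-bit preview
hdr = cv2.imread("data_hdr/synthetic/bathroom/exr/0.exr", cv2.IMREAD_UNCHANGED)
ldr = np.clip((hdr / (hdr + 1.0)) ** (1.0 / 2.2) * 255.0, 0, 255).astype(np.uint8)
cv2.imwrite("bathroom_0_preview.png", ldr)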

 

3. Testing

We provide code for directly loading a pre-trained model to run testing and render a spiral video. Please download our pre-trained bathroom weights from Google Drive and put them into the folder pretrained_weights/.

# For synthetic scenes
python3 train_synthetic.py --config config/bathroom.yaml --eval --gpu_id 0 --syn --load_path pretrained_weights/bathroom  --test_only

Besides, if you train a model with the config bathroom.yaml, you will get an output directory structured as:

  |--output
    |--mlp
      |--bathroom
        |--exp-time
          |--point_cloud
            |--iteration_x
              |--point_cloud.ply
              |--tone_mapper.pth
            ...
          |--test_set_vis
          |--videos
          |--cameras.json
          |--cfg_args
          |--input.ply
          |--log.txt

Then the --load_path should be set to "output/mlp/bathroom/exp-time/point_cloud/iteration_x".
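
If you would rather not hard-code the iteration number, a small helper along these lines can locate the newest checkpoint folder. This is a hypothetical convenience script, not part of the repo; it only assumes the output layout shown above:

# hypothetical helper: pick the latest iteration_* folder for --load_path
import os
import re

def latest_checkpoint(exp_dir):
    """Return the point_cloud/iteration_* subfolder with the highest step."""
    pc_dir = os.path.join(exp_dir, "point_cloud")
    iters = [d for d in os.listdir(pc_dir) if re.fullmatch(r"iteration_\d+", d)]
    if not iters:
        raise FileNotFoundError(f"no iteration_* folders under {pc_dir}")
    return os.path.join(pc_dir, max(iters, key=lambda d: int(d.split("_")[1])))

# e.g. pass the result as --load_path to train_synthetic.py --test_only
print(latest_checkpoint("output/mlp/bathroom/exp-time"))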

4. Training

We provide training logs for your convenience in debugging. Please download them from Google Drive.

You can run the .sh file by

# For synthetic scenes
bash train_synthetic.sh

# For real scenes
bash train_real.sh

Or you can train directly on a specific scene:

# For synthetic scenes
python3 train_synthetic.py --config config/sponza.yaml --eval --gpu_id 0 --syn

python3 train_synthetic.py --config config/sofa.yaml --eval --gpu_id 0 --syn

python3 train_synthetic.py --config config/bear.yaml --eval --gpu_id 0 --syn

python3 train_synthetic.py --config config/chair.yaml --eval --gpu_id 0 --syn

python3 train_synthetic.py --config config/desk.yaml --eval --gpu_id 0 --syn

python3 train_synthetic.py --config config/diningroom.yaml --eval --gpu_id 0 --syn

python3 train_synthetic.py --config config/dog.yaml --eval --gpu_id 0 --syn

python3 train_synthetic.py --config config/bathroom.yaml --eval --gpu_id 0 --syn

# For real scenes
python3 train_real.py --config config/flower.yaml --eval --gpu_id 0

python3 train_real.py --config config/computer.yaml --eval --gpu_id 0

python3 train_real.py --config config/box.yaml --eval --gpu_id 0

python3 train_real.py --config config/luckycat.yaml --eval --gpu_id 0

 

5. Citation

@inproceedings{hdr_gs,
  title={HDR-GS: Efficient High Dynamic Range Novel View Synthesis at 1000x Speed via Gaussian Splatting},
  author={Yuanhao Cai and Zihao Xiao and Yixun Liang and Minghan Qin and Yulun Zhang and Xiaokang Yang and Yaoyao Liu and Alan Yuille},
  booktitle={NeurIPS},
  year={2024}
}
