conda create -n eiamvs python=3.7.9
conda activate eiamvs
pip install -r requirements.txt
pip install torch==1.12.1+cu116 torchvision==0.13.1+cu116 torchaudio==0.12.1 -f https://download.pytorch.org/whl/torch_stable.html
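To verify that the install picked up CUDA, here is a quick sanity check (our suggestion, not part of the original setup):
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
# should print something like: 1.12.1+cu116 True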
Training data. We use the same DTU training data as MVSNet and CasMVSNet. Download the DTU training data and Depths raw, then unzip and organize them as:
dtu_training
├── Cameras
├── Depths
├── Depths_raw
└── Rectified
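One way to produce this layout (the archive names below are placeholders; substitute the files you actually downloaded):
mkdir -p dtu_training
unzip dtu_training_data.zip -d dtu_training   # hypothetical archive name
unzip dtu_depths_raw.zip -d dtu_training      # hypothetical archive name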
Testing data. Download the DTU testing data and unzip it as:
dtu_testing
├── scan1
├── scan4
├── ...
Download BlendedMVS and unzip it as:
blendedmvs
├── 5a0271884e62597cdee0d0eb
├── 5a3ca9cb270f0e3f14d0eddb
├── ...
├── training_list.txt
├── ...
Download Tanks and Temples and unzip it as:
tanksandtemples
├── advanced
│   ├── Auditorium
│   ├── ...
└── intermediate
    ├── Family
    ├── ...
We use the camera parameters of the short depth range version (included in your download), so you should manually replace the cams folder in the intermediate folder with the short depth range version; a sketch of one way to do this follows.
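A minimal sketch of that replacement, assuming the short depth range cameras sit in a per-scene short_range_cams/<Scene>/cams directory (a hypothetical layout; adapt the source path to your download):
for scene in tanksandtemples/intermediate/*/; do
    name=$(basename "$scene")
    rm -rf "${scene}cams"                           # remove the default cams
    cp -r "short_range_cams/${name}/cams" "$scene"  # hypothetical source path
done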
To train the model from scratch on DTU, specify DTU_TRAINING in train_dtu.sh first and then run:
bash train_dtu.sh
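For reference, the variable in train_dtu.sh is just the dataset root, e.g. (example path):
# in train_dtu.sh
DTU_TRAINING="/path/to/dtu_training/"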
To fine-tune the model on BlendedMVS, you need to specify BLD_TRAINING and BLD_CKPT_FILE in train_bld.sh first, then run:
bash train_bld.sh
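Again, both variables are simple paths, e.g. (example values; the checkpoint name is hypothetical):
# in train_bld.sh
BLD_TRAINING="/path/to/blendedmvs/"
BLD_CKPT_FILE="/path/to/checkpoints/dtu_model.ckpt"  # DTU checkpoint to fine-tune from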
For DTU testing, we use the model (pretrained model) trained on the DTU training dataset. Specify DTU_TESTPATH and DTU_CKPT_FILE in test_dtu.sh first, then run the following command to generate the point cloud results.
bash test_dtu.sh
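As above, e.g. (example values; the checkpoint name is hypothetical):
# in test_dtu.sh
DTU_TESTPATH="/path/to/dtu_testing/"
DTU_CKPT_FILE="/path/to/checkpoints/dtu_model.ckpt"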
For quantitative evaluation, download SampleSet and Points from DTU's website. Unzip them and place the Points folder in SampleSet/MVS Data/. The structure should look like:
SampleSet
└── MVS Data
    └── Points
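For example, from the directory containing both unzipped folders:
mv Points "SampleSet/MVS Data/"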
Specify datapath, plyPath and resultsPath in evaluations/dtu/BaseEvalMain_web.m, and datapath and resultsPath in evaluations/dtu/ComputeStat_web.m, then run the following commands to obtain the quantitative metrics.
cd evaluations/dtu
matlab -nodisplay
BaseEvalMain_web
ComputeStat_web
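Alternatively, the two scripts can be run non-interactively in a single batch invocation (standard MATLAB -r usage; run from evaluations/dtu):
matlab -nodisplay -nosplash -r "BaseEvalMain_web; ComputeStat_web; exit"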
We recommend using the fine-tuned model (pretrained model) to test on the Tanks and Temples benchmark. Similarly, specify TNT_TESTPATH and TNT_CKPT_FILE in test_tnt_inter.sh and test_tnt_adv.sh. To generate the point cloud results, just run:
bash test_tnt_inter.sh
bash test_tnt_adv.sh
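The variables mirror the DTU case, e.g. (example values; the checkpoint name is hypothetical):
# in test_tnt_inter.sh and test_tnt_adv.sh
TNT_TESTPATH="/path/to/tanksandtemples/"
TNT_CKPT_FILE="/path/to/checkpoints/bld_finetuned.ckpt"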
For quantitative evaluation, you can upload your point clouds to the Tanks and Temples benchmark.
@ARTICLE{wang2024eiamvs,
  author={Wang, Shaoqian and Li, Bo and Yang, Jian and Dai, Yuchao},
  journal={IEEE Robotics and Automation Letters},
  title={Adaptive Feature Enhanced Multi-View Stereo With Epipolar Line Information Aggregation},
  year={2024},
  volume={9},
  number={11},
  pages={10439-10446}
}
Our work is partially based on these open-source works: MVSNet, cascade-stereo, and ET-MVSNet. We appreciate their contributions to the MVS community.