See the subsections below for training on the different datasets. Available training datasets: DTU and BlendedMVS.
Download the preprocessed DTU training data and Depth_raw from the original MVSNet repo and unzip them. For a description of how the data was created, please refer to the original paper.
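After unzipping, a quick layout check can catch path mistakes early. A minimal sketch, assuming the subfolder names of the original MVSNet release (adjust them to match what you actually unzipped):

import os

# Subfolder names are assumptions based on the original MVSNet preprocessed
# DTU release; rename them to match your actual unzipped data.
root = os.environ.get("DTU_DIR", "mvs_training/dtu")
for sub in ("Cameras", "Depths", "Rectified", "Depths_raw"):
    path = os.path.join(root, sub)
    print(path, "found" if os.path.isdir(path) else "MISSING")

Point --root_dir (i.e. $DTU_DIR) at this root when training.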
Run (example):
python train.py \
--dataset_name dtu \
--root_dir $DTU_DIR \
--num_epochs 16 --batch_size 2 \
--depth_interval 2.65 --n_depths 8 32 48 --interval_ratios 1.0 2.0 4.0 \
--optimizer adam --lr 1e-3 --lr_scheduler cosine \
--exp_name exp
Note that the model consumes a large amount of GPU memory, so the batch size is generally small.
See opt.py for all configurations.
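For orientation, here is a rough argparse sketch of the flags used above; the authoritative definitions, defaults, and the full flag list live in opt.py, so treat everything below as an assumption:

import argparse

# Hypothetical sketch mirroring the command above; see opt.py for the
# real definitions and defaults.
parser = argparse.ArgumentParser()
parser.add_argument('--dataset_name', type=str, default='dtu')
parser.add_argument('--root_dir', type=str, required=True)
parser.add_argument('--num_epochs', type=int, default=16)
parser.add_argument('--batch_size', type=int, default=2)
parser.add_argument('--depth_interval', type=float, default=2.65)
parser.add_argument('--n_depths', nargs=3, type=int, default=[8, 32, 48])
parser.add_argument('--interval_ratios', nargs=3, type=float, default=[1.0, 2.0, 4.0])
parser.add_argument('--optimizer', type=str, default='adam')
parser.add_argument('--lr', type=float, default=1e-3)
parser.add_argument('--lr_scheduler', type=str, default='cosine')
parser.add_argument('--exp_name', type=str, default='exp')
args = parser.parse_args()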
To train on BlendedMVS, run:
python train.py \
--dataset_name blendedmvs \
--root_dir $BLENDEDMVS_LOW_RES_DIR \
--num_epochs 16 --batch_size 2 \
--depth_interval 192.0 --n_depths 8 32 48 --interval_ratios 1.0 2.0 4.0 \
--optimizer adam --lr 1e-3 --lr_scheduler cosine \
--exp_name exp
The --depth_interval 192.0 is the product of the coarsest n_depth and the coarsest --interval_ratio: 192.0 = 48 × 4.0.
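As a quick check of that relation (pure arithmetic, mirroring the flags above):

# fine -> coarse, as passed via --n_depths and --interval_ratios
n_depths = [8, 32, 48]
interval_ratios = [1.0, 2.0, 4.0]

# --depth_interval for BlendedMVS = coarsest n_depth x coarsest interval_ratio
depth_interval = n_depths[-1] * interval_ratios[-1]
print(depth_interval)  # 192.0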
To evaluate on DTU, run:
python eval.py --dataset_name dtu \
--root_dir MVS/dtu_test \
--img_wh 1152 864 \
--ckpt_path epoch=15.ckpt \
--deform_conv 0 1 0 0 1 0 0 1 \
--split test \
--conf 0.1 \
--min_geo_consistent 5
#--save_visual
#--scan $SCAN
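Conceptually, --conf and --min_geo_consistent are the two standard MVSNet-style fusion filters: a pixel's depth survives only if its predicted confidence exceeds the threshold and enough source views agree geometrically. A minimal NumPy sketch of the idea; the array names and the 1-pixel error bound are illustrative assumptions, not the repo's actual code:

import numpy as np

def filter_depth(depth, prob, reproj_errs,
                 conf=0.1, min_geo_consistent=5, geo_thresh=1.0):
    # depth:       (H, W) reference-view depth map
    # prob:        (H, W) per-pixel confidence from the network
    # reproj_errs: (V, H, W) reprojection error against each of V source views
    # geo_thresh:  assumed pixel-error bound for a view to count as consistent
    photo_mask = prob > conf
    n_consistent = (reproj_errs < geo_thresh).sum(axis=0)
    geo_mask = n_consistent >= min_geo_consistent
    mask = photo_mask & geo_mask
    return np.where(mask, depth, 0.0), mask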
To evaluate on BlendedMVS, run:
python eval.py --dataset_name blendedmvs \
--root_dir dataset_low_res \
--img_wh 768 576 \
--ckpt_path epoch=15.ckpt \
--save_visual \
--deform_conv 0 1 0 0 1 0 0 1 \
--split val \
--conf 0.1 \
--min_geo_consistent 5 \
--depth_interval 192
#--scan $SCAN
To evaluate on Tanks and Temples, run:
python eval.py --dataset_name tanks \
--root_dir tankandtemples/ \
--ckpt_path epoch=15.ckpt \
--deform_conv 0 1 0 0 1 0 0 1 \
--split intermediate \
--min_geo_consistent 5
#--save_visual
#--scan $SCAN
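To evaluate one scene at a time, the commented --scan flag can be driven from a small loop. A sketch using subprocess; the scene names are placeholders for whatever scans your split contains:

import subprocess

# Placeholder scene names; replace with the scans of your split.
for scan in ["Family", "Horse", "Train"]:
    subprocess.run(
        ["python", "eval.py",
         "--dataset_name", "tanks",
         "--root_dir", "tankandtemples/",
         "--ckpt_path", "epoch=15.ckpt",
         "--deform_conv", "0", "1", "0", "0", "1", "0", "0", "1",
         "--split", "intermediate",
         "--min_geo_consistent", "5",
         "--scan", scan],
        check=True)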