This is the code for the "Look-Ahead Training with Learned Reflectance Loss for Single-Image SVBRDF Estimation" project | Paper
To set up the environment, please run the command below (tested on Linux):
conda env create -f env.yml
Before running inference, please download the pretrained model and the test data: save the downloaded model to ./ckpt/ and extract the data to ./dataset
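Before launching inference, it may help to verify that the downloaded files landed where meta_test.py expects them. The following is a minimal sanity-check sketch; since the exact checkpoint and dataset filenames depend on the download, it only checks that the two directories exist and are non-empty:
```python
import os

# Hypothetical sanity check: confirm the model is in ./ckpt/ and the
# extracted data is in ./dataset before running meta_test.py.
for folder in ("./ckpt/", "./dataset"):
    if not os.path.isdir(folder) or not os.listdir(folder):
        raise FileNotFoundError(
            f"'{folder}' is missing or empty -- download/extract it first")
    print(f"{folder}: {len(os.listdir(folder))} entries found")
```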
To run inference on our and MaterialGAN's dataset with ground truth, please use this command:
python meta_test.py --fea all_N1 --wN_outer 80 --gamma --cuda --test_img $mode --name $name --val_step 7 --wR_outer 5 --loss_after1 TD --Wfea_vgg 5e-2 --Wdren_outer 10 --WTDren_outer 10 --adjust_light
where $mode is set to OurReal2 for our test dataset and MGReal2 for the MaterialGAN dataset, and $name specifies the directory where results are saved.
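To run both evaluations in one go, the command above can be wrapped in a small Python driver. This is only a convenience sketch that reproduces the documented flags; the output directory names out_our and out_mg are placeholders:
```python
import subprocess

# Hypothetical driver: run the ground-truth evaluation on both datasets
# with the flags documented above. Output folder names are placeholders.
for mode, name in [("OurReal2", "out_our"), ("MGReal2", "out_mg")]:
    subprocess.run([
        "python", "meta_test.py",
        "--fea", "all_N1", "--wN_outer", "80", "--gamma", "--cuda",
        "--test_img", mode, "--name", name,
        "--val_step", "7", "--wR_outer", "5", "--loss_after1", "TD",
        "--Wfea_vgg", "5e-2", "--Wdren_outer", "10", "--WTDren_outer", "10",
        "--adjust_light",
    ], check=True)
```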
To run inference on real captured data without ground truth, please first center the specular highlight of the input image (a rough sketch of this preprocessing is given after this section) and then run this command:
python meta_test.py --val_root $path --fea all_N1 --wN_outer 80 --gamma --cuda --test_img Real --name $name --val_step 7 --wR_outer 5 --loss_after1 TD --Wfea_vgg 5e-2 --Wdren_outer 10 --WTDren_outer 10 --adjust_light
where $path points to the directory of the real test images, and $name specifies the directory where results are saved. The final feature maps are saved to $name/fea, and the optimization process at steps 0, 1, 2, 5, and 7 is saved to $name/pro
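One rough way to center the specular highlight is to locate the brightest region of the image (after blurring to suppress noisy hot pixels) and wrap-shift the image so that region sits at the center. The sketch below is only one possible approach under those assumptions, not the exact preprocessing used in the paper:
```python
import numpy as np
from scipy.ndimage import gaussian_filter

def center_highlight(img):
    """Shift img (H, W, 3 float array in [0, 1]) so its specular highlight
    lands at the image center. Sketch only: assumes a single dominant
    highlight and a roughly tileable capture (np.roll wraps around)."""
    # Blur the luminance so a single hot pixel doesn't dominate.
    lum = gaussian_filter(img.mean(axis=2), sigma=5)
    hy, hx = np.unravel_index(np.argmax(lum), lum.shape)
    h, w = lum.shape
    # Wrap-shift the highlight to (h // 2, w // 2).
    return np.roll(img, (h // 2 - hy, w // 2 - hx), axis=(0, 1))
```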
We also provide a higher-resolution version of the unscaled real scenes: link
If you find this work useful for your research, please cite:
@article{zhou2022look,
title={Look-Ahead Training with Learned Reflectance Loss for Single-Image SVBRDF Estimation},
author={Zhou, Xilong and Kalantari, Nima Khademi},
journal={ACM Transactions on Graphics (TOG)},
volume={41},
number={6},
pages={1--12},
year={2022},
publisher={ACM New York, NY, USA}
}
This code has not been fully cleaned up yet; we will clean it up soon. Feel free to email me if you have any questions: [email protected]. Thanks for your understanding!