This repo contains the code and results of the AAAI 2020 paper:
Towards Ghost-free Shadow Removal via
Dual Hierarchical Aggregation Network and Shadow Matting GAN
Xiaodong Cun, Chi-Man Pun*, Cheng Shi
University of Macau
Syn. Datasets | Models | Results | Paper | Supp. | Poster | 🔥Online Demo!(Google CoLab)
We plot a result of our model with the input shown in the yellow square. As the two zoomed regions show, our method removes the shadow and reduces ghosting successfully.
#4: inconsistency between the code and Figure 2, thanks @naoto0804
Shadow removal is an essential task for scene understanding. Many studies consider only matching the image contents, which often causes two types of ghosts: color inconsistencies in shadow regions or artifacts on shadow boundaries. In this paper, we tackle these issues from two directions. On the one hand, to carefully learn a boundary-artifact-free image, we propose a novel network structure named the Dual Hierarchical Aggregation Network (DHAN). It contains a series of dilated convolutions with growing dilation rates as the backbone, without any down-sampling, and we hierarchically aggregate multi-context features for attention and prediction, respectively. On the other hand, we argue that training on a limited dataset restricts the textural understanding of the network, which leads to color inconsistencies in the shadow regions. Currently, the largest dataset contains 2k+ paired shadow/shadow-free images. However, it has only 0.1k+ unique scenes, since many samples share exactly the same background with different shadow positions. Thus, we design a Shadow Matting Generative Adversarial Network (SMGAN) to synthesize realistic shadow mattings from a given shadow mask and a shadow-free image. With the help of novel masks or scenes, we enhance the current datasets using the synthesized shadow images. Experiments show that our DHAN can erase the shadows and produce high-quality ghost-free images. After training on the synthesized and real datasets, our network outperforms other state-of-the-art methods by a large margin.
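For intuition, here is a minimal PyTorch sketch of the network idea described above: a backbone of dilated convolutions with growing dilation rates and no down-sampling, whose multi-context features are aggregated once for an attention map and once for the prediction. This is not the official implementation (the code in this repo is TensorFlow); the layer count, channel width, and the plain concatenation used for aggregation are simplifying assumptions.

```python
import torch
import torch.nn as nn

class TinyDHAN(nn.Module):
    """Toy stand-in for the DHAN backbone: dilated convolutions with growing
    dilation rates (no down-sampling); features from every level are aggregated
    for a spatial attention map and for the final prediction."""

    def __init__(self, channels=32, num_layers=5):
        super().__init__()
        self.layers = nn.ModuleList()
        in_ch = 3
        for i in range(num_layers):
            d = 2 ** i  # dilation grows: 1, 2, 4, 8, 16
            self.layers.append(nn.Sequential(
                nn.Conv2d(in_ch, channels, kernel_size=3, padding=d, dilation=d),
                nn.LeakyReLU(0.2, inplace=True)))
            in_ch = channels
        agg_ch = channels * num_layers
        self.attn_head = nn.Sequential(nn.Conv2d(agg_ch, 1, kernel_size=1), nn.Sigmoid())
        self.pred_head = nn.Conv2d(agg_ch, 3, kernel_size=1)

    def forward(self, x):
        feats, h = [], x
        for layer in self.layers:
            h = layer(h)
            feats.append(h)
        agg = torch.cat(feats, dim=1)       # multi-context aggregation (simplified)
        attn = self.attn_head(agg)          # shadow-attention map
        pred = self.pred_head(agg * attn)   # attention-modulated shadow-free prediction
        return pred, attn

# Quick shape check
out, attn = TinyDHAN()(torch.rand(1, 3, 64, 64))
print(out.shape, attn.shape)  # torch.Size([1, 3, 64, 64]) torch.Size([1, 1, 64, 64])
```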
Comparison on the shadow removal datasets. The first two samples are from the ISTD dataset, while the bottom two are from the SRD dataset. In (d), the top two samples are from ST-CGAN and the bottom two are from DeShadowNet.
- Training on the ISTD dataset and generating shadows using the USR dataset: Syn. Shadow (a sketch of the matting composition is given after this list)
- Extracted shadow masks for the SRD dataset: SRD Mask
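The synthesized shadow images linked above follow the matting idea from the paper: a shadow image is modelled as the element-wise product of a shadow-free image and a shadow matte. The snippet below is only a rough illustration of that composition step; the constant attenuation stands in for the matte predicted by SMGAN and is not how the generator actually works.

```python
import numpy as np

def compose_shadow(shadow_free, matte):
    """Matting composition: shadow image = shadow-free image * matte (element-wise).
    Both inputs are float arrays in [0, 1]; `matte` is (H, W, 1) or (H, W, 3)."""
    return np.clip(shadow_free * matte, 0.0, 1.0)

# Toy example with a hand-made matte (illustration only; in the paper the
# matte is produced by the SMGAN generator from a shadow mask and the scene).
h, w = 64, 64
shadow_free = np.random.rand(h, w, 3)
mask = np.zeros((h, w, 1))
mask[16:48, 16:48] = 1.0      # binary shadow mask
matte = 1.0 - 0.5 * mask      # hypothetical matte: 50% darker inside the mask
shadow_image = compose_shadow(shadow_free, matte)
```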
- ISTD dataset
- USR: Unpaired Shadow Removal dataset
- SRD Dataset (please email the authors to get access).
Create the conda environment following the instructions here.
- Download the pre-trained model from above; SRD+ is recommended.
- Download pretrained-vgg19 from MatConvNet.
- Uncompress the pre-trained models into 'Models/' as shown in the folders.
- Start a Jupyter server and run the demo code following the instructions in `demo.ipynb`.
It has been tested on both macOS 10.15 and Ubuntu 18.04 LTS. Both CPU and GPU are supported (but running on the CPU is quite slow).
Alternatively, an online demo is hosted on Google CoLab at this url.
The data folders should be:
ISTD_DATA_ROOT
* train
- train_A # shadow image
- train_B # shadow mask
- train_C # shadowfree image
- shadow_free # USR shadowfree images
- synC # our Syn. shadow
* test
- test_A # shadow image
- test_B # shadow mask
- test_C # shadowfree image
SRD_DATA_ROOT
* train
- train_A # the original `shadow` folder in `SRD`, renamed.
- train_B # the shadow masks we extracted ourselves.
- train_C # the original `shadow_free` folder in `SRD`, renamed.
- shadow_free # USR shadowfree images
- synC # our Syn. shadow
* test
- train_A # the original `shadow` folder in `SRD`, renamed.
- train_B # the shadow masks we extracted ourselves.
- train_C # the original `shadow_free` folder in `SRD`, renamed.
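Before training, it can help to confirm that a dataset root matches the layout above. The helper below is hypothetical (it is not part of this repository) and simply checks for the expected folders:

```python
import os

# Hypothetical helper (not part of this repo): check that a dataset root
# follows the folder layout described above.
EXPECTED = {
    "train": ["train_A", "train_B", "train_C", "shadow_free", "synC"],
    "test":  ["test_A", "test_B", "test_C"],  # for SRD the test folders keep the train_* names
}

def check_layout(root, expected=EXPECTED):
    missing = []
    for split, folders in expected.items():
        for folder in folders:
            path = os.path.join(root, split, folder)
            if not os.path.isdir(path):
                missing.append(path)
    return missing

if __name__ == "__main__":
    missing = check_layout("/path/to/ISTD_dataset")  # replace with your data root
    print("Missing folders:" if missing else "Layout looks good.", *missing, sep="\n")
```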
Download the ISTD dataset from the source, then download the USR dataset and unzip it into $YOUR_DATA_ROOT/ISTD_dataset/train/. Train the GAN by:
python train_ss.py \
--task YOUR_TASK_NAME \
--data_dir $YOUR_DATA_ROOT/$ISTD_DATASET_ROOT/train/ \
--use_gpu 0 \
--is_training 1
# --use_gpu: GPU id; a value below 0 runs on the CPU
# --is_training: 1 for training, 0 for testing
Download the ISTD dataset from the source, then download our synthesized dataset and unzip it into $YOUR_DATA_ROOT/ISTD_dataset/train/. Train the network by:
python train_sr.py \
--task YOUR_TASK_NAME \
--data_dir $YOUR_DATA_ROOT/$ISTD_DATASET_ROOT/train/ \
--use_gpu 1 \
--is_training 1 \
--use_da 0.5
# --use_gpu: GPU id; a value below 0 runs on the CPU
# --is_training: 1 for training, 0 for testing
# --use_da: the percentage of synthesized data used for training
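According to the comment above, `--use_da` sets the fraction of synthesized data used during training. The sketch below only illustrates that interpretation (it is an assumption about the flag, not the repo's actual data loader): each training sample is drawn from the synthesized set with probability `use_da` and from the real set otherwise.

```python
import random

def choose_source(use_da=0.5, rng=random):
    """Illustration only: decide whether the next training sample is taken from
    the synthesized shadows (synC) or the real shadows (train_A)."""
    return "synC" if rng.random() < use_da else "train_A"

# With use_da = 0.5, roughly half the drawn samples are synthesized.
counts = {"synC": 0, "train_A": 0}
for _ in range(10000):
    counts[choose_source(0.5)] += 1
print(counts)  # e.g. {'synC': 5012, 'train_A': 4988}
```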
Download and unzip the SRD dataset from the source, then reorganize it as described above. Train the network by:
python train_sr.py \
--task YOUR_TASK_NAME \
--data_dir $YOUR_DATA_ROOT/$SRD_DATASET_ROOT/train/ \
--use_gpu 1 \
--is_training 1 \
--use_da 0.5
# --use_gpu: GPU id; a value below 0 runs on the CPU
# --is_training: 1 for training, 0 for testing
# --use_da: the percentage of synthesized data used for training
# ISTD DATASET
python train_sr.py \
--task YOUR_TASK_NAME \
--data_dir $YOUR_DATA_ROOT/$ISTD_DATASET_ROOT/test/ \
--use_gpu 1 \
--is_training 0
# --task: name of the pre-trained model (loaded from logs/YOUR_TASK_NAME)
# --use_gpu: GPU id; a value below 0 runs on the CPU
# --is_training: 0 for testing
# SRD DATASET
python train_sr.py \
--task YOUR_TASK_NAME \
--data_dir $YOUR_DATA_ROOT/$SRD_DATASET_ROOT/test/ \
--use_gpu 1 \
--is_training 0
# --task: name of the pre-trained model (loaded from logs/YOUR_TASK_NAME)
# --use_gpu: GPU id; a value below 0 runs on the CPU
# --is_training: 0 for testing
The authors would like to thank Nan Chen for her helpful discussion.
Part of the code is based on FastImageProcessing and Perceptual Reflection Removal.
If you find our work useful in your research, please consider citing:
@misc{cun2019ghostfree,
title={Towards Ghost-free Shadow Removal via Dual Hierarchical Aggregation Network and Shadow Matting GAN},
author={Xiaodong Cun and Chi-Man Pun and Cheng Shi},
year={2019},
eprint={1911.08718},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
Please contact me if you have any questions (Xiaodong Cun, [email protected]).
Zhang, Xuaner, Ren Ng, and Qifeng Chen. "Single Image Reflection Separation with Perceptual Losses." Proceedings of CVPR, 2018.
Hu, Xiaowei, et al. "Mask-ShadowGAN: Learning to Remove Shadows from Unpaired Data." Proceedings of ICCV, 2019.