# EfficientDerain

We propose EfficientDerain for high-efficiency single-image deraining.

## Requirements

- python 3.6
- pytorch 1.6.0
- opencv-python 4.4.0.44
- scikit-image 0.17.2
- torchvision 0.9.1
- pytorch-msssim 0.2.1

## Datasets

## Pretrained models

Here are the URLs of the pretrained models (including v3_rain100H, v3_rain1400, v3_SPA, v4_rain100H, v4_rain1400, v4_SPA):

- direct download: http://www.xujuefei.com/models_effderain.zip
- google drive: https://drive.google.com/file/d/1OBAIG4su6vIPEimTX7PNuQTxZDjtCUD8/view?usp=sharing
- baiduyun: https://pan.baidu.com/s/1kFWP-b3tD8Ms7VCBj9f1kw (pwd: vr3g)
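
Before pointing the scripts at a downloaded checkpoint, it can help to confirm the file loads and to peek at its parameter names; here is a minimal sketch with plain PyTorch (the unzipped file path below is an assumption, adjust it to wherever you extracted the archive):

```python
import torch

# Assumed path: adjust to wherever you unzipped models_effderain.zip.
ckpt_path = "./models_effderain/v3_SPA.pth"

# Load on CPU so no GPU is needed just to inspect the file.
state = torch.load(ckpt_path, map_location="cpu")

# Checkpoints are usually either a raw state_dict or a dict wrapping one.
state_dict = state.get("state_dict", state) if isinstance(state, dict) else state
for name, tensor in list(state_dict.items())[:10]:
    print(name, tuple(tensor.shape))
```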

## Train

- The code shown corresponds to version v3; for v4, change the value of the argument "rainaug" in "./train_*.sh" to "true" ("train_*.sh" denotes the training script for dataset *)
- Unzip "Streaks_Garg06.zip" into "./rainmix"
- Change the value of the argument "baseroot" in "./train.sh" to the path of the training data
- Edit the function "get_files" in "./utils" according to the format of the training data (a sketch follows this list)
- Execute

```
sh train.sh
```
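
The exact contract of "get_files" depends on the loader in "./utils", so the following is only a minimal sketch of a paired-path version; the "rain"/"norain" folder names and the identical-filename pairing rule are assumptions to adapt to your dataset:

```python
import os

def get_files(path):
    """Collect (rainy, clean) image path pairs under `path`.

    Sketch only: assumes a layout like
        path/rain/xxx.png    (rainy inputs)
        path/norain/xxx.png  (clean ground truth)
    Adapt the folder names and pairing rule to your data.
    """
    rainy_dir = os.path.join(path, "rain")
    clean_dir = os.path.join(path, "norain")
    pairs = []
    for name in sorted(os.listdir(rainy_dir)):
        if name.lower().endswith((".png", ".jpg", ".jpeg")):
            pairs.append((os.path.join(rainy_dir, name),
                          os.path.join(clean_dir, name)))
    return pairs
```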

## Test

- The code shown corresponds to version v3
- Change the value of the argument "load_name" in "./test.sh" to the path of the pretrained model
- Change the value of the argument "baseroot" in "./test.sh" to the path of the testing data
- Edit the function "get_files" in "./utils" according to the format of the testing data
- Execute

```
sh test.sh
```
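
To score the derained outputs against ground truth with the dependencies listed above, a minimal sketch (the two image file names are placeholders; "multichannel" matches the pinned scikit-image 0.17.2 API):

```python
import cv2
import torch
from skimage.metrics import peak_signal_noise_ratio, structural_similarity
from pytorch_msssim import ssim

# Placeholder file names: one derained output and its ground-truth image.
pred = cv2.imread("derained.png")
gt = cv2.imread("gt.png")

# scikit-image metrics on uint8 arrays (multichannel=True for color images).
psnr = peak_signal_noise_ratio(gt, pred, data_range=255)
ssim_sk = structural_similarity(gt, pred, multichannel=True, data_range=255)

# pytorch-msssim expects NCHW float tensors; scale to [0, 1] first.
def to_tensor(img):
    return torch.from_numpy(img).float().div(255).permute(2, 0, 1).unsqueeze(0)

ssim_pt = ssim(to_tensor(pred), to_tensor(gt), data_range=1.0).item()

print(f"PSNR {psnr:.2f} dB, SSIM {ssim_sk:.4f} (skimage) / {ssim_pt:.4f} (pytorch-msssim)")
```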

## Results

The detailed results can be found in "./results/data/DERAIN.xlsx".

Qualitative comparison figures: Input vs GT, GT vs RCDNet, and GT vs EfDeRain on two example scenes; ablation on the same scenes: GT vs v1, GT vs v2, GT vs v3, and GT vs v4.

## Bibtex

```
@inproceedings{guo2020efficientderain,
      title={EfficientDeRain: Learning Pixel-wise Dilation Filtering for High-Efficiency Single-Image Deraining},
      author={Qing Guo and Jingyang Sun and Felix Juefei-Xu and Lei Ma and Xiaofei Xie and Wei Feng and Yang Liu},
      year={2021},
      booktitle={AAAI}
}
```