Upgraded from DeepLabv2 using the latest caffe/master. All layers are up to date, and the Pascal GPU architecture and cuDNN v5 are supported.

Testing and debugging are welcome :)
DeepLab is a state-of-the-art deep learning system for semantic image segmentation built on top of Caffe.
It combines (1) atrous convolution to explicitly control the resolution at which feature responses are computed within Deep Convolutional Neural Networks, (2) atrous spatial pyramid pooling to robustly segment objects at multiple scales with filters at multiple sampling rates and effective fields-of-view, and (3) densely connected conditional random fields (CRF) as post-processing.
This distribution provides a publicly available implementation of the key model ingredients reported in our latest arXiv paper. This version also supports the experiments (DeepLab v1) from our ICLR'15 paper; you only need to modify the old prototxt files. For example, our proposed atrous convolution is called dilated convolution in the Caffe framework, so you need to change the convolution parameter "hole" to "dilation" (the usage is otherwise exactly the same). For the experiments in our ICCV'15 paper, there are some differences between our argmax and softmax_loss layers and Caffe's; please refer to DeepLabv1 for details.
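If you have several DeepLab v1 prototxt files to update, a simple sed substitution can do the renaming. This is a minimal sketch; train_v1.prototxt is just a placeholder for your own config file, and GNU sed is assumed for in-place editing:

```sh
# Hypothetical example: rename the old "hole" convolution parameter
# to "dilation" in place. train_v1.prototxt is a placeholder name;
# the -i flag assumes GNU sed.
sed -i 's/hole:/dilation:/g' train_v1.prototxt
```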
Please consult and consider citing the following papers:
@article{CP2016Deeplab,
title={DeepLab: Semantic Image Segmentation with Deep Convolutional Nets, Atrous Convolution, and Fully Connected CRFs},
author={Liang-Chieh Chen and George Papandreou and Iasonas Kokkinos and Kevin Murphy and Alan L Yuille},
journal={arXiv:1606.00915},
year={2016}
}
@inproceedings{CY2016Attention,
title={Attention to Scale: Scale-aware Semantic Image Segmentation},
author={Liang-Chieh Chen and Yi Yang and Jiang Wang and Wei Xu and Alan L Yuille},
booktitle={CVPR},
year={2016}
}
@inproceedings{CB2016Semantic,
title={Semantic Image Segmentation with Task-Specific Edge Detection Using CNNs and a Discriminatively Trained Domain Transform},
author={Liang-Chieh Chen and Jonathan T Barron and George Papandreou and Kevin Murphy and Alan L Yuille},
booktitle={CVPR},
year={2016}
}
@inproceedings{PC2015Weak,
title={Weakly- and Semi-Supervised Learning of a DCNN for Semantic Image Segmentation},
author={George Papandreou and Liang-Chieh Chen and Kevin Murphy and Alan L Yuille},
booktitle={ICCV},
year={2015}
}
@inproceedings{CP2015Semantic,
title={Semantic Image Segmentation with Deep Convolutional Nets and Fully Connected CRFs},
author={Liang-Chieh Chen and George Papandreou and Iasonas Kokkinos and Kevin Murphy and Alan L Yuille},
booktitle={ICLR},
year={2015}
}
If you use the densecrf implementation, please also consult and cite the following paper:
@inproceedings{KrahenbuhlK11,
title={Efficient Inference in Fully Connected CRFs with Gaussian Edge Potentials},
author={Philipp Kr{\"{a}}henb{\"{u}}hl and Vladlen Koltun},
booktitle={NIPS},
year={2011}
}
DeepLabv2 currently achieves 79.7% on the challenging PASCAL VOC 2012 semantic image segmentation task -- see the leaderboard.
Please refer to our project website for details.
We have released several trained models and the corresponding prototxt files here. Please check them for more model details.
- The scripts we used for our experiments can be downloaded from this link:
- run_pascal.sh: the script for training/testing on the PASCAL VOC 2012 dataset. Note: you also need to download the sub.sed script.
- run_densecrf.sh and run_densecrf_grid_search.sh: the scripts we used for post-processing the DCNN-computed results with DenseCRF.
- The image list files used in our experiments can be downloaded from this link:
- The zip file stores the list files for the PASCAL VOC 2012 dataset.
- To use the mat_read_layer and mat_write_layer, please download and install matio.
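If you build matio from source, the usual autotools sequence should work. This is a hedged sketch assuming a standard source release; "X.Y.Z" is a placeholder for whatever version you download:

```sh
# Hypothetical build of matio from a source tarball; "X.Y.Z" is a
# placeholder version -- substitute the release you actually downloaded.
tar -xzf matio-X.Y.Z.tar.gz
cd matio-X.Y.Z
./configure
make
sudo make install
sudo ldconfig   # refresh the shared-library cache (Linux)
```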
Check the FAQ if you run into problems while using the code.
There are several variants of DeepLab. To begin with, we suggest DeepLab-LargeFOV, which achieves good performance and trains relatively quickly.
Suppose the code is located at deeplab/code.
- mkdir deeplab/exper (Create a folder for experiments)
- mkdir deeplab/exper/voc12 (Create a folder for your specific experiment. Let's take PASCAL VOC 2012 for example.)
- Create folders for config files and so on.
- mkdir deeplab/exper/voc12/config (where network config files are saved.)
- mkdir deeplab/exper/voc12/features (where the computed features will be saved when training on train)
- mkdir deeplab/exper/voc12/features2 (where the computed features will be saved when training on trainval)
- mkdir deeplab/exper/voc12/list (where you save the train, val, and test file lists)
- mkdir deeplab/exper/voc12/log (where the training/test logs will be saved)
- mkdir deeplab/exper/voc12/model (where the trained models will be saved)
- mkdir deeplab/exper/voc12/res (where the evaluation results will be saved)
- mkdir deeplab/exper/voc12/config/deeplab_largeFOV (create a folder under config for the network you want to experiment with; for example, deeplab_largeFOV. Add your train.prototxt and test.prototxt in that folder; you can check the provided examples for reference.)
- Set up your init.caffemodel at deeplab/exper/voc12/model/deeplab_largeFOV. You may want to soft-link init.caffemodel to the modified VGG-16 net. For example, run "ln -s vgg16.caffemodel init.caffemodel" in voc12/model/deeplab_largeFOV.
- Modify the provided script, run_pascal.sh, for your experiments. You should change the paths according to your setup; for example, specify where Caffe is located by changing CAFFE_DIR. Note: you may need to modify sub.sed if you want to replace some variables with your desired values in train.prototxt or test.prototxt.
- The computed features are saved in the features or features2 folders, and you can run the provided MATLAB scripts to evaluate the results (e.g., check the script at code/matlab/my_script/EvalSegResults). A consolidated shell sketch of the whole setup follows this list.
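Putting the steps above together, a minimal end-to-end setup might look like the sketch below. It assumes the deeplab_largeFOV example and an already-downloaded vgg16.caffemodel, as described above; adapt the paths to your own layout:

```sh
# Create the experiment layout for the PASCAL VOC 2012 example.
mkdir -p deeplab/exper/voc12/{config/deeplab_largeFOV,features,features2,list,log,res}
mkdir -p deeplab/exper/voc12/model/deeplab_largeFOV

# Soft-link init.caffemodel to the modified VGG-16 weights
# (vgg16.caffemodel is assumed to be downloaded already).
cd deeplab/exper/voc12/model/deeplab_largeFOV
ln -s vgg16.caffemodel init.caffemodel
cd -

# After editing CAFFE_DIR and the other paths in the script,
# launch training/testing on PASCAL VOC 2012.
sh run_pascal.sh
```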
Seyed Ali Mousavi has implemented a Python version of run_pascal.sh (thanks, Ali!). If you are more familiar with Python, you may want to take a look at it.
Caffe is a deep learning framework made with expression, speed, and modularity in mind. It is developed by the Berkeley Vision and Learning Center (BVLC) and community contributors.
Check out the project site for all the details like
- DIY Deep Learning for Vision with Caffe
- Tutorial Documentation
- BVLC reference models and the community model zoo
- Installation instructions
and step-by-step examples.
Please join the caffe-users group or gitter chat to ask questions and talk about methods and models. Framework development discussions and thorough bug reports are collected on Issues.
Happy brewing!
Caffe is released under the BSD 2-Clause license. The BVLC reference models are released for unrestricted use.
Please cite Caffe in your publications if it helps your research:
@article{jia2014caffe,
title={Caffe: Convolutional Architecture for Fast Feature Embedding},
author={Jia, Yangqing and Shelhamer, Evan and Donahue, Jeff and Karayev, Sergey and Long, Jonathan and Girshick, Ross and Guadarrama, Sergio and Darrell, Trevor},
journal={arXiv preprint arXiv:1408.5093},
year={2014}
}