Fix argparser, add MBIPED submodule, and improve README #137

Open · wants to merge 4 commits into `master`
4 changes: 4 additions & 0 deletions .gitignore
@@ -102,3 +102,7 @@ venv.bak/

# mypy
.mypy_cache/

checkpoints
dataset
result
3 changes: 3 additions & 0 deletions .gitmodules
@@ -0,0 +1,3 @@
[submodule "MBIPED"]
path = MBIPED
url = [email protected]:xavysp/MBIPED.git
1 change: 1 addition & 0 deletions MBIPED
Submodule MBIPED added at 91473d
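Note: the submodule URL above uses SSH (`[email protected]:…`), which requires an SSH key registered with GitHub. For anonymous clones, an HTTPS form of the same entry would look like this (a sketch; same repository, different transport):

```
[submodule "MBIPED"]
	path = MBIPED
	url = https://github.com/xavysp/MBIPED.git
```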
43 changes: 27 additions & 16 deletions README.md
@@ -52,10 +52,11 @@ Dexined version on TF 2 is not ready
* [Kornia](https://kornia.github.io/)
* Other packages such as NumPy, h5py, PIL, and json.

-Once the packages are installed, clone this repo as follow:
+Once the packages are installed, clone this repo as follows and initialize the MBIPED submodule:

git clone https://github.com/xavysp/DexiNed.git
cd DexiNed
git submodule update --init
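
Alternatively, recent versions of git can clone the repository and initialize the submodule in one step:

git clone --recurse-submodules https://github.com/xavysp/DexiNed.git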

## Project Architecture

@@ -76,28 +77,44 @@ Once the packages are installed, clone this repo as follow:
├── model.py # DexiNed class in PyTorch
```

-Before to start please check dataset.py, from the first line of code you can see the datasets used for training/testing. The main.py, line 194, call the data for the training or testing, see the example of the code below:
+Before starting, please check `datasets.py`; from the first lines of code you can see the datasets used for training/testing. In `main.py`, line 194, the data for training or testing is selected. See the code example below:
```
parser = argparse.ArgumentParser(description='DexiNed trainer.')
parser.add_argument('--choose_test_data',
type=int,
default=1,
help='Already set the dataset for testing choice: 0 - 8')
# ----------- test -------0--

...

TEST_DATA = DATASET_NAMES[parser.parse_args().choose_test_data] # max 8
test_inf = dataset_info(TEST_DATA, is_linux=IS_LINUX)
test_dir = test_inf['data_dir']
is_testing = True  # current test -352-SM-NewGT-2AugmenPublish

# Training settings
TRAIN_DATA = DATASET_NAMES[0] # BIPED=0
train_inf = dataset_info(TRAIN_DATA, is_linux=IS_LINUX)
train_dir = train_inf['data_dir']
```
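For instance, since `BIPED` is index 0 in `DATASET_NAMES` (as the training settings above show), testing on BIPED would be requested with:

`python main.py --choose_test_data=0`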

The datasets listed below must be downloaded before training can be performed. In the current configuration of `datasets.py`, standard datasets must be stored in a directory called `dataset` under the root of this repository; custom datasets must be stored in the `data` directory under the root. The expected layout is sketched below.
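A rough sketch of the standard layout (illustrative; the `edges` subfolder follows the BIPED convention noted in `datasets.py`):

```
DexiNed/
├── dataset/          # standard datasets (BIPED, BSDS, ...)
│   └── BIPED/
│       └── edges/
├── data/             # custom test images (CLASSIC)
└── checkpoints/
```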

## Train

To train the model, call the `main.py` script with the `--is_training` flag along with any other flags you need. (Note: all available flags are defined in the `parse_args()` function in `main.py`.) Without this flag, the program runs in testing mode. Assuming no other options are selected, the command is:

`python main.py --is_training`
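
For reference, this flag follows argparse's standard boolean-flag pattern, sketched below (illustrative only; the actual definition lives in `parse_args()` in `main.py` and may differ):

```python
import argparse

# Minimal sketch of a boolean --is_training flag (assumed to use
# action='store_true'; see parse_args() in main.py for the real options).
parser = argparse.ArgumentParser(description='DexiNed trainer.')
parser.add_argument('--is_training', action='store_true',
                    help='Run in training mode; testing is the default.')
args = parser.parse_args()
print('training' if args.is_training else 'testing')
```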

Training with the BIPED dataset is configured to work only with the augmented version of the dataset generated by the MBIPED project/submodule. To generate this augmented dataset, edit `MBIPED/main.py` so that `BIPED_main_dir` in the `main()` function points to the directory where the BIPED dataset is stored; for the standard configuration this should be `dataset` (a sketch of the edited line is shown below). After this is completed, call the MBIPED main script from the DexiNed root directory:

`python MBIPED/main.py`

This will generate the augmented dataset.
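
For reference, the edited line inside `main()` would look something like this (a sketch; the surrounding code is omitted):

```python
# MBIPED/main.py, inside main(): point this at your BIPED dataset location.
BIPED_main_dir = 'dataset'  # standard configuration for this repository
```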

Note: this is a long process that currently cannot be paused or resumed. If it fails for any reason, delete the augmented image files and rerun the script from the beginning.

## Test
-As previously mentioned, the datasets.py has, among other things, the whole datasets configurations used in DexiNed for testing and training:
+As previously mentioned, the `datasets.py` file contains, among other things, all of the dataset configurations used in DexiNed for testing and training:
```
DATASET_NAMES = [
'BIPED',
# … more dataset names, collapsed in the diff view (@@ -111,19 +128,13 @@)
'CLASSIC'
]
```
-For example, if want to test your own dataset or image choose "CLASSIC" and save your test data in "data" dir.
-Before test the DexiNed model, it is necesarry to download the checkpoint here [Checkpoint Pytorch](https://drive.google.com/file/d/1V56vGTsu7GYiQouCIKvTWl5UKCZ6yCNu/view?usp=sharing) and save this file into the DexiNed folder like: checkpoints/BIPED/10/(here the checkpoints from Drive), then run as follow:
+For example, if you want to test your own dataset or images, choose `CLASSIC` and save your test data in the `data` dir.
+Before testing a pretrained version of the DexiNed model, it is necessary to download the checkpoint [Checkpoint Pytorch](https://drive.google.com/file/d/1V56vGTsu7GYiQouCIKvTWl5UKCZ6yCNu/view?usp=sharing) and save it into the DexiNed folder as `checkpoints/BIPED/10/` (the checkpoint files from Drive go there), then run as follows:

```python main.py --choose_test_data=-1 ```
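Here `--choose_test_data=-1` selects the last entry of `DATASET_NAMES`, i.e. `CLASSIC`, by ordinary Python negative indexing.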
-Make sure that in main.py the test setting be as:
-```parser.add_argument('--is_testing', default=True, help='Script in testing mode.')```
-DexiNed downsample the input image till 16 scales, please make sure that, in dataset_info fucn (datasets.py), the image width and height be multiple of 16, like 512, 960, and etc. **In the Checkpoint from Drive you will find the last trained checkpoint, which has been trained in the last version of BIPED dataset that will be updated soon in Kaggle **
+Be sure not to set the `--is_training` flag when calling the main script.

-## Train
-
-    python main.py
-Make sure that in main.py the train setting be as:
-```parser.add_argument('--is_testing', default=False, help='Script in testing mode.')```
+DexiNed downsamples the input images by factors of up to 16. Please make sure that, in the `dataset_info()` function (`datasets.py`), the image width and height are multiples of 16, such as 512 or 960. **The checkpoint from Drive is the latest trained checkpoint, trained on the latest version of the BIPED dataset, which will be updated soon on Kaggle.**
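
If your own images do not meet this requirement, a minimal sketch of rounding a size up to the next multiple of 16 (assumes Pillow; the helper name and input path are illustrative):

```python
from PIL import Image

def resize_to_multiple_of_16(img: Image.Image) -> Image.Image:
    # Round width and height up to the next multiple of 16, as DexiNed expects.
    w, h = img.size
    return img.resize(((w + 15) // 16 * 16, (h + 15) // 16 * 16))

img = resize_to_multiple_of_16(Image.open('data/my_image.jpg'))  # hypothetical path
```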

# Datasets

@@ -164,7 +175,7 @@ After WACV20, the BIPED images have been checked again and added more annotation

# Citation

-If you like DexiNed, why not starring the project on GitHub!
+If you like DexiNed, why not star the project on GitHub!

[![GitHub stars](https://img.shields.io/github/stars/xavysp/DexiNed.svg?style=social&label=Star&maxAge=3600)](https://GitHub.com/xavysp/DexiNed/stargazers/)

34 changes: 17 additions & 17 deletions datasets.py
@@ -30,63 +30,63 @@ def dataset_info(dataset_name, is_linux=True):
'img_width': 512, #481
'train_list': 'train_pair.lst',
'test_list': 'test_pair.lst',
-'data_dir': '/opt/dataset/BSDS', # mean_rgb
+'data_dir': 'dataset/BSDS', # mean_rgb
'yita': 0.5
},
'BRIND': {
'img_height': 512, # 321
'img_width': 512, # 481
'train_list': 'train_pair2.lst',
'test_list': 'test_pair.lst',
-'data_dir': '/opt/dataset/BRIND', # mean_rgb
+'data_dir': 'dataset/BRIND', # mean_rgb
'yita': 0.5
},
'BSDS300': {
'img_height': 512, #321
'img_width': 512, #481
'test_list': 'test_pair.lst',
'train_list': None,
-'data_dir': '/opt/dataset/BSDS300', # NIR
+'data_dir': 'dataset/BSDS300', # NIR
'yita': 0.5
},
'PASCAL': {
'img_height': 416, # 375
'img_width': 512, #500
'test_list': 'test_pair.lst',
'train_list': None,
-'data_dir': '/opt/dataset/PASCAL', # mean_rgb
+'data_dir': 'dataset/PASCAL', # mean_rgb
'yita': 0.3
},
'CID': {
'img_height': 512,
'img_width': 512,
'test_list': 'test_pair.lst',
'train_list': None,
-'data_dir': '/opt/dataset/CID', # mean_rgb
+'data_dir': 'dataset/CID', # mean_rgb
'yita': 0.3
},
'NYUD': {
'img_height': 448,#425
'img_width': 560,#560
'test_list': 'test_pair.lst',
'train_list': None,
-'data_dir': '/opt/dataset/NYUD', # mean_rgb
+'data_dir': 'dataset/NYUD', # mean_rgb
'yita': 0.5
},
'MDBD': {
'img_height': 720,
'img_width': 1280,
'test_list': 'test_pair.lst',
'train_list': 'train_pair.lst',
-'data_dir': '/opt/dataset/MDBD', # mean_rgb
+'data_dir': 'dataset/MDBD', # mean_rgb
'yita': 0.3
},
'BIPED': {
'img_height': 720, #720 # 1088
'img_width': 1280, # 1280 5 1920
'test_list': 'test_pair.lst',
'train_list': 'train_rgb.lst',
-'data_dir': '/opt/dataset/BIPED', # mean_rgb
+'data_dir': 'dataset/BIPED', # mean_rgb
'yita': 0.5
},
'CLASSIC': {
@@ -102,7 +102,7 @@ def dataset_info(dataset_name, is_linux=True):
'img_width': 480,# 360
'test_list': 'test_pair.lst',
'train_list': None,
-'data_dir': '/opt/dataset/DCD', # mean_rgb
+'data_dir': 'dataset/DCD', # mean_rgb
'yita': 0.2
}
}
@@ -112,39 +112,39 @@ def dataset_info(dataset_name, is_linux=True):
'img_width': 512, # 481
'test_list': 'test_pair.lst',
'train_list': 'train_pair.lst',
-'data_dir': 'C:/Users/xavysp/dataset/BSDS', # mean_rgb
+'data_dir': 'dataset/BSDS', # mean_rgb
'yita': 0.5},
'BSDS300': {'img_height': 512, # 321
'img_width': 512, # 481
'test_list': 'test_pair.lst',
-'data_dir': 'C:/Users/xavysp/dataset/BSDS300', # NIR
+'data_dir': 'dataset/BSDS300', # NIR
'yita': 0.5},
'PASCAL': {'img_height': 375,
'img_width': 500,
'test_list': 'test_pair.lst',
-'data_dir': 'C:/Users/xavysp/dataset/PASCAL', # mean_rgb
+'data_dir': 'dataset/PASCAL', # mean_rgb
'yita': 0.3},
'CID': {'img_height': 512,
'img_width': 512,
'test_list': 'test_pair.lst',
-'data_dir': 'C:/Users/xavysp/dataset/CID', # mean_rgb
+'data_dir': 'dataset/CID', # mean_rgb
'yita': 0.3},
'NYUD': {'img_height': 425,
'img_width': 560,
'test_list': 'test_pair.lst',
-'data_dir': 'C:/Users/xavysp/dataset/NYUD', # mean_rgb
+'data_dir': 'dataset/NYUD', # mean_rgb
'yita': 0.5},
'MDBD': {'img_height': 720,
'img_width': 1280,
'test_list': 'test_pair.lst',
'train_list': 'train_pair.lst',
-'data_dir': 'C:/Users/xavysp/dataset/MDBD', # mean_rgb
+'data_dir': 'dataset/MDBD', # mean_rgb
'yita': 0.3},
'BIPED': {'img_height': 720, # 720
'img_width': 1280, # 1280
'test_list': 'test_pair.lst',
'train_list': 'train_rgb.lst',
-'data_dir': 'C:/Users/xavysp/dataset/BIPED', # WIN: '../.../dataset/BIPED/edges'
+'data_dir': 'dataset/BIPED', # WIN: '../.../dataset/BIPED/edges'
'yita': 0.5},
'CLASSIC': {'img_height': 512,
'img_width': 512,
@@ -155,7 +155,7 @@ def dataset_info(dataset_name, is_linux=True):
'DCD': {'img_height': 240,
'img_width': 360,
'test_list': 'test_pair.lst',
-'data_dir': 'C:/Users/xavysp/dataset/DCD', # mean_rgb
+'data_dir': 'dataset/DCD', # mean_rgb
'yita': 0.2}
}
return config[dataset_name]
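
A quick usage sketch of the lookup above, mirroring how `main.py` consumes it:

```python
# Run from the repository root so datasets.py is importable.
from datasets import dataset_info

info = dataset_info('BIPED', is_linux=True)
print(info['data_dir'], info['img_height'], info['img_width'])
```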