Release code for the paper "Unsupervised semantic discovery through visual patterns detection" by Pelosin F., Gasparetto A., Albarelli A., Torsello A., submitted to S+SSPR 2020.
1. Clone this repo:

   ```
   git clone https://github.com/francesco-p/semantic-discovery
   ```
2. Download the dataset at this link. Extract it and move the `datasets` folder inside this cloned repo. The folder structure you should have at this point is:

   ```
   .
   ├── datasets
   │   ├── img01.png
   │   ├── semantic_lvl1
   │   ├── semantic_lvl2
   │   └── ...
   ├── fig1.png
   ├── notebooks
   │   ├── 01-run.ipynb      <-- Interactive run of the method on an image
   │   └── 02-dataset.ipynb  <-- Dataset visualization
   ├── output                <-- Output of scripts/run.py
   ├── README.md
   ├── scripts
   │   ├── fig01.png
   │   └── run.py            <-- Run the method on an image
   └── src
       ├── accumulator.py
       ├── detector.py
       ├── detector.pyc
       ├── extractor.py
       ├── metrics.py
       └── utils.py
   ```
3. Make sure you have Docker installed, then run:

   ```
   sudo docker run --rm -it -p 8889:8889 -v /abs/path/to/this/repo:/descriptor fpelosin/semantic-discovery bash
   ```

   Because the method relies on OpenCV's `xfeatures2d` module, we use Docker as the reproducibility environment.
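   A quick sanity check you can run inside the container, as a minimal sketch: it assumes the image ships an OpenCV contrib build that exposes `xfeatures2d` (an assumption on our part, not a detail stated elsewhere in this README), and it only verifies availability rather than reflecting the method's actual settings.

   ```python
   # Verify that the contrib xfeatures2d module is importable and usable.
   # This is only an availability check, not the method's configuration.
   import cv2

   print(cv2.__version__)
   sift = cv2.xfeatures2d.SIFT_create()  # raises if xfeatures2d is missing
   print(type(sift))
   ```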
4. We suggest playing with the notebook `notebooks/01-run.ipynb` to understand the method interactively. Run:

   ```
   jupyter-lab --allow-root --port=8889 --ip=0.0.0.0 --no-browser
   ```

   Then connect to the prompted link.
5. If you want to run the method from the console, simply follow the instructions up to step 3, then run:

   ```
   python run.py
   ```

   in the `scripts/` folder. All the parameters (input image, output image, algorithm parameters) can be specified by directly editing the params section of the file.
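   As a hypothetical illustration only (the actual variable and parameter names in `scripts/run.py` may differ), the params section looks roughly like:

   ```python
   # Hypothetical params section -- names below are placeholders, not the
   # repo's own identifiers; edit the real section at the top of run.py.
   INPUT_IMAGE = "../datasets/img01.png"     # image to run the method on
   OUTPUT_IMAGE = "../output/img01_out.png"  # where the result is written

   # Algorithm parameters (placeholder names and values)
   PARAMS = {
       "keypoint_threshold": 0.01,
       "min_pattern_size": 2,
   }
   ```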
In the `datasets` folder you find the released dataset. The labeling was done with the labelme annotation tool, and the annotations are in the Pascal VOC format.
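As a minimal sketch of how an annotation in this format can be read (the annotation file path below is a placeholder; adapt it to the actual files inside `datasets/`):

```python
# Parse a single Pascal VOC (XML) annotation and print its bounding boxes.
# The path is a placeholder -- point it at one of the released annotations.
import xml.etree.ElementTree as ET

tree = ET.parse("datasets/semantic_lvl1/img01.xml")
root = tree.getroot()

for obj in root.findall("object"):
    label = obj.find("name").text
    box = obj.find("bndbox")
    xmin, ymin, xmax, ymax = (int(float(box.find(tag).text))
                              for tag in ("xmin", "ymin", "xmax", "ymax"))
    print(label, (xmin, ymin, xmax, ymax))
```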
We also include a notebook, `notebooks/02-dataset.ipynb`, that provides a visualization of the dataset.