This repository contains a reproducibility study of the paper *Proto2Proto: Can You Recognize the Car, the Way I Do?* by Keswani et al. (2022). It builds upon the authors' original implementation and provides wrapper code to run evaluation easily and efficiently.
This project was developed by Diego Garcia Cerdas, Rens Kierkels, Thomas Jurriaans, and Sergei Agaronian as part of the FACT-AI course at the University of Amsterdam.
To create and activate the conda environment (Python 3.6), run:

```bash
conda env create -f environment.yml
conda activate proto2proto
```
Running the following command ensures that the CUB-200-2011 dataset is downloaded into `./datasets/CUB_200_2011/`, and the required code (a folder called `lib`) is cloned into `./reproduction/`:

```bash
bash setup_reproduction.sh
```

Please make sure to run the above command before running other scripts or notebooks.
We provide the model weights of the networks used in our study through this storage, in `checkpoints.zip`. You can also download `results.zip`, containing the metrics for our experiments and the prototype matches between teacher and student, and `nearest.zip`, containing the nearest training patches for each model's prototypes.

- Please unzip these files into the current directory as `./checkpoints/`, `./results/`, and `./nearest/`, respectively.
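For example, assuming each archive contains its corresponding top-level folder, the following shell commands would place everything as expected:

```bash
# Unzip the downloaded archives into the repository root.
# Assumes each .zip contains its top-level folder
# (checkpoints/, results/, nearest/).
unzip checkpoints.zip -d .
unzip results.zip -d .
unzip nearest.zip -d .
```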
To reproduce the figures from our study, we provide a Jupyter notebook, `example.ipynb`. This notebook further explains the structure of the folders set up above.
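For instance, assuming Jupyter is available in the environment, the notebook can be opened with:

```bash
jupyter notebook example.ipynb
```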
If you wish to perform evaluation from scratch, please download the provided checkpoints and run:

```bash
# For interpretability metrics, accuracy, and prototype matching
python evaluation.py

# For finding the nearest training patches for each model's prototypes
python find_nearest.py
```
- We provide all arguments needed for evaluation through the YAML files in `./arguments/`.
- To perform evaluation on additional ProtoPNet models, simply create new argument files and modify the `main` method in the above scripts to point to your models' arguments (see the sketch after this list).
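As a purely illustrative sketch (the file name, YAML keys, and `run_evaluation` helper below are hypothetical placeholders; the actual structure of `evaluation.py` may differ), pointing the `main` method to a new argument file might look like this:

```python
# Hypothetical sketch of pointing main to a new argument file.
# "arguments/my_protopnet.yaml" and run_evaluation are placeholders;
# adapt them to the repository's actual code.
import yaml


def run_evaluation(args):
    """Stand-in for the repository's evaluation entry point."""
    print("Running evaluation with:", args)


def main():
    args_path = "arguments/my_protopnet.yaml"  # your new argument file
    with open(args_path) as f:
        args = yaml.safe_load(f)  # parse the YAML arguments into a dict
    run_evaluation(args)  # hand the arguments to the evaluation code


if __name__ == "__main__":
    main()
```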
Our code is built on top of Proto2Proto, ProtoPNet, and ProtoTree.