Official evaluation toolkit of the paper "Segmenting Object Affordances: Reproducibility and Sensitivity to Scale" accepted at 12th Workshop on Assistive Computer Vision and Robotics (ACVR) in ECCV 2024

apicis/aff-seg-eval

Evaluation toolkit for affordance segmentation

This repository contains the code to evaluate affordance segmentation models using two performance measures:

  • Jaccard index (IoU) measures the overlap between the pixels predicted as a certain class and the annotated pixels of that class, i.e., the number of correctly predicted pixels divided by the union of predicted and annotated pixels.
  • $F^w_{\beta}$ weighs each prediction error by its Euclidean distance to the annotated mask.

[arXiv] [webpage] [models code] [trained models]
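As an illustration, the two measures above can be sketched as follows. This is a minimal sketch, not the toolkit's implementation, and the distance-weighting function only shows the core idea behind $F^w_{\beta}$, not the full measure:

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def jaccard_index(pred: np.ndarray, ann: np.ndarray, num_classes: int) -> np.ndarray:
    """Per-class Jaccard index (IoU) between integer label masks."""
    ious = np.full(num_classes, np.nan)
    for c in range(num_classes):
        p, a = pred == c, ann == c
        union = np.logical_or(p, a).sum()
        if union > 0:
            ious[c] = np.logical_and(p, a).sum() / union
    return ious

def distance_weighted_fp(pred: np.ndarray, ann: np.ndarray) -> float:
    """Sum of false-positive pixels weighted by their Euclidean distance
    to the annotated binary mask (illustrates the idea behind F^w_beta)."""
    dist = distance_transform_edt(~ann)  # distance of each pixel to the nearest annotated pixel
    fp = pred & ~ann                     # predicted but not annotated
    return float(dist[fp].sum())
```

Both helpers operate on per-pixel masks; the actual toolkit additionally handles class aggregation and file loading.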

Table of Contents

  1. Installation
    1. Setup specifics
    2. Requirements
    3. Instructions
  2. Running demo
  3. Contributing
  4. Credits
  5. Enquiries, Questions, and Comments
  6. License

Installation

Setup specifics

Model testing was performed using the following setup:

  • OS: Ubuntu 18.04.6 LTS
  • Kernel version: 4.15.0-213-generic
  • CPU: Intel® Core™ i7-9700K CPU @ 3.60GHz
  • Cores: 8
  • RAM: 32 GB
  • GPU: NVIDIA GeForce RTX 2080 Ti
  • Driver version: 510.108.03
  • CUDA version: 11.6

Requirements

  • Python 3.8
  • OpenCV 4.10.0.84
  • Numpy 1.24.4
  • Tqdm 4.66.5

Instructions

# Create and activate conda environment
conda create -n affordance_segmentation python=3.8
conda activate affordance_segmentation
    
# Install libraries
pip install opencv-python numpy tqdm scipy pandas scikit-learn

Running demo

To run the evaluation toolkit and visualise the performance measure values (background excluded):

python src/eval_toolkit.py --pred_dir=PRED_DIR --ann_dir=ANN_DIR --task=TASK --num_classes=NUM_CLASSES

  • PRED_DIR: directory where the predictions are stored
  • ANN_DIR: directory where the annotations are stored
  • TASK: evaluation type: 1 for $F^w_{\beta}$, 2 for Jaccard index (IoU)
  • NUM_CLASSES: number of output segmentation classes (background included)
  • SAVE_RES: whether to save the results or not (optional)
  • DEST_PATH: path to the destination .csv file (considered only if SAVE_RES=True)

You can also run the evaluation from a .csv file using the eval_from_file.py script. We also release the results of the available models as .csv files.
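As an illustration of working with such a results file, here is a minimal pandas sketch. The column names below are hypothetical and do not necessarily match the layout of the released .csv files:

```python
import io
import pandas as pd

# Hypothetical results file: one row per image, one column per affordance class.
csv_text = """image,graspable,contain
img_0001.png,0.82,0.75
img_0002.png,0.64,0.65
"""

df = pd.read_csv(io.StringIO(csv_text))
# Mean score per class across all images
per_class_mean = df[["graspable", "contain"]].mean()
print(per_class_mean)
```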

Contributing

If you find an error, or want to suggest a new feature or a change, use the issues tab to raise an issue with the appropriate label.

Credits

T. Apicella, A. Xompero, P. Gastaldo, A. Cavallaro, Segmenting Object Affordances: Reproducibility and Sensitivity to Scale, Proceedings of the European Conference on Computer Vision Workshops, Twelfth International Workshop on Assistive Computer Vision and Robotics (ACVR), Milan, Italy, 29 September 2024.

@InProceedings{Apicella2024ACVR_ECCVW,
            title = {Segmenting Object Affordances: Reproducibility and Sensitivity to Scale},
            author = {Apicella, T. and Xompero, A. and Gastaldo, P. and Cavallaro, A.},
            booktitle = {Proceedings of the European Conference on Computer Vision Workshops},
            note = {Twelfth International Workshop on Assistive Computer Vision and Robotics},
            address={Milan, Italy},
            month="29" # SEP,
            year = {2024},
        }

Enquiries, Questions, and Comments

If you have any further enquiries, questions, or comments, or you would like to file a bug report or a feature request, please use the GitHub issue tracker.

License

This work is licensed under the MIT License. To view a copy of this license, see LICENSE.
