
Source code for the automated photo-identification of blue whales

animalus/visi-baleine

 
 


Blue whale photo-identification with LoFTR

ANIMALUS Update

```shell
ln -s /data/algos/visi-baleine/models models
poetry run python demo.py query1.jpg
```


Introduction

This repo contains the source code accompanying the paper "Automated blue whale photo-identification using local feature matching". Its purpose is to show that modern local feature matching techniques such as LoFTR (or SuperGlue, HardNet, ...) can be used successfully to photo-identify blue whales (Balaenoptera musculus). Good results have also been obtained for fin whales (Balaenoptera physalus). The process is as follows: the image is first segmented to isolate the whale's body from the background, then a feature matcher finds correspondences between the segmented image and reference images of known individuals. The most likely candidate is the individual whose reference images yield the best correspondences with the image being analyzed. Since the approach is fully generic, it should be easy to adapt to the photo-identification of other species.
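The two-stage pipeline described above can be sketched as follows. This is an illustrative outline only: the segmentation and matching steps are stubbed out, and the helper names (`segment_whale`, `match_features`) are hypothetical placeholders, not this repository's actual API. In the real code, segmentation is done by a BASNet model and matching by LoFTR.

```python
# Illustrative sketch of the two-stage identification pipeline described above.
# segment_whale and match_features are placeholders; the real code uses a
# BASNet segmenter and the LoFTR matcher.

def segment_whale(image):
    # placeholder: the real code runs a semantic segmentation model (BASNet)
    # to isolate the whale's body from the background
    return image

def match_features(a, b):
    # placeholder: the real code runs LoFTR and scores the correspondences;
    # here we fake a score (shared characters) for illustration only
    return float(len(set(a) & set(b)))

def identify(query_image, catalog):
    """Rank known individuals by their best feature-match score."""
    body = segment_whale(query_image)
    scores = {
        individual: max(match_features(body, segment_whale(ref)) for ref in refs)
        for individual, refs in catalog.items()
    }
    # the most likely candidate has the strongest correspondences
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

catalog = {"B275": ["abcde"], "B271": ["axyzw"]}
print(identify("abcdq", catalog))  # best-matching individual first
```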

Downloading the models

Two models need to be downloaded and saved in a "models" folder. The semantic segmentation model (basnet_fsi.pth) can be found here: https://drive.google.com/file/d/1YVgw9AzOMEIxJfurRRu-OFasAV5IHCBN/view?usp=sharing. The LoFTR matching model (outdoor_ds.ckpt) is found here: https://drive.google.com/file/d/1uzG_AVs2qws-z8d9m5n0m_Fwd3T9FsQy/view?usp=sharing.
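If you prefer scripting the download, the Google Drive file IDs can be extracted from the share links above and turned into direct-download URLs suitable for a tool such as gdown (an assumption; any Drive downloader works). The snippet below only builds the URLs and target paths; it does not download anything itself:

```python
# Derive direct-download URLs for the two model checkpoints from the Google
# Drive share links given above. Actual downloading is left to a tool such
# as gdown; this only builds the URLs and target paths.
import re
from pathlib import Path

SHARE_LINKS = {
    "basnet_fsi.pth": "https://drive.google.com/file/d/1YVgw9AzOMEIxJfurRRu-OFasAV5IHCBN/view?usp=sharing",
    "outdoor_ds.ckpt": "https://drive.google.com/file/d/1uzG_AVs2qws-z8d9m5n0m_Fwd3T9FsQy/view?usp=sharing",
}

def direct_url(share_link: str) -> str:
    """Convert a Drive share link into a uc-style direct URL (gdown-compatible)."""
    file_id = re.search(r"/file/d/([^/]+)/", share_link).group(1)
    return f"https://drive.google.com/uc?id={file_id}"

models_dir = Path("models")
for name, link in SHARE_LINKS.items():
    print(models_dir / name, "<-", direct_url(link))
```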

Building the Docker image

To facilitate the use of the provided source code, a Dockerfile is included. Build the Docker image as follows:

```shell
sudo docker build -t <docker image name> .
```

The `-t` flag tags the image with the name used in the run commands below.

Testing with sample images

Images of two blue whales (B271 and B275) are bundled with the source code to allow for a quick test. Once the Docker image is built, simply type:

```shell
docker run -v $PWD:/input <docker image name> /input/query1.jpg
```

for CPU execution, or

```shell
nvidia-docker run -v $PWD:/input <docker image name> /input/query1.jpg
```

for GPU execution.
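The two invocations differ only in the launcher, so a small helper can assemble the right command line. This is an optional convenience sketch, not part of the repo; note that `nvidia-docker` is the legacy GPU wrapper, and on Docker 19.03+ `docker run --gpus all` is the equivalent:

```python
# Assemble the docker command shown above, choosing the CPU or GPU launcher.
# Uses the legacy nvidia-docker wrapper to match the README; on Docker 19.03+
# `docker run --gpus all` is equivalent.
import os

def docker_cmd(image: str, query: str, gpu: bool = False) -> list:
    launcher = "nvidia-docker" if gpu else "docker"
    return [
        launcher, "run",
        "-v", f"{os.getcwd()}:/input",   # mount the current dir as /input
        image,
        f"/input/{query}",               # path of the query image inside the container
    ]

print(" ".join(docker_cmd("visi-baleine", "query1.jpg", gpu=True)))
```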

The output should be:

```
Downloading: "https://github.com/DagnyT/hardnet/raw/master/pretrained/train_liberty_with_aug/checkpoint_liberty_with_aug.pth" to /home/user/.cache/torch/hub/checkpoints/checkpoint_liberty_with_aug.pth
100%|██████████| 5.10M/5.10M [00:00<00:00, 114MB/s]
Downloading: "https://download.pytorch.org/models/resnet34-333f7ec4.pth" to /home/user/.cache/torch/hub/checkpoints/resnet34-333f7ec4.pth
100%|██████████| 83.3M/83.3M [00:02<00:00, 31.9MB/s]
Opening dataset dset_Bm_RSD.txt
2 candidates
B275:23.913230895996094 -> Most likely candidate
B271:9.846597671508789
```
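Each candidate line pairs an individual's ID with its match score, and the highest score wins. The ranking can be recovered from the log programmatically; a minimal sketch, assuming the `ID:score` line format shown above:

```python
# Parse the candidate lines from the demo output shown above and recover the
# most likely individual (highest match score).
log = """\
B275:23.913230895996094 -> Most likely candidate
B271:9.846597671508789"""

scores = {}
for line in log.splitlines():
    ident, rest = line.split(":", 1)
    scores[ident] = float(rest.split()[0])   # drop the trailing annotation

best = max(scores, key=scores.get)
print(best, scores[best])  # B275 23.913230895996094
```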

Acknowledgments

The test images are provided courtesy of MICS (Mingan Island Cetacean Study).

License

This work is released under an MIT license.

Reference

Please cite this work as follows:

```bibtex
@inproceedings{lalonde2022whale,
  author    = {Lalonde, Marc and Landry, David and Sears, Richard},
  title     = {Automated blue whale photo-identification using local feature matching},
  booktitle = {Proc. CVAUI 2022},
  year      = {2022},
  month     = {August},
  address   = {Montreal, Canada}
}
```
