VAME is a framework to cluster behavioral signals obtained from pose-estimation tools. It is a PyTorch-based deep learning framework that leverages recurrent neural networks (RNNs) to model sequential data. To learn the underlying complex data distribution, we use the RNN in a variational autoencoder setting to extract the latent state of the animal at every time step.
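Conceptually, each window of the pose time series is encoded by an RNN into a latent distribution, a latent vector is sampled, and an RNN decoder reconstructs the sequence. The toy sketch below illustrates this pattern in PyTorch; it is not the VAME architecture, and all layer sizes and names are placeholders.

```python
import torch
import torch.nn as nn

class ToyRNNVAE(nn.Module):
    """Minimal RNN variational autoencoder for pose sequences (illustration only)."""
    def __init__(self, n_features=12, hidden=64, latent=10):
        super().__init__()
        self.encoder = nn.GRU(n_features, hidden, batch_first=True)
        self.to_mu = nn.Linear(hidden, latent)
        self.to_logvar = nn.Linear(hidden, latent)
        self.from_z = nn.Linear(latent, hidden)
        self.decoder = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):
        # x: (batch, time, n_features) window of pose-estimation signals
        _, h = self.encoder(x)                        # summarize the sequence
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        # broadcast the latent state over time and decode back to a pose sequence
        dec_in = self.from_z(z).unsqueeze(1).repeat(1, x.size(1), 1)
        dec_out, _ = self.decoder(dec_in)
        return self.out(dec_out), mu, logvar

# toy usage: 8 windows of 30 time steps with 12 keypoint coordinates
x = torch.randn(8, 30, 12)
recon, mu, logvar = ToyRNNVAE()(x)
loss = nn.functional.mse_loss(recon, x) - 0.5 * torch.mean(1 + logvar - mu**2 - logvar.exp())
```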
The VAME workflow consists of five steps, which we explain in detail here.
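As a rough orientation, each step is driven by a call on the project's config file. The sketch below follows the naming used in the VAME demo script; treat the exact function names and paths as assumptions and consult the workflow guide for the version you installed.

```python
import vame

# 1. Initialize a new project (video path below is a placeholder)
config = vame.init_new_project(project='my_project', videos=['video-1.mp4'],
                               working_directory='.')

# 2. Build the training set from the pose-estimation output
vame.create_trainset(config)

# 3. Train the recurrent variational autoencoder
vame.train_model(config)

# 4. Evaluate the trained model
vame.evaluate_model(config)

# 5. Segment the latent space into behavioral motifs
vame.pose_segmentation(config)
```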
To get started, we recommend using Anaconda with Python 3.6 or higher. With it, you can create a virtual environment to hold all the dependencies necessary for VAME.
- Install the current stable PyTorch release using the OS-dependent instructions from the PyTorch website. Currently, VAME is tested on PyTorch 1.5.
- Go to the locally cloned VAME directory and run
python setup.py install
in order to install VAME in your active Python environment.
First, make sure you have a GPU powerful enough to train deep learning networks. In our paper, we used a single Nvidia GTX 1080 Ti to train our network. A hardware guide can be found here. Once your hardware is ready, try VAME by following the workflow guide.
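Before launching a training run, it can help to confirm that PyTorch actually sees your GPU:

```python
import torch

print(torch.__version__)                  # VAME is currently tested against PyTorch 1.5
print(torch.cuda.is_available())          # True if a CUDA-capable GPU and driver are found
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # e.g. "GeForce GTX 1080 Ti"
```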
- November 2020: We uploaded an egocentric alignment script to allow more researchers to use VAME.
- October 2020: We updated our manuscript on bioRxiv.
- May 2020: Our preprint "Identifying Behavioral Structure from Deep Variational Embeddings of Animal Motion" is out! Read it on bioRxiv!
VAME was developed by Kevin Luxem and Pavol Bauer.
The development of VAME is heavily inspired by DeepLabCut. As such, the VAME project management codebase has been adapted from the DeepLabCut codebase. The DeepLabCut 2.0 toolbox is © A. & M. Mathis Labs www.deeplabcut.org, released under LGPL v3.0.
VAME preprint: Identifying Behavioral Structure from Deep Variational Embeddings of Animal Motion
See the LICENSE file for the full statement.