
Commit

Merge pull request #4 from HaoyiZhu/main
update mannual and ui
Fang-Haoshu authored Apr 15, 2022
2 parents 20a9f7c + 0bf4c0c commit 3360b63
Showing 35 changed files with 294 additions and 175 deletions.
2 changes: 1 addition & 1 deletion .gitignore
@@ -27,4 +27,4 @@ Tracking/AlphaTracker/train_yolo/darknet/backup/
Tracking/AlphaTracker/train_yolo/darknet/darknet53.conv.74
Tracking/AlphaTracker/train_yolo/darknet/train.sh

main_ui
./main_ui
10 changes: 9 additions & 1 deletion Manual/BehavioralClustering.md
@@ -13,8 +13,16 @@ The main process of hierarchical clustering list below can be found in `./fft_m

<br>

## Run clustering algorithm
## Run by GUI (recommended for non-CS users)
<div align="center">
<img src="media/main_ui/main_behavior.png", width="500" alt><br>
<img src="media/main_ui/behavior.png", width="500" alt><br>
AlphaTracker GUI behavior clustering page
</div>
Please visit our video tutorial for behavior clustering on YouTube or BiliBili.


## Or run by command line
### Step 1. Configuration

Set the Behavioral Clustering folder as the current directory.
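For example, a minimal sketch (the folder name below is an assumption; substitute the actual Behavioral Clustering folder in your copy of the repository):

```bash
# hypothetical path; replace with the Behavioral Clustering folder in your checkout
cd /path/to/AlphaTracker/BehavioralClustering
```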
20 changes: 14 additions & 6 deletions Manual/Installation.md
@@ -4,22 +4,30 @@

Download the AlphaTracker repository and rename the main folder from `AlphaTracker-main` to `Alphatracker`. Or you can use `git clone` to clone AlphaTracker repository.

## Install Conda
## Install Anaconda

This project is tested in conda env in linux, and thus that is the recommended environment. To install conda, please follow the instructions from the [conda website](https://docs.conda.io/projects/conda/en/latest/user-guide/install/index.html) With conda installed, please set up the environment with the following steps.
This project is tested in a conda environment on Linux, which is therefore the recommended setup. To install conda, please follow the instructions on the [conda website](https://docs.conda.io/projects/conda/en/latest/user-guide/install/index.html). With conda installed, please set up the environment with the following steps. **Please install Anaconda (not Miniconda) if you need to use the AlphaTracker GUI.**
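To confirm that conda is available before continuing, a quick check (assuming `conda` is on your PATH):

```bash
# print the installed conda version; any recent Anaconda release should work
conda --version
```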

### NVIDIA driver
## NVIDIA driver

Please makse sure that your NVIDIA driver version >= 450.
Please make sure that your NVIDIA driver version is >= 450. You can download the NVIDIA driver for your system from the [NVIDIA website](https://www.nvidia.com/Download/index.aspx).
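You can check the currently installed driver version with `nvidia-smi` (available once the driver is installed):

```bash
# the "Driver Version" field in the header should be 450 or higher
nvidia-smi
```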

### Install AlphaTracker
## Install AlphaTracker
### By GUI (recommended for non-CS users)
<div align="center">
<img src="media/main_ui/main_install.png", width="500" alt><br>
<img src="media/main_ui/install2.png", width="500" alt><br>
AlphaTracker GUI and installation page
</div>
Please visit our video tutorial for installation on YouTube or BiliBili.

### Or by command line
Open a terminal window and navigate to the folder that contains the `AlphaTracker` repository you just downloaded, then change into it: `cd /path/to/AlphaTracker`.

Then run the following command:

```bash
bash install.sh
bash scripts/install.sh
```
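After the script finishes, you can optionally verify that the conda environment was created (a sketch, assuming the install script completed without errors; the environment name `alphatracker` matches the one used in the tracking instructions):

```bash
# list conda environments and activate the one created by the installer
conda env list
conda activate alphatracker
```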

<br>
85 changes: 50 additions & 35 deletions Manual/Tracking.md
@@ -1,9 +1,57 @@
# 01 Tracking
# Tracking
## By GUI (recommended for non-CS users)
<div align="center">
<img src="media/main_ui/main_track.png", width="500" alt><br>
<img src="media/main_ui/track.png", width="500" alt><br>
AlphaTracker GUI tracking page
</div>
Please visit our video tutorial for tracking on YouTube or BiliBili.

<br>

## Or by command line
### Step 1. Configuration

Before tracking, you need to set the parameters in [Tracking/AlphaTracker/setting.py](../Tracking/AlphaTracker/setting.py) (blue block in Figure 2). The meaning of each parameter is explained in the comments.

We will use a trained weight to track a demo video by default.
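If you prefer the command line, a quick sketch for reviewing the tracking-related values currently set in the file (run from the repository root; the parameter names are those that appear in `setting.py`):

```bash
# print the demo-video and tracking parameters from setting.py
grep -E "video_full_path|start_frame|end_frame|max_pid_id_setting|result_folder" Tracking/AlphaTracker/setting.py
```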

### Step 2. Running the code

Change directory to the [alphatracker folder](../Tracking/AlphaTracker/) and run the following command line to do tracking:
```bash
# if your current virtual environment is not alphatracker
# run this command first: conda activate alphatracker
python track.py
```

### General Notes about the Parameters:
1. Remember not to include any spaces or parentheses in your file names. Also, file names are case-sensitive.
2. For training, the parameter ```num_mouse``` must contain the same number of items as the number of json files that have annotated data. For example, if you have one json file with annotated data for 3 animals, then ```num_mouse=[3]```; if you have two json files with annotated data for 3 animals each, then ```num_mouse=[3,3]```.
3. ```sppe_lr``` is the learning rate for the SPPE network. If your network is not performing well, you can lower this number and retrain.
4. ```sppe_epoch``` is the number of training epochs for the SPPE network. More epochs take longer but can potentially lead to better performance.

<br>

## Training (Optional)

# Training (Optional)

We have provided pretrained models. However, if you want to train your own models on your custom dataset, you can refer to the following steps.
## By GUI (recommended for non-CS users)
<div align="center">
<img src="media/main_ui/main_train.png", width="500" alt><br>
<img src="media/main_ui/train.png", width="500" alt><br>
AlphaTracker GUI training page
</div>
Please visit our video tutorial for training on YouTube or BiliBili.


## Or by command line
### Step 1. Data Preparation

Labeled data is required to train the model. The code would read RGB images and json files of
@@ -47,37 +95,4 @@ https://drive.google.com/file/d/1TYIXYYIkDDQQ6KRPqforrup_rtS0YetR/view?usp=shari

There is a demo video in [Tracking/Alphatracker/data](../Tracking/Alphatracker/data) that you can use for tracking. If you want to use the trained network we provide to track this video, set `exp_name=demo` in [Tracking/AlphaTracker/setting.py](../Tracking/AlphaTracker/setting.py).

## Tracking

### Step 1. Configuration

Before tracking, you need to change the parameters in [Tracking/AlphaTracker/setting.py](../Tracking/AlphaTracker/setting.py) (blue block in Figure 2). The meaning of
the parameters can be found in the comments.

We will use a trained weight to track a demo video by default.

### Step 2. Running the code

Change directory to the [alphatracker folder](../Tracking/AlphaTracker/) and run the following command line to do tracking:
```bash
# if your current virtual environment is not alphatracker
# run this command first: conda activate alphatracker
python track.py
```



<br>

### General Notes about the Parameters:
1. Remember not to include any spaces or parentheses in your file names. Also, file names are case-sensitive.
2. For training the parameter num_mouse must include the same number of items as the number of json files
that have annotated data. For example if you have one json file with annotated data for 3 animals then
```num_mouse=[3]``` if you have two json files with annoted data for 3 animals then ```num_mouse=[3,3]```.
3. ```sppe_lr``` is the learning rate for the SAPE network. If your network is not performing well you can lower this
number and try retraining
4. ```sppe_epoch``` is the number of training epochs that the SAPE network does. More epochs will take longer but
can potentially lead to better performance.

<br>

18 changes: 14 additions & 4 deletions Manual/UI.md
@@ -4,17 +4,27 @@ This interface is browser-based. We recommend using `Google Chrome` as the brows

Pre-installed Python3 is required since this package includes Python scripts.

### Running
## Running
### By GUI (recommended for non-CS users)
<div align="center">
<img src="media/main_ui/main_result.png", width="500" alt><br>
<img src="media/main_ui/vis_results.png", width="500" alt><br>
AlphaTracker GUI page for opening the WebUI
</div>

Change your working directory to [UI/](../UI) by running `cd ./UI`. Then run `python server.py` in command window in the unzipped folder. A window should appear in the user's browser. Then click `html/`. From there, select a program you want to run. `cluster.html` is the Cluster UI and `curate.html` is the Tracking UI.

<img src="media/html.jpg" width = "300" /><img src="media/window.png" width = "400" />
### Or by command line
Change your working directory to [UI/](../UI) by running `cd ./UI`. Then run `python server.py` in a command window in the unzipped folder.
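Put together, a minimal sketch of the two commands (run from the `AlphaTracker` repository root):

```bash
# start the local web server that hosts the Cluster UI and Tracking UI
cd ./UI
python server.py
```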

<br>


## Tracking UI

A window should appear in your browser. Click `html/`, then select the program you want to run: `cluster.html` is the Cluster UI and `curate.html` is the Tracking UI.

<img src="media/html.jpg" width = "400" /><br><img src="media/window.png" width = "400" />



### Import data

Binary file added Manual/media/main_ui/behavior.png
Binary file added Manual/media/main_ui/install2.png
Binary file added Manual/media/main_ui/main.png
Binary file added Manual/media/main_ui/main_behavior.png
Binary file added Manual/media/main_ui/main_install.png
Binary file added Manual/media/main_ui/main_result.png
Binary file added Manual/media/main_ui/main_track.png
Binary file added Manual/media/main_ui/main_train.png
Binary file added Manual/media/main_ui/main_ui.gif
Binary file added Manual/media/main_ui/track.png
Binary file added Manual/media/main_ui/train.png
Binary file added Manual/media/main_ui/vis_results.png
7 changes: 6 additions & 1 deletion README.md
@@ -3,12 +3,17 @@
<img src = 'Manual/media/Alphatracker Flyer.png' width = 500 >
</p>

[AlphaTracker](https://github.com/MVIG-SJTU/AlphaTracker) is a multi-animal tracking and behavioral analysis tool which incorporates **multi-animal tracking**, **pose estimation** and **unsupervised behavioral clustering** to empower system neuroscience research. Alphatracker achieves the state-of-art accuracy of multi-animal tracking which lays the foundation for stringent biological studies. Moreover, the minimum requirement for hardware (regular webcams) and efficient training procedure allows readily adoption by most neuroscience labs.
[AlphaTracker](https://github.com/MVIG-SJTU/AlphaTracker) is a multi-animal tracking and behavioral analysis tool which incorporates **multi-animal tracking**, **pose estimation** and **unsupervised behavioral clustering** to empower systems neuroscience research. AlphaTracker achieves state-of-the-art accuracy in multi-animal tracking, which lays the foundation for stringent biological studies. Moreover, its minimal hardware requirements (regular webcams) and efficient training procedure allow ready adoption by most neuroscience labs. **We also provide a simple GUI for most procedures in AlphaTracker to facilitate research for non-CS labmates and students.**

<div align="center">
<img src="Manual/media/pipeline.png", width="600" alt><br>
Architecture and Pipeline of AlphaTracker
</div>
<br>
<div align="center">
<img src="Manual/media/main_ui/main_ui.gif", width="500" alt><br>
Illustration of AlphaTracker main GUI
</div>

## Instructions

@@ -36,8 +36,6 @@
from functools import cmp_to_key
import time

# from ..setting import pose_pair


def display_pose_cv2(imgdir, visdir, tracked, cmap, args):

71 changes: 2 additions & 69 deletions Tracking/AlphaTracker/setting.py
@@ -16,7 +16,8 @@
"./data/sample_annotated_data/demo/train9.json"
] # list of paths to the json files that contain labels of the images for training
num_mouse = [2] # the number of mouse in the images in each image folder path
exp_name = "demo" # the name of the experiment
exp_name = "demo" # the name of the training experiment
exp_name_track = "demo" # the exp name of the tracking experiment, denoting which trained results to use
num_pose = 4 # number of the pose that is labeled, remember to change self.nJoints in train_sppe/src/utils/dataset/coco.py
pose_pair = [[0, 1], [0, 2], [0, 3]]
train_val_split = (
@@ -70,71 +71,3 @@

AlphaTracker_root = os.path.abspath(AlphaTracker_root)
result_folder = os.path.abspath(result_folder)

with open('train.cfg', 'r') as f:
dat = f.read()
if not dat:
print(f'error, train.cfg is empty')
try:
dict_state = eval(dat)
except Exception as e:
print(f'load train.cfg Exception: {e}')
print(dict_state)

gpu_id = int(dict_state['gpu_id']) # the id of gpu that will be used

# data related settings
image_root_list = [dict_state['image_root_list']] # list of image folder paths to the RGB images for training
json_file_list = [dict_state['json_file_list']] # list of paths to the json files that contain labels of the images for training
num_mouse = [int(dict_state['num_mouse'])] # the number of mouse in the images in each image folder path
exp_name = dict_state['exp_name'] # the name of the experiment
num_pose = int(dict_state['num_pose']) # number of the pose that is labeled, remember to change self.nJoints in train_sppe/src/utils/dataset/coco.py

pose_pair = np.array([[float(j) for j in i.split('-')] for i in dict_state['pose_pair'].split(',')])
print('pose pair is:',pose_pair)
train_val_split = float(dict_state['train_val_split']) # ratio of data that used to train model, the rest will be used for validation
image_suffix = dict_state['image_suffix'] # suffix of the image, png or jpg


# training hyperparameter setting
# Protip: if your training does not give good enough tracking you can lower lr and increase epoch number
# but lowering the lr too much can be bad for tracking quality as well.
sppe_lr = float(dict_state['sppe_lr'])
sppe_epoch = int(dict_state['sppe_epoch'])
sppe_pretrain = dict_state['sppe_pretrain']
sppe_batchSize = int(dict_state['sppe_batchSize'])
yolo_lr = float(dict_state['yolo_lr'])
yolo_iter = int(dict_state['yolo_iter']) ## if use pretrained model please make sure yolo_iter to be large enough to guarantee finetune is done
yolo_pretrain = dict_state['yolo_pretrain'] # './train_yolo/darknet/darknet53.conv.74'
yolo_batchSize = int(dict_state['yolo_batchSize'])


with open('track.cfg', 'r') as f:
dat = f.read()
if not dat:
print(f'error, track.cfg is empty')
try:
dict_state2 = eval(dat)
except Exception as e:
print(f'load track.cfg Exception: {e}')
print(dict_state2)


# demo video setting
# note video_full_path is for track.py, video_paths is for track_batch.py
# video_full_path is the path to the video that will be tracked
video_full_path = dict_state2['video_full_path']
video_paths = [
dict_state2['video_full_path'],
] # make sure video names are different from each other
start_frame = int(dict_state2['start_frame']) # id of the start frame of the video
end_frame = int(dict_state2['end_frame']) # id of the last frame of the video
max_pid_id_setting = int(dict_state2['max_pid_id_setting']) # number of mice in the video
result_folder = dict_state2['result_folder'] # path to the folder used to save the result
remove_oriFrame = int(dict_state2['remove_oriFrame']) # whether to remove the original frame that generated from video
vis_track_result = int(dict_state2['vis_track_result'])

# weights and match are parameter of tracking algorithm
# following setting should work fine, no need to change
weights = dict_state2['weights']
match = int(dict_state2['match'])
86 changes: 86 additions & 0 deletions Tracking/AlphaTracker/setting_ui.py
@@ -0,0 +1,86 @@
import os
import numpy as np

# code path setting
AlphaTracker_root = "./"

with open('train.cfg', 'r') as f:
dat = f.read()
if not dat:
print(f'error, train.cfg is empty')
try:
dict_state = eval(dat)
except Exception as e:
print(f'load train.cfg Exception: {e}')
print(dict_state)

gpu_id = int(dict_state['gpu_id']) # the id of gpu that will be used

# data related settings
image_root_list = [dict_state['image_root_list']] # list of image folder paths to the RGB images for training
json_file_list = [dict_state['json_file_list']] # list of paths to the json files that contain labels of the images for training
num_mouse = [int(dict_state['num_mouse'])] # the number of mouse in the images in each image folder path
exp_name = dict_state['exp_name'] # the name of the experiment
num_pose = int(dict_state['num_pose']) # number of the pose that is labeled, remember to change self.nJoints in train_sppe/src/utils/dataset/coco.py

pose_pair = np.array([[float(j) for j in i.split('-')] for i in dict_state['pose_pair'].split(',')])
print('pose pair is:',pose_pair)
train_val_split = float(dict_state['train_val_split']) # ratio of data that used to train model, the rest will be used for validation
image_suffix = dict_state['image_suffix'] # suffix of the image, png or jpg


# training hyperparameter setting
# Protip: if your training does not give good enough tracking you can lower lr and increase epoch number
# but lowering the lr too much can be bad for tracking quality as well.
sppe_lr = float(dict_state['sppe_lr'])
sppe_epoch = int(dict_state['sppe_epoch'])
sppe_pretrain = dict_state['sppe_pretrain']
sppe_batchSize = int(dict_state['sppe_batchSize'])
yolo_lr = float(dict_state['yolo_lr'])
yolo_iter = int(dict_state['yolo_iter']) ## if use pretrained model please make sure yolo_iter to be large enough to guarantee finetune is done
yolo_pretrain = dict_state['yolo_pretrain'] # './train_yolo/darknet/darknet53.conv.74'
yolo_batchSize = int(dict_state['yolo_batchSize'])


with open('track.cfg', 'r') as f:
dat = f.read()
if not dat:
print(f'error, track.cfg is empty')
try:
dict_state2 = eval(dat)
except Exception as e:
print(f'load track.cfg Exception: {e}')
print(dict_state2)


# demo video setting
# note video_full_path is for track.py, video_paths is for track_batch.py
# video_full_path is the path to the video that will be tracked
video_full_path = dict_state2['video_full_path']
video_paths = [
dict_state2['video_full_path'],
] # make sure video names are different from each other
start_frame = int(dict_state2['start_frame']) # id of the start frame of the video
end_frame = int(dict_state2['end_frame']) # id of the last frame of the video
max_pid_id_setting = int(dict_state2['max_pid_id_setting']) # number of mice in the video
result_folder = dict_state2['result_folder'] # path to the folder used to save the result
remove_oriFrame = int(dict_state2['remove_oriFrame']) # whether to remove the original frame that generated from video
vis_track_result = int(dict_state2['vis_track_result'])

# weights and match are parameter of tracking algorithm
# following setting should work fine, no need to change
weights = dict_state2['weights']
match = int(dict_state2['match'])

exp_name_track = dict_state2['exp_name_track']

# the following code is for self-check and reformat
assert len(image_root_list) == len(
json_file_list
), "the length of image_root_list and json_file_list should be the same"
for i in range(len(image_root_list)):
image_root_list[i] = os.path.abspath(image_root_list[i])
json_file_list[i] = os.path.abspath(json_file_list[i])

AlphaTracker_root = os.path.abspath(AlphaTracker_root)
result_folder = os.path.abspath(result_folder)
2 changes: 1 addition & 1 deletion Tracking/AlphaTracker/track.cfg
@@ -1 +1 @@
{'video_full_path': '/home/flexiv/AlphaTracker/Tracking/AlphaTracker/data/demo.mp4', 'start_frame': '0', 'end_frame': '300', 'max_pid_id_setting': '2', 'result_folder': './track_result/', 'remove_oriFrame': '0', 'vis_track_result': '1', 'weights': '0 6 0 0 0 0 ', 'match': '0'}
{'video_full_path': '/home/flexiv/AlphaTracker/Tracking/AlphaTracker/data/demo.mp4', 'start_frame': '0', 'end_frame': '300', 'max_pid_id_setting': '2', 'result_folder': './track_result/', 'remove_oriFrame': '0', 'vis_track_result': '1', 'weights': '0 6 0 0 0 0 ', 'match': '0', 'exp_name_track': 'demo'}