This sample provides a reference for learning the Ascend AI Software Stack and cannot be used for commercial purposes.

This sample works with CANN 3.3.0 and later versions, and supports Atlas 200 DK and Atlas 300.

Sample of Multi-Object Tracking in Video

Function: tracks multiple pedestrians in a scene with the mot_v2.om model.

Input: a video or image of a crowded scene

Output: images or a video with a bounding box and ID for each person in the scene

Performance and Result: ~8 FPS, depending on how crowded the scene is; see results at https://github.com/HardysJin/atlas-track

Prerequisites

Before deploying this sample, complete the following preparation:

Software Preparation

  • Log in to the operating environment as the HwHiAiUser user.

    Icon-note.gif NOTE

    When connecting to the operating environment (for example, over SSH), replace xxx.xxx.xxx.xxx with its IP address. The IP address of the Atlas 200 DK is 192.168.1.2 when it is connected over the USB port; for the Atlas 300, use the corresponding public network IP address.
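For example, a login over SSH might look like the following (the address is the Atlas 200 DK USB default mentioned above; substitute your device's actual IP):

```shell
# Atlas 200 DK default address over the USB port; replace with your device's IP.
DEVICE_IP=192.168.1.2
ssh HwHiAiUser@${DEVICE_IP}
```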

1. Obtain the source package.

cd $HOME
git clone https://github.com/Ascend/samples.git

2. Install Dependencies

cd $HOME/samples/python/contrib/object_tracking_video/
pip3 install -r requirements.txt

3. Obtain the Offline Model (om) or Convert ONNX to om in Step 4.

Ensure you are in the project directory (object_tracking_video/) and run one of the following commands in the table to obtain the pedestrian tracking model used in the application.

cd $HOME/samples/python/contrib/object_tracking_video/
Model: mot_v2.om

wget -nc --no-check-certificate 'https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/003_Atc_Models/AE/ATC%20Model/object_tracking_video/mot_v2.om' -O model/mot_v2.om

Model: mot_v2.onnx

wget -nc --no-check-certificate 'https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/003_Atc_Models/AE/ATC%20Model/object_tracking_video/mot_v2.onnx' -O model/mot_v2.onnx

Icon-note.gif NOTE

  • mot_v2.om: the offline model, usable out of the box without conversion. If you use it, you can skip the next step (model conversion).
  • mot_v2.onnx: the ONNX model, for users who want to configure the model conversion process themselves.

From the project directory, navigate to the scripts/ directory and run get_sample_data.sh to download sample images for testing the application later. The sample images are saved in data/.

cd $HOME/samples/python/contrib/object_tracking_video/scripts/
bash get_sample_data.sh

4. Convert the original model to a DaVinci model. (OPTIONAL)

Note: Ensure that the environment variables have been configured as described in Environment Preparation and Dependency Installation.

  1. Set the LD_LIBRARY_PATH environment variable.

    The LD_LIBRARY_PATH environment variable required by the Ascend Tensor Compiler (ATC) conflicts with the one used to run the sample. Therefore, set this variable separately on the command line so that it is easy to modify.

    export LD_LIBRARY_PATH=${install_path}/compiler/lib64
    
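As a sketch, assuming a default toolkit location (the ${install_path} value below is an assumption; replace it with your actual CANN installation directory):

```shell
# Hypothetical default CANN install location; adjust to your environment.
install_path=/usr/local/Ascend/ascend-toolkit/latest
export LD_LIBRARY_PATH=${install_path}/compiler/lib64
```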

For CANN 3.3.0-alpha006:

  1. Go to the project directory (object_tracking_video) and run the following command to convert the model:

    atc --input_shape="input.1:1,3,608,1088" --check_report=./network_analysis.report --input_format=NCHW --output=model/mot_v2 --soc_version=Ascend310 --framework=5 --model=model/mot_v2.onnx
    

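The --input_shape above fixes the network input to NCHW 1x3x608x1088, so frames of other sizes must be fitted to 1088x608 before inference. FairMOT-style pipelines typically do this with aspect-preserving letterboxing; the sketch below (our own function names, not part of this sample's source) shows how the scale and padding would be computed:

```python
# Hypothetical preprocessing sketch: fit a source frame inside the network
# resolution (width 1088, height 608, matching the atc --input_shape)
# while preserving aspect ratio, padding the remainder.

def letterbox_params(src_w, src_h, dst_w=1088, dst_h=608):
    """Return (scale, new_w, new_h, pad_left, pad_top) for letterboxing."""
    scale = min(dst_w / src_w, dst_h / src_h)   # shrink to fit both axes
    new_w = int(round(src_w * scale))
    new_h = int(round(src_h * scale))
    pad_left = (dst_w - new_w) // 2             # center horizontally
    pad_top = (dst_h - new_h) // 2              # center vertically
    return scale, new_w, new_h, pad_left, pad_top

# Example: a 1920x1080 frame is scaled to 1081x608 and padded on the sides.
print(letterbox_params(1920, 1080))
```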
Sample Running

  • Simple & Quick Run on test video (london.mp4)

    cd $HOME/samples/python/contrib/object_tracking_video/scripts
    bash run_demo.sh
    

    See result in object_tracking_video/outputs/london

  • Run on your own video

    cd $HOME/samples/python/contrib/object_tracking_video/src
    python3 main.py --input_video "/path/to/video"
    

    See result in object_tracking_video/outputs/VIDEO_NAME

  • Run on single test image (test.jpg)

    cd $HOME/samples/python/contrib/object_tracking_video/src
    python3 test.py --test_img ../data/test.jpg --verify_img ../data/verify.jpg
    

    See data/test_output.jpg

Train

This model is the dlav0 version of FairMOT. You can follow their guide to set up the training environment, then use this script to convert the trained model to ONNX.