This sample is provided as a reference for learning the Ascend AI Software Stack and must not be used for commercial purposes.
This sample works with CANN 3.3.0 and later versions, and supports Atlas 200 DK and Atlas 300.
Function: tracks multiple pedestrians in a scene with the mot_v2.om model.
Input: a video or image of a crowd
Output: images or a video with a bounding box and ID for each person in the scene
Performance and Result: ~8 fps, depending on how crowded the video is; see results at https://github.com/HardysJin/atlas-track
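For context, the sketch below shows how a single tracked detection could be drawn as a bounding box plus ID using OpenCV. It is illustrative only; the coordinates and ID are made up, and it is not the sample's actual drawing code.

```python
# Illustrative sketch only -- not the sample's drawing code.
import cv2
import numpy as np

frame = np.zeros((608, 1088, 3), dtype=np.uint8)   # stand-in for a video frame
x1, y1, x2, y2, track_id = 400, 150, 470, 360, 7   # one made-up tracked person

# Draw the bounding box and the track ID above it, as in the sample's output.
cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
cv2.putText(frame, f"ID {track_id}", (x1, y1 - 8),
            cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
cv2.imwrite("annotated_frame.jpg", frame)
```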
Before deploying this sample, ensure that:
- The environment has been set up by referring to Environment Preparation and Dependency Installation.
- The development environment and operating environment of the corresponding product have been set up.
- Make sure you log in to the operating environment as the HwHiAiUser user.
Replace xxx.xxx.xxx.xxx with the IP address of the operating environment. The IP address of Atlas 200 DK is 192.168.1.2 when it is connected over the USB port, and that of Atlas 300 is the corresponding public network IP address.
cd $HOME
git clone https://github.com/Ascend/samples.git
cd $HOME/samples/python/contrib/object_tracking_video/
pip3 install -r requirements.txt
3. Obtain the Offline Model (om) or Convert ONNX to om in Step 4.
Ensure you are in the project directory (object_tracking_video/) and run one of the commands in the following table to obtain the pedestrian tracking model used in the application.
cd $HOME/samples/python/contrib/object_tracking_video/
Model | How to Obtain |
---|---|
mot_v2.om | wget -nc --no-check-certificate 'https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/003_Atc_Models/AE/ATC%20Model/object_tracking_video/mot_v2.om' -O model/mot_v2.om |
mot_v2.onnx | wget -nc --no-check-certificate 'https://obs-9be7.obs.cn-east-2.myhuaweicloud.com/003_Atc_Models/AE/ATC%20Model/object_tracking_video/mot_v2.onnx' -O model/mot_v2.onnx |
- mot_v2.om: an offline model that can be used out of the box without model conversion. If you use this model, you can skip the next step on model conversion.
- mot_v2.onnx: the ONNX model, for those who want to configure the model conversion process themselves.
From the project directory, navigate to the scripts/ directory and run get_sample_data.sh to download sample images for testing the application later in section 4. The sample images will be saved in data/.
cd $HOME/samples/python/contrib/object_tracking_video/scripts/
bash get_sample_data.sh
Note: Ensure that the environment variables have been configured in Environment Preparation and Dependency Installation.
- Set the LD_LIBRARY_PATH environment variable.
The LD_LIBRARY_PATH environment variable used by the Ascend Tensor Compiler (ATC) conflicts with the one used by the sample. Therefore, set this variable separately on the command line so that it is easy to modify.
For CANN 3.3.0-alpha006:
export LD_LIBRARY_PATH=${install_path}/compiler/lib64
- Go to the project directory (object_tracking_video) and run the following command to convert the model:
atc --input_shape="input.1:1,3,608,1088" --check_report=./network_analysis.report --input_format=NCHW --output=model/mot_v2 --soc_version=Ascend310 --framework=5 --model=model/mot_v2.onnx
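The --input_shape value tells ATC that the network takes one 3-channel 608x1088 frame in NCHW layout. The sketch below is a hypothetical preprocessing routine that produces a tensor of that shape, assuming FairMOT-style letterbox resizing and simple 0-1 normalization; the sample's own preprocessing in src/ may differ (for example in padding color or mean/std normalization).

```python
# Hypothetical preprocessing sketch; the sample's real preprocessing may differ.
import cv2
import numpy as np

def preprocess(frame, target_h=608, target_w=1088):
    """Letterbox-resize a BGR frame into a 1x3x608x1088 float32 NCHW tensor."""
    h, w = frame.shape[:2]
    scale = min(target_w / w, target_h / h)
    resized = cv2.resize(frame, (int(w * scale), int(h * scale)))
    # Pad to the fixed network resolution expected by mot_v2.
    canvas = np.full((target_h, target_w, 3), 127, dtype=np.uint8)
    canvas[:resized.shape[0], :resized.shape[1]] = resized
    # BGR -> RGB, scale to [0, 1], HWC -> CHW, add the batch dimension.
    tensor = canvas[:, :, ::-1].astype(np.float32) / 255.0
    tensor = np.ascontiguousarray(tensor.transpose(2, 0, 1))[np.newaxis]
    return tensor

if __name__ == "__main__":
    dummy = np.zeros((1080, 1920, 3), dtype=np.uint8)
    print(preprocess(dummy).shape)  # (1, 3, 608, 1088)
```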
- Simple and quick run on the test video (london.mp4):
cd $HOME/samples/python/contrib/object_tracking_video/scripts
bash run_demo.sh
See the result in object_tracking_video/outputs/london.
- Run on a video of your own:
cd $HOME/samples/python/contrib/object_tracking_video/src
python3 main.py --input_video "/path/to/video"
See the result in object_tracking_video/outputs/VIDEO_NAME.
- Run the test script on the sample images:
cd $HOME/samples/python/contrib/object_tracking_video/src
python3 test.py --test_img ../data/test.jpg --verify_img ../data/verify.jpg
See the result in data/test_output.jpg.
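test.py checks the generated output against the provided verify image. The snippet below is a minimal, hypothetical version of such a check; the actual comparison logic and tolerance used by test.py may differ.

```python
# Hypothetical verification sketch; test.py's real comparison may differ.
import cv2
import numpy as np

def images_match(output_path, reference_path, tolerance=1.0):
    """Return True if two images have the same shape and a small mean difference."""
    out = cv2.imread(output_path)
    ref = cv2.imread(reference_path)
    if out is None or ref is None or out.shape != ref.shape:
        return False
    diff = np.abs(out.astype(np.float32) - ref.astype(np.float32))
    return float(diff.mean()) <= tolerance

# Example: images_match("../data/test_output.jpg", "../data/verify.jpg")
```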
This model is the dlav0 version of FairMOT. You can follow their guide to set up the training environment and then use this script to convert the trained model to ONNX.
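If you retrain the model yourself, the ONNX export step generally looks like the hedged sketch below. It assumes the FairMOT (dlav0) checkpoint has already been loaded into a PyTorch module, and it uses a dummy input matching the 1x3x608x1088 shape and the "input.1" input name referenced by the ATC command above; the conversion script that accompanies the sample may set additional options.

```python
# Hedged export sketch, assuming `model` is a loaded FairMOT (dlav0) nn.Module.
import torch

def export_to_onnx(model, onnx_path="mot_v2.onnx"):
    model.eval()
    # Dummy input matching the shape passed to ATC: 1 x 3 x 608 x 1088 (NCHW).
    dummy = torch.randn(1, 3, 608, 1088)
    torch.onnx.export(
        model,
        dummy,
        onnx_path,
        opset_version=11,
        do_constant_folding=True,
        input_names=["input.1"],  # matches --input_shape="input.1:1,3,608,1088"
    )
```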