Real-time object detection of endangered animal species on the Jetson Nano, using TensorRT engine optimization. The detection model is used to determine the state of the hardware-configured gate.
Features include:
- Real-time object detection to set the state of the physical gate. The rules are set within the `config/` folder. The `config.json` file contains the general rules that map a list of stimuli to the specified state of the gate (more details in Setup).
- TensorRT engine optimization to improve inference performance on the Jetson Nano. The optimized models are saved as `.engine` files within the `models/` folder. Models are obtained from the Marsupial dataset.
- A web server dashboard to view a live feed fetched from the camera, as well as manually control the state of the gate.
The requirements should be installed on a Jetson Nano running the JetPack SDK. For more information on setting this up, please refer to NVIDIA's official guides for your respective Jetson Nano model.
There are two options for setting up the requirements.
- Install the dependencies locally on the Nano.
```bash
git clone https://github.com/TheOpenSI/SmartGate.git
cd SmartGate/setup
./setup_requirements.sh
```
Optional: Generate the systemd service and automatically enable and start the service to run on boot.
```bash
cd SmartGate/setup
./systemd_service_setup.sh
```
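Once enabled, the service can be inspected with standard systemd tooling. The unit name below is an assumption; check what `systemd_service_setup.sh` actually generates:

```bash
# "smartgate.service" is an assumed unit name; verify against the
# name generated by systemd_service_setup.sh.
sudo systemctl status smartgate.service
sudo journalctl -u smartgate.service -f
```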
- Build the Docker container.
```bash
git clone https://github.com/TheOpenSI/SmartGate.git
cd SmartGate/
sudo docker build -t smartgate:latest .
```
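Then run the container. The exact flags depend on your setup; the following is a sketch assuming the NVIDIA container runtime is installed, the camera is at `/dev/video0`, and the web server uses the template's port 8080:

```bash
# Sketch only: assumes the NVIDIA container runtime, a camera at
# /dev/video0, and the template config's port 8080.
sudo docker run --runtime nvidia --device /dev/video0 -p 8080:8080 smartgate:latest
```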
Ensure the Requirements are satisfied before proceeding.
Users are free to configure the rules that set the behaviour of the gate based on a list of stimuli. Within `config/config.json`, a base template is provided:
```json
{
  "model": {
    "path": "../models/yolov5s.engine",
    "classes": "../models/classes/yolov5s.txt",
    "confidence": 0.5
  },
  "rules": [
    {
      "objects": ["dog"],
      "action": "OPEN"
    },
    {
      "objects": ["cat"],
      "action": "CLOSE"
    }
  ],
  "server": {
    "port": 8080
  }
}
```
Note that in the configuration file, any path specified can be relative to the location of the configuration file itself.
- The `model` section defines the settings for the object detection model:
  - `path` specifies the file path to the trained model (in this case, a YOLOv5 model in TensorRT format). If you are working with a `.pt` file, you will need to convert it to a `.engine` file. You can use the following repo to convert it, or follow this tutorial (a hedged conversion sketch also follows this list).
  - `classes` points to a text file containing the list of object classes the model can detect. It needs to be in the format `index: class`, e.g. `0: dog` (check the files in `models/classes` for some examples).
  - `confidence` sets the confidence threshold for object detection (`0.5`, or 50%, in this example).
- The `rules` section defines how the gate should respond to detected objects:
  - The `objects` array is the list of strings naming the objects that should trigger the specified `action`. These names should be taken from your specified classes file (from `models/classes`).
  - `action` is the action to take when the specified objects from `objects` are detected. It can be either `OPEN` or `CLOSE`.
- In this example:
  - If `dog` is detected, the gate should open.
  - If `cat` is detected, the gate should close.
  - If both are detected, the gate should close; this is the default behaviour of the SmartGate when both stimuli are detected (a sketch of this precedence logic follows below).
- The `server` section contains the settings for the web server:
  - `port` is the port number the web server runs on. In the example it is set to `8080`.
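As one common conversion route, offered here as a hedged sketch rather than the project's prescribed method, the `export.py` script from the ultralytics/yolov5 repository can export a `.pt` model to a TensorRT `.engine`. Note that TensorRT engines are built for a specific GPU, so the conversion should be run on the Nano itself:

```bash
# One common route (assumption: ultralytics/yolov5 export script; not
# necessarily the repo referenced above). Run on the Jetson so the
# engine is built for its GPU.
git clone https://github.com/ultralytics/yolov5.git
cd yolov5
python3 export.py --weights yolov5s.pt --include engine --device 0
```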
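To make the rule semantics concrete, here is a minimal, illustrative sketch of how such rules could be evaluated. This is not the project's actual implementation; `decide_action` is a hypothetical helper:

```python
import json

def decide_action(detections, rules):
    """Return the gate action for a set of detected class names."""
    # Collect the actions of every rule whose objects were detected.
    matched = {rule["action"] for rule in rules
               if any(obj in detections for obj in rule["objects"])}
    # CLOSE takes precedence over OPEN, matching the behaviour
    # described above when both stimuli are detected.
    if "CLOSE" in matched:
        return "CLOSE"
    if "OPEN" in matched:
        return "OPEN"
    return None  # no rule fired; leave the gate as-is

with open("config/config.json") as f:
    config = json.load(f)

print(decide_action({"dog"}, config["rules"]))         # OPEN
print(decide_action({"dog", "cat"}, config["rules"]))  # CLOSE
```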
Ensure the Requirements are satisfied before proceeding.
To get started, run the following:
```bash
cd SmartGate/src/main/
python3 live_detection.py
```
A web server should start, from which the camera stream can be viewed on the main dashboard, along with dedicated controls to manually operate the gate. With the template config, the dashboard should be reachable in a browser on port `8080`. Note: you may need to run this command with sudo privileges.
For project support, please contact Carlos C. N. Kuhn.
Work in Progress