Object detection model for real soccer scenes, trained only with synthetic data (Blender renderings)
This project was carried out during my master's degree in computer vision at URJC - Madrid.
This project is the result of my master's thesis. It presents an object detection model for real soccer scenes, trained only with synthetic images created with Blender, via transfer learning with YOLOv8.

The project is divided into two main parts: the generation of the synthetic data and the model itself.

The goal of this README is to explain how to install and use the project. Project goals:
- Generate (customizable) synthetic data with Blender
- Create an object detection model for soccer situations
- Blender
- Python >=3.7, <=3.10
- Jupyter notebook
- Conda (if you need GPU support and you are on Windows)
- jupyter >= 1.0.0
- numpy >= 1.21.3
- Pillow >= 9.3.0
- opencv_python >= 4.8.0.76
- opencv-contrib-python>= 4.8.0.76
- tqdm >= 4.63.1
- ultralytics >= 8.0.71
$ pip3 install -r requirements.txt
Open `[Soccer Arena] Win & Linux.blend` in Blender. The Blender project should look like this when opened.
- Click on the viewport section (1) if you want to change the scene view (not required).
- Click on the code section (2) and press `ALT + P` to install the Python3 packages (needed!).
- Click on the code section (3) (book icon).
- Select the "F Main" file in section (4) (this changes the script). The soccer panel should now appear (4).
- Click on the code section (5) and press `ALT + P` to add the add-on to Blender.
- Locate the `Soccer_saves/` folder in the soccer panel (6).
- You can now start to use the add-on (scroll in the add-on panel to view all options).
9 examples of images and ground truths generated by Blender:
On Windows:

- Use PowerShell instead of cmd.
- Use full flag names (e.g. `--input` instead of `-i=`).
$ python predict.py -f=[File_path] -m=models/4_classes/weights/best.pt
or
$ python predict.py -f=[File_path] -m=models/val_real/weights/best.pt
Linux example:
-> python predict.py -f=data/Val_Real_1.png -m=models/val_real/weights/best.pt
Windows example:
-> python predict.py --file .\data\Val_Real_1.png --model .\models\val_real\weights\best.pt
- `4_classes` model (4 classes): model for all matches (cannot differentiate between two players from different teams).
- `val_real` model (5 classes): specific model trained for Valencia C.F versus Madrid C.F matches (can differentiate between two players from different teams).

Some example files are available in the `data/` folder.
The dataset needs to have this structure:
.
└── data
    ├── all_annot
    ├── all_imgs
    └── yolo_training_dataset
        ├── images          # Name and structure required
        │   ├── test        # Name and structure required
        │   ├── train       # Name and structure required
        │   └── val         # Name and structure required
        └── labels          # Name and structure required
            ├── train       # Name and structure required
            └── val         # Name and structure required
Move all your images into the `all_imgs/` folder and all your annotations into `all_annot/`.

You will need to modify the paths (lines 3-4-5) in `config_low_linux_mac.yaml` or `config_low_windows.yaml`. You can modify the class list too (in the same YAML file).
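For reference, the config file presumably follows the standard Ultralytics dataset-YAML layout; a hedged sketch (the class names below are placeholders, not the repository's actual values — edit them to match your own classes):

```yaml
# Hypothetical sketch of config_low_linux_mac.yaml.
# Paths mirror the dataset structure described above; class names are placeholders.
path: data/yolo_training_dataset   # dataset root
train: images/train                # training images (relative to 'path')
val: images/val                    # validation images
test: images/test                  # test images

names:
  0: ball
  1: goalkeeper
  2: player
  3: referee
```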
Run the `Model_trainer_custom.ipynb` notebook with Jupyter and follow the instructions inside.
$ jupyter notebook Model_trainer_custom.ipynb
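The notebook presumably shuffles the files from `all_imgs/` and distributes them over the `train`/`val`/`test` folders. A minimal sketch of that split logic, assuming illustrative 80/10/10 ratios (the helper name, ratios, and seed are not from the repository):

```python
import random

def split_dataset(filenames, train=0.8, val=0.1, seed=0):
    """Shuffle file names and split them into train/val/test subsets.

    Sketch of the split the training notebook presumably performs;
    ratios and seed here are illustrative, not the notebook's values.
    """
    names = sorted(filenames)
    random.Random(seed).shuffle(names)  # deterministic shuffle
    n_train = int(len(names) * train)
    n_val = int(len(names) * val)
    return (names[:n_train],
            names[n_train:n_train + n_val],
            names[n_train + n_val:])

train_set, val_set, test_set = split_dataset([f"{i}.png" for i in range(100)])
print(len(train_set), len(val_set), len(test_set))  # 80 10 10
```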
Creation of the annotations from ground-truth images: place the ground-truth images in a folder `[Input_directory]` and the program will generate their annotations in the YOLO format.
$ python annotation.py -i=[Input_directory] -o=[Output_directory]
Linux example:
-> python annotation.py -i=render_examples/groundtruth -o=result
Windows example:
-> python annotation.py --input .\render_examples\groundtruth\ --output .\results
$ python annotation_low.py -i=[Input_directory] -o=[Output_directory]
Linux example:
-> python annotation_low.py -i=render_examples/groundtruth -o=result
Windows example:
-> python annotation_low.py --input .\render_examples\groundtruth\ --output .\results
The difference between `annotation.py` and `annotation_low.py` is the number of object classes (lines 47-72).
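Both scripts write annotations in the standard YOLO text format: one line per object, with the box centre and size normalised by the image dimensions. A minimal sketch of that conversion step (the function name is illustrative, not taken from the scripts):

```python
def to_yolo_line(class_id, box, img_w, img_h):
    """Convert a pixel-space box (x_min, y_min, x_max, y_max) into one
    YOLO annotation line: 'class x_center y_center width height', all
    coordinates normalised to [0, 1] by the image size."""
    x_min, y_min, x_max, y_max = box
    xc = (x_min + x_max) / 2 / img_w   # normalised box centre
    yc = (y_min + y_max) / 2 / img_h
    w = (x_max - x_min) / img_w        # normalised box size
    h = (y_max - y_min) / img_h
    return f"{class_id} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}"

print(to_yolo_line(0, (100, 50, 300, 250), 640, 480))
# 0 0.312500 0.312500 0.312500 0.416667
```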
Use this script if you want to lower the resolution of the images.
$ python down_resolution.py -i=[Input_directory] -o=[Output_directory] -r=[Resolution_width]
Linux example:
-> python down_resolution.py -i=render_examples/render -o=result -r=320
Windows example:
-> python down_resolution.py --input .\render_examples\render\ --output .\results --res 320
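`down_resolution.py` presumably keeps the aspect ratio when scaling to the requested width; a sketch of that size computation (the exact rounding the script uses is an assumption):

```python
def scaled_size(width, height, target_width):
    """Compute the output size for a given target width while preserving
    the aspect ratio, as down_resolution.py presumably does."""
    target_height = round(height * target_width / width)
    return target_width, target_height

print(scaled_size(1920, 1080, 320))  # (320, 180)
```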
Use this script if you want to view the result of the annotations on an image.
$ python squares.py -i=[Input_directory]
Linux example:
-> python squares.py -i=squares_example_folder
Windows example:
-> python squares.py --input .\squares_example_folder\
The `[Input_directory]` must have this structure:
.
└── [Input_directory]
    ├── ano
    │   └── *.txt
    └── img
        └── *.png
The `*.txt` / `*.png` files must have matching numeric names, like `1.txt` and `1.png`.

Press any key to skip images or to exit.
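To draw the boxes, `squares.py` presumably converts each normalised YOLO line back to pixel corners before drawing. A sketch of that inverse conversion (helper name and rounding are assumptions):

```python
def yolo_to_pixels(line, img_w, img_h):
    """Parse one YOLO annotation line ('class xc yc w h', all values
    normalised to [0, 1]) into a class id and pixel-space corners
    suitable for drawing a rectangle."""
    cls, xc, yc, w, h = line.split()
    xc, yc, w, h = (float(v) for v in (xc, yc, w, h))
    x_min = round((xc - w / 2) * img_w)   # back to pixel coordinates
    y_min = round((yc - h / 2) * img_h)
    x_max = round((xc + w / 2) * img_w)
    y_max = round((yc + h / 2) * img_h)
    return int(cls), (x_min, y_min, x_max, y_max)

print(yolo_to_pixels("0 0.3125 0.3125 0.3125 0.416667", 640, 480))
# (0, (100, 50, 300, 250))
```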
Example of 4_classes model detection
Example of val_real model detection
- Luis Rosario - Luisrosario2604