This repository contains the data and code for our punch classification experiments. We use acceleration sensor measurements and video frames for punch class recognition.
DOI_10.1109ACCESS.2021.3118038
directory contains data and code for reproducing the classification metric results from the article
'Recognition of punches in karate using acceleration sensors and convolution neural networks'.
code
directory contains:
- MoveNetExtractKeypoints.ipynb: keypoint extraction for punch videos; saves the keypoints to data/keypoints (see the extraction sketch below).
- RNN-LSTM-GRU.ipynb: vanilla RNN as a baseline.
- GRU-NormalizeMidPoint.ipynb: keypoint coordinates normalized to the midpoint between the left and right hips (see the normalization sketch below).
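As a rough illustration of the extraction step, here is a minimal sketch of running MoveNet from TensorFlow Hub on video frames and saving the per-frame keypoints to data/keypoints as a .npy file. The model variant, frame loop, and file names are assumptions for illustration and may differ from what the notebook actually does:

```python
import cv2
import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

# MoveNet singlepose lightning (assumed variant; the notebook may use another one).
model = hub.load("https://tfhub.dev/google/movenet/singlepose/lightning/4")
movenet = model.signatures["serving_default"]

def extract_keypoints(video_path, out_path):
    cap = cv2.VideoCapture(video_path)
    frames_keypoints = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # MoveNet lightning expects a 192x192 int32 RGB image.
        img = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        img = tf.image.resize_with_pad(tf.expand_dims(img, 0), 192, 192)
        img = tf.cast(img, tf.int32)
        # Output shape is [1, 1, 17, 3]: 17 keypoints as (y, x, score) rows.
        keypoints = movenet(img)["output_0"].numpy()[0, 0]
        frames_keypoints.append(keypoints)
    cap.release()
    np.save(out_path, np.stack(frames_keypoints))  # shape: (num_frames, 17, 3)

extract_keypoints("video_01.mp4", "data/keypoints/video_01.npy")  # hypothetical file names
```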
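And a minimal sketch of the midpoint normalization idea, assuming MoveNet's keypoint ordering (index 11 = left hip, index 12 = right hip); the exact normalization in GRU-NormalizeMidPoint.ipynb may differ:

```python
import numpy as np

LEFT_HIP, RIGHT_HIP = 11, 12  # MoveNet keypoint indices

def normalize_to_hip_midpoint(keypoints):
    """Shift (y, x) coordinates so the hip midpoint becomes the origin.

    keypoints: array of shape (num_frames, 17, 3) with (y, x, score) rows.
    """
    coords = keypoints[..., :2]
    mid = (coords[:, LEFT_HIP] + coords[:, RIGHT_HIP]) / 2.0  # (num_frames, 2)
    normalized = keypoints.copy()
    normalized[..., :2] = coords - mid[:, None, :]
    return normalized

kps = np.load("data/keypoints/video_01.npy")  # hypothetical file name
kps_norm = normalize_to_hip_midpoint(kps)
```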
models
directory contains the training results: Keras and TFLite models.
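For reference, converting a trained Keras model to TFLite typically looks like the following sketch; the file names are placeholders, not the actual artifacts in models:

```python
import tensorflow as tf

model = tf.keras.models.load_model("models/gru_punch.keras")  # hypothetical path
converter = tf.lite.TFLiteConverter.from_keras_model(model)
# For some recurrent layers the converter may additionally need
# converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS,
#                                        tf.lite.OpsSet.SELECT_TF_OPS]
tflite_model = converter.convert()
with open("models/gru_punch.tflite", "wb") as f:
    f.write(tflite_model)
```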
The source videos are available here.
In each video, a person with a unique ID performs 10 weak punches (5 left-hand and 5 right-hand) and 10 strong punches.
Dataset v0.1 contains 240 punches in total (12 videos × 20 punches). We start with punch class prediction only, no power estimation.
Boxing punch classes:
0. no punch,
1. jab (left jab),
2. cross (right jab),
3. left hook,
4. right hook,
5. left uppercut,
6. right uppercut,
7. jab (left jab) strong, etc.
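To make the class list concrete, a minimal Keras GRU classifier over keypoint sequences might look like the sketch below. The sequence length, layer sizes, and class count are illustrative assumptions, not the settings used in the notebooks:

```python
import tensorflow as tf

NUM_KEYPOINTS = 17  # MoveNet keypoints per frame
SEQ_LEN = 30        # frames per sample (assumed)
NUM_CLASSES = 13    # no punch + 6 weak + 6 strong punch classes (assumed)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN, NUM_KEYPOINTS * 2)),  # flattened (y, x) per frame
    tf.keras.layers.GRU(64),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```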
Build the Docker image:
docker build -t punch_dl:v2 .
Run the Docker container:
docker run -p 8888:8888 -v "$(pwd)":/tf punch_dl:v2
If you want to keep Google Colab's 2-space indentation in Jupyter notebooks, see: https://stackoverflow.com/questions/19068730/how-do-i-change-the-autoindent-to-2-space-in-ipython-notebook
TODO:
- Extract keypoints frame by frame to *.npy files.
- Label frames with the defined classes.
- Experiment with recurrent models for punch classification.
- Add some advanced features, e.g. joint angles, distances, etc.
- Experiment with convolution models for punch classification.
- Get more videos.
Links:
Awesome Action Recognition
https://github.com/jinwchoi/awesome-action-recognition