Kinect skeletal-frame-based gesture classification
Click for the YouTube video of the real-time application:
TensorFlow
Python libraries:
chmod +x pythonReady.sh
yes "yes" | sudo sh pythonReady.sh
python main.py Train
python main.py Test
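
main.py is run in either Train or Test mode. Its internals are not reproduced here; the following is a hypothetical sketch of how such a mode dispatch could look (the `train` and `test` function names are illustrative, not the repo's actual API):

```python
# Hypothetical sketch of a Train/Test dispatch for main.py; the actual
# script in this repository may be organized differently.
import sys

def train():
    # Load skeletal-frame features, fit the classifier, and save the weights.
    print("training...")

def test():
    # Restore the saved weights and report accuracy on held-out data.
    print("testing...")

if __name__ == "__main__":
    mode = sys.argv[1] if len(sys.argv) > 1 else "Train"
    if mode == "Train":
        train()
    elif mode == "Test":
        test()
    else:
        sys.exit("Usage: python main.py [Train|Test]")
```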
- Add the "ghost" class in the data set.
- Plot the results in Python.
- Apply CNN based on raw RGB images.
- Apply face recognition for operator identification.
MS Kinect usually returns the skeletal coordinates, as shown in the first picture. The skeletal frames are used to classify four gestures (see the feature-vector sketch after this list):
- Idle
- Move Forward
- Move Back
- Takeoff/Landing
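
As an illustration of the input representation, here is a minimal sketch of flattening one skeletal frame into a feature vector. It assumes the 20-joint Kinect v1 skeleton with x/y/z per joint; the repo's actual preprocessing may normalize or select joints differently.

```python
# Illustrative only: flatten one Kinect skeletal frame into a feature vector.
# Assumes the Kinect v1 skeleton (20 joints, each with x/y/z coordinates).
import numpy as np

NUM_JOINTS = 20          # Kinect v1 skeleton
COORDS_PER_JOINT = 3     # x, y, z

def frame_to_feature(joints):
    """joints: array-like of shape (NUM_JOINTS, COORDS_PER_JOINT)."""
    joints = np.asarray(joints, dtype=np.float32)
    assert joints.shape == (NUM_JOINTS, COORDS_PER_JOINT)
    return joints.reshape(-1)   # 60-dimensional feature vector

# Example: a dummy frame of zeros produces a 60-dim vector.
print(frame_to_feature(np.zeros((NUM_JOINTS, COORDS_PER_JOINT))).shape)  # (60,)
```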
However, it sometimes sees "ghosts", as shown in the second picture.
So it is necessary to tell whether a skeletal stream comes from an actual human or not. Fortunately, pattern-wise, human and "ghost" skeletons look different, so a neural network with a single hidden layer can easily classify the two.
Next, the four different gestures are fed in as well: idle, move forward, move back, and takeoff/landing. As a result, I needed only five classes; the fifth class is simply the "ghost" patterns.
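
Below is a minimal sketch of such a single-hidden-layer, five-class classifier, written with tf.keras for brevity. The hidden-layer width, optimizer, and 60-dimensional input are assumptions for illustration, not the repo's exact settings.

```python
# Minimal sketch of a single-hidden-layer classifier over flattened
# skeletal frames; layer sizes and optimizer are illustrative choices.
import tensorflow as tf

NUM_FEATURES = 60   # assumed: 20 joints x 3 coordinates per skeletal frame
NUM_CLASSES = 5     # idle, move forward, move back, takeoff/landing, ghost

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(NUM_FEATURES,)),
    tf.keras.layers.Dense(128, activation="relu"),            # single hidden layer
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),  # class probabilities
])

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

# Training and evaluation would then look like:
# model.fit(train_features, train_labels, epochs=50, validation_split=0.1)
# model.evaluate(test_features, test_labels)
```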
Accuracy = 0.90