This project contains experimental code for training end-to-end neural models that automatically recognize students' motivation levels, using only their facial expressions as input.
Contact person: Pedro Santos, [email protected]
https://www.ukp.tu-darmstadt.de/
Don't hesitate to send us an e-mail or report an issue if something is broken (and it shouldn't be) or if you have further questions.
python_scripts
-- this folder contains the scripts used to run the experiments described in the paper. The CNN implementation is an extension of the discriminator implemented here: https://github.com/carpedm20/DCGAN-tensorflow
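For orientation, the following is a minimal sketch of what a DCGAN-style discriminator repurposed as a classifier can look like in Keras. The input resolution, filter counts, and number of motivation classes below are illustrative assumptions, not the configuration used in the paper.

# Minimal sketch of a DCGAN-style discriminator turned into a classifier.
# Input resolution, filter counts, and NUM_CLASSES are assumptions made
# for illustration, not the settings used in the paper.
from tensorflow.keras import layers, models

NUM_CLASSES = 3  # hypothetical number of motivation levels

def build_discriminator(input_shape=(64, 64, 3), num_classes=NUM_CLASSES):
    return models.Sequential([
        # DCGAN discriminators downsample with strided convolutions
        # and LeakyReLU activations instead of pooling layers.
        layers.Conv2D(64, 5, strides=2, padding="same", input_shape=input_shape),
        layers.LeakyReLU(0.2),
        layers.Conv2D(128, 5, strides=2, padding="same"),
        layers.BatchNormalization(),
        layers.LeakyReLU(0.2),
        layers.Conv2D(256, 5, strides=2, padding="same"),
        layers.BatchNormalization(),
        layers.LeakyReLU(0.2),
        layers.Flatten(),
        # The original discriminator ends in a single sigmoid unit; for
        # multi-class motivation recognition this becomes a softmax.
        layers.Dense(num_classes, activation="softmax"),
    ])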
Requirements:
- Python 3
- TensorFlow
- NumPy
- SciPy
- Keras
- scikit-learn
- PyYAML
- OpenCV
- FFmpeg
- OpenFace
Installation:
- FFmpeg
https://www.ffmpeg.org/download.html
- OpenCV
https://opencv.org/
- OpenFace
https://github.com/TadasBaltrusaitis/OpenFace
- Python dependencies
$ virtualenv --system-site-packages -p python3 motivation_recognition_venv
$ source motivation_recognition_venv/bin/activate
(motivation_recognition_venv) $ pip install --upgrade pip
(motivation_recognition_venv) $ pip install --upgrade -r requirements.txt
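For reference, a requirements.txt consistent with the dependency list above could look like the snippet below; the repository's own requirements.txt is authoritative, and no versions are pinned here because the README does not specify any.

# Illustrative only; the repository's requirements.txt is authoritative.
tensorflow
numpy
scipy
keras
scikit-learn
pyyaml
opencv-python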
The basic pipeline is the following (illustrative sketches of each step are given after the list):
- First: extract the visual frames from the video recordings;
(motivation_recognition_venv) $ python extract_visual_frames.py <input_folder> <output_folder>
- Second: preprocess the frames to obtain the regions of interest, in our case the faces;
(motivation_recognition_venv) $ python extract_visual_features.py <visual_features.yaml>
- Third: run the leave-one-student-out cross-validation experiments.
(motivation_recognition_venv) $ python script_losocv.py <cross_validation.yaml>
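Step one is performed by the repository's extract_visual_frames.py. Purely as an illustration of what frame extraction involves, here is a minimal stand-alone extractor written with OpenCV; the real script may instead rely on FFmpeg, and the sampling rate and output naming scheme below are assumptions.

# Illustrative frame extractor; the repository's extract_visual_frames.py
# may work differently (e.g., by invoking FFmpeg directly).
import os
import sys

import cv2

def extract_frames(video_path, output_folder, every_nth=1):
    os.makedirs(output_folder, exist_ok=True)
    capture = cv2.VideoCapture(video_path)
    index = saved = 0
    while True:
        ok, frame = capture.read()
        if not ok:  # end of video or read error
            break
        if index % every_nth == 0:
            cv2.imwrite(os.path.join(output_folder, "frame_%06d.png" % index), frame)
            saved += 1
        index += 1
    capture.release()
    return saved

if __name__ == "__main__":
    print("%d frames written" % extract_frames(sys.argv[1], sys.argv[2]))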
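For step two, the sketch below illustrates the region-of-interest idea with OpenCV's bundled Haar cascade. This is only a stand-in: the pipeline uses OpenFace for face analysis, and the real extract_visual_features.py is driven by the visual_features.yaml config, whose schema is not reproduced here.

# Stand-in face cropper using OpenCV's Haar cascade; the actual
# pipeline relies on OpenFace rather than this detector.
import cv2

CASCADE = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def crop_largest_face(frame_bgr):
    # Detect faces on the grayscale frame and return the largest crop,
    # or None when no face is found.
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = CASCADE.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda box: box[2] * box[3])
    return frame_bgr[y:y + h, x:x + w]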
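Finally, leave-one-student-out cross-validation means that each fold holds out all samples of exactly one student, so the model is always evaluated on an unseen person. The toy sketch below shows the split logic with scikit-learn's LeaveOneGroupOut; the random features, labels, and classifier are placeholders, not the models evaluated in the paper.

# Toy leave-one-student-out cross-validation; X, y, and student_ids
# are synthetic placeholders for the real data and labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut

rng = np.random.RandomState(0)
X = rng.rand(60, 16)                       # toy per-sample feature vectors
y = rng.randint(0, 3, size=60)             # toy motivation labels
student_ids = np.repeat(np.arange(6), 10)  # 6 students, 10 samples each

scores = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=student_ids):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    scores.append(clf.score(X[test_idx], y[test_idx]))

print("mean accuracy over held-out students: %.3f" % np.mean(scores))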
Due to privacy restrictions, the video recordings of the students cannot be publicly shared.