
Recognition of Students' Intrinsic Motivation in Classroom Situations

This project contains experimental code for training end-to-end neural models that automatically recognize students' levels of intrinsic motivation, using only their facial expressions as input.

Contact person: Pedro Santos, [email protected]

https://www.ukp.tu-darmstadt.de/

https://www.tu-darmstadt.de/

Don't hesitate to send us an e-mail or report an issue if something is broken (and it shouldn't be) or if you have further questions.

Project structure

Requirements

  • Python 3
  • TensorFlow
  • NumPy
  • SciPy
  • Keras
  • Scikit-learn
  • PyYAML
  • OpenCV
  • FFmpeg
  • OpenFace
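Before running the pipeline, it can help to verify that the Python packages are importable and that the external tools are on the PATH. The script below is a minimal sketch; the mapping from package names to import names and the choice of tools to probe are assumptions, not taken from this repository.

```python
# Sanity-check the dependencies listed above. The module names are
# assumed mappings from the package names; adjust as needed.
import importlib.util
import shutil

PYTHON_DEPS = {
    "TensorFlow": "tensorflow",
    "NumPy": "numpy",
    "SciPy": "scipy",
    "Keras": "keras",
    "Scikit-learn": "sklearn",
    "PyYAML": "yaml",
    "OpenCV": "cv2",
}

# External command-line tools the pipeline relies on (assumed names).
EXTERNAL_TOOLS = ("ffmpeg",)


def check_dependencies():
    """Return (missing_packages, missing_tools) for the lists above."""
    missing = [name for name, module in PYTHON_DEPS.items()
               if importlib.util.find_spec(module) is None]
    tools = [tool for tool in EXTERNAL_TOOLS
             if shutil.which(tool) is None]
    return missing, tools


if __name__ == "__main__":
    missing, tools = check_dependencies()
    if missing or tools:
        print("Missing packages:", missing)
        print("Missing tools:", tools)
    else:
        print("All dependencies found.")
```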

Installation

  • FFmpeg

https://www.ffmpeg.org/download.html

  • OpenCV

https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_setup/py_table_of_contents_setup/py_table_of_contents_setup.html#py-table-of-content-setup

  • OpenFace

https://github.com/TadasBaltrusaitis/OpenFace

  • Python dependencies
$ virtualenv --system-site-packages -p python3 motivation_recognition_venv
$ source motivation_recognition_venv/bin/activate
(motivation_recognition_venv) $ pip install --upgrade pip
(motivation_recognition_venv) $ pip install --upgrade -r requirements.txt
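The install step above assumes a requirements.txt at the repository root. Its exact contents are not shown here; a plausible sketch covering the dependencies listed above (the pip package names are assumptions, and version pins are omitted) would be:

```
tensorflow
numpy
scipy
keras
scikit-learn
pyyaml
opencv-python
```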

Running the experiments

The basic pipeline is the following:

  • First: extract the visual frames from the video recordings;
(motivation_recognition_venv) $ python extract_visual_frames.py <input_folder> <output_folder>
  • Second: preprocess the frames to obtain the region of interest, in our case the faces;
(motivation_recognition_venv) $ python extract_visual_features.py <visual_features.yaml>
  • Third: run the leave-one-student-out cross-validation script.
(motivation_recognition_venv) $ python script_losocv.py <cross_validation.yaml>
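The leave-one-student-out protocol in the third step can be illustrated with a short sketch: each fold holds out all samples of one student for testing and trains on the remaining students, so a model is never evaluated on a student it has seen during training. The function and data layout below are illustrative assumptions, not taken from script_losocv.py.

```python
def leave_one_student_out(samples):
    """Yield one (held_out_student, train, test) split per student.

    `samples` is a list of (student_id, features, label) tuples.
    For each fold, all samples of one student form the test set and
    the samples of all other students form the training set.
    """
    students = sorted({sid for sid, _, _ in samples})
    for held_out in students:
        train = [s for s in samples if s[0] != held_out]
        test = [s for s in samples if s[0] == held_out]
        yield held_out, train, test


if __name__ == "__main__":
    # Toy data: three students with one-dimensional features.
    data = [("s1", [0.1], "low"), ("s1", [0.2], "high"),
            ("s2", [0.3], "low"), ("s3", [0.4], "high")]
    for student, train, test in leave_one_student_out(data):
        print(student, len(train), len(test))
```

Scikit-learn's LeaveOneGroupOut implements the same idea with student IDs as the `groups` argument, which may be preferable in practice.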

Due to privacy restrictions, the video recordings of the students cannot be shared publicly.