This web app processes a video stream with OpenCV and determines liveness: whether each frame shows a real person or a fake. Video is transmitted from the browser to the server with the Python library aiortc. Liveness is determined by a neural network trained with TensorFlow.
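The repository does not spell out what the model emits per frame, so as a minimal sketch assume a two-class softmax vector `[P(fake), P(real)]`; the class order, the `classify_frame` helper, and the confidence threshold below are all illustrative assumptions, not this project's actual code:

```python
# Hedged sketch of the per-frame liveness decision. Assumes (not confirmed
# by this repo) that the model outputs a two-class softmax [P(fake), P(real)].
LABELS = ["fake", "real"]

def classify_frame(probabilities, threshold=0.6):
    """Map a model output vector to a liveness label.

    Returns "unknown" when the winning class falls below the (assumed)
    confidence threshold, so low-confidence frames can be rejected.
    """
    best = max(range(len(LABELS)), key=lambda i: probabilities[i])
    if probabilities[best] < threshold:
        return "unknown"
    return LABELS[best]

print(classify_frame([0.1, 0.9]))   # -> real
print(classify_frame([0.95, 0.05])) # -> fake
```

In the real app this decision would run on each frame received over the aiortc connection, with the resulting label sent back to the browser.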
Download all the files and install the dependencies with
pipenv install
Activate the virtual environment with
pipenv shell
and start the server by running server.py with Python:
python server.py
The neural network included here is already trained, but it can be retrained. Retraining should be done whenever new data (video) is added to the data set. Run the files in this order:
gather_examples.py
train.py
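The exact behavior of gather_examples.py is not shown in this README. A common convention for this kind of pipeline (an assumption, not this repo's documented layout) is one directory per class, e.g. `dataset/real/` and `dataset/fake/`, with labels derived from the directory names. A minimal sketch of that labeling step:

```python
import tempfile
from pathlib import Path

def collect_examples(dataset_dir):
    """Pair each example file with the label of its parent directory.

    Assumes a hypothetical layout like:
        dataset/real/0001.png
        dataset/fake/0001.png
    """
    return [(str(path), path.parent.name)
            for path in sorted(Path(dataset_dir).glob("*/*"))
            if path.is_file()]

# Demo with a throwaway dataset layout.
with tempfile.TemporaryDirectory() as tmp:
    for label in ("fake", "real"):
        class_dir = Path(tmp) / label
        class_dir.mkdir()
        (class_dir / "0001.png").touch()
    for filename, label in collect_examples(tmp):
        print(label, filename)
```

train.py would then load these (path, label) pairs and fit the TensorFlow model on them.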
TO-DO:
- Pass frame label to web page: done, using data channels
- Optimize liveness determination (currently runs very slow): partially done; the stream display was removed, but @tf.function retracing is still triggered
- Bug: the label only shows "real", whether the test uses real or fake faces
- Implement WSGI in the server using Gunicorn
- Create a better model by using web TensorFlow
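The first TO-DO item passes the frame label to the web page over an aiortc data channel, but the message format is not documented here. A minimal sketch, assuming (hypothetically) JSON messages carrying a label and a confidence score; the field names and the `encode_label`/`decode_label` helpers are illustrative, not this project's actual API:

```python
import json

def encode_label(label, confidence):
    """Serialize a per-frame result; the server would pass this string
    to the data channel's send() method."""
    return json.dumps({"label": label, "confidence": round(confidence, 3)})

def decode_label(message):
    """Counterpart for the receiving side (shown in Python for symmetry;
    the browser would do the equivalent with JSON.parse)."""
    data = json.loads(message)
    return data["label"], data["confidence"]

msg = encode_label("real", 0.9731)
print(msg)               # {"label": "real", "confidence": 0.973}
print(decode_label(msg)) # ('real', 0.973)
```

Sending one small JSON message per classified frame keeps the channel traffic far lighter than echoing the video stream back, which fits the "removed stream display" optimization noted above.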