Signify is Alexa for the deaf and mute.
Signify is an innovative home assistant platform designed to enhance accessibility through gesture recognition. It allows users to perform various tasks such as controlling lights, checking weather updates, and playing quizzes without the need for direct screen interaction.
Home page:
- Thumbs Up Gesture: Select the page to control lights using gestures.
- Thumbs Down Gesture: Select the page to play the quiz game using gestures.
- Thumbs Up Gesture: Select the page to check the weather in your area.
- Rock and Roll Sign: Go to the selected page.
Lights page:
- Thumbs Up Gesture: Turn the light ON (it is OFF by default).
- Thumbs Down Gesture: Turn the light OFF.
- Closed Fist Gesture: Go back to the home page.
Quiz page:
- Thumbs Up Gesture: Select the true option.
- Thumbs Down Gesture: Select the false option.
- Open Palm Gesture: Go to the next question once you have answered the current one.
- Closed Fist Gesture: Go back to the home page.
Weather page:
- Closed Fist Gesture: Go back to the home page.
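The page-specific bindings above boil down to a lookup from (page, gesture) to an action. Below is a minimal sketch of such a mapping; the page names, action names, and gesture labels (MediaPipe-style category names) are illustrative assumptions, not the identifiers used in the Signify codebase:

```python
from typing import Optional

# Hypothetical (page -> gesture -> action) mapping mirroring the lists above.
# Note: the README lists Thumbs Up on the home page for both the lights and
# the weather selection; only one mapping is shown here.
GESTURE_ACTIONS = {
    "home": {
        "Thumb_Up": "select_lights_page",
        "Thumb_Down": "select_quiz_page",
        "ILoveYou": "open_selected_page",  # closest MediaPipe category to the "Rock and Roll" sign (assumption)
    },
    "lights": {
        "Thumb_Up": "light_on",
        "Thumb_Down": "light_off",
        "Closed_Fist": "go_home",
    },
    "quiz": {
        "Thumb_Up": "answer_true",
        "Thumb_Down": "answer_false",
        "Open_Palm": "next_question",
        "Closed_Fist": "go_home",
    },
    "weather": {
        "Closed_Fist": "go_home",
    },
}

def action_for(page: str, gesture: str) -> Optional[str]:
    """Return the action for a recognized gesture on the given page, if any."""
    return GESTURE_ACTIONS.get(page, {}).get(gesture)
```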
Utilizing MediaPipe for gesture recognition, Signify employs a two-part model approach:
- Hand Landmark Model Bundle:
- Detects hand presence and geometry.
- Utilizes a combination of palm detection and hand landmarks detection models.
- Trained on diverse datasets including real-world images and synthetic models.
- Gesture Classification Model Bundle:
- Identifies specific gestures from hand geometry.
- Supports common gestures like Closed Fist, Open Palm, Thumbs Up, etc.
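A minimal sketch of invoking this two-part recognizer through MediaPipe's Python Tasks API; the model bundle path and the image path are placeholders, and Signify's own invocation may differ:

```python
import mediapipe as mp
from mediapipe.tasks import python
from mediapipe.tasks.python import vision

# Load the bundled recognizer (hand landmark model + gesture classification model).
base_options = python.BaseOptions(model_asset_path="gesture_recognizer.task")
options = vision.GestureRecognizerOptions(base_options=base_options, num_hands=1)
recognizer = vision.GestureRecognizer.create_from_options(options)

# Run recognition on a single image; "hand.jpg" is a placeholder path.
image = mp.Image.create_from_file("hand.jpg")
result = recognizer.recognize(image)

if result.gestures:
    top = result.gestures[0][0]              # best gesture for the first detected hand
    print(top.category_name, top.score)      # e.g. "Thumb_Up" with a confidence score
```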
- Frames are captured every 1.3 seconds and sent to the gesture recognition model.
- The first model component assesses hand presence, while the second classifies the gesture.
- A Flask backend processes recognized gestures.
- Includes functionalities like light control based on the gesture received.
- React-based frontend displays real-time gesture updates via WebSocket.
- Implements logic to respond to different gestures for controlling various functions.
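A sketch of that loop, assuming OpenCV for capture and Flask-SocketIO for pushing updates to the React frontend over WebSocket; the event name, port, and module layout are assumptions for illustration and may differ from Signify's actual code:

```python
import cv2
import mediapipe as mp
from mediapipe.tasks import python
from mediapipe.tasks.python import vision
from flask import Flask
from flask_socketio import SocketIO

app = Flask(__name__)
socketio = SocketIO(app, cors_allowed_origins="*")  # frontend connects over WebSocket

recognizer = vision.GestureRecognizer.create_from_options(
    vision.GestureRecognizerOptions(
        base_options=python.BaseOptions(model_asset_path="gesture_recognizer.task")
    )
)

def capture_loop():
    """Grab a frame roughly every 1.3 seconds and broadcast the recognized gesture."""
    cap = cv2.VideoCapture(0)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # OpenCV delivers BGR frames; MediaPipe expects RGB.
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        image = mp.Image(image_format=mp.ImageFormat.SRGB, data=rgb)
        result = recognizer.recognize(image)
        if result.gestures:
            gesture = result.gestures[0][0].category_name
            socketio.emit("gesture", {"name": gesture})  # event name is an assumption
        socketio.sleep(1.3)  # matches the 1.3 s capture interval described above
    cap.release()

if __name__ == "__main__":
    socketio.start_background_task(capture_loop)
    socketio.run(app, port=5000)
```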
Pre-trained models offer efficient processing with average latencies of 16.76ms (CPU) and 20.87ms (GPU) on Pixel 6 devices.
Run the commands below in your terminal from the root directory of the Signify project.
- Download Anaconda.
- Create a conda environment with Python 3.9, since MediaPipe works reliably with this version.
  conda create -n signify_environment python=3.9
- Activate the conda environment.
  conda activate signify_environment
- Install the conda packages.
  conda install --file conda_requirements.txt
- Install the Python packages using pip.
  pip install -r pip_requirements.txt
- Create a new Jupyter kernel for signify_environment.
  conda install ipykernel
  python -m ipykernel install --user --name=signify --display-name "signify_environment"
- Run JupyterLab.
  jupyter lab
- Once you open the notebook, make sure the kernel shown in the top right corner says signify_environment.
- Run all the cells using shift+enter until the OpenCV code starts running and you see the camera turn on.
- Press q after selecting the camera window if you want to stop code execution and quit the camera.
- Navigate to the api directory in the Signify project directory.
- Once you are in the api directory, create a Python 3 virtual environment to separate the dependencies you install for this project from the rest of your system.
  python3 -m venv venv
- Activate the virtual environment.
  source venv/bin/activate
- Install all Python dependencies using pip.
  pip install -r requirements.txt
  If there are any errors in this step, install the packages manually by referencing the code.
- Start the backend Flask server.
  python run.py
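For context, the light-control handling inside the backend might look roughly like the sketch below; the /gesture route, payload shape, and in-memory light state are illustrative assumptions rather than Signify's actual API:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)
light_state = {"on": False}  # the light starts OFF, as described above

@app.route("/gesture", methods=["POST"])  # route name is an assumption
def handle_gesture():
    # Expects a JSON body like {"gesture": "Thumb_Up"} (payload shape is assumed).
    gesture = request.get_json(force=True).get("gesture")
    if gesture == "Thumb_Up":
        light_state["on"] = True
    elif gesture == "Thumb_Down":
        light_state["on"] = False
    return jsonify(light_state)

if __name__ == "__main__":
    app.run(port=5000)
```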
- Navigate to the frontend directory within the Signify project directory.
- Install the packages using npm.
  npm i
- Run the React app.
  npm start