This project aims to let blind people easily interact with their surrounding environment.
Tools:
=> YOLOv3 pretrained model to detect 80 object classes in real time (cars, people, stop signs, chairs, books, toothbrushes, etc.).
=> Tesseract OCR API for real-time text detection and recognition (books and general road text) -- still under optimization --
=> Google TTS engine to read all recognized text aloud and announce what is present in the environment.
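As a rough illustration of the last step, here is a minimal sketch of how per-frame detections could be turned into a sentence for the TTS engine. The `describe` function, the `(label, confidence)` input format, and the 0.5 confidence cutoff are all assumptions for this example, not the project's actual code:

```python
from collections import Counter

# Assumed confidence cutoff for keeping a YOLO detection (hypothetical value).
CONF_THRESHOLD = 0.5

def describe(detections, threshold=CONF_THRESHOLD):
    """Build a spoken description from (label, confidence) pairs."""
    kept = [label for label, conf in detections if conf >= threshold]
    if not kept:
        return "Nothing detected."
    counts = Counter(kept)  # count occurrences per class label
    parts = [f"{n} {label}{'s' if n > 1 else ''}" for label, n in counts.items()]
    return "I can see " + ", ".join(parts) + "."

# Example: raw detections as a detector might report them for one frame.
dets = [("person", 0.92), ("person", 0.81), ("chair", 0.66), ("book", 0.31)]
print(describe(dets))  # the low-confidence 'book' detection is dropped
```

The resulting string could then be passed to the TTS engine to be spoken aloud.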
Requirements to try:
1- Download yolov3.weights: https://drive.google.com/file/d/1eFRGUpwi36DL8PChyME8PEQLk-hkWzEN/view?usp=sharing
2- Clone the code folder.
3- Put the .weights file into the cloned folder.
4- Run the main file.
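Since step 3 is easy to get wrong, a small sanity check before launching the detector can help. This is only a sketch; the filename `yolov3.weights` is assumed from the download step and may differ in the actual repository:

```python
import os

def check_weights(path="yolov3.weights"):
    """Return True if the YOLOv3 weights file sits next to the main script."""
    return os.path.exists(path)

if __name__ == "__main__":
    if check_weights():
        print("Weights found, ready to run.")
    else:
        print("Missing yolov3.weights: download it and place it in the cloned folder.")
```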