ℹ️ Project information
Vikas Kamboj: https://github.com/vikaskamboj1085
Ashish Priyadarshi: https://github.com/ashishpriyadarshiCIC
https://github.com/rishabhj126/RPIConnect
🔥 Your Pitch
Blind mobility remains a far-fetched dream even in today's era of global advancement. Our idea is to build a headgear or belt for visually challenged people that augments the real world and helps them navigate their surroundings using an RGB-D camera. Open-source resources will be used to develop an interface that converts the visual information collected by the camera and sensors into kinaesthetic and auditory feedback for the blind wearer. 3D vision and deep learning will be used to improve the decisions made about the environment, chiefly object identification (our survey of local roads found hazards such as excrement, dug-up patches, and the like).
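As a sketch of the deep-learning piece, the snippet below runs a pre-trained MobileNet-SSD detector over a camera frame through OpenCV's DNN module. The model choice, file names, and confidence threshold are our assumptions for illustration, since the pitch does not commit to a specific network:

```python
# Hypothetical sketch: detecting objects in an RGB frame with a
# pre-trained MobileNet-SSD model loaded through OpenCV's DNN module.
# The model/prototxt file names are placeholders, not project files.
import cv2

CLASSES = ["background", "aeroplane", "bicycle", "bird", "boat",
           "bottle", "bus", "car", "cat", "chair", "cow", "diningtable",
           "dog", "horse", "motorbike", "person", "pottedplant", "sheep",
           "sofa", "train", "tvmonitor"]

net = cv2.dnn.readNetFromCaffe("MobileNetSSD_deploy.prototxt",
                               "MobileNetSSD_deploy.caffemodel")

def detect_obstacles(frame, conf_threshold=0.5):
    """Return (label, confidence, box) for each detection above threshold."""
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(cv2.resize(frame, (300, 300)),
                                 0.007843, (300, 300), 127.5)
    net.setInput(blob)
    detections = net.forward()
    results = []
    for i in range(detections.shape[2]):
        confidence = float(detections[0, 0, i, 2])
        if confidence >= conf_threshold:
            class_id = int(detections[0, 0, i, 1])
            box = detections[0, 0, i, 3:7] * [w, h, w, h]
            results.append((CLASSES[class_id], confidence, box.astype(int)))
    return results
```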
We wish to develop a wearable kit that gives the wearer useful information about the environment, with the goal of helping the blind.
We envision a future where blindness is no hindrance to living a normal life, and where no one has to feel ashamed of it or treat it like a curse. To realize this dream, we will build a belt. The belt will carry a camera (a combination of a Raspberry Pi camera and an RGB-D Kinect camera), and we will use image-processing techniques to distinguish obstacles from open, traversable paths. We are not using an ultrasonic sensor because it fails to recognize obstacles like a table, where a slight movement of the body can change the signal readings by a large margin.
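A minimal sketch of that obstacle/free-path decision, assuming the depth frame arrives as a 2D NumPy array in millimetres (e.g. from a Kinect via its Python bindings) and using placeholder thresholds:

```python
# Minimal sketch of the obstacle/free-path decision from a depth frame.
# Assumes depth arrives as a 2D numpy array in millimetres; thresholds
# and sector count are illustrative values, not project parameters.
import numpy as np

def nearest_obstacle_by_sector(depth_mm, n_sectors=3, max_range_mm=4000):
    """Split the view into left/centre/right sectors and report the
    nearest valid depth reading in each, so the belt can react on the
    side where an obstacle is closest."""
    h, w = depth_mm.shape
    sector_width = w // n_sectors
    distances = []
    for s in range(n_sectors):
        sector = depth_mm[:, s * sector_width:(s + 1) * sector_width]
        valid = sector[(sector > 0) & (sector < max_range_mm)]  # 0 = no reading
        distances.append(float(valid.min()) if valid.size else float("inf"))
    return distances  # e.g. [1200.0, inf, 850.0] -> obstacle close on the right

def is_path_clear(distances, safe_mm=1500):
    return all(d > safe_mm for d in distances)
```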
The benefit of using image processing is that we can easily distinguish between a table (where the measured distance changes sharply toward the edge) and continuous objects. Beyond that, details about an object, including its shape, size, and color, can help the person in many real-world circumstances where other sensors would fail. To cope with these problems, we have decided to develop a headgear built around a Microsoft Kinect camera, which provides the RGB values (color) of a normal 2D image together with the depth of each pixel, forming a 3D point cloud of the surroundings. This greatly increases the amount of information made available to a blind person, improving his or her chances of getting things done far more easily.
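For illustration, this is how a depth frame back-projects into the 3D point cloud described above under the standard pinhole camera model; the intrinsics shown are rough Kinect v1 values we assume here, and calibrated ones would replace them:

```python
# Sketch: turning a depth image into the 3D point cloud the pitch
# describes, using the standard pinhole model. The intrinsics below are
# rough Kinect v1 values (assumed), to be replaced after calibration.
import numpy as np

FX, FY = 594.2, 591.0   # focal lengths in pixels (assumed)
CX, CY = 339.3, 242.7   # principal point (assumed)

def depth_to_point_cloud(depth_mm):
    """Back-project each pixel (u, v) with depth z to camera coordinates:
       X = (u - cx) * z / fx,  Y = (v - cy) * z / fy,  Z = z."""
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm / 1000.0                    # metres
    x = (u - CX) * z / FX
    y = (v - CY) * z / FY
    points = np.dstack((x, y, z)).reshape(-1, 3)
    return points[points[:, 2] > 0]          # drop pixels with no depth
```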
When we discover an obstacle on the path, we will notify the wearer through the smartphone he or she is carrying. For this, we will build a smartphone app connected to the belt over an existing wireless technology such as Bluetooth or Wi-Fi (whichever proves more feasible). The belt will run on a Raspberry Pi, which is powerful enough for the image processing involved. Based on the results of that processing, the belt will also provide kinaesthetic feedback, controlling the intensity and direction of its vibrations to help the person get a feel of the space around them. This matters in situations where we cannot accept the lag between sending a signal and the phone acting upon it. We can also associate certain vibration patterns with emergency maneuvers, for instance when a flying ball is suddenly spotted.
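A hedged sketch of the haptic side, assuming three vibration motors on GPIO pins of our choosing, with intensity scaled to obstacle distance:

```python
# Sketch of the kinaesthetic feedback: three vibration motors (left,
# centre, right) driven by PWM on a Raspberry Pi. The pin numbers and
# distance-to-intensity mapping are assumptions for illustration.
import RPi.GPIO as GPIO

MOTOR_PINS = {"left": 17, "centre": 27, "right": 22}  # BCM numbering, assumed

GPIO.setmode(GPIO.BCM)
motors = {}
for name, pin in MOTOR_PINS.items():
    GPIO.setup(pin, GPIO.OUT)
    motors[name] = GPIO.PWM(pin, 100)  # 100 Hz PWM carrier
    motors[name].start(0)              # start silent (0% duty cycle)

def vibrate(direction, distance_mm, safe_mm=1500):
    """Map obstacle distance to vibration intensity: off at safe_mm or
    beyond, rising linearly to full strength as the obstacle nears."""
    duty = max(0.0, min(100.0, 100.0 * (1 - distance_mm / safe_mm)))
    motors[direction].ChangeDutyCycle(duty)
```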
The app will also use the Google Text-to-Speech API to verbally inform the wearer about the obstacle.
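On the phone this would go through Android's native text-to-speech engine; purely as a language-consistent illustration, the sketch below produces a spoken alert with the gTTS package, which wraps Google's text-to-speech service:

```python
# Illustrative only: synthesising a spoken obstacle alert with the gTTS
# package. In the actual app this role would be played by the phone's
# native TTS engine; the message format is our assumption.
from gtts import gTTS

def speak_alert(label, distance_mm):
    message = f"Caution: {label} about {distance_mm / 1000:.1f} metres ahead."
    gTTS(text=message, lang="en").save("alert.mp3")  # play via any audio player
```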
🔦 Any other specific thing you want to highlight?
(Optional)
✅ Checklist
Before you post the issue:
- You have followed the issue title format.
- You have mentioned the correct labels.
- You have provided all the information correctly.