HelpingHand

Introduction 📌

Despite the rapid growth of tools and technologies, relatively few applications have been built to assist visually impaired people. With modern data modelling techniques that can bring "intelligence" to ordinary computers, and with smartphones now widely accessible, that intelligence can be extended to a handheld device to help visually impaired people make sense of their surroundings and get a helping hand in their daily activities. Our application aims to bridge the gap between them and the visual world by leveraging the power of deep learning, made accessible even on low-end devices through a clear, easy-to-use interface that helps them better understand the world around them.

Technology Stack 🏁

🏃‍♂️ Why this Project?

Our primary purpose behind this project is to study how deep learning architectures, combined with easy prototyping tools, can help us build applications that run smoothly even on low-end devices. With this application, we aim to develop a one-stop solution that lets blind or partially blind people better understand their surroundings and cope with the dynamic world around them.

The Minimum Viable Product (MVP) will let users leverage an image-captioning architecture to generate real-time insight into their surroundings, while natural language processing is used to speak the descriptions aloud in a clear, natural way. The cornerstone of the application is its user interface, designed to give users a smooth experience through its ease of handling and use.
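As a rough illustration of this caption-then-speak pipeline, here is a minimal prototype-level sketch in Python. It assumes an off-the-shelf Hugging Face captioning model (`nlpconnect/vit-gpt2-image-captioning`) and `pyttsx3` for offline text-to-speech; these stand in for the app's own captioning architecture and on-device speech engine, which are not specified here.

```python
# Prototype sketch: caption an image, then speak the caption aloud.
# NOTE: the model and TTS library below are illustrative assumptions,
# not necessarily what the HelpingHand app itself uses.
from transformers import pipeline
import pyttsx3


def describe_image(image_path: str) -> str:
    """Generate a short natural-language caption for the given image."""
    captioner = pipeline(
        "image-to-text",
        model="nlpconnect/vit-gpt2-image-captioning",
    )
    result = captioner(image_path)
    return result[0]["generated_text"]


def speak(text: str) -> None:
    """Read the caption aloud with an offline text-to-speech engine."""
    engine = pyttsx3.init()
    engine.setProperty("rate", 150)  # slightly slower speech for clarity
    engine.say(text)
    engine.runAndWait()


if __name__ == "__main__":
    # "surroundings.jpg" is a placeholder for a frame captured by the phone camera.
    caption = describe_image("surroundings.jpg")
    speak(caption)
```

In the actual app the same two steps would run on-device (for example, a converted mobile-friendly model plus the platform's built-in text-to-speech), but the structure of the pipeline is the same: capture, caption, speak.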

For this project, we will be collaborating across several domains, including:

This will be an enriching experience for everyone on the team.

How our project improves accessibility for visually impaired people

Try it out

Here is a link to the APK so you can try it out: https://drive.google.com/file/d/1VjqDbLCVj_YqMb_hoIbc8naaDzEEYeKj/view?usp=sharing

Link to GitHub Repo
Link to Demo Video