
πŸ‘οΈ GuardianEye – Smart Eyewear for the Visually Impaired

An AI-powered, sensor-integrated wearable device for obstacle detection, text recognition, navigation, and real-time assistance, built to empower visually impaired individuals with safe, independent mobility.



## 🔍 Overview

GuardianEye is a multi-sensory smart eyewear system designed to assist visually impaired individuals in navigating their surroundings confidently and independently. By combining advanced AI, computer vision, and sensor fusion, the device offers:

- Real-time obstacle detection
- Currency note recognition
- Printed text reading via OCR
- Multi-language voice feedback
- Location tracking for caregivers
- Hands-free voice command control
- Emergency alert system with haptic and audio feedback

## 🚀 Key Features

| Feature | Description |
| --- | --- |
| 🛣️ Real-Time Obstacle Detection | Detects and classifies static/dynamic obstacles using LiDAR + YOLO |
| 💬 Multi-Language Voice Output | NLP-based output in multiple languages for localized interaction |
| 📷 Optical Character Recognition (OCR) | Reads text from newspapers, signboards, etc., and converts it to speech |
| 🎙️ Voice Command System | Microphone-powered hands-free interaction via natural language |
| 💸 Currency Note Detection | Identifies denominations using computer vision to prevent fraud |
| 🧭 GPS Location Tracking | Enables family members to track the user's location remotely |
| 🤖 Person & Object Recognition | Recognizes known individuals and frequently seen objects |
| 📳 Haptic Feedback | Vibrates for obstacle alerts, turns, or important environmental cues |
| 🚨 Emergency Beep Alert | Sounds a loud beep when immediate danger is detected |
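The obstacle-alert and emergency-beep features above boil down to mapping a forward distance reading to a feedback level. A minimal sketch of that mapping follows; the thresholds and function name are illustrative assumptions, not values from the project.

```python
# Illustrative sketch: mapping a forward LiDAR distance to a feedback level.
# The thresholds (0.5 m / 1.5 m / 3.0 m) are made-up example values.

def classify_alert(distance_m: float) -> str:
    """Map a forward LiDAR distance (metres) to an alert level."""
    if distance_m < 0.5:
        return "emergency_beep"   # immediate danger: loud audio alert
    if distance_m < 1.5:
        return "strong_haptic"    # close obstacle: strong vibration
    if distance_m < 3.0:
        return "gentle_haptic"    # obstacle ahead: gentle cue
    return "none"                 # path clear
```

In a real device the returned level would drive the vibration motor or speaker; keeping the decision in one pure function makes the safety behaviour easy to unit-test.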

## 🧠 Technologies Used

- 🧠 **AI Models:** YOLOv8 (object detection), Tesseract (OCR), NLP (speech-to-text / text-to-speech)
- 🌐 **Communication:** Bluetooth Low Energy (BLE)
- 🔊 **Audio Feedback:** Bone conduction speakers
- 📷 **Sensors:** LiDAR, Pi camera, GPS
- 📦 **Hardware Platform:** Microcontroller (Raspberry Pi), rechargeable battery, microphone
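The multi-language voice output in this stack amounts to choosing a localized phrase template before handing it to text-to-speech. A minimal sketch follows; the phrases, language codes, and function name are assumptions for illustration (the Hindi strings in particular are placeholder translations, not project copy).

```python
# Illustrative sketch of multi-language voice feedback: select a localized
# template for a detected event. All phrases and codes are example values.

MESSAGES = {
    "en": {
        "obstacle": "Obstacle ahead in {d:.1f} metres.",
        "currency": "This is a {note} rupee note.",
    },
    "hi": {
        "obstacle": "{d:.1f} मीटर आगे बाधा है।",
        "currency": "यह {note} रुपये का नोट है।",
    },
}

def feedback_text(event: str, lang: str = "en", **kwargs) -> str:
    """Return the localized phrase that would be sent to text-to-speech."""
    table = MESSAGES.get(lang, MESSAGES["en"])  # fall back to English
    return table[event].format(**kwargs)
```

On-device, the returned string would be passed to the TTS engine; keeping templates in a plain dictionary makes adding a language a data change rather than a code change.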

## 🧰 System Architecture

![GuardianEye system architecture diagram](https://github.com/user-attachments/assets/534d884c-b585-4151-a018-d6377e279c50)
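An architecture like this has several subsystems (obstacle detection, OCR, navigation) competing for one audio channel, so some arbiter must decide what the user hears first. A minimal sketch of such an arbiter follows; the event names and priority ordering are assumptions, not taken from the project.

```python
# Illustrative sketch of event arbitration: safety-critical feedback should
# always pre-empt convenience features. Names and priorities are assumptions.

PRIORITY = {"emergency": 0, "obstacle": 1, "navigation": 2, "ocr": 3}

def next_announcement(pending_events: list) -> "str | None":
    """Return the highest-priority pending event, or None if idle."""
    if not pending_events:
        return None
    # Lower priority number wins; unknown events sort last.
    return min(pending_events, key=lambda e: PRIORITY.get(e, 99))
```

Centralizing this choice in one function keeps the "emergency beats everything" rule auditable, instead of scattering it across subsystems.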

## 🎯 Use Cases

- Navigate urban streets, stairs, and corridors
- Recognize people at work or home
- Read public signs or packaging labels
- Detect currency during monetary exchange
- Alert surroundings in case of emergencies
- Stay connected with caregivers for safety
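The caregiver-tracking use case can be sketched as a geofence check on GPS fixes: compute the great-circle distance from a home point and flag when the user leaves a radius. The coordinates, radius, and function names below are made-up example values, not project parameters.

```python
import math

# Illustrative geofence check for the caregiver-tracking use case.
# Coordinates and the 500 m radius are example values only.

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in metres."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def outside_geofence(user_fix, home_fix, radius_m=500.0):
    """True if the user's (lat, lon) fix is farther than radius_m from home."""
    return haversine_m(*user_fix, *home_fix) > radius_m
```

A caregiver dashboard could poll fixes over BLE or a phone link and raise a notification when `outside_geofence` first returns True.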

## 📷 Media & Demos

**Coming soon:** a demo video covering:

- ✅ Real-time object detection
- ✅ Text-to-speech conversion of printed material
- ✅ GPS tracking dashboard for caregivers
- ✅ Voice-command feature demo

## 🧪 Project Status

- Research and literature review
- Hardware component selection
- AI module prototyping (YOLO + OCR)
- NLP-based multi-language system
- Integration & testing
- Real-world trials with visually impaired users
- Final production and optimization

πŸ‘¨β€πŸ’» Contributors

- Anirudh Garg – Computer Vision (Team Lead)
- Aaradhya Sharma – Computer Vision
- Abhiroop Singh – IoT
- Rajveer Singh – Speech Processing, Documentation
- Bhavneet Kaur – NLP, Speech Processing

## 📜 License

This project is licensed under the MIT License.


## 🤝 Support & Contact

Have feedback, ideas, or want to collaborate? Reach out at:

"Let’s build a world where everyone can see possibilitiesβ€”even without sight." 🌍