Description
While exploring ways to improve communication accessibility for the Indian deaf community, I came across a project called OmniBridge, which uses computer vision to translate American Sign Language (ASL) in real time. Inspired by this, I propose adding a similar feature to SUNVA AI by training a model on Indian Sign Language (ISL) using publicly available ISL datasets. This could help bridge the communication gap for users who prefer signing over typing.
Expected Behavior
- A camera-based input interface allows users to sign using ISL.
- The system interprets ISL signs in real-time and converts them into simplified text on screen.
- Optionally, this text can then be converted to speech using SUNVA AI's existing TTS features (a stand-in sketch follows below).
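For the optional speech step, any off-the-shelf TTS engine could stand in during a proof of concept until the output is wired into SUNVA AI's own TTS pipeline. This is only an illustration using pyttsx3, not SUNVA's actual API:

```python
# Illustration only: speak the recognised ISL text with an offline TTS engine.
# pyttsx3 is a stand-in; the real integration would call SUNVA AI's existing
# TTS feature instead.
import pyttsx3

def speak(text: str) -> None:
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()

speak("Hello, how are you?")  # example output from the ISL recogniser
```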
Current Behavior
SUNVA AI currently supports speech-to-text (STT), text simplification, highlighting, and TTS for typed responses. It does not yet support input through sign language.
Possible Solution
- Explore training a lightweight computer vision model on ISL gesture datasets from data.gov.in.
- Start with a limited set of frequently used ISL gestures for a proof of concept.
- Integrate the model into the SUNVA frontend to allow real-time ISL input (see the sketch after this list).
- Engage with the deaf community to validate the usefulness and ease of use of this feature.
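A minimal sketch of what the real-time path could look like, assuming MediaPipe Hands for landmark extraction and a scikit-learn k-nearest-neighbours classifier trained on flattened landmark vectors from an ISL dataset. The `load_isl_landmark_dataset()` helper and the gesture labels are hypothetical placeholders, not an existing SUNVA component:

```python
# Sketch: real-time ISL gesture recognition from webcam hand landmarks.
# Assumes: pip install mediapipe opencv-python scikit-learn numpy
# load_isl_landmark_dataset() is a hypothetical helper that would return
# (X, y): flattened 21x3 hand-landmark vectors and their gesture labels,
# extracted offline from an ISL dataset (e.g. the data.gov.in corpora).
import cv2
import numpy as np
import mediapipe as mp
from sklearn.neighbors import KNeighborsClassifier


def landmarks_to_vector(hand_landmarks) -> np.ndarray:
    """Flatten MediaPipe's 21 hand landmarks into a 63-dim feature vector."""
    return np.array(
        [c for lm in hand_landmarks.landmark for c in (lm.x, lm.y, lm.z)],
        dtype=np.float32,
    )


# --- offline training step (hypothetical data loader) ---
# X, y = load_isl_landmark_dataset()              # shape (n_samples, 63), labels
# clf = KNeighborsClassifier(n_neighbors=5).fit(X, y)
clf = None  # replace with the trained classifier above

hands = mp.solutions.hands.Hands(
    static_image_mode=False,
    max_num_hands=1,
    min_detection_confidence=0.6,
)

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB; OpenCV captures BGR.
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks and clf is not None:
        vec = landmarks_to_vector(results.multi_hand_landmarks[0])
        label = clf.predict(vec.reshape(1, -1))[0]
        cv2.putText(frame, str(label), (10, 40),
                    cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 255, 0), 2)
    cv2.imshow("ISL input (proof of concept)", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

Recognised labels could then be passed through SUNVA's existing simplification and TTS stages, the same way typed text is handled today.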
Additional Context
As a beginner, I’m unsure how to train the model or integrate it, so I’m raising this here for guidance, suggestions, and possible collaboration. I believe this could be a meaningful step forward for SUNVA AI and its impact on the Indian deaf community.