Real-time American Sign Language (ASL) translator powered by computer vision and machine learning, enabling both Speech-to-Sign and Sign-to-Speech communication.
🚀 This project was built during the ADU Hackathon in May 2025.
ASL SignBridge is an interactive application that bridges communication gaps between sign language users and non-signers. Using computer vision and machine learning, this application:
- Recognizes Hand Gestures in real-time using a webcam
- Converts Signs to Speech through a text-to-speech engine
- Provides Visual Feedback showing detected gestures and confidence levels
This tool is designed to make communication more accessible for deaf and hard-of-hearing individuals by providing immediate audio translation of common sign language gestures.
- Real-time Hand Tracking using MediaPipe's hand landmark detection
- Machine learning-based Gesture Classification with TensorFlow
- Text-to-Speech Conversion for immediate audio feedback
- Gesture Stability Detection to prevent false positives
- Intuitive UI with Streamlit for easy interaction
- Adjustable Sensitivity Controls for different environments and users
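A minimal sketch of how such a Streamlit front end might be wired up (the widget names and layout here are assumptions for illustration, not the project's actual code):

```python
# Illustrative Streamlit skeleton; names and layout are assumptions, not the app's exact code.
import cv2
import streamlit as st

st.title("ASL SignBridge")

# Sidebar controls corresponding to the adjustable sensitivity settings.
det_conf = st.sidebar.slider("Min Detection Confidence", 0.0, 1.0, 0.7)
trk_conf = st.sidebar.slider("Min Tracking Confidence", 0.0, 1.0, 0.5)

frame_slot = st.empty()    # updated with each webcam frame
status_slot = st.empty()   # shows the detected gesture and its confidence

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Hand detection, classification, and stability checking would run here.
    frame_slot.image(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
```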
- Hand Detection: Uses MediaPipe's hand tracking to identify and track 21 key points on the hand
- Preprocessing: Normalizes hand landmark coordinates to be position and scale-invariant
- Classification: Passes normalized coordinates to a trained neural network model
- Stability Checking: Ensures consistent gesture detection before triggering speech output
- Speech Synthesis: Converts recognized gestures to spoken words using pyttsx3
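As an illustration, a stripped-down version of this pipeline might look like the following. The model path and function names are assumptions, and the real app only speaks after the stability check passes; the label list matches the supported gestures below.

```python
# Illustrative sketch of the recognition pipeline, not the project's exact code.
import cv2
import numpy as np
import mediapipe as mp
import pyttsx3
import tensorflow as tf

hands = mp.solutions.hands.Hands(max_num_hands=1)
model = tf.keras.models.load_model("model/keypoint_classifier.keras")  # hypothetical path
labels = ["Hello", "How Are", "You Are", "I am", "Am", "Good",
          "Thirsty", "Thank you", "Beautiful", "Goodbye", "You"]
tts = pyttsx3.init()

def normalize_landmarks(landmarks):
    """Make the 21 (x, y) landmark points wrist-relative and scale-invariant."""
    pts = np.array([[lm.x, lm.y] for lm in landmarks], dtype=np.float32)
    pts -= pts[0]                      # position invariance: relative to the wrist
    pts /= (np.abs(pts).max() + 1e-6)  # scale invariance
    return pts.flatten()

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        features = normalize_landmarks(result.multi_hand_landmarks[0].landmark)
        probs = model.predict(features[np.newaxis, :], verbose=0)[0]
        gesture = labels[int(np.argmax(probs))]
        tts.say(gesture)  # in the app, this fires only after the stability check passes
        tts.runAndWait()
cap.release()
```

Normalizing relative to the wrist means the same gesture is recognized regardless of where the hand appears in the frame or how far it is from the camera.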
The current model recognizes the following gestures/phrases:
- Hello
- How Are
- You Are
- I am
- Am
- Good
- Thirsty
- Thank you
- Beautiful
- Goodbye
- You
- Python 3.8+
- Webcam
- Clone the repository

```bash
git clone https://github.com/BuildersFTW/ASL-SignBridge.git
cd ASL-SignBridge
```

- Create and activate a virtual environment (recommended)

```bash
python -m venv venv
source venv/bin/activate   # On Windows: venv\Scripts\activate
```

- Install dependencies

```bash
pip install -r requirements.txt
```

- Run the application

```bash
streamlit run app.py
```

- Launch the application using the command above
- Adjust sensitivity settings in the sidebar if needed
- Position your hand within view of the webcam
- Perform sign language gestures from the supported list
- The application will display the detected gesture and speak it aloud when confident
- Min Detection Confidence: Adjust the threshold for initial hand detection
- Min Tracking Confidence: Adjust the threshold for tracking between frames
- Gesture Stability Threshold: Set how many consistent frames are needed before speaking
- Stability Percentage: Set what percentage of recent detections must agree
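A sketch of how these settings might map onto code (the constants and function names are illustrative, not the app's actual variables): the two confidence settings feed MediaPipe's Hands solution directly, while the two stability settings drive a simple vote over recent frames.

```python
# Illustrative mapping of the settings to code; names are assumptions.
from collections import deque

import mediapipe as mp

# Detection/tracking confidence map directly onto MediaPipe's Hands solution.
hands = mp.solutions.hands.Hands(
    min_detection_confidence=0.7,   # Min Detection Confidence
    min_tracking_confidence=0.5,    # Min Tracking Confidence
)

STABILITY_THRESHOLD = 15    # Gesture Stability Threshold: frames kept in the history
STABILITY_PERCENTAGE = 0.8  # Stability Percentage: fraction of frames that must agree

recent = deque(maxlen=STABILITY_THRESHOLD)

def stable_gesture(prediction):
    """Return a gesture only once it dominates the recent frame history."""
    recent.append(prediction)
    if len(recent) < STABILITY_THRESHOLD:
        return None
    top = max(set(recent), key=recent.count)
    if recent.count(top) / len(recent) >= STABILITY_PERCENTAGE:
        return top
    return None
```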
The gesture recognition model was trained on custom-collected data using TensorFlow:
- Data Collection: Used the `training.py` script to collect hand landmark data for different gestures
- Preprocessing: Normalized coordinates to make the model position and scale-invariant
- Model Architecture: Implemented a neural network with several dense layers and dropout for regularization
- Training: Used sparse categorical cross-entropy loss and early stopping
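For illustration, a Keras model along these lines might look as follows; the layer sizes and hyperparameters are assumptions, and the class count matches the supported gesture list above.

```python
# A sketch of the kind of model described above; layer sizes are assumptions.
import tensorflow as tf

NUM_FEATURES = 42   # 21 landmarks x (x, y)
NUM_CLASSES = 11    # one per supported gesture

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(NUM_FEATURES,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

early_stop = tf.keras.callbacks.EarlyStopping(patience=10, restore_best_weights=True)
# model.fit(X_train, y_train, validation_data=(X_val, y_val),
#           epochs=500, callbacks=[early_stop])
```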
To add new gestures:
- Run `python training.py`
- Press 'Enter' to switch to recording mode
- Press a letter key (a-z) to assign to your new gesture
- Perform the gesture multiple times (100+ samples recommended)
- Update the `keypoint_classifier_label.csv` file with your new gesture name
- Retrain the model using the Jupyter notebook `keypoint_classification.ipynb`
hand-gesture-recognition-using-mediapipe is licensed under the Apache 2.0 license - see the LICENSE file for details.

