Handy makes defining and recognizing custom hand poses in WebXR a snap!
Real-time AI-powered translation of American Sign Language fingerspelling (sign to text)
A sign language detection system based on computer vision and deep learning, using the OpenCV and TensorFlow/Keras frameworks.
Sign Language Recognition using WiFi and Convolutional Neural Networks
Real-time fingerspelling video recognition achieving 74.4% letter accuracy on ChicagoFSWild+
Word-Level American Sign Language Recognition and Translation to Spoken Language
A CNN-based human-computer interface for American Sign Language recognition for hearing-impaired individuals
The application is developed in Python 3 with OpenCV and neural-network concepts; it is trained on Darknet-53 and uses YOLOv3 for real-time object detection. It detects the one-handed American Sign Language representations of the numbers 1-10 shown by the user in real time.
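A minimal sketch of how such a YOLOv3 pipeline is typically wired up through OpenCV's DNN module; the config/weights file names, class list, and confidence threshold below are placeholders for illustration, not this repository's actual assets.

```python
# Hypothetical sketch: real-time ASL digit detection with a Darknet/YOLOv3 model
# loaded via OpenCV's DNN module. File names and threshold are placeholders.
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3-asl.cfg", "yolov3-asl.weights")  # assumed paths
classes = [str(n) for n in range(1, 11)]           # one-handed digits 1-10
out_layers = net.getUnconnectedOutLayersNames()

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    h, w = frame.shape[:2]
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    for output in net.forward(out_layers):
        for det in output:                          # det = [cx, cy, bw, bh, obj, class scores...]
            scores = det[5:]
            cls = int(np.argmax(scores))
            if scores[cls] > 0.5:                   # assumed confidence threshold
                cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
                x, y = int(cx - bw / 2), int(cy - bh / 2)
                cv2.rectangle(frame, (x, y), (x + int(bw), y + int(bh)), (0, 255, 0), 2)
                cv2.putText(frame, classes[cls], (x, y - 5),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.imshow("ASL digits", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```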
Real-time American Sign Language (ASL) letters detection, via PyTorch, OpenCV, YOLOv5, Roboflow and LabelImg 🤟
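A minimal sketch of loading a custom-trained YOLOv5 model through torch.hub and running it on webcam frames; the weights file name is an assumption, not this repository's actual checkpoint.

```python
# Hypothetical sketch: real-time ASL letter detection with custom YOLOv5 weights.
# "asl_best.pt" is a placeholder path.
import cv2
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="asl_best.pt")  # assumed weights
model.conf = 0.5  # confidence threshold

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    results = model(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))   # YOLOv5 expects RGB
    for *box, conf, cls in results.xyxy[0].tolist():          # [x1, y1, x2, y2, conf, cls]
        x1, y1, x2, y2 = map(int, box)
        cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
        cv2.putText(frame, model.names[int(cls)], (x1, y1 - 5),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.8, (0, 255, 0), 2)
    cv2.imshow("ASL letters", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```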
An Android application that uses gesture recognition to understand the letters of the American Sign Language alphabet
American Sign Language Detection Android App using SSD_Mobilenet | Google Colab
A communication application for holding complete conversations with deaf people
Sign language recognition using MediaPipe's multi-hand tracking solution, combined with speech recognition and text-to-speech, delivered as an Android app for deaf and mute users
A simple sign language recognizer using SVM
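A minimal sketch of what an SVM-based recognizer like this commonly looks like with scikit-learn; the folder-per-letter dataset layout, pixel features, and hyperparameters are assumptions for illustration.

```python
# Hypothetical sketch of a simple SVM sign-language recognizer with scikit-learn.
# Assumes a folder-per-label dataset of grayscale hand images ("asl_dataset/A", ...).
import os
import cv2
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def load_dataset(root="asl_dataset", size=(64, 64)):          # assumed path and size
    X, y = [], []
    for label in sorted(os.listdir(root)):
        for name in os.listdir(os.path.join(root, label)):
            img = cv2.imread(os.path.join(root, label, name), cv2.IMREAD_GRAYSCALE)
            if img is None:
                continue
            X.append(cv2.resize(img, size).flatten() / 255.0)  # simple pixel features
            y.append(label)
    return np.array(X), np.array(y)

X, y = load_dataset()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
clf = SVC(kernel="rbf", C=10, gamma="scale")                   # assumed hyperparameters
clf.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```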
🧏 Detection of the alphabets in American Sign Language (ASL).
A real-time American Sign Language (ASL) detection system using computer vision and deep learning. This project uses a combination of OpenCV, MediaPipe, and TensorFlow to detect and classify ASL hand signs from camera input. The system can recognize a wide range of ASL characters, and can be used to facilitate communication for sign language users.
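A minimal sketch of this kind of OpenCV + MediaPipe + TensorFlow pipeline: MediaPipe extracts 21 hand landmarks per frame and a Keras classifier maps them to ASL characters. The model file and label list below are placeholders, not this project's actual assets.

```python
# Hypothetical sketch: ASL character classification from MediaPipe hand landmarks.
# "asl_landmark_model.h5" and LABELS are placeholders for illustration.
import cv2
import numpy as np
import mediapipe as mp
import tensorflow as tf

LABELS = [chr(c) for c in range(ord("A"), ord("Z") + 1)]        # assumed label set
model = tf.keras.models.load_model("asl_landmark_model.h5")     # assumed model file

mp_hands = mp.solutions.hands
cap = cv2.VideoCapture(0)
with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.7) as hands:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            lm = results.multi_hand_landmarks[0].landmark
            feats = np.array([[p.x, p.y, p.z] for p in lm]).flatten()[None, :]  # 63 features
            probs = model.predict(feats, verbose=0)[0]
            letter = LABELS[int(np.argmax(probs))]
            cv2.putText(frame, letter, (30, 60),
                        cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 255, 0), 3)
        cv2.imshow("ASL", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
cap.release()
cv2.destroyAllWindows()
```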
Recognise and Learn American Sign Language
A computer-vision project that uses a CNN to translate American Sign Language (ASL) to text and speech
Translate American Sign Language using your iPhone's camera!