ISEF | Projects Database | Finalist Abstract

Real-Time American Sign Language (ASL) Detection and Translation Using Kinect Sensor

Booth Id:
ROBO022

Category:
Robotics and Intelligent Machines

Year:
2022

Finalist Names:
Jaroonjetjumnong, Claire (School: Oregon Episcopal School)

Abstract:
People with hearing and speech impairments use sign language as their main form of communication, but it has the drawback that they cannot communicate with people who do not know sign language. This real-time American Sign Language recognition project uses computers as an intermediary to improve communication between signers and non-signers. The project compared the percentage of correct translations per gesture for a standard webcam and for a depth sensor (Kinect 2). The effectiveness of each prototype was measured by translation accuracy: the number of correct translations out of 100 trials per gesture, performed in 3 different environments by 5 people. Both final prototypes proved effective at identifying the gestures shown. In normal lighting, the Kinect 2 achieved an average accuracy of 100% and the standard webcam 99.20%; in low lighting, the figures were 98.20% and 20.80%, respectively. The 0.8-percentage-point difference in normal lighting is not significant, so under normal lighting the depth sensor offers only a slight, non-significant accuracy advantage over a standard webcam. In low-light conditions, however, a depth sensor is required for reliable gesture identification.
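The accuracy metric described above (correct translations out of 100 trials per gesture, aggregated across environments and signers) can be sketched as follows. This is a minimal illustration, not the project's actual code; the trial counts shown are made up for demonstration.

```python
def gesture_accuracy(correct: int, trials: int = 100) -> float:
    """Fraction of correct translations for a single gesture."""
    return correct / trials

def mean_accuracy(correct_counts: list[int], trials: int = 100) -> float:
    """Average per-gesture accuracy across gestures (or across
    environments and signers, if counts are grouped that way)."""
    return sum(gesture_accuracy(c, trials) for c in correct_counts) / len(correct_counts)

# Illustrative counts only (chosen to mirror the reported 99.20% webcam average):
webcam_normal = [100, 99, 98, 100, 99]  # correct out of 100 trials per gesture
print(f"{mean_accuracy(webcam_normal):.2%}")  # 99.20%
```

A two-decimal percentage like the abstract's 99.20% and 98.20% follows directly from averaging whole-number counts over 100-trial runs.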