ISEF | Projects Database | Finalist Abstract

American Sign Language Classifier

Booth Id:

Systems Software


Finalist Names:
Yim, Thomas (School: Lakeside School)

Over 466 million people worldwide live with disabling hearing loss, yet learning and practicing sign language is not commonplace in society. This study therefore developed a sign language recognition prototype using machine learning and hand gesture recognition. The goal was to build software that recognizes letters of the American Sign Language (ASL) alphabet from hand gesture images with over 95 percent accuracy, a significant improvement over results obtained in previous studies. The software is a classification algorithm based on Convolutional Neural Networks (CNNs) applied to static images of ASL. Using a publicly available Kaggle dataset of 28 x 28 pixel images of ASL letters in the Modified National Institute of Standards and Technology (MNIST) format, a model was constructed to classify the images. The model was then adapted to another publicly available dataset of 200 x 200 pixel images, since that resolution is more applicable to real-life use than a 28 x 28 pixel MNIST training image. The MNIST classifier reached a training accuracy of 98.85 percent but a test accuracy of only 71.40 percent. This gap between training and test accuracy suggests overfitting, so the number of hidden layers, the number of neurons per layer, and the choice of activation functions were modified to reduce it. The final model reached a test accuracy of 96.8 percent. Even higher accuracy could be obtained if the classifier were trained to distinguish between letters such as K and P, which use the same hand shape but are oriented differently.
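To illustrate the kind of classifier the abstract describes, the sketch below shows a minimal CNN-style forward pass in pure NumPy: one convolutional layer with ReLU, flattening, and a softmax dense layer over a 28 x 28 grayscale input. The abstract does not specify the actual architecture, so the kernel count, layer sizes, and the 24-class output (the Sign Language MNIST dataset covers the static letters, omitting J and Z, which require motion) are assumptions; the weights here are random placeholders, not trained parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(image, kernel):
    """Naive 'valid' 2-D convolution (cross-correlation, as in CNNs)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def softmax(z):
    """Numerically stable softmax over the class scores."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def classify(image, kernels, weights, biases):
    """Forward pass: conv -> ReLU -> flatten -> dense -> softmax."""
    maps = [np.maximum(conv2d_valid(image, k), 0.0) for k in kernels]
    features = np.concatenate([m.ravel() for m in maps])
    return softmax(weights @ features + biases)

# Toy parameters (assumed, not from the study): four random 3x3 kernels
# and 24 output classes, one per static ASL letter.
n_classes = 24
kernels = [rng.standard_normal((3, 3)) * 0.1 for _ in range(4)]
feat_dim = 4 * 26 * 26            # four 26x26 feature maps after 'valid' conv
weights = rng.standard_normal((n_classes, feat_dim)) * 0.01
biases = np.zeros(n_classes)

image = rng.random((28, 28))      # stand-in for a 28x28 grayscale sign image
probs = classify(image, kernels, weights, biases)
pred = int(np.argmax(probs))      # index of the predicted letter class
```

In a real training setup the kernels, weights, and biases would be learned by backpropagation; the abstract's overfitting fix corresponds to tuning exactly these structural choices (layer count, neurons per layer, activation functions).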