Visual Sign Language Translator

Booth Id:
ROBO085

Category:
Robotics and Intelligent Machines

Year:
2021

Finalist Names:
Umurbekov, Ilyas (School: Nazarbayev Intellectual School of Physics and Mathematics in Kokshetau)

Abstract:
The number of people who suffer from hearing loss is increasing due to exposure to loud noise, the widespread use of in-ear audio devices, and old age. The goal of this project was to create a visual sign language translator that uses artificial intelligence to translate hand gestures captured by a camera into text and audio, and to control on-screen interactions with hand gestures. This is especially useful during the current pandemic, as it limits the need for touch. Neural networks were studied, trained, and modeled to track and recognize visual gestures. Once the study was complete, an algorithm for finding, tracking, and interpreting hand gestures was written, and the neural network was programmed and built. Multiple experiments were carried out to identify the optimal method of recognizing hand movements; the data were analyzed, and optimization and error correction were performed. A test application was created that translated gestures from a phone camera or a computer camera into text and audio in real time. The resulting text and audio messages could be read aloud or even sent to someone automatically via a messaging app. The study showed that the goal of using neural networks to detect gestures and emotions in a moving image is achievable: a deaf person was understood when the camera interpreted some of their gestures, although not all of the required gestures were added. The use of gestures to control computer functions, such as lowering the volume, was also successful.
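
The abstract does not name the tools or network architecture used. As an illustration only, a minimal sketch of the described real-time pipeline (camera capture, hand tracking, gesture classification, text and audio output) could be assembled in Python; the MediaPipe, OpenCV, and pyttsx3 libraries here are assumptions, and classify_gesture is a hypothetical placeholder for the trained neural network.

    import cv2                # assumed: OpenCV for camera capture
    import mediapipe as mp    # assumed: MediaPipe for hand landmark tracking
    import pyttsx3            # assumed: offline text-to-speech engine

    def classify_gesture(features):
        # Hypothetical stand-in for the trained neural network described in
        # the abstract: maps a 63-value vector (21 landmarks x 3 coordinates)
        # to a sign label, e.g. via model.predict([features]).
        return None

    def main():
        engine = pyttsx3.init()
        hands = mp.solutions.hands.Hands(max_num_hands=1,
                                         min_detection_confidence=0.5)
        cap = cv2.VideoCapture(0)  # default webcam
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB frames; OpenCV captures BGR
            results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.multi_hand_landmarks:
                points = results.multi_hand_landmarks[0].landmark
                features = [c for p in points for c in (p.x, p.y, p.z)]
                label = classify_gesture(features)
                if label is not None:
                    print(label)       # text output
                    engine.say(label)  # audio output
                    engine.runAndWait()
            cv2.imshow("Visual Sign Language Translator", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
        cap.release()
        cv2.destroyAllWindows()

    if __name__ == "__main__":
        main()

The gesture-based computer control mentioned at the end of the abstract could fit in the same loop by mapping a recognized control gesture to a media-key press instead of a spoken label.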