Booth Id:
ROBO004T
Category:
Robotics and Intelligent Machines
Year:
2018
Finalist Names:
Kazantsev, Daniil (School: Municipal Lyceum #12)
Semenov, Daniil (School: Municipal Lyceum #12)
Abstract:
Worldwide, about 360 million people live with speech and hearing impairments. Despite the existence of sign language and methods for interpreting it, communication with the deaf-mute community remains difficult. The goal of our project is to create a device that recognizes American Sign Language (ASL) and translates it into English.

Myelofon uses a controller consisting of a set of infrared cameras and LEDs to register the positions of the user's hands; the data is then processed by a computer unit. The program is written in Python 2.7. To separate the hands from a complex environment, we apply a differential Sobel operator; this edge-detection step lets Myelofon pick the hands out of background noise. To reproduce spoken language, we use the Naive Bayes algorithm from the Natural Language Toolkit (NLTK), which separates the input data by the presence or absence of certain features. Sentence accuracy is ensured by NLTK morphological analyzers: Myelofon takes the sample and finds the most common two- and three-word expressions.

Hand-position information is updated 200 times per second, and Myelofon recognizes gestures with up to 98% precision (tested under normal lighting conditions). The algorithm is resistant to background interference within the controller's field of view (120 degrees in depth, 150 degrees in width). Myelofon simulates natural language with up to 95% accuracy (correctly formulated sentences in testing), greatly simplifying communication with the deaf-mute community. In the future we plan to continue expanding Myelofon's word bank and to make the device more comfortable for users.
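The abstract does not include the project's source code. As an illustration only, here is a minimal sketch of the Sobel gradient step used to separate a bright hand region from the background; the image, thresholds, and function names are assumptions, not the authors' actual implementation (which ran on infrared camera frames):

```python
import numpy as np

# Sobel kernels for horizontal (KX) and vertical (KY) gradients
KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
KY = KX.T

def sobel_magnitude(img):
    """Gradient magnitude of a 2-D grayscale image via the Sobel operator."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for y in range(h - 2):
        for x in range(w - 2):
            patch = img[y:y + 3, x:x + 3]
            gx = np.sum(KX * patch)  # horizontal gradient
            gy = np.sum(KY * patch)  # vertical gradient
            out[y, x] = np.hypot(gx, gy)
    return out

# Synthetic frame: a bright "hand" region on a dark background
frame = np.zeros((10, 10))
frame[3:7, 3:7] = 1.0

edges = sobel_magnitude(frame)
mask = edges > 0.5  # strong gradients outline the hand; flat regions stay near zero
```

The key property exploited here is that the Sobel response is large only at intensity boundaries, so uniform background and the uniform interior of the hand both drop out, leaving the hand's outline.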
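The abstract mentions classifying input by "the presence or absence of certain elements", which is the standard feature model of NLTK's Naive Bayes classifier. A minimal pure-Python sketch of that idea follows; the gesture feature names and labels are invented for illustration and are not the project's data:

```python
import math
from collections import defaultdict

def train(samples):
    """Train a presence/absence Naive Bayes model.

    samples: list of (feature_set, label) pairs.
    """
    label_counts = defaultdict(int)
    feat_counts = defaultdict(lambda: defaultdict(int))
    vocab = set()
    for feats, label in samples:
        label_counts[label] += 1
        for f in feats:
            feat_counts[label][f] += 1
            vocab.add(f)
    return label_counts, feat_counts, vocab

def classify(model, feats):
    """Return the label with the highest posterior log-probability."""
    label_counts, feat_counts, vocab = model
    total = sum(label_counts.values())
    best, best_lp = None, float("-inf")
    for label, n in label_counts.items():
        lp = math.log(n / total)  # log prior
        for f in vocab:
            # Laplace-smoothed P(feature present | label)
            p = (feat_counts[label][f] + 1) / (n + 2)
            lp += math.log(p if f in feats else 1 - p)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

# Hypothetical hand-shape features mapped to ASL letters
samples = [
    ({"index_up", "thumb_out"}, "L"),
    ({"index_up"}, "D"),
    ({"fist"}, "A"),
    ({"fist", "thumb_side"}, "A"),
]
model = train(samples)
```

For example, `classify(model, {"fist"})` picks the label whose training examples most often contained the `fist` feature.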
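The abstract also states that Myelofon "finds the most common two and three word expressions" in a sample. That step amounts to counting word n-grams; a minimal sketch using only the standard library (the sample text is invented):

```python
from collections import Counter

def common_ngrams(words, n, top=3):
    """Return the `top` most frequent n-word sequences in a token list."""
    grams = [tuple(words[i:i + n]) for i in range(len(words) - n + 1)]
    return Counter(grams).most_common(top)

# Toy sample of recognized words
tokens = "how are you how are you doing today how are you".split()

bigrams = common_ngrams(tokens, 2)   # most common two-word expressions
trigrams = common_ngrams(tokens, 3)  # most common three-word expressions
```

Frequent n-grams like these can then bias sentence assembly toward common phrasings, which is consistent with the abstract's claim that the system produces correctly formulated sentences.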
Awards Won:
Fourth Award of $500