ISEF | Projects Database | Finalist Abstract

Automated Lip-Reading Technique for Speech Disabilities: A Novel Machine Learning Algorithm Using Microsoft Kinect Sensor and Hybrid Approach for Feature Extraction

Booth Id:
ROBO020T

Category:

Year:
2015

Finalist Names:
Eltaiebany, Ahmed
Mesbah, Ahmed

Abstract:
Speech disabilities affect more than 360 million people worldwide, according to the World Health Organization. Speech is the most natural means of human communication, but it is difficult for speech-impaired people who cannot produce a clear voice. Their main alternatives are writing and sign language, both of which place a burden on speaker and listener alike. We therefore focus on lip reading, which recognizes the meaning of an utterance from visual lip motion. Much existing research focuses only on helping deaf people improve their lip-reading skills so they can understand others; our work proposes a novel lip-reading system that helps mute people use their lips as a means of communication with hearing people. Recognizing the content of speech by observing the speaker's lip movements, known as 'lip-reading', has become a hot topic in human-computer interaction. The major difficulty in a lip-reading system is the extraction of visual speech descriptors. An automatic lip-reading system consists of two main modules: 1) a pre-processing module that extracts lip geometry information from the video sequence, and 2) a classification module that identifies the visual speech from the lip movements. We present a hybrid framework for lip reading that uses the Microsoft Kinect camera and the MS Kinect SDK to extract the coordinates of 18 feature points, which are then passed to Matlab for classification using K-nearest neighbors (KNN). Experimental results are included to confirm the effectiveness of the proposed system.
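The classification step described above can be sketched in a few lines. This is a minimal, illustrative KNN classifier, not the authors' Matlab implementation: the feature rows below are made-up stand-ins for a flattened vector of lip-landmark coordinates (the real system uses 18 Kinect feature points, i.e. 36 coordinate values), and the two class labels are hypothetical.

```python
from collections import Counter
import math

def knn_predict(train_X, train_y, query, k=3):
    """Classify a feature vector by majority vote among its k nearest neighbours."""
    # Rank training samples by Euclidean distance to the query vector,
    # then keep the indices of the k closest ones.
    nearest = sorted(range(len(train_X)),
                     key=lambda i: math.dist(train_X[i], query))[:k]
    # Majority vote over the labels of those neighbours.
    votes = Counter(train_y[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Toy data: each row stands in for a flattened lip-landmark vector;
# the labels "closed"/"open" are placeholder visual-speech classes.
train_X = [
    [0.1, 0.2, 0.1, 0.3],   # "closed" lip shape
    [0.2, 0.1, 0.2, 0.2],   # "closed"
    [0.9, 0.8, 0.7, 0.9],   # "open"
    [0.8, 0.9, 0.9, 0.8],   # "open"
]
train_y = ["closed", "closed", "open", "open"]

print(knn_predict(train_X, train_y, [0.85, 0.8, 0.8, 0.85], k=3))  # → open
```

With k = 3, the two nearest "open" samples outvote the single "closed" neighbour, which is the same majority-vote rule a full-scale KNN classifier would apply to the 36-dimensional Kinect feature vectors.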