ISEF | Projects Database | Finalist Abstract

Composing Music with Sign Language Pattern Recognition

Booth Id:
SOFT033I

Category:

Year:
2015

Finalist Names:
Mustafa, Amina

Abstract:
This project explored composing music with sign language pattern recognition, demonstrating that sign language can serve as an input method for a computer system, whether the task is composing music or translating a message. Supervised learning with labeled training data was used: templates of twelve hand gestures were created, each representing a keyboard or mouse shortcut needed to enter notes onto sheet music in the composition program Finale. The gesture set in this version represents the notes A, B, C, D, E, F, and G, along with quarter and half notes, a backspace that erases the last note entered, and stop and play motions. The system is written in Microsoft Visual Studio 2010 using the Intel OpenCV (Open Source Computer Vision) image-processing library and the EMGU C# wrapper for OpenCV. To measure how accurately the program recognizes the hand gestures, a side procedure within the program selects a command and performs it twenty times in a row; the program compares every frame of the live video against the templates and identifies the correct function to translate onto the sheet music. A chi-square two-way table test was conducted to compare the accuracy of the twelve hand gestures and to determine whether the program was consistent. If the P-value is greater than 0.05, one fails to reject the null hypothesis that accuracy is independent of gesture, since the observed differences fall within the range of acceptable deviation. The test in this project produced a P-value of 0.160981, so the accuracy of the gestures relative to one another supports the conclusion that the program is consistent.
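The core recognition step described above, comparing each live video frame against stored gesture templates, can be sketched conceptually. The project itself used C# with EMGU/OpenCV; the sketch below is a minimal, illustrative Python/NumPy version using normalized cross-correlation as the similarity measure (the abstract does not specify the exact matching metric, so that choice is an assumption), with hypothetical names like `classify` and `templates`.

```python
import numpy as np

def match_score(frame: np.ndarray, template: np.ndarray) -> float:
    """Normalized cross-correlation between a grayscale frame and a template
    of the same shape; 1.0 means a perfect match, values near 0 mean none."""
    f = frame - frame.mean()
    t = template - template.mean()
    denom = np.sqrt((f ** 2).sum() * (t ** 2).sum())
    return float((f * t).sum() / denom) if denom else 0.0

def classify(frame: np.ndarray, templates: dict) -> str:
    """Return the label of the gesture template that best matches the frame.

    `templates` maps gesture labels (e.g. "note A", "half note", "backspace")
    to template images; in the project there would be twelve such entries.
    """
    return max(templates, key=lambda label: match_score(frame, templates[label]))
```

In the real system, the winning label would be mapped to the corresponding keyboard or mouse shortcut and sent to Finale to place a note on the sheet music.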
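The chi-square two-way table test used to check consistency across the twelve gestures can also be sketched. A minimal version, assuming a gestures-by-outcome (correct/incorrect) count table; the counts in the test below are illustrative, not the project's data, and computing the P-value itself would additionally require the chi-square distribution (e.g. `scipy.stats.chi2`):

```python
import numpy as np

def chi_square_stat(table) -> tuple[float, int]:
    """Chi-square statistic and degrees of freedom for a two-way count table.

    Rows might be the twelve gestures and columns the correct/incorrect
    counts out of twenty trials each. Expected counts come from the row
    and column marginals under the null hypothesis of homogeneity.
    """
    table = np.asarray(table, dtype=float)
    row = table.sum(axis=1, keepdims=True)
    col = table.sum(axis=0, keepdims=True)
    expected = row @ col / table.sum()
    stat = ((table - expected) ** 2 / expected).sum()
    df = (table.shape[0] - 1) * (table.shape[1] - 1)
    return float(stat), df
```

A P-value above 0.05 for the resulting statistic, as in the project's 0.160981, means the per-gesture accuracies do not differ more than chance would allow, so one fails to reject the null hypothesis of consistency.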