The project enables speech-impaired people with laryngeal defects to speak. Every sound or word corresponds to a particular tongue and lip movement, so sensing these movements lets the device identify the sound the user intends to produce and then play an audio recording of that sound. To do this, an accelerometer mounted on a membrane PCB is fixed at the tip of the tongue, and a second sensor is placed on the lower lip. When the user moves the tongue and lips as if producing a sound (a speech-impaired person cannot produce the actual sound), the sensors, moving with the tongue and lip, generate an output pattern unique to that sound. The microcontroller records this output and sends it to a processing chip for digital signal processing. Once the signal is processed, an algorithm matches it to the intended sound and directs the speaker to play the corresponding recording. This is how sensing tongue and lip gestures can enable a speech-impaired person to speak.
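The pipeline described above (sense movement, match it to a known sound, play the recording) can be sketched in a few lines. This is a minimal illustration, not the project's actual implementation: the template values, the nearest-neighbor matching, and the function names are all hypothetical assumptions standing in for the real digital-signal-processing step.

```python
import math

# Hypothetical gesture templates: combined tongue (x, y, z) and lip (x, y, z)
# accelerometer readings recorded while the user mouths each sound.
# The numbers are illustrative only.
TEMPLATES = {
    "ah": (0.2, 0.9, 0.1, 0.0, 0.3, 0.1),
    "oo": (0.7, 0.1, 0.4, 0.5, 0.2, 0.3),
    "ee": (0.1, 0.4, 0.8, 0.2, 0.6, 0.0),
}

def classify_gesture(sample):
    """Return the sound whose template lies nearest (Euclidean distance)
    to the measured tongue+lip accelerometer sample."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(TEMPLATES, key=lambda sound: dist(TEMPLATES[sound], sample))

def play_recording(sound):
    # Placeholder: on the real device this would direct the speaker to
    # play the pre-recorded audio clip for the identified sound.
    print(f"Playing recording for '{sound}'")

# Example: a noisy sensor sample close to the "oo" template.
sample = (0.68, 0.12, 0.42, 0.48, 0.22, 0.28)
play_recording(classify_gesture(sample))
```

In practice the matching step would operate on a filtered time series from both sensors rather than a single reading, but the structure (classify the movement, then trigger the matching recording) stays the same.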
Fourth Award of $500