This project aims to develop a technology-based solution that enables people with physical disabilities to perform motor tasks. In its initial stage, this was realized by using electroencephalography to control a humanoid robot for object manipulation. A four-step methodology was used: 1) a facial expression (jaw clenching) was used to generate brain signals that embodied the user’s intent; 2) signal processing techniques, including Butterworth bandpass filtering, the Fourier transform, and frequency spectrum averaging, were employed to classify input signals, and a binary classification system distinguishing between active and neutral states was extended with a time-duration tag to permit an n-class system; 3) a graph-based model was designed and implemented to map the classification data onto robot instruction sets; and 4) a life-size prototype humanoid robot was designed and built to perform the intended tasks on behalf of the user. Preliminary results suggest a high level of accuracy (>95%) and consistency (>80%) between the user’s intent and the observed behavior of the robot. However, further research is needed to increase the system’s functionality and to improve its performance.
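The active/neutral classification in step 2 can be sketched as follows. This is a minimal illustration under stated assumptions, not the project's actual pipeline: the passband (20–45 Hz), sampling rate, and decision threshold are hypothetical values chosen for the example (jaw clenching produces broadband high-frequency artifacts in EEG), and a naive DFT band-power average stands in for the Butterworth-filter-plus-FFT chain described above.

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Average DFT-bin magnitude within [f_lo, f_hi] Hz (naive DFT)."""
    n = len(signal)
    total, count = 0.0, 0
    for k in range(n // 2):
        freq = k * fs / n
        if f_lo <= freq <= f_hi:
            # Real and imaginary parts of the k-th DFT coefficient.
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            total += math.hypot(re, im) / n
            count += 1
    return total / count if count else 0.0

def classify(signal, fs, threshold, f_lo=20.0, f_hi=45.0):
    """Binary decision from averaged band power: 'active' vs 'neutral'."""
    return "active" if band_power(signal, fs, f_lo, f_hi) > threshold else "neutral"
```

Attaching a duration tag to consecutive "active" windows (e.g. short vs long clenches) would then extend this binary output to an n-class command set, as the abstract describes.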
Serving Society Through Science: First Award of $500
Philip V. Streich Memorial Award to the London International Youth Science Forum
First Award of $5,000
Intel ISEF Best of Category Award of $5,000