The purpose of this project was to design and build a humanoid robot that performs sign language as a way of educating students of American Sign Language. Testing consisted primarily of a multiple-choice feedback survey in which participants tried to identify the words being signed; the robot's gestures were triggered using voice recognition software called BitSophia. The main difference between prototype one and prototype two is the construction of the shoulder: prototype one's shoulder was made of PVC pipe and an aluminum bracket connecting it to the mannequin used for the main section of the body, while prototype two's shoulder was a 3D-printed piece forming a ball-and-socket joint, with the spherical component on the bicep piece.

After testing the first prototype before a group of ten people, the identification percentages were: the word big, 100%; the word afraid, 25% (some participants were split between answers, so both were counted); the word cold, 25%; and the word fly, 75%. The results for the second prototype were: the word mouse, 60%; the word fly, 60%; the word sad, 40%; the word afraid, 80%; the word big, 60%; and the word why, 30%; the numbers one through nine were each identified 100% of the time. It should be noted that more words were tested with the second prototype because the robot's mobility had increased; however, this round of testing took place in noisier conditions, which unfortunately could not be fixed. Therefore, if this project were to be tested again, quieter conditions would be needed.
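The identification percentages reported above follow from simple tallies of survey responses. A minimal sketch of that arithmetic is below; the tallies shown are illustrative placeholders, not the project's actual raw data, and split answers are given half credit per choice, consistent with counting both answers:

```python
def identification_rate(correct, total):
    """Percentage of participants who correctly identified the signed word."""
    return 100.0 * correct / total

# Illustrative tallies for a ten-participant round (hypothetical values).
# A participant split between two answers contributes 0.5 to each tally.
tallies = {"big": 10, "afraid": 2.5, "cold": 2.5, "fly": 7.5}
rates = {word: identification_rate(count, 10) for word, count in tallies.items()}
```

With these placeholder tallies, `rates` reproduces percentages of the same form as those reported (e.g., 100% and 75%).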
Arizona State University: For the project that applies computer science to further inquiry in a field other than computer science
Google CS Connect Award