Booth Id:
BEHA081T
Category:
Behavioral and Social Sciences
Year:
2021
Finalist Names:
Barrett, Arjun (School: The Harker School)
Lan, Alexander (School: The Harker School)
Abstract:
Autism Spectrum Disorder (ASD) is a neurodevelopmental disorder that affects one in fifty-four individuals born in the US. ASD can hinder one’s ability to recognize emotions, causing difficulties with interpersonal relationships and social engagement. We developed neural networks that use a new visual modality, a diversified dataset, advanced preprocessing techniques, and deeper CNN and ResNet architectures to recognize emotions on video conferencing platforms. To preprocess the data, we converted the audio input to mel spectrograms, which capture frequency changes over time, and used Haar cascade classifiers to extract faces from the visual input. We trained and tested our models in Python using the Keras API for TensorFlow. The audio models achieved only 57% accuracy, owing to the wider variety of phrases and actors in the new IEMOCAP and CREMA-D datasets. However, our video models achieved 86% accuracy when classifying emotions into positive and negative categories and 76% when classifying them as angry, sad, neutral, or happy. We developed a novel duplex N-categorical classification combination and remapping algorithm (DNCCRA) to convert model outputs into emotional indicators comprising statistics, colors, and emojis. To validate real-world performance, we built an app that integrates with Zoom, the most widely used medium for social interaction during COVID-19, to display emotion information for all participants. We received positive feedback from our ASD participants and incorporated their suggestion to include emojis in the app. Our production-ready platform can help millions of individuals with ASD train their emotion-recognition skills and integrate more easily into social situations.
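The abstract does not spell out how DNCCRA maps model outputs to indicators, so the following is only a minimal sketch of the general idea: collapsing a 4-way emotion probability vector (angry, sad, neutral, happy) into a coarse positive/negative polarity plus a color and emoji. The class order, polarity grouping, and indicator choices here are all hypothetical assumptions, not the authors' actual algorithm.

```python
# Hypothetical sketch, NOT the authors' DNCCRA: remap a 4-class softmax
# output into a coarse emotional indicator (polarity, color, emoji).
# The class ordering and indicator tables below are assumptions.

EMOTIONS = ("angry", "sad", "neutral", "happy")
NEGATIVE = {"angry", "sad"}  # assumed polarity grouping

INDICATORS = {
    "angry":   ("red",   "😠"),
    "sad":     ("blue",  "😢"),
    "neutral": ("gray",  "😐"),
    "happy":   ("green", "😀"),
}

def remap(probs):
    """Map one 4-way probability vector to (emotion, polarity, color, emoji)."""
    if len(probs) != len(EMOTIONS):
        raise ValueError("expected one probability per emotion class")
    top = max(range(len(probs)), key=probs.__getitem__)  # argmax class
    emotion = EMOTIONS[top]
    polarity = "negative" if emotion in NEGATIVE else "positive"
    color, emoji = INDICATORS[emotion]
    return emotion, polarity, color, emoji

print(remap([0.1, 0.1, 0.2, 0.6]))  # ('happy', 'positive', 'green', '😀')
```

In a real pipeline such a remapping would run on the per-frame (or per-utterance) model outputs before rendering the statistics, colors, and emojis in the meeting overlay.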
Awards Won:
Second Award of $2,000