ISEF | Projects Database | Finalist Abstract

SoundCare: Alerting Emergency Situations to People with Hearing Impairment Using Deep Learning

Booth Id:

Category: Robotics and Intelligent Machines

Finalist Names:
Kang, Taeuk (School: Korea International School, Jeju Campus)

Conventional methods of warning of emergencies rely heavily on auditory signals and are therefore inaccessible to the 300 million people worldwide with hearing impairment. This study aimed to produce software that alerts people with hearing impairment to emergency situations by listening to the background environment for triggers, such as sirens and alerts, and classifying them as emergencies with a deep-learning-based audio classification system. Conventional deep-learning sound classification has limited applicability in real life: the environmental noise each user is exposed to differs, and a model cannot be trained on an individual user's environment before deployment, which limits its post-deployment accuracy. To overcome this, a novel continuous augmentation system was developed that regularly samples noise from the user's environment and uses those samples to generate new training datasets for each user. Transfer learning was then applied to quickly produce personalized models adapted to each user's environment, improving both the user experience and the accuracy of the generated model in real-life settings. This study showed that emergency auditory signals can be classified with accuracy of up to 90%. Two applications using the model trained in this study were developed to demonstrate its real-life usability.
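
The continuous-augmentation step described above can be sketched as follows. This is a minimal illustrative sketch using NumPy: the function names, the SNR range, and the mixing strategy are assumptions for illustration, not the study's actual implementation. The idea is to mix regularly sampled user-environment noise into clean emergency-sound clips at random signal-to-noise ratios, producing a personalized training set.

```python
import numpy as np

def mix_at_snr(clean: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix a noise clip into a clean clip at a target signal-to-noise ratio (dB)."""
    # Loop/trim the noise so it matches the clean clip's length.
    reps = int(np.ceil(len(clean) / len(noise)))
    noise = np.tile(noise, reps)[: len(clean)]
    # Scale the noise so the clean/noise power ratio matches snr_db.
    clean_power = np.mean(clean ** 2)
    noise_power = np.mean(noise ** 2)
    target_noise_power = clean_power / (10 ** (snr_db / 10))
    noise = noise * np.sqrt(target_noise_power / (noise_power + 1e-12))
    return clean + noise

def augment_dataset(clean_clips, noise_clips, snr_range=(0.0, 20.0), rng=None):
    """Build a personalized training set: each clean emergency clip is mixed
    with a randomly chosen user-environment noise sample at a random SNR."""
    rng = rng if rng is not None else np.random.default_rng(0)
    augmented = []
    for clip in clean_clips:
        noise = noise_clips[rng.integers(len(noise_clips))]
        snr = rng.uniform(*snr_range)
        augmented.append(mix_at_snr(clip, noise, snr))
    return augmented
```

The augmented clips would then serve as inputs for the transfer-learning stage, where a pretrained classifier is fine-tuned on the user-specific data.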