
An Improved Deep Neural Architecture Search Net-Based Wearable Object Classification System for the Visually Impaired

Booth Id:
ROBO054

Category:
Robotics and Intelligent Machines

Year:
2024

Finalist Names:
Arvind, Aniketh (School: Hackley School)

Abstract:
Machine learning is a rapidly growing field, making remarkable advances, especially in the subfields of computer vision and image classification. These advances are part of what makes innovative applications possible. In Part I of this study, a deep learning system consisting of a transfer-learning model and a novel wearable prototype was designed to aid the visually impaired in classifying vital, centi-scale objects in their daily surroundings. While the system proved highly effective, the need for a more comprehensive apparatus led to Part II. A widespread issue in the field is resource-efficient dataset expansion and the lengthy model training times that accompany it. In this follow-up study, the custom dataset curated for the model was grown by over 550%, from 6,024 images to 40,256 images. This expansion required several memory-conservation techniques, such as code optimization and the use of generator functions, as well as a variety of model adjustments to layering and hyperparameters. Ultimately, this study allows over 35 classes of daily objects to be classified with model accuracies just under 90.00%. In a 12-trial experiment conducted with the novel wearable device engineered in Part I, the system achieves an average precision, recall, and F1-score of 98.04%, 98.11%, and 98.08%, respectively. Overall, this system represents a significant milestone in the development of comprehensive machine learning systems that aid the visually impaired while improving their daily independence and quality of life.
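
A generator-based, transfer-learning pipeline of the kind the abstract describes could look roughly like the minimal Python sketch below, assuming a TensorFlow/Keras stack. The dataset paths, the MobileNetV2 base network, the image size, the class count, and all hyperparameters are illustrative assumptions, not details taken from the study. The key memory-conservation idea is that batches are streamed from disk, so only one batch is held in memory at a time rather than the full 40,256-image dataset.

    # Minimal sketch (assumed details): stream images from disk and fine-tune a
    # small classification head on top of a frozen, ImageNet-pretrained backbone.
    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import layers, models
    from sklearn.metrics import precision_recall_fscore_support

    NUM_CLASSES = 35          # assumption: "over 35 classes" of daily objects
    IMG_SIZE = (224, 224)     # assumption: standard MobileNetV2 input size
    BATCH_SIZE = 32           # assumption: illustrative batch size

    # Generator-style loading: batches are read from disk on demand.
    train_ds = tf.keras.utils.image_dataset_from_directory(
        "dataset/train",                      # hypothetical path
        image_size=IMG_SIZE, batch_size=BATCH_SIZE)
    val_ds = tf.keras.utils.image_dataset_from_directory(
        "dataset/val",                        # hypothetical path
        image_size=IMG_SIZE, batch_size=BATCH_SIZE, shuffle=False)

    # Transfer learning: reuse ImageNet features, train only the new head.
    base = tf.keras.applications.MobileNetV2(
        input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
    base.trainable = False

    model = models.Sequential([
        layers.Rescaling(1.0 / 127.5, offset=-1),   # MobileNetV2 expects [-1, 1]
        base,
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.2),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(train_ds, validation_data=val_ds, epochs=10)

    # Evaluation metrics of the kind reported (precision, recall, F1-score),
    # computed here on the validation split as a stand-in for the 12-trial test.
    y_true = np.concatenate([y.numpy() for _, y in val_ds])
    y_pred = np.argmax(model.predict(val_ds), axis=1)
    p, r, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="macro")
    print(f"precision={p:.4f} recall={r:.4f} f1={f1:.4f}")

This is a sketch under the stated assumptions; the study's actual architecture, augmentation strategy, and wearable-device integration are not reproduced here.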