ISEF | Projects Database | Finalist Abstract

Project Vision: Virtual Environment Through Artificial Intelligence Recognition

Booth Id:
EBED021

Category:
Embedded Systems

Year:
2022

Finalist Names:
Nagler, James (School: Garden City High School)

Abstract:
According to the World Health Organization (WHO), there are around 1.3 billion visually impaired people worldwide, 36 million of whom are blind, and an estimated 79 million more will become blind within the next 30 years. My project aims to help the visually impaired by providing a cheap, compact, and efficient navigation system built primarily from artificial intelligence, distance sensors, vibration motors, and a speaker. For my procedure, I gathered images from various sources, annotated them for over 15 different object classes, and trained detection models on them using various algorithms from Google's TensorFlow AI library. The efficiency, latency, and accuracy of these algorithms were recorded to determine which algorithm was best suited for this application and how the choice of algorithm affected those dependent variables. The safety of the device was also verified during testing. The data I collected showed that the Efficient-0 algorithm offered by Google's TensorFlow library was the most efficient in both model size and inference time, but also the least accurate. The Efficient-4 algorithm, by contrast, was the largest and slowest, but the most accurate. To maximize the amount of data the headset can process in real time, I chose the Efficient-0 algorithm. In conclusion, these tests show that it is possible to create a compact and functional navigation system for the visually impaired using AI, although the accuracy of the AI could be improved with a different algorithm or more training data.
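The selection step described above, recording latency and accuracy for each candidate model and choosing the fastest one the headset can run, can be sketched as follows. This is a minimal illustration, not the project's actual code: the model names echo the abstract, but the latency and accuracy numbers are hypothetical placeholders, not the project's measured results.

```python
# Hedged sketch of the model-selection trade-off described in the abstract.
# Model names follow the abstract; the numbers below are illustrative
# placeholders, NOT the project's measured data.

# (model name) -> (inference latency in ms, accuracy as a 0-1 fraction)
candidates = {
    "Efficient-0": (30.0, 0.55),   # smallest and fastest, least accurate
    "Efficient-4": (250.0, 0.80),  # largest and slowest, most accurate
}

def pick_fastest(models):
    """Return the model name with the lowest inference latency."""
    return min(models, key=lambda name: models[name][0])

def pick_most_accurate(models):
    """Return the model name with the highest accuracy."""
    return max(models, key=lambda name: models[name][1])

print(pick_fastest(candidates))        # Efficient-0, chosen for throughput
print(pick_most_accurate(candidates))  # Efficient-4, rejected as too slow
```

Choosing by latency rather than accuracy matches the abstract's reasoning: a wearable navigation aid must process camera frames in real time, so a faster model that misses some objects was preferred over a slower, more accurate one.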

Awards Won:
First Award of $5,000