ISEF | Projects Database | Finalist Abstract

Emulating Human Vision Processes To Quantify an Environment

Booth Id:
ROBO072

Category:
Robotics and Intelligent Machines

Year:
2022

Finalist Names:
Kiiskila, Manu (School: Finnish International School of Tampere)

Abstract:
When humans look at the world, they can automatically quantify important distances, even without binocular disparity, thanks to prior knowledge of similar situations. Humans can also use these distances to envision a 3-D model of a room, including objects' relative locations. For autonomous robots to interact with their environment, they must be able to quantify and construct the same kind of model that human vision does. The question this paper addresses is, "Is there a way to automate the characterization of a room from a single image, containing no additional information?" To that end, this paper presents a method for quantifying aspects of an indoor environment from a monocular image by emulating human vision processes. The method estimates the distance to the opposing wall, the width of the room, the distances to individual objects, and the distance to the side wall. Several machine learning techniques are used, including AutoML, semantic segmentation, and stacked ensembling. Combining the information from these techniques yields a 3-D perspective of the environment, similar to that produced by human vision.
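To illustrate how the estimated quantities could be combined, the sketch below places an object in a top-down room frame from the distances the abstract names. This is an illustrative assumption, not the paper's implementation: the `RoomEstimate` fields and the `object_position` helper are hypothetical, and the distance values would in practice come from the described ML models rather than being hard-coded.

```python
from dataclasses import dataclass

@dataclass
class RoomEstimate:
    """Distances (in meters) estimated from a single monocular image.

    These fields mirror the quantities the abstract describes; in the
    actual method they would be produced by the ML pipeline (AutoML,
    semantic segmentation, stacked ensembling), not set by hand.
    """
    depth: float        # estimated distance to the opposing wall
    width: float        # estimated width of the room
    side_offset: float  # estimated distance to the side wall

def object_position(room: RoomEstimate, distance: float, bearing_frac: float):
    """Place an object in a top-down room frame.

    `distance` is the estimated range to the object; `bearing_frac` is
    its horizontal position in the image, from 0.0 (left edge) to 1.0
    (right edge) -- a simplifying stand-in for full camera geometry.
    Returns (x, y): x across the room, y into the room, with y clamped
    to the estimated room depth.
    """
    x = bearing_frac * room.width
    y = min(distance, room.depth)
    return (round(x, 2), round(y, 2))

# Example: a chair seen a quarter of the way across the frame.
room = RoomEstimate(depth=5.0, width=4.0, side_offset=1.5)
chair = object_position(room, distance=2.5, bearing_frac=0.25)
print(chair)  # (1.0, 2.5)
```

Placing every detected object this way would produce the kind of relative-location map the abstract describes as a 3-D perspective of the environment.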