ISEF | Projects Database | Finalist Abstract

Computer Vision: Mapping and Orientation in 3-D Space

Booth Id:

Finalist Names:
Zvara, Daniel

Mapping and orientation in 3-D space is one of the fundamental challenges of robotics, yet it remains difficult because of its high computational cost. In this work I present a fast way of building a 3-D map of the environment and navigating within it. I use stereo cameras to reconstruct a 3-D model of the viewed scene. To reduce the time complexity, I implement a fast semi-global matching (SGM) algorithm with binary feature descriptors. In poorly textured areas where SGM is not effective, I use local features (Laws texture energies, gradients, edges) to estimate depth relative to previously known data. The acquired point cloud is filtered and compressed to reduce the computational effort. I add the reconstructed 3-D model to the 3-D map of the surrounding space and derive a 2-D map for navigation. The current scene is registered in the 3-D map using 3-D feature descriptors with software (SW) motion detection, which lets me build a model of the surroundings without odometry or any other physical motion sensing. I tested the developed software on a small multicore ARM-driven robotic vehicle equipped with odometry sensors to compare the accuracy of SW motion detection and scene registration against odometry; the two were found to be comparable, making the SW approach very useful when odometry is unavailable. The accuracy of the computed depth was also measured: the error is less than 0.1 m within the surrounding range. The resulting model is an accurate visualization of the surrounding space, and the vehicle's position can be tracked inside it. It can be used for dimension measurement, object detection, and SLAM applications.
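The combination of stereo matching with binary feature descriptors mentioned above can be sketched as follows. This is a minimal illustration, not the project's actual implementation: it assumes a 3x3 census transform as the binary descriptor and Hamming distance as the matching cost, and it omits SGM's path-wise cost aggregation, showing only the per-pixel cost volume and a winner-take-all disparity.

```python
import numpy as np

def census_transform(img, window=3):
    """Binary descriptor: compare each pixel to its neighbors in a
    window x window patch and pack the comparison bits into an integer."""
    r = window // 2
    h, w = img.shape
    desc = np.zeros((h, w), dtype=np.uint32)
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            # Shift the image so each pixel lines up with its (dy, dx) neighbor.
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            desc = (desc << 1) | (shifted < img).astype(np.uint32)
    return desc

def hamming_cost_volume(left_desc, right_desc, max_disp):
    """Matching cost C(y, x, d) = Hamming distance between the left
    descriptor at x and the right descriptor at x - d."""
    h, w = left_desc.shape
    cost = np.zeros((h, w, max_disp), dtype=np.uint8)
    for d in range(max_disp):
        shifted = np.zeros_like(right_desc)
        shifted[:, d:] = right_desc[:, : w - d]
        xor = left_desc ^ shifted
        # Popcount: view each uint32 as 4 bytes and count the set bits.
        bits = np.unpackbits(xor.view(np.uint8).reshape(h, w, 4), axis=2)
        cost[:, :, d] = bits.sum(axis=2)
    return cost
```

A winner-take-all disparity map is then `cost.argmin(axis=2)`; full SGM would instead aggregate these per-pixel costs along several scan-line directions with smoothness penalties before taking the minimum. The appeal of binary descriptors is that the matching cost reduces to XOR plus popcount, which is cheap on the embedded ARM hardware described above.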

Awards Won:
Third Award of $1,000
SPIE, the international society for optics and photonics: $1,500 Open Source Award