Booth Id:
ROBO023
Category:
Robotics and Intelligent Machines
Year:
2020
Finalist Names:
Liao, Tongzhou (School: Shenzhen Experimental School Senior High School)
Abstract:
Self-localization is a critical component of autonomous systems, including mobile robots. Visual self-localization methods are a popular approach for mobile robots and have been widely employed in tasks that enhance productivity and everyday life.
This project proposes a multi-perspective vision-based self-localization method for mobile robots. It is designed to provide accurate, robust, and scalable self-localization in closed environments. A chain of feature extractors identifies and locates visual landmarks in images obtained by multiple cameras and evaluates the angular position of each observed landmark relative to the robot. A Bayesian estimator then estimates the robot's location from the pre-mapped positions of the visual landmarks, the known positions and orientations of the cameras, and the orientation of the robot.
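The abstract does not specify the estimator's internals, but a minimal sketch can illustrate the idea of bearing-only Bayesian localization against a landmark map. The sketch below assumes a grid search for the maximum-a-posteriori position, a Gaussian angular noise model, four corner landmarks, and a known robot heading; all names, values, and the noise model are illustrative, not the project's actual implementation.

```python
import numpy as np

# Hypothetical landmark map: the four corners of a 243 cm x 182 cm field.
LANDMARKS = np.array([[0.0, 0.0], [243.0, 0.0],
                      [243.0, 182.0], [0.0, 182.0]])

def wrap(a):
    """Wrap an angle (rad) into (-pi, pi]."""
    return (a + np.pi) % (2.0 * np.pi) - np.pi

def localize(bearings, heading, sigma=np.radians(5.0), step=2.0):
    """Grid-based MAP position estimate from landmark bearings.

    bearings : robot-frame bearing (rad) to each landmark, in the same
               order as LANDMARKS; np.nan marks an unobserved landmark.
    heading  : robot orientation (rad) in the field frame, assumed known.
    sigma    : assumed std. dev. of the angular measurement noise.
    step     : grid resolution in cm.
    """
    xs = np.arange(0.0, 243.0 + step, step)
    ys = np.arange(0.0, 182.0 + step, step)
    gx, gy = np.meshgrid(xs, ys)             # candidate positions
    log_post = np.zeros_like(gx)             # uniform prior over the field
    for (lx, ly), z in zip(LANDMARKS, bearings):
        if np.isnan(z):
            continue                          # landmark not seen
        expected = np.arctan2(ly - gy, lx - gx) - heading
        resid = wrap(z - expected)
        log_post += -0.5 * (resid / sigma) ** 2   # Gaussian log-likelihood
    i, j = np.unravel_index(np.argmax(log_post), log_post.shape)
    return gx[i, j], gy[i, j]

# Example: robot near the field center, facing +x, seeing three corners.
x, y = localize(np.array([np.radians(-143), np.radians(-37),
                          np.radians(37), np.nan]), heading=0.0)
print(f"estimated position: ({x:.1f} cm, {y:.1f} cm)")
```

A particle filter or Kalman-style recursive estimator would serve the same role; the grid search is used here only because it makes the Bayesian fusion of angular measurements explicit.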
The proposed method avoids omnidirectional cameras, which are subject to drawbacks including high cost, heavy weight, difficult hardware integration, and the need for calibration, by instead using multiple directional cameras to gather angular information about visual landmarks. Its flexible design allows visual self-localization with any number of directional cameras in arbitrary placements, trading accuracy against cost, size, and weight to satisfy different demands.
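Folding arbitrarily placed directional cameras into one bearing set reduces to adding each camera's known mounting yaw to the bearing it measures. A minimal sketch, assuming a pinhole camera model with calibrated intrinsics (the parameter names here are illustrative):

```python
import math

def robot_frame_bearing(pixel_x, cx, fx, camera_yaw):
    """Bearing (rad) of a landmark in the robot frame, from one camera.

    pixel_x    : landmark's column in the image (pixels).
    cx, fx     : principal-point x and focal length (pixels) of the camera.
    camera_yaw : known mounting yaw of this camera in the robot frame (rad).
    """
    in_camera = math.atan2(pixel_x - cx, fx)  # bearing in the camera frame
    return camera_yaw + in_camera
```

Because each camera contributes bearings in this common robot frame, adding or removing cameras changes only how many measurements reach the estimator, not the estimator itself.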
The proposed method was tested on a soccer robot in a standard 243 cm × 182 cm RoboCup Junior Soccer League field, yielding average localization errors of 7.8% (longitudinal) and 7.3% (lateral).
The localization performance and scalability of the proposed method bring considerable practical advantages to cost-, size-, or weight-sensitive industrial mobile robot applications.