ISEF | Projects Database | Finalist Abstract

Fast Technology of Automatic Markup and Teaching a Robot to Recognize Objects

Booth Id:
ROBO043

Category:
Robotics and Intelligent Machines

Year:
2019

Finalist Names:
Lysin, Serhii (School: Polytechnic Lyceum NTUU "KPI")

Abstract:
Training the recognition system of an assistant robot usually requires more than 100 man-hours, because the input image material must be collected and annotated, typically by hand. This also makes it difficult to retrain the neural network for new objects. A method is therefore needed that can quickly prepare training material and train or retrain a CNN to recognize a new object and indicate the actions possible on it. I used the TensorFlow implementation of the Faster R-CNN detection framework and first prepared a training dataset of three objects manually; preparation time consisted of photographing, annotating, and organizing the material, plus network training time. I then created an application for automated annotation from a video stream. The application lets the user select an object, shoot a video, mark areas of interest on the object (which can be tied to actions: pressed, pulled, rotated), upload the material to a server, and train the network. With this application I collected a dataset of the same objects and measured the same parameters. I compared the recognition quality of the networks on the video stream from the camera of a previously built robot prototype and inside a ROS simulation. Networks trained with both methods were tested on photos of the original objects taken in the initial environment and also against a different background. The new method gives the same recognition quality, but material preparation is five times faster. Because the developed method can also be used by an untrained user, it can serve as a common interface to assistant robots.
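The core of the automated-markup idea is propagating a user-drawn bounding box through the frames of a video so that every frame becomes a labeled training image without manual annotation. The abstract does not specify the tracking method, so the sketch below is only an illustration of the principle: it uses a simple sum-of-squared-differences template search over grayscale NumPy frames (a real pipeline would more likely use an off-the-shelf tracker and feed the boxes to the Faster R-CNN training server).

```python
import numpy as np

def track_box(frames, box, search_radius=4):
    """Propagate a bounding box through a list of grayscale frames.

    frames: list of 2-D numpy arrays; box: (x, y, w, h) drawn by the
    user on frames[0]. Returns one (x, y, w, h) label per frame.
    Illustrative only: brute-force SSD template matching in a small
    window around the previous position.
    """
    x, y, w, h = box
    template = frames[0][y:y + h, x:x + w].astype(float)
    labels = [box]
    for frame in frames[1:]:
        best_score, best_pos = -np.inf, (x, y)
        for dy in range(-search_radius, search_radius + 1):
            for dx in range(-search_radius, search_radius + 1):
                nx, ny = x + dx, y + dy
                if (nx < 0 or ny < 0 or
                        ny + h > frame.shape[0] or nx + w > frame.shape[1]):
                    continue
                patch = frame[ny:ny + h, nx:nx + w].astype(float)
                score = -np.sum((patch - template) ** 2)  # higher = closer match
                if score > best_score:
                    best_score, best_pos = score, (nx, ny)
        x, y = best_pos
        labels.append((x, y, w, h))
    return labels
```

Each returned box can then be written out in the annotation format the detection framework expects (e.g. one record per frame), which is what replaces the hours of manual markup.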