Booth Id:
ROBO002T
Category:
Robotics and Intelligent Machines
Year:
2019
Finalist Names:
Bushuev, Maksim (School: School of Computer Science VECTOR++)
Gorkaev, Gleb (School: School of Computer Science VECTOR++)
Abstract:
In the next 5 years, leading car companies promise to release unmanned vehicles, but we are sure that the steering wheel, the pedals and the driver himself will remain in the vehicle for a long time yet. For now, the autopilot will act more as a smart driver assistant that "sees" farther than a person and suggests decisions quickly.
Autopilot sensors must operate in adverse weather conditions, provide a range of hundreds of meters, ideally have no moving mechanical parts, and be inexpensive and reliable. Today, a block of dual cameras plus a lidar is typically used. We suggest using an electronically steered radar beam instead. Its advantages: it is much cheaper, more reliable, and "sees" through all weather disturbances.
We used such a radar in our work, fusing radar data with visual recognition of pedestrians and cars from video cameras into one system. From the video image we detect objects on the road, then combine the computer-vision and radar data on one screen. We found that a single camera is enough, since the speed of and distance to objects come from the radar. Instead of "heavy" multilayer neural networks, we used Haar cascades. This sped the system up by a factor of 3-4 on the same hardware and let us use less expensive, less powerful equipment than neural networks require. Our system also recognizes traffic signs and lane markings with a nearly linear algorithm, and detects the moment the driver falls asleep using a convolutional neural network with 97% accuracy.
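The fusion step described above, attaching radar range and speed to a camera detection, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the image width, field of view, matching tolerance and the idea of matching by azimuth angle are all assumptions made for the example.

```python
from dataclasses import dataclass

# Hypothetical camera parameters (assumed for this sketch).
IMG_WIDTH = 1280   # image width in pixels
H_FOV_DEG = 60.0   # horizontal field of view in degrees

@dataclass
class RadarTrack:
    azimuth_deg: float  # angle from the camera axis, degrees
    range_m: float      # distance to the object, metres
    speed_mps: float    # radial speed, m/s

def pixel_to_azimuth(x_center: float) -> float:
    """Map a bounding-box centre (pixel x) to an azimuth angle.

    Assumes a simple pinhole model with the optical axis at the
    image centre; a real system would use proper calibration.
    """
    return (x_center / IMG_WIDTH - 0.5) * H_FOV_DEG

def fuse(box, tracks, tol_deg=3.0):
    """Attach range and speed from the radar track closest in azimuth
    to a camera detection; return None if nothing is close enough."""
    x, y, w, h = box
    az = pixel_to_azimuth(x + w / 2)
    best = min(tracks, key=lambda t: abs(t.azimuth_deg - az), default=None)
    if best is None or abs(best.azimuth_deg - az) > tol_deg:
        return None
    return {"azimuth_deg": az, "range_m": best.range_m,
            "speed_mps": best.speed_mps}

# Example: a pedestrian box near the image centre, two radar tracks.
tracks = [RadarTrack(0.5, 42.0, -1.2), RadarTrack(-15.0, 80.0, 10.0)]
info = fuse((600, 300, 80, 160), tracks)
```

Here the box centred at pixel 640 maps to 0 degrees azimuth and picks up the 42 m track, so the detection can be shown on screen with its distance and speed.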
We tested our system on a full-size autopilot vehicle in the field. Testing confirmed that the system works and detects objects reliably at distances of hundreds of meters.