Booth Id:
ROBO001
Category:
Robotics and Intelligent Machines
Year:
2020
Finalist Names:
Kolev, Victor (School: Sofia High School of Mathematics)
Abstract:
While deep reinforcement learning has achieved impressive results in recent years, it suffers from poor sample efficiency and an inability to generalize: agents learn to manipulate specific data distributions rather than extracting principles from them. Although RL at its core takes inspiration from the way humans learn and interact with the world, current algorithms do not reflect the human way of thinking. Memory-augmented neural networks can potentially address these problems; the Differentiable Neural Computer (DNC) models the hippocampus by accounting for the fact that humans have long- and short-term memory as well as fixed routines. Furthermore, the DNC is designed to decouple learning from remembering data, which should substantially aid generalization. In this paper, we present preliminary results showing that the DNC exhibits great potential in the field, improving both sample efficiency and robustness to noise. Moreover, the rigid memory structures of the architecture allow for extensive analysis of the data stored there, which would provide invaluable novel intuition about the processes occurring inside the black box that is the neural network.
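The abstract does not spell out the DNC's memory mechanism, but its core read operation is content-based addressing: the controller emits a key, and the read vector is a similarity-weighted sum over the rows of an external memory matrix. The sketch below is a minimal, illustrative version of that single step (function name, dimensions, and the sharpness parameter `beta` are chosen here for illustration, not taken from the project).

```python
import numpy as np

def content_read(memory, key, beta):
    """One content-based read over an external memory matrix.

    memory : (N, W) array, N slots of width W
    key    : (W,) read key emitted by the controller
    beta   : sharpness of the addressing (higher = more peaked)
    """
    # Cosine similarity between the key and each memory row
    norms = np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
    sim = memory @ key / norms
    # Sharpened softmax turns similarities into a read weighting
    w = np.exp(beta * sim)
    w /= w.sum()
    # Read vector: weighted sum of memory rows
    return w @ memory, w

# Toy memory with three slots; the key is closest to slot 0
memory = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]])
key = np.array([1.0, 0.1])
read_vector, weights = content_read(memory, key, beta=5.0)
```

Because every step is differentiable, gradients flow through the addressing weights, which is what lets the DNC learn what to store and retrieve; the explicit weighting `weights` is also what makes the stored data structures inspectable, as the abstract notes.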