In recent years, deep neural networks have achieved some success in exhibiting human-level cognitive skills, yet they face several major obstacles. One of the most significant is the inability to learn new tasks without forgetting previously learned ones, a problem known as catastrophic forgetting. In this research, we propose a method to overcome catastrophic forgetting and enable continual learning in neural networks. Drawing inspiration from principles in neurology and physics, we develop the concept of weight friction, a mechanism that functions as a form of memory for neural networks. It operates through a simple modification to the update rule of gradient descent and converges at a rate comparable to that of the stochastic gradient descent algorithm. Broadly speaking, weight friction raises the possibility for neural networks to learn continually, a step toward achieving strong AI. It is simple to implement and is not restricted to limited task domains. Furthermore, it is inherently applicable to most neural networks and therefore has the potential to expand the scope of tasks to which continual learning may be applied.
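The abstract does not give the exact update rule, but the idea of damping changes to weights that matter for earlier tasks can be sketched as below. This is a minimal illustration, not the method's actual formulation: the per-weight `friction` coefficient, its accumulation from past gradient magnitudes, and the `1 / (1 + friction)` damping factor are all assumptions chosen for clarity.

```python
import numpy as np

def sgd_with_friction(w, grad, lr=0.1, friction=None):
    """One hypothetical 'weight friction' SGD step (illustrative only).

    Plain SGD:        w <- w - lr * grad
    Friction variant: each weight's step is damped by 1 / (1 + friction),
    where friction is a per-weight coefficient assumed here to accumulate
    with the magnitude of past gradients, so weights that were important
    for earlier tasks resist being overwritten by new ones.
    """
    if friction is None:
        friction = np.zeros_like(w)
    new_w = w - lr * grad / (1.0 + friction)        # damped update
    new_friction = friction + np.abs(grad)          # assumed accumulation rule
    return new_w, new_friction

# A weight with high accumulated friction moves far less than a fresh one:
w = np.array([1.0, 1.0])
grad = np.array([1.0, 1.0])
fric = np.array([0.0, 9.0])          # second weight is "important" to an old task
new_w, new_fric = sgd_with_friction(w, grad, lr=0.1, friction=fric)
# new_w[0] takes the full 0.1 step; new_w[1] takes only a 0.01 step
```

Under this sketch, the update remains a constant-factor rescaling of the SGD step, which is consistent with the abstract's claim of a convergence rate comparable to stochastic gradient descent.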
First Award of $3,000
National Security Agency Research Directorate: Second Place Award "Science Security" of $1,000
Patent and Trademark Office Society: Second Award of $500
Association for the Advancement of Artificial Intelligence: Honorable Mention