Auto Arranger Based on Deep Learning Methods

Booth Id: ROBO045

Category: Robotics and Intelligent Machines

Year: 2019

Finalist Names: Shumnov, Petr (School: Lyceum 1533 of Information Technologies)

Abstract:
This project implements the idea of automatic accompaniment. Deep learning is used to create accompaniment without using samples. In contrast to existing auto arrangement products, it allows to train program to compose music for almost any instrument in any style. While writing music guitar players often need to try how their guitar part would sound in full arrangement. But rarely they play drums or bass guitar. There is no need to worry about backing track anymore, as users can create it by simply choosing music style. The program gets guitar notes as an input and then automatically generates parts for drums and bass guitar. The key advantage of using machine learning is that the program learns from real music, which makes result sound natural. We have tried more than 30 different data preparation approaches and four types of neural networks, ranging from basic dense model to deep recurrent ones. For now, the best result has been achieved using LSTM model (Long Short-Term Memory – a type of recurrent neural networks). The model has already been trained on 24 classic rock groups and 8 rock and roll groups. Parts written by the neural network are over 65% identical to the parts composed by original performers. The program can be trained in any music style as needed using vast open libraries of guitar tablatures available.