A neural network that composes music. You can play the generated examples included in this repository.
In development.
It is functional, but the code I used to train this model is a mess for now. I'm cleaning it up so I can push it to the repository while, at the same time, updating the architecture to a transformer.
In this repository you can find the trained model. To generate music, you only need to do:
import tensorflow as tf

new_model = tf.keras.models.load_model('model')

states = None
next_char = tf.constant([' '])
result = [next_char]

for n in range(1000):
    next_char, states = new_model.generate_one_step(next_char, states=states)
    result.append(next_char)

result = tf.strings.join(result)
print(result[0].numpy().decode('utf-8'), '\n\n' + '_'*80)
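The generated text is ABC notation, so you can save it to a file for the conversion step below. A minimal sketch: the placeholder string here stands in for `result[0].numpy().decode('utf-8')` from the loop above, and `my_file.abc` is just an example filename.

```python
# Write generated ABC text to a file so abc2midi can convert it.
# In practice, abc_text would be result[0].numpy().decode('utf-8');
# this short ABC tune is a stand-in for illustration.
abc_text = "X:1\nT:Example\nK:C\nCDEF GABc|\n"

with open('my_file.abc', 'w') as f:
    f.write(abc_text)
```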
It is an RNN, so each call to generate_one_step consumes the previous character, produces the next one, and returns an updated state; you then pass that state back in to generate the following character.
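The calling pattern can be sketched without TensorFlow: the model is a function that takes the last character plus a carried state and returns the next character plus a new state. This toy generator is purely illustrative (the "state" is just a counter cycling through a scale; a real RNN carries hidden activations), not the repository's actual model.

```python
# Toy stateful one-step generator illustrating the RNN calling pattern:
# each call consumes the previous character and the carried state,
# and returns the next character plus the updated state.
def generate_one_step(char, state):
    scale = "CDEFGAB"
    state = 0 if state is None else state + 1
    return scale[state % len(scale)], state

state = None
next_char = ' '
result = [next_char]
for _ in range(7):
    next_char, state = generate_one_step(next_char, state)
    result.append(next_char)

print(''.join(result))  # mirrors the tf.strings.join step above
```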
I recommend abcmidi to generate the .mid file and timidity to play it:
sudo apt-get install abcmidi timidity
abc2midi my_file.abc -o my_file.mid && timidity my_file.mid
This .mid file can be played ^.^