Train New Songs With Your Own Music

Brief Instructions for Using TensorFlow Magenta With Your Own Music to Train the AI & Create New Songs


Creating TensorFlow TFRecords

From the root directory of your project, run the following terminal command, replacing the input_dir value with the directory containing your MIDI dataset:

convert_dir_to_note_sequences \
--input_dir=<path to your MIDI files> \
--output_file=tmp/notesequences.tfrecord \
--recursive
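Before converting, it can help to sanity-check how many MIDI files the converter will pick up. A minimal sketch, where the midi_dataset directory and its files are hypothetical stand-ins for your own data:

```shell
# Hypothetical stand-in for your MIDI directory; replace with your real path.
mkdir -p midi_dataset
touch midi_dataset/song1.mid midi_dataset/song2.mid

# Count the .mid files the converter would ingest.
find midi_dataset -name '*.mid' | wc -l   # -> 2
```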



Creating SequenceExamples (from the TFRecords):


polyphony_rnn_create_dataset \
--input=tmp/notesequences.tfrecord \
--output_dir=tmp/polyphony_rnn/sequence_examples \
--eval_ratio=0.10



After this command finishes, the tmp/polyphony_rnn/sequence_examples directory will contain two files: training_poly_tracks.tfrecord and eval_poly_tracks.tfrecord, used for training and evaluation respectively.
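Before kicking off a long training run, it is worth confirming both files are actually in place. A small sketch of that check (the mkdir/touch lines below only simulate the expected layout for illustration; after a real create_dataset run you would only need the loop):

```shell
dir=tmp/polyphony_rnn/sequence_examples

# Simulate the layout produced by polyphony_rnn_create_dataset (illustration only).
mkdir -p "$dir"
touch "$dir/training_poly_tracks.tfrecord" "$dir/eval_poly_tracks.tfrecord"

# The actual check: make sure both SequenceExample files exist before training.
for f in training_poly_tracks.tfrecord eval_poly_tracks.tfrecord; do
  [ -f "$dir/$f" ] && echo "found $f" || echo "missing $f"
done
```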

Training (takes a long time):

polyphony_rnn_train \
--run_dir=tmp/polyphony_rnn/logdir/run1 \
--sequence_example_file=tmp/polyphony_rnn/sequence_examples/training_poly_tracks.tfrecord \
--hparams="batch_size=64,rnn_layer_sizes=[128,128,128]"
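For a rough sense of what the hparams mean: batch_size is the number of SequenceExamples consumed per gradient step, and rnn_layer_sizes=[128,128,128] builds three stacked RNN layers of 128 units each. A back-of-envelope sketch, where the dataset size is purely hypothetical:

```shell
n_examples=6400   # hypothetical number of training SequenceExamples
batch_size=64     # matches the --hparams value above

echo $(( n_examples / batch_size ))   # gradient steps per pass over the data -> 100

# Training progress can also be watched in TensorBoard:
#   tensorboard --logdir=tmp/polyphony_rnn/logdir
```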



Testing (by generating 10 new songs):

polyphony_rnn_generate \
--run_dir=tmp/polyphony_rnn/logdir/run1 \
--hparams="batch_size=64,rnn_layer_sizes=[128,128,128]" \
--output_dir=tmp/polyphony_rnn/generated \
--num_outputs=10 \
--num_steps=128 \
--primer_pitches="[67,64,60]" \
--condition_on_primer=true
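The primer [67,64,60] is a C-major triad: G4, E4, and C4 in MIDI pitch numbers. To choose your own primer pitches, here is a small helper sketch (midi_pitch is a hypothetical function, not part of Magenta; it uses the MIDI convention that pitch = (octave + 1) × 12 + the note's semitone offset from C):

```shell
midi_pitch() {
  # Semitone offset of each natural note name from C.
  case $1 in
    C) s=0 ;; D) s=2 ;; E) s=4 ;; F) s=5 ;;
    G) s=7 ;; A) s=9 ;; B) s=11 ;;
  esac
  # MIDI convention: C4 (middle C) = 60, so the octave is offset by 1.
  echo $(( ($2 + 1) * 12 + s ))
}

midi_pitch G 4   # -> 67
midi_pitch E 4   # -> 64
midi_pitch C 4   # -> 60 (middle C)
```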