ed earl

 contact: ed (at) edearl (dot) com


Procedural music generation - developing computer programs that compose music - is where most of my musical effort goes these days. I sometimes post about this on Facebook or Twitter.

I released two procedural music generation apps through my company, Cello Beehive.

You can find some of my old hand-written piano music on Bandcamp.

Audio renderings (Feb 2017)


I uploaded audio renderings of the pieces below, plus some new ones, to Soundcloud.

More procedurally generated music (Jan 2017)


Some more pieces of music generated by the algorithm described below (now at revision 2419). Same setup. Around 12 hours per piece.

There's a lot of variety. Randomly selecting input pieces in different genres has a large effect on the output.

Seed 1300801428

Seed 1652005206

Procedurally generated music (Jan 2017)


Here are some pieces of music generated by the algorithm described below (revision 2409).

For each piece, the algorithm used the chords from one randomly selected piece and the structure and statistics from another.

The algorithm ran for around 24 hours for each piece. It still hadn't hit the stopping criterion after that time: the music was still (very slowly) improving.

The only changes I made to the output were to adjust the tempi.

Seed 18845949

Seed 1149903632

Seed 2016411325

Generate-and-test method (Jan 2017)


I've started having some success with a new algorithm for composing music. Samples soon...

The method used is known as generate-and-test, and it works like this:

  1. Define a score to measure the quality of content.
  2. Generate some content.
  3. Calculate the score for the content.
  4. If the score is the best so far, then keep the content.
  5. If the best score is good enough, then stop. (Variation: stop if too much time has passed.) Otherwise, repeat from step 2.
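The steps above can be sketched as a generic loop. This is only an illustration of the general method, not my actual implementation; the toy content and score here (dice rolls, distance from a target sum) are placeholders.

```python
import random

def generate_and_test(generate, score, good_enough, max_iters=10_000):
    """Keep generating content and keep the best-scoring candidate (steps 2-5)."""
    best, best_score = None, float("-inf")
    for _ in range(max_iters):            # time-budget variation of step 5
        candidate = generate()            # step 2: generate some content
        s = score(candidate)              # step 3: calculate the score
        if s > best_score:                # step 4: keep the best so far
            best, best_score = candidate, s
        if best_score >= good_enough:     # step 5: good enough, stop
            break
    return best, best_score

# Toy run: "content" is four dice rolls; the score rewards sums close to 10.
random.seed(1)
best, s = generate_and_test(
    generate=lambda: [random.randint(0, 5) for _ in range(4)],
    score=lambda c: -abs(sum(c) - 10),
    good_enough=0,
)
```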

I didn't invent generate-and-test. There's a nice analysis of generate-and-test applied to video games in the paper Search-based Procedural Content Generation by Togelius et al., 2010.

For this application, the content is basically a MIDI song. More precisely, the music is represented as monophonic voices, or parts, split into evenly sized timesteps. At each timestep, every part is either beginning a MIDI note (0-127), continuing the previous note, or silent. The internal representation is a matrix of values, with parts along one dimension and timesteps along the other.

For step 1, we calculate a score measuring how close the music is to a predefined target in various ways, such as: proportion of chord tones and non-chord tones; note length; skip size; proportion of notes which fall on each beat of the bar; structure (repetitions, etc). The target values can be taken from a single piece of music in MIDI format, taken from different pieces of music, or entered by hand. Extracting target values from existing music poses problems, because MIDI music doesn't contain chord or scale information, and the music is arranged in polyphonic channels rather than monophonic voices. To deal with this, it was necessary to develop a part extraction algorithm (based on the method described in The Computational Analysis of Harmony in Western Art Music, Mearns, 2013), a chord labelling algorithm and a scale labelling algorithm.
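A heavily simplified sketch of the scoring idea: extract a few statistics from a piece and score by distance to the target values. The two feature names here are placeholders; the real score compares many more statistics (chord tones, skip size, beat placement, structure).

```python
def features(piece):
    """Extract a couple of example statistics from a parts-by-timesteps matrix.
    Values >= 0 are note onsets; negative values are continue/silent sentinels."""
    onsets = [v for part in piece for v in part if v >= 0]
    steps = sum(len(part) for part in piece)
    return {
        "note_density": len(onsets) / steps if steps else 0.0,
        "mean_pitch": sum(onsets) / len(onsets) if onsets else 0.0,
    }

def score(piece, target):
    """Higher is better: negative total distance to the target statistics."""
    f = features(piece)
    return -sum(abs(f[k] - target[k]) for k in target)
```

Target values can come from an existing piece simply by running the same feature extraction on it, which is why a piece scored against its own statistics gets the best possible score of zero.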

For step 2, we need to generate a matrix of values in the correct format. The obvious first attempt is to generate randomly, but the hit rate is too low: the score improves only very rarely. The search space is too large for random generation to be effective. Using the target statistics to guide generation helps: restricting the range of notes for each part, for example. Fixing the structure of the music, including chords, scales and repetitions, before generation begins further narrows the search space.
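One way to picture the guided generation: instead of sampling notes uniformly over the whole MIDI range, each part only draws from its own (target-derived) pitch range. This is a sketch of that single narrowing trick, not the full generator; the ranges and rest probability below are made up.

```python
import random

SILENT = -2  # hypothetical rest sentinel

def generate_constrained(ranges, n_steps, rest_prob=0.25):
    """Random piece, but each part only draws notes from its target range."""
    return [
        [SILENT if random.random() < rest_prob else random.randint(lo, hi)
         for _ in range(n_steps)]
        for lo, hi in ranges
    ]

random.seed(0)
# Hypothetical per-part ranges, e.g. taken from the target piece's statistics.
piece = generate_constrained(ranges=[(60, 72), (48, 60)], n_steps=32)
```

Every candidate generated this way already satisfies the range statistic, so the search only has to improve the remaining features — a much higher hit rate than unconstrained random generation.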

The output usually sounds pretty good. Better-scoring content sounds better, suggesting that the score is well defined. The most significant improvements over my previous music composition algorithms are due to the work on chord tones and on structure.

Each piece takes around ten hours to generate at the moment. The biggest problem is still the hit rate - it very quickly becomes hard to find changes which will improve the score.

Musically speaking, the most significant flaw is probably a lack of clear melody.