Can computers compose?


This Thursday, my latest BBC Radio 4 documentary, Can Computers Write Shakespeare?, is broadcast. The programme asks whether computers can ever be truly creative, using sculpture, music and poetry as examples.


As a teenager, I wrote a computer program that composed ragtime music using simple probability tables, i.e. if the current note is an A, what is the probability that the next note is B, C, D and so on. The notes were selected by rolling a die. I then superimposed the structure and rhythms of ragtime onto these melodies. This produced music that had the jaunty lilt of ragtime, but it wasn't something you'd listen to for very long because it didn't seem to be going anywhere.
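For the curious, here is a minimal sketch of that kind of note-by-note generator in Python. The note names and transition probabilities are invented for illustration, not the tables from my original program, but the mechanism is the same: pick the next note by rolling a weighted die over a probability table keyed on the current note.

```python
import random

# Toy first-order probability table: for each current note, the chance of
# moving to each possible next note (values here are made up for illustration).
TRANSITIONS = {
    "A": {"B": 0.4, "C": 0.3, "D": 0.2, "A": 0.1},
    "B": {"C": 0.5, "A": 0.3, "D": 0.2},
    "C": {"D": 0.4, "B": 0.4, "A": 0.2},
    "D": {"A": 0.5, "C": 0.3, "B": 0.2},
}

def next_note(current):
    """Pick the next note by 'rolling a die' weighted by the probability table."""
    options = TRANSITIONS[current]
    return random.choices(list(options.keys()), weights=list(options.values()), k=1)[0]

def compose(start="A", length=16):
    """Generate a melody one note at a time from the transition table."""
    melody = [start]
    while len(melody) < length:
        melody.append(next_note(melody[-1]))
    return melody

print(" ".join(compose()))
```

Melodies built this way sound locally plausible, but because each note depends only on the one before it, the result wanders rather than building towards anything, which is exactly the problem described above.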
The Radio 4 documentary focuses on a much better computer composing system, IAMUS (‘Other computer composers are available’). The video below shows the world premiere of Adsum, a piece entirely composed and orchestrated by the machine.


As you can hear, IAMUS usually creates modern classical music. It does this by mimicking the processes of Darwinian evolution. IAMUS started with a very simple population of musical genomes, each just a handful of notes lasting a few seconds. Through a process of breeding and mutation, IAMUS has produced new compositions that are longer and more elaborate. The computer is given very few guidelines beyond ensuring the notes remain within the range of the musical instruments. It is like watching a student composer develop a compositional style, except that the computer works on its own, without human input for musical ideas.
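To make the evolutionary idea concrete, here is a minimal sketch of that kind of breed-and-mutate loop, not IAMUS's actual algorithm: short note sequences (represented here as MIDI note numbers) are recombined and mutated, with the only hard constraint being an assumed playable range for the instrument.

```python
import random

LOW, HIGH = 55, 84  # assumed instrument range in MIDI note numbers

def random_genome(length=4):
    """A very simple starting genome: a handful of random notes."""
    return [random.randint(LOW, HIGH) for _ in range(length)]

def mutate(genome):
    """Copy a genome, then nudge, insert, or delete a note, staying in range."""
    child = list(genome)
    op = random.choice(["nudge", "insert", "delete"])
    i = random.randrange(len(child))
    if op == "nudge":
        child[i] = min(HIGH, max(LOW, child[i] + random.choice([-2, -1, 1, 2])))
    elif op == "insert":
        child.insert(i, random.randint(LOW, HIGH))
    elif op == "delete" and len(child) > 2:
        child.pop(i)
    return child

def crossover(a, b):
    """Breed two parents by splicing them together at random cut points."""
    return a[: random.randrange(1, len(a))] + b[random.randrange(len(b)) :]

population = [random_genome() for _ in range(8)]
for generation in range(100):
    parents = random.sample(population, 2)
    population.append(mutate(crossover(*parents)))
    population.pop(0)  # keep the population size fixed

print(population[-1])  # one evolved phrase, typically longer than the originals
```

The real system is of course far more sophisticated, but the shape is the same: start from trivially simple genomes and let variation and selection grow them into longer, more elaborate material.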

One of the most fascinating things I learnt while making this documentary was that many of those working in computational creativity are not that interested in the Turing Test: they are not trying to show that a computer algorithm can create art that passes as human-made. So, when we got experts to critique the music and poetry, they were told from the outset that it was computer generated.


The simple act of telling them that the music or poetry was written by a computer changed how they perceived it, and part of that prejudice appears to be unconscious. When Steinbeis and Koelsch used an fMRI scanner to compare which regions of the brain were stimulated by computer- and human-composed music [doi: 10.1093/cercor/bhn110], they found that the regions associated with ascribing intention to others were less active for computer-composed music. Maybe this is an indication that however good computers get at composition, they will always fall short of fulfilling the need for art to be about communication between humans.
You can buy IAMUS from online music stores. Will you be buying it?
Postscript: One thing that was left on the cutting room floor was IBM's attempt to churn out new and innovative recipes using computers. Would you fancy Swiss-Thai asparagus quiche?


3 responses to “Can computers compose?”

  1. Interesting, I guess things have moved on a little since the Illiac Suite, although I wonder if I don’t find that piece a little more stimulating than the Iamus-generated composition.
    As for computers versus human composers, I think the main difference is motivation. Somehow the thought that a piece of art can be created because a machine has been told to create it has a strange ring to it. Imagine loving music that’s created by something that doesn’t even care, or isn’t even conscious, that it composed something! I like computer-generated music (and even work in that field), although for me it seems best when generated and controlled via a human, making it a sort of extension of some living soul.
    Interesting post, thanks.

  2. To me, the piece is very reminiscent of the first movement (“The ‘St. Gaudens’ in Boston Common”) of Ives’s “Three Places in New England,” or even parts of his Holidays Symphony. The resemblance I hear seems to be due to the soft, slow texture, the use of high strings and long tones in the brass, the superimposition of simple melodies in contrasting tonalities, occasional percussion mutterings, and the soft–loud–soft arch form. Thanks for sharing!

  3. In most cases mechanical oscillation is playing the music, but the spiritual expression of music is not the sound, it’s how one feels about the sound that is the music, the interpreted expression.
    Simply taking any sound and presenting it is expression, the nature of the sound is irrelevant, it is whether a person interprets the sound as musical or within the context of listening and experiencing. The music lives in the listener.
    In the case of a musical computer, just like a music box or player piano, the expression interpreted from it comes down to the design; the designer is the composer in this case, and the computer is the instrument, which does not require multiple impulses or inputs to operate. It can be “plucked” in its compilation and oscillate musically from there.
    Computers can certainly process sound, but in order for it to be experienced as music it must be listened to.
    Sonic consonance is structural order of a geometric and rational sort. Complicated non-linear but still rational processes can produce incredibly complicated ‘resonances’.
    It could even be argued that on a fundamental level, the whole universe is just a big instrument resonating, and that any physical structure – including a human being – is, umm, resonating.


