Hi all,

I’ve been thinking about mimic songbirds in my work recently, particularly in relation to AI that can generate sounds. Mimics are songbirds that learn sounds from their environments and incorporate them into their own calls – like the Australian lyrebird or the New Zealand tūī (more on that one later). These birds are amazing – a joy to listen to, and it’s really fun trying to work out where they picked up the various parts of their song.

This thought process began with a talk I gave at the Elbphilharmonie back in March, although I remember that the first draft of my piece “Silicon”, back in 2021, ended with a recording of birdsong – later replaced by a neutral chord and a bass-drum heartbeat. Clearly the connection between AI and birdsong has been bubbling away for some time.

(As a side note, I will definitely write a post about neutral chords at some point.)

Both AI and mimic songbirds can slot the sounds they have learned into a more complex sonic structure, juxtaposing the expected with the unexpected. And neither is a perfect learner. A tūī usually sounds like a tūī, no matter what it’s singing. Its voice has a certain abrasive quality – it sounds a bit like the old Internet dial-up tones. So you can tease out the difference between the sounds it has learned and what the bird sounds like innately.

This is one I recorded last time I was in New Zealand. It’s not the best recording ever, but I wanted to share one of my own!

The same is true of AI. You can train ten different algorithms on the same dataset and they will make ten different imitations of that data. Or, by training the same algorithm on lots of different data sources and comparing what it generates, you can begin to tease out what that algorithm sounds like underneath the surface. I love this, as in my music I’m always trying to distill complexity into simplicity, and vice versa.
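
(For the technically curious, here’s a toy sketch in Python of what I mean. The little pitch sequence and the Markov-chain “models” are made up purely for illustration – nothing here comes from a real music-AI system – but even these two trivially different learners, trained on exactly the same data, imitate it in audibly different ways.)

```python
# Toy illustration (not a real music-AI system): two tiny Markov models,
# "trained" on the same pitch sequence, each imitate it in their own way.
import random

# A made-up training "dataset": a short sequence of pitch names.
training_data = "C D E C C D E C E F G E F G G A G F E C C".split()

def build_table(seq, order):
    """Learn an order-n Markov transition table from the sequence."""
    table = {}
    for i in range(len(seq) - order):
        key = tuple(seq[i:i + order])
        table.setdefault(key, []).append(seq[i + order])
    return table

def imitate(table, order, length, seed):
    """Generate an 'imitation' of the training data from the table."""
    rng = random.Random(seed)
    state = list(rng.choice(list(table)))  # random starting state
    out = list(state)
    for _ in range(length - order):
        # Fall back to the whole dataset if we wander into an unseen state.
        nxt = rng.choice(table.get(tuple(state), training_data))
        out.append(nxt)
        state = (state + [nxt])[-order:]
    return " ".join(out)

# Same data, same seed, two different "species" of model: the imitations
# diverge, and the differences hint at each model's innate voice.
for order in (1, 2):
    table = build_table(training_data, order)
    print(f"order-{order} model:", imitate(table, order, 14, seed=7))
```

The interesting thing isn’t either output on its own – it’s the gap between them, which is where each model’s innate “voice” shows through.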

Fast forward to May and New York, which saw the premiere of my piece Tūī, written for the International Contemporary Ensemble and conducted by Vimbayi Kaziboni. This is the first piece where I’ve tried to put some of these thoughts into action.

I want to write another post at some point focussing on the specific electronic and compositional techniques used in the piece, but for now you can click here to hear it – there’s a programme note in the video description.

This post, however, is about looking forward to future work. There’s one huge difference between mimic songbirds and generative AI that I’m particularly interested in – and that’s how we listen to them.

Nobody listens to a mimic songbird expecting it to create a piece of music that makes sense to us. Yet the dominant mode of listening to AI-generated sound is to judge it as though it were a human-made piece. This strikes me as odd – they are both, after all, non-human actors that integrate imitation into their sound-worlds. This difference goes some way to explaining why AI-generated music is often profoundly rubbish: it is being created, and judged, according to metrics that it simply isn’t suited to. We should listen to it more in the way we listen to songbirds (or whalesong, or any other non-human music). We don’t expect birds to perform a symphony or improvise over the Coltrane changes – they do their own thing.

OK, but what might that mean in practice? Well, I’m still working that out, but I expect to be exploring various approaches in the future. It definitely includes experimenting with music-AI systems that have nothing to do with human-made music. I don’t know yet what this might result in, but it definitely won’t be yet another “AI completes Mozart” exercise. I hope it’ll be something creatively inspiring.

I am also thinking about a “listening strategy” that focusses on what makes AI tick, rather than listening for the structural, melodic, harmonic, or timbral ideas we expect from human-made music. I’m excited to see what happens if I take Messiaen’s work on birdsong, for example, as a starting point and translate it towards AI sounds.

Or it might lead to a new method of performing with AI. What is an AI music ecosystem, and how does each algorithm fit into it (or not)? Can we learn anything from how mimic songbirds interact with each other that could be transformed into a music performance context?

Where might this all end up? Well, I’m writing two big orchestral pieces for next year, so that’s one definite destination. (More on those another time.) But I’m also thinking about a set of electronic sound-art pieces, or an installation designed to be heard in a specific space. We’ll see.
