This week I had the pleasure of attending the first annual Screen Music Connect at London’s South Bank Centre. One of the areas of interest at the conference was the question of Artificial Intelligence (A.I.) and whether it is a threat to media composers.
* * * * *
We live in an increasingly technological world where the ideas surrounding A.I. are no longer confined to the fantastical worlds of Hollywood blockbusters. Today, A.I. is beginning to creep its way into many parts of our society, from manufacturing to social media. So how does it work with music?
To begin with, composers input data into the system. This could be as simple as teaching the A.I. basic music theory, or as complex as the types of timbres we associate with certain moods (for example, plucked strings with an energetic, happy mood). Once the system is designed, the A.I. is able to develop its own music based on the parameters it was given. The filmmaker then enters their required specifications into the system (e.g. happy, energetic, minimalist), and the system composes a piece of music that it thinks matches that specification.
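To make the workflow above concrete, here is a deliberately simplified, hypothetical sketch (not any real company’s system): mood tags map to musical parameters, and the generator picks notes within those constraints.

```python
import random

# Hypothetical mood-to-parameter mapping, as described above: each mood
# constrains the scale and tempo the generator is allowed to use.
MOOD_PARAMS = {
    "happy":      {"scale": [0, 2, 4, 5, 7, 9, 11], "tempo": 132},  # major scale
    "melancholy": {"scale": [0, 2, 3, 5, 7, 8, 10], "tempo": 72},   # natural minor
}

def generate_melody(mood, length=8, root=60, seed=0):
    """Return a list of MIDI note numbers drawn from the mood's scale.

    `root=60` is middle C; the seed makes the sketch reproducible.
    """
    params = MOOD_PARAMS[mood]
    rng = random.Random(seed)
    return [root + rng.choice(params["scale"]) for _ in range(length)]

melody = generate_melody("happy")
print(melody)  # eight MIDI pitches constrained to a C-major scale
```

A real system would learn these mappings from training data rather than hard-code them, but the basic shape is the same: the filmmaker supplies a tag, and the machine composes within the parameters that tag unlocks.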
So is it all doom and gloom?
During his keynote speech, Christian Henson played a piece of A.I. music and another piece composed by a human, then asked the audience which one we thought belonged to the machine. For me, it was easy to identify the machine’s piece because it sounded un-produced and processed. Later on, Ed Newton-Rex conducted the same test, but this time I got it wrong. How? Because Ed’s piece was composed by A.I. but, more importantly, produced by a human. My assumption that the delicate sweeping filter in the piece could only be of human origin was, strictly speaking, correct; it was the production, not the composition, that carried the human touch.
So what does this all mean?
Well, what Ed showed us is that A.I. can be something composers work with rather than simply reject.
In an age where it is increasingly important to churn out music at a fast rate, A.I. composition might be a way of assisting composers. As part of his presentation, Ed claimed that A.I. gives his company, Jukedeck, the ability to produce around 15 tracks a day.
That said, it’s important to remember that A.I. is bound to its binary state. It cannot feel (currently!), think, or react to the emotion on the screen as a human would. Additionally, as I stated earlier, A.I. is also bound by the preferences it is given and the data it receives; in simpler terms, it cannot think outside the box. As composers, our natural ability to react to, and support, the drama on screen or in a game is why our music is so integral to the media industry. Our ability to sympathise with human emotion allows composers to bridge the gap of meaning and context between the action on the screen and the audience in the cinema.
Take, for instance, the recent Christopher Nolan and Hans Zimmer/Benjamin Wallfisch retelling of the story of Dunkirk. (Spoilers ahead if you haven’t seen the movie.) In the cue ‘Home’, Zimmer slips in the opening theme of Elgar’s Nimrod. Nimrod instils any self-respecting Brit with an automatic sense of emotion and national pride. So when the civilian small ships appear on the horizon and we hear ‘Home’, we feel the emotion of Kenneth Branagh’s character, but also a nuance of national stoicism. This idea of ‘nuance’ was at the core of Christian Henson’s keynote and, as he says, it’s the reason we shouldn’t be worried about A.I.
In summary, it’s not all doom and gloom. Yes, there are some aspects of A.I. that are scary, such as our inability to truly comprehend it at this early stage. But modern composers must embrace it as the previous generation embraced computers. We must follow the example of Brian Eno and Ólafur Arnalds in their recent embrace of technology to invent new timbres and techniques for producing music. Although many composers might disagree with this sentiment, it’s crucial to contemporary practice to embrace technology rather than reject it altogether.