In my first blog post on the role of AI in the music industry, which you can check out here, I explored how software is transforming the way music is produced behind the scenes. That led me to look further into how AI technology is learning to perform music itself, and to share my own opinions, speaking as a musician, on AI’s remarkable potential but also its very real limitations.
The idea of a world controlled by robots has long been the inspiration for science fiction films, books and stories, and now that fantasy has become a reality for music. New Zealand composer Nigel Stanford released his hit music video ‘Automatica’ on YouTube in 2017: a jaw-dropping display of robots playing musical instruments and eventually making them explode… not how all musical performances should end! He described the process as an exploration into “what creativity means as more and more processes become automated”. Behind the scenes, Stanford worked with Kuka Robotics and their partner Andy Flessas to program the robots’ playing ability. For this they used Flessas’ ‘Robot Animator’ plugin, which allowed them to “animate a physical robot and generate a program that instructs the real robot to move in exactly the same way”. Demand for this kind of mad display on stage could well rise in the future, meaning concerts could host more performances featuring both people and machines, or even entirely robot-led shows… boy bands are so last year!
Robots are also acquiring the skills needed to master musical instruments, from reading the first few notes to rocking out in performance. An earlier project, from the Tokyo Metropolitan University in 2012, created a useful handheld device named Gocen: a “handwritten notation interface… for learning music”. In essence, this learning tool allows you to scan a written score, which the device reads and then plays in real time through an accompanying speaker. Like the forthcoming app Kena.AI that I explored in my first blog, projects like these are improving our ability to learn music, and AI’s ability to master it for itself.
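To give a feel for what a device like Gocen has to do under the hood - its actual code isn’t something I have access to, so this is purely my own illustrative sketch - here is a minimal Python example that “reads” a score written as note names and converts each note into the frequency a speaker would need to sound it:

```python
# My own illustrative sketch, not Gocen's actual code: the pitch-lookup step
# of a notation-to-sound pipeline. Parse note names from a "scanned" score
# and convert each to an equal-temperament frequency.

# Semitone offsets of the natural notes from A4 (440 Hz), within one octave.
SEMITONES_FROM_A4 = {"C": -9, "D": -7, "E": -5, "F": -4, "G": -2, "A": 0, "B": 2}

def note_to_frequency(note: str, octave: int = 4) -> float:
    """Return the frequency of a note, e.g. ('A', 4) -> 440.0 Hz."""
    semitones = SEMITONES_FROM_A4[note] + 12 * (octave - 4)
    return 440.0 * 2 ** (semitones / 12)

def play_score(score: str) -> list[float]:
    """'Read' a space-separated score and return the frequencies to play."""
    return [round(note_to_frequency(name), 2) for name in score.split()]

print(play_score("C E G"))  # the notes of a C major triad
```

Of course, the real device also has to recognise handwriting and handle rhythm, dynamics and chords - this only captures the simplest step, turning a recognised note into a pitch.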
Undoubtedly, the music industry is embracing AI as a brilliant accessory for the production and performance of music. From a commercial point of view, services like Spotify and Amazon Music have modernised consumers’ access to tracks to the point where, according to the BBC, CD sales dropped by 23% in 2018, indicating that customers are rapidly switching to streaming instead. AI-powered features such as Spotify’s ‘Discover Weekly’ are constantly serving us personally appealing music and playlists that match our mood - so we can avoid wandering around HMV aimlessly! On the production side, companies like AIVA, Amper Music and Brain.FM have built intelligent systems that compose music for us, already reducing time and costs. And as the engineering industry matures, robots such as those produced by Kuka Robotics are being programmed to play music, which is both useful and visually incredible.
If AI can compose on an industrial scale, that raises an increasingly controversial question: who rightfully owns the music? There are arguments at every step of the way, from the composition of the first few notes to the streaming of the finished track online. Arguably, it belongs to the designers and programmers of the AI software - but if the programmers only ever hear the perfected versions of the scores the computer produces, can they truly be considered artists without having contributed any musical ability to the finished product?
Alternatively, does the computer itself own the music? AI’s place in copyright law is questionable: in US copyright law, for example, the word “human” does not appear once. Consequently, it is unclear whether AI can be held accountable for reproducing music in a style very similar to that of an already famous artist. Furthermore, if an AI system is trained by studying the songs of only one artist - for example, as an article from The Verge suggests, Beyoncé - “if that system then makes music that sounds like Beyoncé, is Beyoncé owed anything?” According to legal experts, the answer is no, however unfair that may seem to the original artist.
If not the computer, does the customer using the system own the music created? The composition is made unique by their direct input - the combination of filters such as genre, mood and style - so surely they played a part in the making of the piece? This problem could hold back technology’s development in the music industry. As AI develops more human abilities, it seems wrong to excuse it from the rules, however ambiguous those rules may be.
This all leads us to the question: what can’t technology do? Despite exploring many brilliant examples of AI enhancing the music industry during my research, I do believe there are limitations. Sure, AI software can create music with intricate technical accuracy, perfect harmonies and typically correct structure, but what’s it all for? If, in the art world, the sole purpose of each framed piece were to ‘look pretty’, what more could art possibly teach us beyond conventional beauty? Instead, gallery-goers seek thought-provoking art: work that conveys raw emotion, expression and an insight into the artist’s mind when his or her idea first blossomed. We can look at music in exactly the same way. When we listen to music, we have an opportunity to empathise with a composer’s wandering mind. I believe that a computer simply cannot offer us that insight, because it lacks the personal experience and human spirit that grant music its personality.
As somebody for whom music was a powerful part of childhood and for whom the piano remains a passion today, I would argue that a musician’s ability to play with feeling will always surpass his or her technical expertise. Watching a six-year-old piano maestro seize the trophy I wanted at my local music festival in 2010, I realised that I simply couldn’t match his outstanding accuracy. However, I discovered that I could do something he perhaps could not: I could play with feeling and deliver my music as if it were a story, as opposed to an impressive robotic challenge. Feeling is a gift that no amount of technological genius can offer us. If it could, all the music of the present and future would be AI-generated - a quicker, cheaper, more polished solution - and yet it is not.
With that in mind, I can say for sure that whilst AI is a revolutionary advance in technology for the 21st century, at times coming close to the intelligence of its own creators, ultimately nobody can express us better than ourselves.