What's Lang Lang Got That AI Hasn't?


In my first blog on the role of AI in the music industry, which you can check out here, I explored how software is transforming the way music is produced behind the scenes. That led me to look further into how AI technology is learning to perform music itself, and to share my own opinions, as a musician, on its exciting potential but also its definite limitations.

The idea of a world controlled by robots has long inspired science-fiction films, books and stories, and now that fantasy has become a reality for music. New Zealand composer Nigel Stanford released his hit music video ‘Automatica’ on YouTube in 2017: a jaw-dropping display of robots playing musical instruments and eventually making them explode… not how all musical performances should end! He described the process as an exploration into “what creativity means as more and more processes become automated”. Behind the scenes, Stanford worked with Kuka Robotics and their partner Andy Flessas to program the robots’ playing. For this they used Flessas’ ‘Robot Animator’ plugin, which allowed them to “animate a physical robot and generate a program that instructs the real robot to move in exactly the same way”. Demand for this kind of spectacle on stage could well rise in the future, meaning concerts could host more performances featuring both people and machines, or even entirely robot-led shows… boy bands are so last year!

Machines are also acquiring the skills to master musical instruments, from reading the first few notes to rocking out in performance. An earlier project, from the Tokyo Metropolitan University in 2012, created a useful handheld device named Gocen: a “handwritten notation interface… for learning music”. In essence, this learning tool lets you scan a written score, which the device reads and plays back in real time through an accompanying speaker. Like the Kena.AI app that I explored in my first blog, projects like these are improving our ability to learn music, and AI’s ability to master it for itself.
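The core idea behind a scan-and-play device like Gocen can be pictured as a small pipeline: recognised note symbols are mapped to pitches, then turned into timed events a speaker or synthesiser could render. The sketch below is a hypothetical illustration of that idea, not Gocen's actual implementation; the note names, octaves and beat length are assumptions for the example.

```python
# Toy sketch of a scan-and-play pipeline (illustrative only, not Gocen's code):
# recognised note names become frequencies and durations for playback.

NOTE_SEMITONES = {"C": 0, "D": 2, "E": 4, "F": 5, "G": 7, "A": 9, "B": 11}

def note_to_frequency(name: str, octave: int = 4) -> float:
    """Convert a note name and octave to a frequency in Hz,
    using equal temperament with A4 = 440 Hz."""
    midi_number = NOTE_SEMITONES[name] + 12 * (octave + 1)  # e.g. A4 -> 69
    return 440.0 * 2 ** ((midi_number - 69) / 12)

def schedule(notes, beat_seconds=0.5):
    """Turn a recognised sequence of (note, octave) pairs into
    (frequency, duration) events a playback device could render."""
    return [(round(note_to_frequency(n, o), 2), beat_seconds) for n, o in notes]

# A rising C major arpeggio, as if read from a handwritten score.
events = schedule([("C", 4), ("E", 4), ("G", 4), ("C", 5)])
print(events)  # e.g. [(261.63, 0.5), (329.63, 0.5), (392.0, 0.5), (523.25, 0.5)]
```

The real device does the hard part - optically recognising handwritten notation - but once notes are identified, the playback side reduces to exactly this kind of event scheduling.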

Undoubtedly, the music industry is embracing AI as a brilliant accessory for the production and performance of music. From a commercial point of view, services like Spotify and Amazon Music have so modernised consumers’ access to tracks that CD sales dropped by 23% in 2018, according to the BBC, indicating that customers are rapidly switching to streaming instead. AI-powered features such as Spotify’s ‘Discover Weekly’ constantly serve us personally appealing music and playlists that match our mood - so we can avoid wandering around HMV aimlessly! On the production side, companies like AIVA, Amper Music and Brain.FM have built intelligent systems that compose music for us, already cutting production time and costs. And as the engineering industry matures, robots such as those produced by Kuka Robotics are being programmed to play music, which is both useful and visually incredible.

If AI can compose on an industrial scale, that raises an increasingly controversial issue: who rightfully owns the music? There are arguments at every step of the way, from the composition of the first few notes to the streaming of the finished track online. Arguably, it belongs to the designers and programmers of the AI software - but if the programmers only ever hear the perfected versions of the scores the computer produces, can they truly be considered artists, having contributed no musical ability to the finished product?

Alternatively, does the computer itself own the music? AI’s place in copyright law is questionable: in US copyright law, for example, the word “human” does not appear once. Consequently, it is unclear whether AI can be held liable for reproducing music in a style very similar to an already famous artist’s. Furthermore, if an AI system is trained solely on the songs of one artist - say, as an article from The Verge suggests, Beyoncé - “if that system then makes music that sounds like Beyoncé, is Beyoncé owed anything?” According to legal experts, the answer is no, however unfair that may seem to the original composer.

If not the computer, does the customer using the system own the music created? Depending on their direct input of filter combinations - genre, mood and style - the composition produced is unique: so surely they played a part in making the piece? This problem could hold back technology’s development in the music industry. As AI develops more human abilities, it seems wrong to exempt it from the rules, however ambiguous they remain.

This all leads us to the question: what can’t technology do? Despite exploring many brilliant examples of AI enhancing the music industry during my research, I do believe there are limitations. Sure, AI software can create music with intricate technical accuracy, perfect harmonies and typically correct structure, but what’s it all for? If, in the art industry, the sole purpose of each framed piece were to ‘look pretty’, what more could art possibly teach us beyond conventional beauty? Instead, gallery-goers seek thought-provoking art: work that conveys raw emotion, expression and an insight into the artist’s mind when his or her idea first blossomed. We can look at music in exactly the same way. When we listen to music, we have an opportunity to empathise with a composer’s wandering mind. I believe that a computer simply cannot offer that insight to us because it lacks the personal experience and human spirit that grant music its personality.

As somebody for whom music was a powerful part of childhood and remains a passion at the piano today, I would argue that a musician’s ability to play with feeling will always surpass his or her technical expertise. Watching a six-year-old piano maestro seize the trophy I wanted at my local music festival in 2010, I realised that I simply couldn’t match his outstanding accuracy. However, I discovered that I could do something he maybe could not: I could play with feeling and deliver my music as if it were a story, as opposed to an impressive robotic challenge. Feeling is a gift that no amount of technological genius can offer us. If it could, all the music of the present and future would be AI-generated - a quicker, cheaper, more polished solution - and yet it is not.

With that in mind, I can say for sure that whilst AI is a revolutionary advance in 21st-century technology, one that comes close to surpassing the intelligence of its own creators, ultimately nobody can express us better than ourselves.

Written by Florence Grist

Based in the UK, Florence Grist is a freelance writer who enjoys writing on technology and sustainability issues and especially how AI has the potential to both transform our understanding of the environment and help protect fragile ecosystems.
