Question Time with Suzanne van Rooyen

Earlier this month I posted my Not-Review of Suzanne van Rooyen’s I HEART ROBOT.

Today I’m talking to them about robots, AI, music, and emotions. So, a warm welcome to the wonderful Suzanne! My question for them is:

How would you feel if an AI were created that could out-perform and out-emote any modern-day musical performer?

Hi Cat,

Thank you so much for having me on your blog today. You asked me a rather tricky question that I’m going to attempt to answer…

As a musician myself, my gut reaction to this question was ‘that’ll never happen.’ Then I had to take a step back, because this conceit is pretty much the premise of my novel I Heart Robot.

To adequately answer this, I need to unpack the question a little more. Firstly, what does out-perform mean? I’m going to interpret this as the technical ability of the musician: the speed and accuracy with which a pianist can hit the notes, the embouchure and air-pressure control of a flutist, the precision of both bowing and fingering for string players. Could a robot out-perform a musician in this context? Absolutely. And the scary thing is, these robots don’t even need to be particularly smart to do it. They don’t have to be smart at all.

There’s a fantastic music video by Nigel Stanford for his song Automatica which shows fairly basic assembly-line robots playing guitars, drums, and piano, and even spinning the decks. This is entirely plausible, and the dexterity this would take is easily programmed into machines of this sort.

If you were to add AI to this scenario, giving the robot the ability to learn and adapt, I have no doubt a robot – even a non-anthropomorphic one – could out-perform a human musician.

Out-emoting a human being, however, is rather different. Adding expression to one’s playing is no easy task; even some of the most gifted musicians struggle with it. Excellent technique can be developed relatively easily, but the ability to evoke specific emotions in an audience is something many people consider innate, and not easily acquired.

Having done my Master’s in music perception and cognition, I have to disagree a little here. From numerous studies on music and the brain, and on how music affects our emotions, music scientists have a pretty concrete understanding of how and why music makes us feel certain things. Combine certain elements of music (timbre, texture, tempo, dynamics) in the right way and you are pretty much guaranteed to make the listener feel sad or happy, angry or afraid. While I hate to reduce emotion in music to a recipe, I have to admit that you most certainly can, and recipes like these are exactly what robots can learn.
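To show just how simple such a recipe can be, here’s a toy sketch in Python. The features, thresholds, and categories are entirely made up for illustration – this is not a real music-cognition model, just the shape of one:

```python
from dataclasses import dataclass

@dataclass
class MusicFeatures:
    """A handful of toy descriptors -- real systems extract dozens more."""
    tempo_bpm: float    # speed of the music
    mode_major: bool    # major vs. minor tonality
    loudness_db: float  # average dynamic level, relative to full scale
    brightness: float   # 0..1 stand-in for timbral brightness

def emotion_recipe(f: MusicFeatures) -> str:
    """Map combined musical elements to a coarse emotion label.

    The thresholds here are invented purely for illustration.
    """
    if f.mode_major and f.tempo_bpm > 120 and f.loudness_db > -15:
        return "happy"
    if not f.mode_major and f.tempo_bpm > 140 and f.brightness > 0.7:
        return "angry"
    if not f.mode_major and f.tempo_bpm < 80 and f.loudness_db < -25:
        return "sad"
    if not f.mode_major and f.brightness < 0.3:
        return "afraid"
    return "ambiguous"

# A slow, quiet piece in a minor key comes out "sad", as the recipe predicts.
print(emotion_recipe(MusicFeatures(tempo_bpm=60, mode_major=False,
                                   loudness_db=-30, brightness=0.2)))
```

A real system would learn these boundaries from data rather than hard-coding them, but the principle – combine the right elements and the emotion follows – is the same.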

When I was at university, one of the research teams developed a clever little app that would listen to and analyze music in real time. A ball would then bounce around the axes of the five basic human emotions, marking the music as happy, sad, and so on. We deliberately went out of our way to confuse this app, but rarely managed to.

Our emotional response to music can, sadly, be reduced to a set of scientific principles based on how our brains analyze and process sound, as well as on our social and cultural conditioning. When we listen to Chopin’s Funeral March, we know that it is not only the minor chords that evoke such a strong emotional response in us, but also the tempo at which it is played, the way the musician leans on certain tones, delays the harmonic resolution of dissonances, and plays with the dynamic range of the music. If we took several different performances of Chopin’s piece and analyzed how every human musician performed it – creating averages for how great each crescendo was, how long a single fermata lasted – if we analyzed all the minutiae, we could potentially create a blueprint for the perfect performance. These details – reduced to mathematical formulae – could then be given to a machine. Essentially, we could program a robot to play like Horowitz or Lang Lang or – arguably, better still – as an amalgam of all the greatest pianists to have ever lived. Surely then, this robot would out-emote any human performer, wouldn’t it?
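If you’ll forgive a rough sketch of what that averaging might look like, here it is in Python. The performances and their numbers are invented placeholders, not measurements of anyone’s actual playing:

```python
from statistics import mean

# Invented expressive measurements from hypothetical recordings of the
# same passage -- each dict is one pianist's performance, reduced to numbers.
performances = [
    {"crescendo_db": 12.0, "fermata_s": 2.8, "rubato_ratio": 1.15},
    {"crescendo_db": 9.5,  "fermata_s": 3.4, "rubato_ratio": 1.22},
    {"crescendo_db": 14.2, "fermata_s": 2.1, "rubato_ratio": 1.08},
]

# Average every expressive parameter across performances: the "blueprint".
blueprint = {
    key: mean(p[key] for p in performances)
    for key in performances[0]
}

print(blueprint)
# {'crescendo_db': 11.9, 'fermata_s': 2.766..., 'rubato_ratio': 1.15}
```

A plain average is of course the crudest possible model – a real system would weight phrasing, context, and style – but it shows how expressive detail becomes plain arithmetic that a machine can follow.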

That’s a question I can’t answer. Part of what makes Lang Lang so appealing to certain listeners, or why others much prefer Horowitz to Ashkenazy, comes down to the individual idiosyncrasies of the player. Amalgamate those personalities and perhaps you’d be left with a comparatively bland performance. However, imagine that as a starting point. Imagine an AI capable of analyzing, imitating, and assimilating the best performances by the world’s most beloved performers, with unparalleled, flawless technique. I think then we may be in for a rather nasty surprise.

As much as I don’t want to admit that a robot – perhaps like Quinn in I Heart Robot – could ever out-emote a human musician, I also have to be honest that I don’t think it’s impossible.
