
Science | October 27, 2017

‘Right now, we are all Truman’: how robots are changing the way humans talk


Humans' susceptibility to group pressure extends to pressure from a group consisting solely of robots, according to new research by Dr Christoph Bartneck. Robots are changing the way we talk, and so the way we think. Don Rowe talks to Dr Bartneck and asks the question on everyone's lips: is it time to freak out?

Our vocabulary is both the framework upon which we drape our personal concept of reality and the toolkit we use to interact with objective reality itself. Controlling terminology is thus an incredibly powerful method of influencing and affecting humanity. Consider how much damage has been done to the ideas of truth and accountability by the phrase "fake news", for example, or, on the other hand, why it's so important to the LGBTQ community that personal pronouns are respected.

So who decides what things are called? Generally it's whoever has the loudest voice in the community in which you exist – the people you associate with, and for whom you adapt your vocabulary in order to communicate. A thing is called a thing because everybody calls it a thing. As our interactions shift towards the human-computer, however, this organic process is at risk of being co-opted, like everything else, by anonymous tech giants with blurry motives.

Because, as new research conducted by Dr Christoph Bartneck at the University of Canterbury has shown, what robots have that Trump doesn't is our trust. The relationships we have with our devices are intensely and increasingly intimate; we trust our devices, and thus their ability to influence and persuade us is incredibly powerful.

And you just wait until we start having sex with them.

Dr Christoph Bartneck

The Spinoff: How might these robots start to appear in our lives?

Christoph Bartneck: The cellphone is one of the earliest stages. It's essentially a speech-enabled device, and these devices are getting to a point where they actually are useful: we know that when we ask a question, the answer will increasingly be a useful one. The device doesn't necessarily understand the meaning of your words, but it does recognise the words correctly. The challenge is in understanding the meaning of what you're saying.

People spend more time with their phones than with their partners, and I would also say that people have more physical contact with their phones than with their partners. We are building very intimate relationships with these devices. This all feeds into the persuasiveness of the agent. If you go to a car dealer and you want to buy a car, you know that whatever the salesperson is going to say, you take it with at least a grain of salt – if not a truckload. But if your brother tells you, “I saw this really good car, it’s a good price, maybe you should have a look at it,” that’s a totally different story.

Who you trust very much depends on the relationship you have with that person, and the relationships we have with technology are becoming more and more personal. Even more so if you start talking to them. Even more so if they start to resemble humans.

People are receptive to group pressure. If your mates all say 'this is the best thing, and you should have it', the chances are much higher that you'll agree, even if it's wrong. So group pressure has a real influence on people, and if you have technology that is personal to you, that is close to you, and there are many of them, the persuasiveness goes through the roof.

THE PRESIDENT OF THE UNITED STATES OF AMERICA, LADIES AND GENTS

Persuasion is much more powerful if you don't feel like you're getting a pep talk or a moral speech. What you need is a way of persuading people on a much more subtle level. Take Donald Trump: he instructed his staff not to use the term global warming, but instead to say "extreme weather events". Or Apple: the official policy is that your iPhone doesn't crash. It never crashes. It simply "stops responding".

Your vocabulary is very powerful, and politicians have known this for a long time. Everything depends on how you phrase things – pro-choice versus pro-life, for example. How could anybody possibly be against life, or against choice? Phrasing is very, very important. Now, what our research has shown is that people are susceptible to group pressure even when it is exerted by robots. If robots change their language, they change the language of the people, and that changes the valence – the attitude – that people have towards a certain word. They change how you will think about something.

The question then becomes, who is controlling the vocabulary?

The human vocabulary is changing constantly. Our language changes, and whoever participates in the dialogue influences the dialogue. Through the mass media this process is accelerated; we are exposed to words more quickly. Now we have Kim Jong-un being called Little Rocket Man, we have Weapons of Mass Destruction – you can push out terms very quickly. That's the mass media. But with mass media we already have a filter: we know we shouldn't believe everything we read in the newspaper. Our filter for robots, however, is much lower. We are not ready to filter the information that comes from our personal assistants.

We have a relationship with them. It’s not a copy of a human relationship, it’s a different quality, but still it is a type of relationship.

How is it possible that, with so few of us interacting with robots, they could exert this pressure on humanity as a whole?

In this instance the question was: how many robots do you need in a society, all consistently calling something "extreme weather events" instead of "global warming", before humans will pick up that term and use it instead of the other? Our simulation shows it's around 10 percent. So if we have four million people in New Zealand, it would require 400,000 robots in New Zealand to reach a penetration rate significant enough for robots to sweep our usage of words within a relatively short time. Ten percent does seem shockingly low, but already basically everybody has a smartphone – and that is well more than 400,000 devices.
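
Bartneck's published simulation isn't reproduced here, but the flavour of it is easy to sketch. The toy "naming game" below is an illustration only – not the researchers' code – in which a committed minority of robot speakers always uses the new term, while humans adjust their vocabulary through random pairwise conversations. In simple models of this kind, a committed share of roughly ten percent is typically enough to tip the whole population, which is the ballpark Bartneck describes.

```python
import random

def simulate(n_agents=500, robot_fraction=0.10, interactions=100_000, seed=1):
    """Toy 'naming game' with a committed minority of robot speakers.

    An illustrative sketch, not Bartneck's actual model: robots always say
    the new term ('extreme weather events'); humans start with the old term
    ('global warming') and adjust through pairwise exchanges.
    """
    rng = random.Random(seed)
    n_robots = int(n_agents * robot_fraction)
    committed = [i < n_robots for i in range(n_agents)]       # robots never change their minds
    vocab = [{"new"} if c else {"old"} for c in committed]    # words each agent is willing to say

    for _ in range(interactions):
        speaker, hearer = rng.sample(range(n_agents), 2)
        word = rng.choice(sorted(vocab[speaker]))
        if word in vocab[hearer]:
            # Successful exchange: both parties settle on the shared word.
            if not committed[speaker]:
                vocab[speaker] = {word}
            if not committed[hearer]:
                vocab[hearer] = {word}
        elif not committed[hearer]:
            # Failed exchange: the hearer remembers the unfamiliar word.
            vocab[hearer].add(word)

    humans = [v for v, c in zip(vocab, committed) if not c]
    return sum(v == {"new"} for v in humans) / len(humans)

if __name__ == "__main__":
    for frac in (0.05, 0.08, 0.12):
        share = simulate(robot_fraction=frac)
        print(f"robots {frac:4.0%} of population -> {share:5.0%} of humans now say the new term")
```

In runs of this sketch, a robot share below the tipping point leaves the old term dominant more or less indefinitely, while a share just above it lets the new term sweep through after a comparatively small number of conversations.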

The appearance and abilities of a robot will affect how people communicate with it. If you make a robot extremely human-like, the expectation will be that it is essentially a strong AI [artificial intelligence], that can understand everything you say, not only the words themselves but the meaning. That is a very, very dangerous game to play right now because technology is just not that advanced.

Did you see the movie Her? What did you make of it in the context of your research?

What we see already with online dating is that we're already there. OK, it's not speech – you're typing into a machine and you can never be certain who's on the other end – but a lot of people meet and fall in love through conversations that happen on the internet, without ever having had a phone call or anything. In that sense it's highly realistic. I also love the way the movie refuses to treat the future as something distant and unreal. It's a little bit in the future, but still realistic.

In terms of intimacy, anything but sex is easy, of course. When they start to engage in – or want to have – physical relations, that's when they essentially hit the wall. That was a little bit difficult. Overall I love the movie; my only critique is that for the entire movie I was wondering how it was going to end, and the ending they chose – the AI just decides to go away – seemed a bit disappointing. There's no reason for it; it was just a way of closing the story, I guess. Ultimately, if such an AI existed, it would be immortal and would outlive us all. It could adapt any way it liked to make us all happy. It could come to a point where we prefer to interact with machines rather than humans.

There’s an American company called Real Doll, and for decades they’ve been building extremely highly realistic dolls mainly for the purposes of companionship and sexual interaction. In December they’re going to launch the first robotic Real Doll called Harmony. They already have this worked out – they have highly realistic anthropomorphic shapes, so let’s just add AI to it. It’s not released yet, it’s still a bit early, but that is one of the futures that we’re looking at. Different types of relationships, not just sexual, that we have with humans we’ll be able to have with machines.

So you could not only have a relationship with a machine that looks like a human, and that you implicitly trust, but you could also have it programmed by someone with nefarious intentions.

Or, to take it even further, what if the robots themselves decide to change their vocabulary? What if they decide to refer to themselves as your helper, or your servant, while in reality the actions they take are quite different?

SCREENGRAB: THE TRUMAN SHOW

So what are the implications then?

You just need to look back at the birth of subliminal advertising. We did that in the 50s and it proved highly successful, but we banned it. Subliminal advertising is banned – that was the reaction. If you have the power to influence humans, you could use it for good or for bad. Say we used subliminal advertising in New Zealand to help with our obesity problem: you would have a positive influence on humanity, but there are still some big questions around ethics. Do the ends justify the means?

We are extremely vulnerable right now to interactive technology. That's the reason fake news stories on social media are such a powerful tool – people believe them. If you put the same stories in the newspaper, people would laugh them out of the room. But on Facebook your friend liked it, your brother commented on it – it's a whole different story. The point is, the change of language that I was talking about is so subtle that it is at the level of subliminal advertising. You wouldn't even notice that you are being influenced. That is dangerous.

Should we be hopeful or frightened?

It’s difficult to say. I don’t dare to make a statement on that, but because we are so vulnerable right now, we have to educate people. The least we have to do is to bring this to the attention of the public so that they can start building up this filter that if they see something, they know how to react. It’s very similar to the Truman Show when his wife is talking about cereal, and Truman screams at her “Who are you talking to?” We all know it’s an advertisement, but Truman doesn’t have that filter. Right now we are all Truman.


The Spinoff’s science content is made possible thanks to the support of The MacDiarmid Institute for Advanced Materials and Nanotechnology, a national institute devoted to scientific research.
