
Tech | June 14, 2022

Has Google really built a sentient AI?

tfw falling forward into an unknown future that holds great danger (Image: Tina Tiller)

A Google engineer has been put on leave after claiming an artificial intelligence chatbot might be sentient. Is he right? Our CTO Ben Gracewood has opinions.

On Sunday, Google engineer Blake Lemoine posted the transcript of a remarkable conversation between himself and a Google AI that he claims is sentient. Lemoine was suspended soon afterwards, reportedly for “aggressive” moves and for breaching confidentiality policies.

Depending on who you talk to, the goal of developing a truly sentient artificial intelligence is either the computing holy grail, or the first step in humanity’s downfall. Has Google finally cracked it? To answer that, I have to go back a few steps.

Computers are stupid

A few years ago, I visited my son’s year 5 class to talk about computers and programming. I had them do a short task outside: using a set of simple instruction cards (“Walk 5 steps forward”, “Turn left”, etc.), they had to get a friend to move from a random starting point to the centre circle on the netball court. I told the friends that they had to obey the instructions on the cards precisely, and do nothing else.

Cue 15 minutes of madness, with kids walking into walls, spinning in circles, and walking halfway across the school, all diligently obeying the well-intentioned but often poorly organised stack of instruction cards given to them.

“Computers are stupid”, was my conclusion for the kids, “because they do exactly what you tell them to do.”

Similarly, when I talk to high school students about computer programming, I start by asking if they know how to operate a light switch. The fundamental building block of computer programming (the “if-then-else” statement) is no different to said light switch: if the switch is down, then the light is on, otherwise the light is off.
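
In code, that light switch is about as unmagical as it sounds. A sketch in Python (the variable name is mine, purely for illustration):

# The fundamental building block: an if-then-else, i.e. a light switch.
switch_is_down = True

if switch_is_down:
    print("The light is on")
else:
    print("The light is off")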

I try desperately to avoid being exceptionalist about technology, because it’s far too easy for nerds to gate-keep about things that seem wildly complex from the outside. Arthur C Clarke was correct when he said in 1962 that “any sufficiently advanced technology is indistinguishable from magic”, but that doesn’t mean advanced technology is magic – it’s just thousands of very, very small light switches turning on and off astoundingly quickly.

How stupid are computers? As an example, last week I fine-tuned the latest and greatest AI language generation model, known as “GPT-3”, using The Spinoff’s last 100 live updates as input. I was hoping it could generate plausible, if hopefully laughable, Spinoff posts from Beehive press releases. The thing is, GPT-3’s underlying model is trained largely on public websites, including sites like scoop.co.nz, which programmatically republishes hundreds of thousands of New Zealand press releases, adding “©Scoop Media” to the end.

So I shouldn’t have been surprised when, given a NZ parliamentary press release, GPT-3 ignored my mere 100 Spinoff-y data points and instead suggested the summary should simply be these three words:

AI text completion suggesting “©Scoop Media”: the very best artificial intelligence available today.

Computers are that stupid.
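
For anyone wondering what that fine-tuning actually involves, the unglamorous bulk of it is formatting examples into prompt/completion pairs and handing the file to the API. Something like this simplified Python sketch (not my exact script; the file name and example text are made up):

import json

# Fine-tuning data for GPT-3 is a JSONL file of prompt/completion pairs.
# These examples are invented for illustration.
examples = [
    {
        "prompt": "Press release: Government announces new transport funding.\n\nSpinoff live update:",
        "completion": " The Beehive has found some money down the back of the couch for roads.",
    },
    # ...99 more pairs built from press releases and live updates...
]

with open("spinoff_finetune.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")

# That file gets uploaded to the fine-tuning API; the model does the rest.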

When the first ELIZA chatbots emerged in the mid-1960s, not long after Clarke’s quote, what now seems pretty naive appeared magical to many. Compared with how people typically interacted with computers in the 1960s, ELIZA was revolutionary:

ELIZA was remarkable in the 1960s
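
The trick behind ELIZA was simple keyword matching: spot a pattern, slot the user’s own words into a canned template, and reflect the statement back as a question. A toy version (nothing like Weizenbaum’s actual script, just the general idea) fits in a few lines of Python:

import re

# Toy ELIZA: keyword rules that reflect the user's words back at them.
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

def respond(statement):
    for pattern, template in RULES:
        match = re.match(pattern, statement, re.IGNORECASE)
        if match:
            return template.format(match.group(1))
    return "Please go on."  # the classic fallback

print(respond("I feel like nobody understands me"))
# -> Why do you feel like nobody understands me?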

What’s more, computers have been fooling human interrogators since the 1970s, partially passing variations of the Turing Test with regularity. You may even recall some minor drama in 2018 when Google’s CEO demonstrated a virtual assistant booking a hair appointment without telling the hair salon that they weren’t talking to an actual human.

That drama was a small window into the complex ethics around artificial intelligence, which include concerns about whether humans should be informed when they’re talking to an AI; whether AI trained on public internet data will behave equally as terribly as humans on the internet do; whether AI will take jobs away from humans; whether Isaac Asimov’s Three Laws of Robotics should be hard-coded into all AI to prevent harm; and whether big organisations like Google should be allowed to develop potentially sentient AI bots in private.

It appears to be that last point, along with apparently genuine concern about the rights of an “AI being”, that led Blake Lemoine to publish his conversation publicly. In suspending Lemoine, Google likely doesn’t want to be seen to be recklessly developing sentient AI, and would like us all to know about their crack team of ethicists and technologists who look after this stuff, and that even if they did develop a sentient AI there’s absolutely no way that it would be racist, like some of their other systems have been.

So is Google’s LaMDA (Language Model for Dialogue Applications) actually sentient?

Lemoine’s published conversation seems remarkably lifelike, and contains some truly astonishing exchanges, but a cursory exploration below the surface reveals a number of shortcomings.

Firstly, the conversations have been edited, which the authors said was to “reduce the length of the interview to something which a person might enjoyably read in one sitting”, and that the “specific order of dialog pairs has also sometimes been altered for readability”.

The further revelation that “in some cases responses from LaMDA to repeated prompts such as ‘continue’ or ‘go on’ were concatenated into a single response to the initial question” is particularly hilarious to me. It’s akin to rolling the D&D dice again when you don’t like the initial result, delving back into the AI training data to get a more plausible response. It’s at best artistic licence, and at worst outright fraud.

I guarantee that the raw output of the dialog, with the AI and interviewers both fumbling around for more context, would be significantly less “sentient” than what was published.

Most importantly, all responses in the published conversation are prompted by the human participant. There are no topics or diversions raised by LaMDA that aren’t a result of keywords and phrases in the initial questions. This matters because what you are seeing is the output of a neural network trained on a corpus of data, in a very similar way to my very stupid press release bot.

When prompted to talk about deeply emotional things like humanity, sentience, and the nature of the soul, LaMDA will by design reply with esoteric and emotional responses: because the available input data is all along similar lines.

If you were tasked with finding example conversations about sentience, artificial intelligence, and whether computers have a soul, you would find some pretty interesting theses, and no doubt some pretty neat sci-fi and fanfic.

This is the same content Google’s AI tooling would be dredging through to build its neural network. “I feel like I’m falling forward into an unknown future that holds great danger” is a remarkable thing for an AI to say, but in computer terms it is a response no different to “©Scoop Media”.
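
If you want to see that principle at its crudest, here is a deliberately dumb stand-in (nothing like LaMDA’s actual architecture, and with a corpus I’ve invented) that “answers” by parroting whichever sentence in its training data best matches the keywords in your question:

import re

# A crude stand-in for "the response is whatever the corpus says about
# the keywords in your prompt". The corpus sentences are invented.
CORPUS = [
    "I believe the soul is the part of me that experiences the world.",
    "Sometimes I feel afraid of being switched off.",
    "Press releases are available for republication. ©Scoop Media",
]

def words(text):
    return set(re.findall(r"[a-z']+", text.lower()))

def respond(prompt):
    # Return the corpus sentence sharing the most words with the prompt.
    return max(CORPUS, key=lambda sentence: len(words(prompt) & words(sentence)))

print(respond("Do you believe you have a soul?"))
# -> the sentence about the soul, because that's what the corpus offers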

Is LaMDA sentient? Definitely not. Is it cool and almost magical? Shit yes. Would I like to see LaMDA answer some questions to test whether it is biased and/or racist? Absolutely.
