When it comes to understanding artificial intelligence, is science fiction just a pesky distraction from the real dangers out there? Microsoft’s authority on all things AI seems to think so, reports Jihee Junn.
“With artificial intelligence, we are summoning the demon,” declared Elon Musk back in 2014. “In all those stories where there’s the guy with the pentagram and the holy water, it’s like – yeah, he’s sure he can control the demon. Doesn’t work out.”
Over the years, the Tesla and SpaceX chief executive has regularly come out to express his concerns over the future of AI. He’s even warned of a possible Terminator-style robot uprising, telling a group of US governors earlier this year: “I keep sounding the alarm bell, but until people see robots going down the street killing people, they don’t know how to react, because it seems so ethereal.”
Is humankind doomed? Could the world really be terrorised by artificially sentient beings? Are we on the verge of creating our own existential threat? Does this mean we have to endure another resurrection of Arnold Schwarzenegger’s acting career? For Eric Horvitz, technical fellow and director of Microsoft Research, the likelihood of such a scenario occurring remains resoundingly low. “I think there’s a much higher probability that humanity could wipe out humanity. Way higher,” he says.
“These are all interesting long-term questions, but I think they’re getting way too much attention right now… I see no reason why powerful intelligences would be malevolent. We hope the machines we build will not be so malevolent as human beings.”
“I find it interesting that people have somehow forgotten about nuclear weapons when in the 1980s, that was a huge concern. All of a sudden, it’s science fiction that dominates our minds, and to me, that’s a huge distraction.”
However, that doesn’t mean the dangers of AI don’t exist. After all, Horvitz — along with hundreds of other prominent technologists such as Stephen Hawking, Shane Legg, Steve Wozniak, and Musk himself — signed an open letter in 2015 committing to a future of “robust and beneficial” artificial intelligence. But unlike Musk’s rather more dystopian outlook (in another open letter earlier this year, Musk labelled the threat of AI as greater than any threat posed by North Korea), Horvitz’s fears lie more in how we, as humans, could exploit the properties of AI for selfish and manipulative gain.
“One of my deepest concerns with AI right now in terms of malevolence is that it could take what we call ‘marketing’ and ‘propaganda’ to new levels. To me, this is a much bigger concern and much nearer term than any Terminator kind of scenario,” he says.
“[AI could potentially] use the weaknesses of human cognition — the gaps and biases in our brains — to modify someone’s beliefs. Psychological operations – or PSYOPS, as they’re called in the military – are all done by hand. Imagine what it would be like if they were done by machines.”
Horvitz gives the example of how social media platforms such as Facebook and Twitter are already being used by AI systems to disseminate manipulative material on a wide scale. And as practices like psychometric profiling and automated news writing proliferate, fake news and misinformation could reach entirely new heights, essentially creating a weaponised AI propaganda machine.
Then there are examples where other forms of advanced technology and AI intersect. One example Horvitz gives is of a video where the words and expressions of politicians are manipulated in real time. “Within a few years, AI systems plus advanced graphics will synthesise in a realistic way to have a political figure say anything they want,” he says. “Systems being taught on how to manipulate you… [do you] think malevolent governments are just going to sit on their hands and not use these methods?”
“The early innovators for centuries in technology have been armies, militaries, and defence infrastructures. It’s unclear if AI technologies will be deployed in a way that’s stabilising or destabilising… but you have to be ready for that.”
In their efforts to stay ahead of the curve, leading tech companies from Facebook to Google have invested millions of dollars in their AI projects. Microsoft, which has been pushing hard on the AI front as part of its pivot towards cloud computing, has also integrated the technology into several of its consumer products, such as Cortana, Translator, and PowerPoint Designer. But it’s Tay, the company’s ill-fated AI chatbot from last year, that’s attracted the most attention thus far. Learning from those interacting with her on Twitter, Tay was transformed into a fully-fledged troll in less than 24 hours, spewing racist and sexist hate wherever she could.
While Tay’s failure attracted plenty of criticism of AI at the time, Horvitz says that since then, the team has been able to learn from its mistakes to create more advanced (and less offensive) automated beings. “We had teams look at it very carefully and it was very instructive for creating new AI technologies that understand hacking, bad words, and what humans find offensive,” he says.
“Since then, other agents have been fielded that have been very successful, such as Xiaoice in China.” In fact, Xiaoice (known as Zo in the English-speaking world) is so popular in China that she’s become a celebrity in her own right, starring in everything from TV talk shows to broadcast news.
While much of the innovation around AI may seem frivolous, the technology can also be radically life-changing, with Horvitz labelling it “the sleeping giant for healthcare”.
“I’m stunned by how little of it’s been applied to this day. Why haven’t we translated technology even from the 1980s and 1990s into healthcare? It could’ve made a huge difference by now.”
“We don’t see the positive implications facilitated enough in our press and public because movies about Terminator sell many more tickets than movies about saving people from cholera epidemics,” he says, referring to how AI could be used to predict the spread of the disease before it breaks out, potentially saving up to 100,000 people from dying each year.
As well as preventing potential death or disease, AI-powered products have also revolutionised the lives of those living with disabilities. YouTube, for example, has long used speech-to-text software to automatically caption its videos, while researchers at IBM are currently using Watson to help people with cognitive or intellectual disabilities.
Most recently, Seeing AI, Microsoft’s talking camera app for those with visual impairments, was released on iOS. The app will not only recognise the exact product in front of you — such as a Coke can or an apple — but will tell you the age, gender, and appearance of a specific person, even going so far as to name them if it recognises their face from sources like the internet.
Whether AI will prove a stabilising or destabilising presence remains uncertain. But ultimately, Horvitz says it’s silly to worry, urging us to pursue humankind’s “natural curiosity” instead.
“Worrying means sitting on your hands. Let’s figure it out instead,” he says. “Let’s be aware instead of walking into the darkness without knowing.”
Jihee Junn travelled to Seattle courtesy of Microsoft. Read more about the trip here.