Are we ready for an influx of AI-generated muck? Toby Manhire on the perils ahead.
The explosion into the mainstream of generative artificial intelligence has prompted a similarly bewildering profusion of ideas about the various tools’ potential to complicate, improve or pollute whatever they might encounter. In that swill you’ll find everything from party-trick to hyperbole, from life-enhancing breakthroughs to seriously worrying prospects. Putting to one side for the moment any distracting thoughts of, well, an AI-induced extinction of humanity, there is plenty to think about as far as impact on democracy is concerned. Especially if you’re, say, hurtling towards an election.
The headline of the week in the AI and New Zealand politics category related to the National Party’s use of AI-generated images for throwaway social posts. The use of such materials does invite discussion around the way these illustration tools feed on human creators’ material. And there is undeniable amusement value, in recalling the “pretty legal” Eminem-esque soundalike (a field now facing its own reckoning with AI), and in political leaders confronting heinous and hopeless AI-generated likenesses.
Though National’s use of AI is innocuous, how would it be if, say, an attack ad were posted with AI-generated images depicting Chris Hipkins presiding over a dystopian near-future? That’s pretty much what the Republicans did last month, except the target was Joe Biden. More broadly, the episode encourages us to consider the more serious implications of AI tools for election year, and how bad actors could exploit the technology. Last week on Capitol Hill, Sam Altman, CEO of ChatGPT creator OpenAI, told a Senate hearing investigating AI that the potential for such tools to compromise election integrity is a “significant area of concern”. He urged regulation, saying: “I am nervous about it.”
True, the small and distant nation state of Aotearoa is, blessedly, hardly at the vanguard. We’re unlikely to be top of the list for nefarious, meddling offshore powers. But neither are we immune – everyone from 19th-century Fabians to Mark Zuckerberg has fancied the idea of New Zealand as a social laboratory.
The most immediate and real risks posed by AI in an election campaign are familiar: disinformation and deception. But the existing potential for manipulation is turbocharged by the newer tech. Anyone can access a large language model such as ChatGPT to generate an ocean of text, reams of potential fake news. Tools to create everything from images to voice – and even rudimentary video – are now within reach.
“You’ve got an upcoming election,” said Toby Walsh, professor of artificial intelligence at UNSW Sydney, at an Auckland Writers Festival session on the weekend. “And we’re already seeing examples of these technologies being misused to impersonate people. That’s a very real and immediate threat, the idea that you can deepfake just about anything now. These tools are widely available, you can access them very easily. There are real harms with these things because you can’t unsee things that you’ve seen … It’s very realistic and that’s enough to start to pervert our democracy.”
“Robocalls” have long been part of elections: you answer your phone (perhaps more commonly when it was a landline) to hear a candidate rabbiting on about their virtues. Today that candidate’s voice could be cloned, reasonably straightforwardly, and synthesised to engage you in a plausible back-and-forth conversation.
“Literally with technology that exists today, you can make a Trumpbot,” said Walsh. “You take all these speeches, these tweets, you turn it on and it can say things that sound real. It’s not a very hard thing. You can connect that to a Trump cloning voice – you only need a few seconds of someone’s voice – and now, you can ring up every voter in the United States. You can have a conversation where Trump persuades someone to vote for him, at a modest cost. I’d be surprised if one of the parties doesn’t start to do this.”
Luxbot and Chipbot
Throw in some personalisation, the sort of thing “we’ve already started to see with Cambridge Analytica”, and you might get a tailored conversation, with – let’s return this to a New Zealand setting – the Luxbot or Chipbot engaging you directly on the issues that matter to you, or, more cynically perhaps, on what frightens you.
Is that all fine? Maybe so, and there’s nothing obvious in New Zealand law to prevent it, even if it would probably require an authorisation note at the top.
It gets murkier still when you consider the prospect of setting loose the synthesised voice of a rival. “If I’m an anti-Trump [operative], I could build a Trumpbot to do the same thing and ring voters up and say things that would persuade you not to vote for him,” said Walsh, “though I’m struggling to think what Trump could say to upset his supporters.”
Under New Zealand law, that, too, would likely need a promoter’s statement, obviating some of the risk of confusion. And it might conceivably be considered what the Electoral Act calls a “fraudulent device or contrivance [that] impedes or prevents the free exercise of the franchise of an elector”, which amounts to exerting “undue influence” and thereby a “corrupt practice”. But, as one person familiar with the law told me, this is 19th-century legislation, which didn’t quite have deepfakes or chatbots in mind.
Were bad actors to be involved, of course, whether at home or abroad, they’d probably not be too bothered with the minutiae of the legislation. Among the other things experts are worried about, in the US at least: voice messages purporting to be from candidates providing false information on how or where to cast a ballot, news reports with a candidate “confessing” to a crime or announcing they’d quit the race, or a trusted and independent (but synthesised) voice expressing an endorsement.
Flooding the zone
Once a courtier to Trump, Steve Bannon is said to have declared, “The real opposition is the media. And the way to deal with them is to flood the zone with shit.” That approach, one with echoes of Kremlin disinformation strategies, is made a whole lot easier thanks to generative AI.
Massively greater volumes of material can be created, making the flood even more formidable. Not only does that make it difficult to sort what matters from what doesn’t, it gives duplicitous politicians wide air cover for deniability. That audio recording you have of me assailing X or endorsing Y? Fake. Deepfake. Next.
The “post truth” hazard is swelling by the moment. Walsh’s warning is this: “We’re not going to be able to tell what’s true or false any more – and truth is already a pretty fungible idea.”
Regulators smell the coffee
The AI risks – to elections and across industry, politics and society – are suddenly making lawmakers sit up. As well as the congressional inquiry under way, the White House yesterday began the process towards a national strategy on artificial intelligence. “The pace of AI innovation is accelerating rapidly, which is creating new applications for AI across society. This presents extraordinary opportunities to improve the lives of the American people and solve some of the toughest global challenges,” it chirped, though you know what’s coming next. “However, it also poses serious risks to democracy, the economy, national security, civil rights, and society at large.”
The European Union, not for the first time in responding to digital risks, is ahead of the curve. The AI Act, currently going through the legislative process, seeks to ensure such tools “are overseen by people, are safe, transparent, traceable, non-discriminatory, and environmentally friendly”. The rules would ban biometric surveillance, emotion recognition and predictive policing AIs. It would also become a legal requirement that material created by generative AI – text, images, video, whatever – be disclosed as such.
The Republican ad that conjured up a future Biden, meanwhile, inspired a House Democrat to introduce a bill that would require all political ads to declare any use of artificial intelligence tools.
What about New Zealand?
The curiosity, concern and energy in response to a watershed AI moment in the capitals of Europe and the US is not mirrored in Wellington. When Stuff asked the main parties about their plans and policies pertaining to AI, they appeared, with the exception of the Greens, just not to have thought about it very much.
The independent panel tasked with reviewing New Zealand’s electoral laws is expected to produce a draft report in the coming days, and that is likely to tackle some of the challenges presented by AI technologies. But given there are now less than two months until the start of the regulated period for advertising ahead of the October election, the chances of any changes before the campaign are vanishingly small.
The minister for justice, Kiritapu Allan, told The Spinoff that she was alert to the challenges of AI – “an emerging issue that democracies around the world are grappling with and ensuring our laws are future-proof is really important”, and noted: “Electoral advertising is being considered by the Independent Review of electoral law. The panel’s interim report will be released soon.”
In a statement, she added: “AI is an inherently difficult area to regulate and engages privacy, human rights and broader democratic issues. Because of this, regulatory change in this area needs to be well-considered. While the Electoral Act doesn’t explicitly deal with AI, all electoral advertising needs to comply with the rules set out in the act.”
The chief electoral officer, Karl Le Quesne, said in response to Spinoff questions: “We’re aware of the ongoing changes to the information environment we’re living in where technology continues to change rapidly, as does the way people share information. However, the same rules apply to all election advertisements regardless of the technology or channels used.”
He noted the requirement for all election advertisements to include a promoter statement, with name and address, and that “any change to the rules about election advertising would be a matter for parliament to consider”.
Le Quesne offered the following advice: “We encourage anyone viewing an election ad to apply some basic checks if it doesn’t look right. Does it have a promoter statement saying who’s behind it? If it’s from a candidate or party, you can check if it’s on their social media account or website. If you’re not sure about it, don’t share it.”
With each day that passes, however, that sniff test, the “doesn’t look right” radar, becomes less and less dependable.