It arrives not with a robotic roar, but with the leader of the National party being unsure whether his party is using AI while a spokesperson confirms they are, writes Anna Rawhiti-Connell in this excerpt from The Bulletin, The Spinoff’s morning news round-up. To receive The Bulletin in full each weekday, sign up here.
National using artificial intelligence to create attack ads
As Stuff’s Andrea Vance writes, “It’s the wonky eyeball that gives it away. In the Instagram photograph, a woman stares out the window into a dark street.” Vance is referring to an image posted on the National party’s Instagram account attacking Labour’s “soft on crime” approach. The image was generated using artificial intelligence (AI). Vance reports that in the last month, National has published at least four AI-generated images to its social media accounts, with a spokesperson confirming the party was using the technology as “an innovative way to drive our social media”. Yesterday, party leader Christopher Luxon appeared unclear about National’s use of AI in its attack ads, saying, “No, not that I’m aware of,” when asked if the party was using it.
AI image and text generation now in the hands of everyone
On Monday, a fake image which CNN describes as bearing “all the hallmarks of being generated by artificial intelligence” purported to show an explosion near the Pentagon. Shared by multiple “verified” Twitter accounts, it led to a brief dip in the stock market. This is a confluence of problems. “Verified” on Twitter now means nothing more than that someone is paying for the badge; it is no mark of authenticity. And the capacity to create these kinds of images now lies in the hands of everyone. I used Bing Image Creator to create today’s feature image. It took me five seconds using the prompt: “a happy dog being prime minister of New Zealand in front of the Beehive in Wellington, New Zealand holding a New Zealand flag”. As you can see, it’s not the Beehive, nor the New Zealand flag, and that suit fit is a travesty. I chose not to enter prompts that might bear a resemblance to reality for the sake of trust and truth. It’s just me imagining a beautiful future.
AI being explored by Electoral Review panel here
As this AP news piece highlights, AI experts can very quickly name a number of scenarios in which AI could be used to confuse voters, slander a candidate or even incite violence. The news cycle is awash with these stories and warnings. Two months ago, psychologist and AI commentator Paul Duignan said there was every reason to think AI would be used in the New Zealand election, and so it has come to pass. It’s a topic being explored by the Government’s Independent Electoral Review panel. The first report from that review is not due until June. Far be it from me to suggest that a once-in-a-generation review of electoral law be rushed, but to quote the Gershwins, some might suggest it’s time for them to “put on some speed”.
Should we be asking for disclosure from our political parties?
In the US, a bill was introduced at the beginning of the month that would require political groups or campaigns to disclose the use of content created by AI in political ads. Here, our educators and education officials are being proactive about developing guidelines and policies for how generative AI is used in the education system; the Ministry of Education just published a set of guidelines. In the absence of any formal regulation of the use of AI in political campaigns, a set of cross-party guidelines or an agreement about disclosure might be a decent stopgap with the election only five months away. Discussing the issue in the context of US law and the 2024 US election, cybersecurity lawyer Matthew Ferraro suggested disclosure was a good way to go.