Frances Cook and Catherine McGrath
Personal financial journalist Frances Cook, left, and Westpac CEO Catherine McGrath (Photos: Supplied. Design: The Spinoff)


What will it take for Meta to do something about scams in NZ?


Images of Westpac’s CEO and a high-profile financial journalist were used in financial scams. Meta didn’t respond to one and didn’t resolve the problem for the other. Will the government act?

It was quite the scene: Westpac chief executive Catherine McGrath was mid-argument with Winston Peters, a TV producer was trying to separate them, and Jack Tame was attempting to calm Peters. No wonder when McGrath’s colleagues saw the AI image circulating on Facebook as the hook for an investment scam, some thought it was a joke. That it was April 1 didn’t help.

“Some people who knew me well forwarded it saying they thought it was an April Fool’s,” says McGrath. But, of course, it wasn’t a joke – it was a deepfake designed to funnel New Zealanders toward a fraudulent investment platform. 

McGrath did everything she could to get the ad removed, but no matter how many ways she contacted Meta – the owner of Facebook, Instagram, Threads and WhatsApp – she never received a response. 

If Meta won’t respond to the chief executive of Westpac, who, in New Zealand, will it respond to? Not The Spinoff, apparently. We contacted the company for this story but heard nothing back.

McGrath says the scam ad eventually disappeared – possibly after the Financial Markets Authority (who she’d also alerted to the ads) intervened – but she decided she needed to speak publicly. “The media rightly put pressure on banks about whether they were doing enough to stop scams,” she says. “I’d love to see the media put the same pressure on social media platforms.”

 

The AI-generated image featuring Westpac’s Catherine McGrath certainly had a lot going on. (Image: Supplied)

 

She’s not alone. The Financial Markets Authority (FMA) has documented deepfake ads featuring prime minister Christopher Luxon – one Taranaki grandmother lost $224,000 after viewing a fake Luxon video promoting Bitcoin. Politicians, media personalities, financial commentators and business leaders have all been impersonated via fake pages and accounts on social media using deepfakes to promote fraudulent investment groups on WhatsApp, Signal or Discord. Kiwibank CEO Steve Jurkovich and deputy prime minister Winston Peters have appeared in recent scam ads. The FMA identified 110 such ads in a single 24-hour period on Meta platforms.

Frances Cook, personal finance journalist and host of the Cooking the Books podcast, has had her image stolen repeatedly. She hasn’t seen deepfakes like McGrath’s but has lost count of fake accounts using her name and likeness. 

Before his recent death, well-known psychologist and TV presenter Nigel Latta interviewed people who had been scammed by someone pretending to be her. “It’s horrible to think people believe I took their money,” she says. 

How did Meta respond to Cook’s situation? She says after years of reporting fake accounts with little result, she’s largely given up. “I feel helpless and hopeless. The platforms don’t do anything.”

The scams are especially effective, Cook says, because some of the impersonations are almost indistinguishable from legitimate work – a private group offering investment tips feels so on-brand for a financial journalist (although Cook insists that she never offers specific investment advice). 

McGrath feels the violation as much as Cook. “If it were just my image in an AI story, that could be weird and funny. But it was used for an investment scam. You never want your image used to persuade people to part with their hard-earned money.” 

She’s acutely aware of her privilege. “I have more than one method to try to get the ad taken down. If I weren’t the CEO of Westpac, I still don’t know if it would have been removed. That’s not good enough.”

McGrath wants Meta to verify that anyone buying financial-services ads is a legitimate provider. She wants faster action on fraud reports and a closed reporting loop – in other words, acknowledgment that action has been taken – as well as a direct channel for bank security professionals to escalate validated scam activity. And she thinks Meta should be checking who it is taking advertising money from. “Asking someone to verify that if they’re taking advertising dollars, it’s from a legitimate advertiser is not an onerous step.”

Basically, at a minimum, she wants New Zealand to have the fraud-prevention tools Meta has already deployed elsewhere. 

Since February 2025, Meta has required financial services advertisers targeting Australian users to provide their Australian Financial Services Licence number and run a visible disclaimer on each ad. The tech giant has also partnered with the Australian Financial Crimes Exchange on a Fraud Intelligence Reciprocal Exchange, which has enabled the removal of over 9,000 spam pages and 8,000 AI-generated celebrity impersonation pages in two months. 

That direct intelligence-sharing channel is exactly what McGrath is asking for. “Why should New Zealanders be treated less well than customers in other parts of the world?”

And yet, those in other parts of the world aren’t happy either. Last week, the Consumer Federation of America took legal action in the US over scam ads, alleging Meta knowingly took steps and adopted policies to make money at the expense of users’ safety. Meta denies the allegations, saying they misrepresent the reality of the company’s work and that it aggressively combats scams on its platforms.

The class action complaint came after international news agency Reuters published an investigative series on Meta, in which it reviewed internal Meta documents that it said showed the company projected it would earn around US$7bn in 2025 from “higher-risk” scam ads. Advertisers below the threshold for deletion but suspected of fraud were charged higher rates rather than removed, Reuters said.

One document set the maximum revenue Meta would forgo to crack down on suspicious advertisers at 0.15% of revenue or US$135m – against a potentially US$16bn problem. The documents also described a “global playbook” in which Meta cleaned up its public ad library – a tool regulators use to search for fraudulent ads – by running regulator search terms to remove flagged content, barely denting the overall volume.

Meta disputed Reuters’ reporting, calling the calculations “rough and overly inclusive”. 

According to New Zealand writer and filmmaker Dylan Reeve, the technology to identify scam ads is not the issue. “They have the tools in-house to catch these. And they weren’t doing it.” 

After reporting hundreds of examples to Meta (which he documented for The Spinoff), he concluded: “With seconds of human oversight – looking at the ad and asking, What is this? Who is this? Who is advertising it? – it was immediately apparent they were fraudulent. It should have been relatively simple not to run them.”

Meta’s typical response to a report was an automated reply arriving almost exactly six days and twelve hours later. “No investigation was taking place,” Reeve says. “It just fell out of the back of the queue.” 

The clearest sign of selective enforcement came, he says, when Meta swiftly removed ads placed by law firms that were recruiting plaintiffs for social-media addiction suits against the company. “They’ve got the tools when they need them.”

If international lawsuits, high-profile victims and media investigations are all in play and the scams continue, what can stop them?

The Spinoff asked commerce and consumer affairs minister Cameron Brewer what the government had planned. He pointed to the New Zealand Online Scams Code, launched last month and developed by Tech New Zealand with support from Google, Meta, TikTok and others, as evidence of progress. “I expect Meta to be actively working to find ways to reduce the use of their platform to facilitate scams, and to respond quickly to remove scams,” he said. 

The government also plans to introduce legislation in the coming weeks to create a legal “safe harbour” allowing platforms to remove suspected scam content without fear of liability for taking down legitimate material. 

But for McGrath, legal empowerment is not the constraint. Meta already has the tools, the data, and – as the Reuters documents suggest – full awareness of the problem. What it lacks is the financial incentive to act.

McGrath acknowledges that banks are not innocent bystanders. Westpac, like other banks, has introduced friction into payment processes – prompting customers to pause before making new payments to unfamiliar accounts. “Some easier-to-see scams are likely being prevented more,” she says. “Some social media ones are possibly going up.” 

Westpac says that customer losses to fraud and scams have been falling, and that 64% of cases this year originated on social media, up from 57% last year.

But still, she draws a sharp distinction between an industry under intense regulatory scrutiny and one with almost none. Banks are among the most heavily regulated sectors, for good reason; Meta and other global social media platforms operate with barely any regulatory framework. Yet the two are different ends of the same fraudulent funnel.

Frances Cook
Frances Cook wants to know why there are not repercussions for Meta. (Photo: Supplied)

Cook makes the same point less diplomatically. “If someone makes a bank transfer, there are scam and fraud protections, all this compliance. Yet social media platforms have no repercussions. That’s wild. I don’t understand how this situation can continue.”

McGrath has seen no communication through the industry anti-scam alliance indicating that Meta has changed its approach. “It may be that some things have changed, but there’s been no communication saying, ‘Here’s an uplift of capability’.”

As to whether Westpac might eventually stop advertising on Facebook – contributing revenue to a platform aiding the defrauding of its customers – McGrath won’t rule it out. “At some point you have a different conversation. Right now, our approach is to be vocal and see what change we can drive.” 

Cook, with no corporate calculation to make, is bleaker in her analysis. “There will be people out there who think I scammed them, and there’s nothing I can do to stop it. That is beyond frustrating.”