Posts and engagement have risen drastically, and there’s been a shift to ‘increasingly violent language’, reports Dylan Reeve for IRL.
The current delta outbreak has brought with it a “sharp increase in the popularity and intensity of Covid-19-specific disinformation”, according to a new report from The Disinformation Project, part of the University of Auckland research centre Te Pūnaha Matatini.
The researchers behind the project have been studying online disinformation in Aotearoa since early 2020. Yesterday, they released a working paper about their most recent study into the effects of online mis- and disinformation, covering the period from when New Zealand’s latest Covid outbreak was detected on August 17 up to the end of last week.
Over the 12-week period, they observed this sharp increase in Covid-19-specific disinformation alongside “other forms of ‘dangerous speech’ and disinformation, related to far-right ideologies”.
“We started noticing that there was a closer link between Covid-19 disinformation and wider sets of fringe beliefs,” explained Kate Hannah, the project lead of The Disinformation Project, in a briefing to media. “From August last year we noticed it was becoming two or three steps to take people through to white supremacist or far-right ideologies, QAnon material, extreme misogyny, incel material and transphobic material, and we started viewing Covid disinformation as one of the entry-level ideas that draws people down these further disinformation ideology pathways.”
The project classifies the information collected in a variety of ways, drawing distinctions between misinformation (false information created without intent to cause harm), disinformation (false information purposely created to cause harm) and malinformation (true information used with ill intent). They also use a category of communication called “dangerous speech”, which is speech that “can increase the risk that its audience will condone or participate in violence against members of another group” – a more nuanced definition than the commonly cited “hate speech”.
Since the beginning of lockdown in August, the group has observed that “posts and engagement have drastically increased”, and they continue to “show a trajectory of growth and spread that is increasing, widening, and deepening every week”.
“From the beginning of level four in the delta outbreak it has been utterly amazing,” said Hannah. “The level has gone up and up and up. We’ve had a hundred-fold increase in followers in various Facebook pages and groups we’ve been following for over a year.”
Among the trends identified by The Disinformation Project over this period has been a marked shift from “vaccine hesitancy” to “vaccine resistance”, with individuals now making their refusal to vaccinate part of their identity. This shift makes counter-speech even more difficult and “often leads to further entrenchment of identity within this framework”.
Another key observation is the adoption of Telegram as the platform of choice for these groups, as mainstream social media sites like Facebook and Twitter have taken a more forceful stance against mis- and disinformation. Telegram offers no “platform-level guidelines or interventions such as the use of interstitials [the inserted warnings about disinformation that appear on the likes of Facebook and Twitter]”. What’s more, because the platform has already been widely adopted by alt-right and QAnon groups internationally, and content can easily be shared between groups, it provides a practically seamless way for those ideologies to spread to their local counterparts.
“Telegram has moved from being what people would have thought of as a messaging app with no rules and affordances, to being fully platform-ised,” explained Hannah. “It provides all the tools people need to do live streams, live chats, and share information instantly. [Telegram] has no oversight. There’s no rules, no regulations. Netsafe can’t go and talk to someone at Telegram.”
As well as an overall increase in content volume over the past 12 weeks, the project also identified a shift in language and image choice, saying that “increasingly violent language and other forms of expression [have] become normalised and justified within the groups and individuals who make up the disinformation community in-group”.
While much of what The Disinformation Project has reported on could be broadly true of similar groups and communities worldwide, there are localised factors that the researchers have identified here. Among other things, the project identifies the ways in which these predominantly Pākehā communities are adopting Māori iconography and concerns, citing specifically the “increasing use of Māori voices, narratives, and imagery for agendas of white supremacist individuals and groups who make up one cluster we study”.
And while Māori kaupapa is being exploited, Māori themselves, along with many other minorities, are also targets. An example in the report identifies how mainstream media coverage of increasing vaccination uptake among Māori has been weaponised within disinformation groups and used to reinforce discriminatory ideas.
“A lot of us who are campaigning for our whānau to get vaccinated also happen to be the same people that have been engaged in Treaty justice campaigns for quite some time,” said indigenous rights expert Tina Ngata at the briefing. “We’ve been subjected to threats and harassment from these groups who are now co-opting our symbols and doing what is referenced in the report as ‘accusations in the mirror’ – framing us as the bad Māori and framing other people, who they are moulding and shaping, as good Māori.”
While the report does make for disheartening reading, Hannah urges perspective. “It’s a very noisy bunch of people,” she said. “They feel like they’re making more noise, and they’re making noise in a way that’s scary and anxiety-inducing, but they’re doing that because we’re reaching these really high numbers of people vaccinated.”
In short, they remain a very vocal minority.