One in three New Zealanders has seen content directly promoting violence towards others in the last year, according to new research from the Classification Office.
As New Zealand grapples with new, growing forms of toxic digital material and the ways those can seep into the real world, new research lays bare the widespread exposure to dangerous content in Aotearoa – and just how sceptical we are about the large digital platforms’ efforts to provide a safe online environment.
Among the most striking findings in the Classification Office report published this morning, which draws on a Kantar survey of 1,201 people, is that 33% had over the past year “seen content that directly promotes or encourages violence towards others”. Most of those – 29% in total – had seen material that included “violence towards others based on things like race, culture, religion, sexuality or gender”, while 20% had encountered expressions of “violent extremism or terrorism”. One in five said they had personally seen content that encouraged suicide, self-harming or eating disorders.
These were “pretty shocking numbers”, said acting chief censor Rupert Ablett-Hampson in an interview with The Spinoff. But, he added, “it’s not surprising, given recent times.” The risk of such expressions spilling out of digital spaces was very serious. “You see these real life events occurring because of things that happened online,” said Ablett-Hampson. “If you look at the recent Buffalo shootings, there is a direct link to the Christchurch mosque terrorist, to his livestream, his ‘manifesto’. Those are directly attributed [influences] by the Buffalo shooter.”
Entitled What We’re Watching: New Zealanders’ Views About What We See on Screen and Online, the survey pointed to a high level of confidence in the classification regime for traditional forms such as films, television and written publications. That provided “some solace”, said Ablett-Hampson. “Conventional media have these familiar tools people are able to use and understand and guide choices for their children and whānau.” But the online deluge was a challenge of different proportions.
“When you’re in the wasteland of social media, there isn’t that guidance there. TikTok can go, for example, from the most mundane clip of kids trying to emulate dance moves to some of the most extreme and vile content.” There had been encouraging signs, he said, with the protocol under GIFCT – the Global Internet Forum to Counter Terrorism established as part of the Christchurch Call – working effectively to take content down after the Buffalo attack, “but, you know, once you’ve seen it you’ve seen it. People are being exposed to that content.”
Similarly, while the recent addition of streaming platforms such as Netflix to the classification system was “really positive”, that represented a fraction of materials viewed. “As any parent knows,” said Ablett-Hampson, “more and more of the media that children and young people are consuming is not on these platforms, it’s on social media.”
When it comes to that content, there was a marked lack of confidence that the platforms were stepping up. Asked if they agreed that online platforms provided what people needed to keep them safe, just 33% said they somewhat or strongly agreed. Asked whether they trusted social media platforms to remove dangerous, violent or harmful content, 27% answered yes and 51% no. According to the report, respondents sought the following: “stronger and better regulation; better education, information and support; improved technical solutions and tools; and the need for tech/social media companies to do more.”
While the vast majority supported regulation of harmful content, with 89% considering the work of a classification agency very or quite useful, only 43% believed the current system was working well “to keep young people safe from inappropriate or harmful content online”. Despite this, a majority of respondents, 61%, said they felt they knew “enough to help keep my family/whānau stay safe online”.
The report summarised survey participants’ comments in this way: “The most common response was in support of government action and more effective regulation. Relatively few participants talked about age ratings or restrictions on content like movies or shows, rather, the pressing issue for most was about social media and other online content. Some talked about tougher measures to hold tech companies to account, and others about legal requirements for online age restrictions.”
Recent months have seen everyone from disinformation scholars to the outgoing chief censor suggesting the collected parts of New Zealand’s content regulatory framework are not “fit for purpose” in confronting the torrents of toxic online material. That question underpins the Content Regulatory Review launched last September by Internal Affairs – a project referenced directly in the concluding remarks of the Classification Office’s What We’re Watching report. “New Zealand is currently undertaking a wide-ranging review of content regulation, and this is a key opportunity to learn from overseas developments and ensure we have a system that works to ensure the safety and wellbeing of New Zealanders,” it urged, under the header “The harms are real, and we need to take action”.
The conclusion noted “the great majority” of online content “is not subject to effective safeguards”, continuing: “The reality for New Zealanders – including our rangatahi – is that much of the material they see online is provided by global social media platforms. These large online platforms have taken significant steps to address issues around harmful content on their services in recent years, such as by taking actions to address the spread of misinformation or extremist content. However, these measures remain highly variable and often ineffective. This is reflected in our own findings about New Zealanders’ relatively low levels of trust in social media companies to take sufficient measures to ensure the safety of users and to remove harmful or dangerous content.”
For the group leading the regulatory review, “the big message is Kiwis care about this,” said Ablett-Hampson. Pointing again to the lack of faith in social media platforms and safety, he said: “Wherever the media content review lands, it must do something to address that confidence for New Zealanders.”