
Internet | June 1, 2023

How a new proposal could change online safety in New Zealand

(Image: Tina Tiller)

The Safer Online Services and Media Platforms document has just been released by the government’s Content Regulatory Review. It does more than capitalise nouns – here’s what you need to know about what’s inside.

What is this document with the world’s most boring name?

It’s a proposal from the Department of Internal Affairs to change how online content is regulated in New Zealand. Like, all online content – social media, news websites, messaging platforms, all of it. 

Well, what is it proposing? 

The suggested changes are pretty different from what we have right now. All digital industries that publish content – including overseas companies like Meta and Google as well as local ones (like the media!) – would have to sign up to codes of practice governing what can be made available to New Zealanders on their platforms. It would essentially be the first thorough attempt to regulate social media platforms here. The codes would be developed by the various industry groups with support from a regulator.

Similar frameworks have already been applied to online content overseas, in places like Australia, the EU and Canada. This approach would bring New Zealand roughly into line with the rules digital platforms are beginning to abide by in those places.

These codes of practice would be the “how” to achieve online safety outcomes set by parliament. Things like “not exposing children to violent content” and “not spreading extremist or terrorism-related content on the internet”. A new independent regulator would oversee this and help platforms to comply. The intent is to create a single streamlined, flexible and clear system where individuals and organisations know what they’re obligated to do and who they can ask for help – and ultimately make sure fewer people see harmful content. 

Will locking away our devices keep us safe online? (This is not proposed in the new document) (Image: Tina Tiller)

So I’m going to wake up tomorrow with a whole new internet and experience a peace I haven’t known since the days of dial-up?

No (unfortunately). This document is only at the proposal stage. Public consultation is open for the next two months, then the suggested framework may be adjusted in response to feedback. After that, we might have a timeline for when this could be implemented – but it might take a while. Australia’s Online Safety Act was passed in 2021 and they’re still working through it. We also don’t know how huge global corporations like TikTok or Meta will react, and whether the codes they develop (as they have done in Australia) will be deemed acceptable by the regulator, or palatable to the many people who will have views on them.

Why is all this necessary?

Right now, responsibility for regulating online content falls between different agencies and different laws – most of which were written in the pre-internet era. The Films, Videos, and Publications Classification Act 1993 and the Broadcasting Act 1989, which are applied to online content, are more than 30 years old. This means, for example, that the rules applied to content on YouTube were written to govern what should be available on broadcast TV – very different mediums, used in very different ways.

So if you have a complaint about something your kid has seen on YouTube, you have to figure out not only whether it breaches the platform’s conditions, but also which of five industry complaints bodies to complain to. And not all content on all platforms is covered by the law and industry structures as they stand right now.

This is a particular concern for children and consumer protection, said Suzanne Doig, general policy manager at the DIA, in the press release. Because various and sometimes ill-fitting laws are applied to online content, and many of the major players are often based overseas, things can fall through the gaps. “We need a modernised and consistent approach to the obligations of content providers and a much greater emphasis on the safety of children, young people and vulnerable groups from illegal and unsafe content,” Doig said.

The proposal was developed after talking to different groups, including faith-based groups, ethnic groups and young people; the report recognises that some groups, including Māori, are exposed to more harm than others. Many of these people told the Content Regulatory Review that they didn’t feel they knew how to respond to offensive digital content when they saw it, or what options were available to them.

Keeping kids safe is a priority under the proposed regulation (Image: Archi Banal)

How would this change what I do and see online?

If you don’t often see content you don’t like, you might not notice any changes. But if you’ve experienced online harassment, or if you keep seeing distressing self-harm content recommended on your social feeds, or if you’re worried about what your children are seeing online, you may notice a difference as digital sectors sign up to codes of practice. The regulator would also be tasked with public education and awareness campaigns, which you might see or be part of.

Having a single regulator would also make raising concerns much easier – instead of figuring out if you should go to the Media Council or Netsafe or the Police or the Advertising Standards Authority depending on the nature of your complaint, there would be one place to direct all questions about online content.

Does this change what I’m allowed to do online? 

Once the proposal has been discussed and consulted on with the public, the government will need to approve the creation of the new regulator. However, the proposal won’t actually change what is and isn’t legal to do online. The government will still be able to censor illegal content, as the Classification Office did with the March 15 terrorist attack video. But it will only intervene when the law is being broken, while the regulator can identify risks and run public education campaigns of its own accord.

The discussion document gives the example of disordered eating content targeting young people being shared online. If this was identified as a risk, platforms would have to offer users the option to opt out of seeing it, creators of this content could learn about its potential impacts, and public education about disordered eating content could be integrated into school curriculums or delivered at standalone events.

What does this mean for the international websites I love to use daily? My beloved Instagram? My precious YouTube? My cherished BeReal?

Many of these websites have no New Zealand-based offices or employees, and the content on them is not currently regulated under New Zealand law. Under the proposal, all digital platforms operating in New Zealand would have to sign up to a code of practice. The new regulator would oversee those codes, meaning platforms could be proactive about the risks of content on their services, rather than having to wait for a law to be broken before doing anything.

I actually have heaps of opinions about this that I would like to share. 

Great! To paraphrase Wayne Brown, don’t talk to me about it – share your thoughts with the Department of Internal Affairs instead. There are questions to think about throughout the discussion document, and consultation is open until the end of July.
