Jacinda Ardern will head to Paris next month to co-host a forum devoted to an accord on ‘eliminating terrorist and violent extremist content online’. What could such a pledge look like, and what could it usefully achieve, asks Jordan Carter of InternetNZ.
Jacinda Ardern this morning announced that New Zealand and France are working together to bring tech companies and nation states together in Paris in mid-May to agree to a “Christchurch Call”. In the PM’s words, this will be a pledge “to eliminate terrorist and violent extremist content online”.
In the wake of the terrorist attack on Christchurch’s Muslim community, and our national response to it, a call to global action is a good place to start. Our country is too small to force changes on the rest of the world, but we can lead by example and by calling for what needs to be done.
If we take that goal of eliminating terrorist and violent extremist material online as a starting point, what could such a pledge look like, and what could it usefully achieve? Below, some initial thinking, which doesn’t try to judge what the social media platforms have done so far.
Here are six thoughts.
The scope needs to stay narrow.
“Terrorist and violent extremist content” is reasonably clear, though there will be definitional questions to work through to strike the right balance between preventing the spread of such abhorrent material on the one hand and maintaining free expression on the other. Upholding people’s rights needs to be at the core of the Call and what comes from it.
The targets need to be clear.
From the media release announcing the initiative, the focus is on “social media platforms”. I take that to mean companies like Facebook, Alphabet (through YouTube), Twitter and so on. These are big actors with significant audiences, and they can play a role in publishing or spreading the terrorist and violent extremist content the Call is aimed at. They have the highest capacity to cause harm, in other words. It is a good thing the Call does not appear to target the entire Internet. This means the scale of action is probably achievable, because there is a relatively small, identifiable number of platforms with the requisite scale or reach.
The ask needs to be clear.
Most social media platforms have community standards that explicitly prohibit terrorist and violent extremist content, alongside many other things. If we assume for now that the standards are appropriate (a big assumption, and one that needs more consideration later on), the Call’s ask needs to centre on the standards being consistently implemented and enforced by the platforms. The focus of the conversation should be working back from a “no content will ever breach these standards” goal and exploring how AI and machine tools, and human moderation, can help get there.
There needs to be a sensible application of the ask.
Applying overly tight automated filtering would lead to very widespread overblocking. What if posting a Radio New Zealand story about the Sri Lanka attacks over the weekend on Facebook was automatically blocked? Imagine if a link to a donations site for the victims of the Christchurch attacks led to the same outcome. How about sharing a video of TV news reports on either story? This is why automation is unlikely to be the whole answer. We will also need to think carefully about how any action arising from the Call can avoid giving cover for problematic actions by countries with no commitment to the free, open and secure Internet.
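To make the overblocking point concrete, here is a minimal sketch in Python of the kind of crude keyword matching an overly tight filter amounts to. The post titles and the naive_filter function are invented for illustration – this is not how any real platform’s systems work, but it shows why keywords alone cannot distinguish reporting on an atrocity from material promoting one:

```python
# A deliberately naive keyword filter, to illustrate why blunt automation
# overblocks. The post titles below are invented examples.
BLOCKED_TERMS = ("attack", "shooting", "terrorist")

def naive_filter(title: str) -> bool:
    """Return True if the post would be blocked."""
    lowered = title.lower()
    return any(term in lowered for term in BLOCKED_TERMS)

posts = [
    "News report: what we know about the Sri Lanka attacks",  # journalism
    "Donate to support victims of the Christchurch attack",   # fundraising
    "TV bulletin on the shooting: full video",                # news video
]

for post in posts:
    status = "BLOCKED" if naive_filter(post) else "allowed"
    print(f"{status}: {post}")

# All three legitimate posts get blocked: the filter has no way to tell
# journalism, or support for victims, apart from the content itself.
```

Real moderation systems are far more sophisticated than this, but the underlying problem persists at every level of sophistication: without context, automation struggles to separate coverage of violence from celebration of it.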
Success needs measuring and failure needs to have a cost.
There needs to be effective monitoring that the commitments are being met. A grand gesture followed by nothing changing isn’t an acceptable outcome. If social media platforms don’t live up to the commitments they make, the Call can be a place where governments agree on the costs that can be imposed. The simplest and most logical costs would be financial (for example, a reduction in the protection such platforms enjoy from liability for content posted on them). But as a start, the Call can help harmonise initial thinking on potential national and regional regulation around these issues.
The discussion needs to be inclusive.
Besides governments and the social media platforms, the broader technology sector and various civil society interests should be in the room helping to discuss and finalise the Call. The long history of Internet policy-making shows that you get the best outcomes when all the relevant voices are in the room. Civil society plays a crucial role in making sure the blind spots of big players like governments and platforms don’t go unexamined. We can’t have a situation where governments and the platforms finalise the Call on their own, with the wider tech sector and civil society only brought in at the “how to implement” stage.
A Call that took account of these six thoughts would have a chance of success. But to achieve change it would need one more crucial ingredient, which is why the idea of calling countries, civil society and tech platforms together is vital.
It has to have broad buy-in. It can’t be a stitch-up.
Rushing out laws on our own might answer the deep-seated feeling many of us have that “something has to be done, and NOW”. The quick action on gun laws taken in New Zealand could be seen as an example on this front.
Sadly, that won’t work in this situation. There are no global precedents for how to deal with social media and violent extremist or terrorist content. If it were already sorted, the experience we had with Christchurch would not have happened. While it might sound painful, the right place to start is the conversation.
I’ll go further. For the Call and its recommendations to be effective, it is going to take some very big players with lots of clout to come on board and support the change. For credible threats to regulate to make a difference, they will need to come from big, influential countries – and the European Union is the obvious candidate for this, which is why the French role is significant here.
It would be nice to think the United States could help. But the nature of their black-and-white constitutional protections on free speech, and the current state of their politics, don’t leave me with any confidence that they will be able to drive change in this area.
The role of other big countries – Brazil, China, India and others – is something that we will need to engage with and explore. These are enormous populations with ever rising Internet usage, and their views matter.
If the world can get together and take a stand on this focused, narrow problem in the incredibly influential area of social media, that will be a wonderful sign that our society can tackle the difficulties that come with technological change in a considered and effective way. Many people benefit from social media platforms. Many others are harmed or face negative impacts.
Maybe by taking a small first step in a sensitive way, we can then open up a broader conversation about the role and impact of social media platforms in our increasingly online lives, and how we can make sure that they work in the service of people – instead of us being their products.