Tech sector giants have a vested interest in prioritising freedom of expression, often at the expense of other rights. A new project to reduce harmful online content, presented yesterday to the Paris Peace Forum, aims to change that. One of the architects of the Christchurch Principles, Dr David Hall, explains.
What is the harm in online content? It’s just text, images, audio, video. Pixels on your screen and sound in your speakers.
But these sights and sounds convey ideas, concepts, beliefs and ideologies. And these are the means by which we make sense of our world, ourselves and our relationships to others. When these are imbued with hate, we, or someone else, is liable to suffer.
Online content can be enlightening, enriching and empowering. But it can also be hurtful or dehumanising. It can encourage people to act with grace and decency, or with contempt and cruelty. It is no different from offline speech in this respect, except that online speech exists in a realm – the internet – that isn’t bound by the same constraints and is structured by different dynamics. Familiar harms take unfamiliar forms, which our institutions are struggling to cope with.
One prominent worry is the influence on politics. The message “You broke democracy” was once towed by an aeroplane over Facebook’s quarterly shareholder meeting in Silicon Valley.
As far as accusations go, it’s too strong to be entirely right. Some argue that the signs of democratic distress were showing well before the era of social media truly kicked in. Others argue that democracy is a destination that few, if any, countries have ever fully arrived at. On this view, democracy is an aspiration that we should nurture and cultivate.
Initially, the internet promised to assist in this journey. It offered a new infrastructure for connection, communication and interactivity – the stuff with which democratic publics are produced. But the colonisation of the internet by platform monopolies like Google, Facebook and Amazon has rapidly reconfigured this infrastructure. The way we access information online, the way we connect and in what spirit, is influenced by their design choices. And these choices are shaped, first and foremost, by commercial imperatives, not necessarily the imperatives of democracy.
Earlier this year, The Workshop released its vital report Digital Threats to Democracy. It found that digital technologies are affecting every aspect of democracy: the electoral process, civil liberties, competitive economy, active citizenship, trust in authority and shared democratic culture.
To be sure, some disruptions are positive. For example, social media provides minority groups with new opportunities to participate in public debates and political networks that they were once marginalised from.
But as the report warns, “the overall trend should raise serious concerns”.
Active citizenship is being undermined by online abuse and harassment. Social media companies have developed systems for filtering and reporting hate speech, but these are imperfect, inadequately enforced and easily out-foxed. And a lot of harmful online content simply doesn’t reach the threshold, which leaves online spaces with an ambience of vitriol that is exhausting and demoralising. People are discouraged from sticking their necks out.
This might seem a niche concern – except that online spaces are increasingly where debates of public importance happen. Partly, this reflects the struggles of traditional media – print, radio, television – which is seeing its advertising revenue gobbled up by Google and Facebook, and yet needs to deepen its dependence on these same platforms to reach its audience. As a result, information flows are increasingly hostage to the design choices of online platforms, choices which we know very little about. Arguably, this is a matter of commercial sensitivity – yet when platforms increasingly position themselves as providers of public goods, as the stewards of human knowledge and “the public conversation”, then this secrecy is in conflict with democratic values of transparency and accountability.
And while we don’t know enough about the algorithms that moderate and recommend content, we do know something about the outcomes. We face today an onslaught of disinformation, misinformation and mal-information that not only frustrates informed debate, but encourages a generalised mistrust towards expertise and institutions. We see the effects of “echo chambers” or “filter bubbles” where people get lost in their own personalised information flows, prone to radicalisation about vaccination, immigration or whatever else.
This has gnawed away at our democratic culture for years, but 15 March 2019 was a terrible awakening. This wasn’t just another online outrage, even though some people treat it this way. It was a gross violation of the most basic human rights to life and security, with about 100 people shot, 51 dying from their injuries. The wider Muslim community in New Zealand lives with a heightened sense of threat, accentuated by ongoing harassment, both offline and online. Many more New Zealanders, including many children, experienced distress after seeing the shooter’s video online, inundating mental health lines.
Of course, the Christchurch mosque attacks are more than just a story about online hate. But these events revealed once again that the internet is a driver and enabler of violent extremism – with new technologies making it ever more effective. The shooter’s manifesto and video are implicated in violence elsewhere – in Poway and El Paso in the US; Bærum, Norway; and Halle, Germany. Just pixels on a screen, doing exactly what he intended them to do.
The Christchurch Principles is a democratic model for reducing harmful online content, presented this week to the Paris Peace Forum. The project is led by the Helen Clark Foundation, in collaboration with The Workshop and The Policy Observatory, AUT.
Essentially, the Christchurch Principles aims to do for the digital economy what the UN Guiding Principles for Business and Human Rights did for the global economy. It recommends a set of roles and responsibilities for digital technology companies, states, and civil society organisations to not only protect and respect human rights, but also to defend those democratic norms, practices and institutions that enable rights to flourish.
At its heart is the democratic ideal that all people should have the opportunity to participate as equals in public life.
This is vital for democratic government. We should all have an equal standing in society, so that we can influence the collective decisions that affect our needs and interests. But it is also vital for democratic culture in a broader sense. As Jack Balkin, a long-time expert on online speech, once wrote: “A democratic culture is democratic in the sense that everyone – not just political, economic or cultural elites – has a fair chance to participate in the production of culture, and in the development of ideas and meanings that constitute them.” It is from such a culture, he argues, that “liberty emerges”.
The right to freedom of expression, to freedom of speech, opinion and belief, is integral to democratic equality. The suppression of voices, especially minority voices, is one way that equal standing is lost.
But it is easy to see, especially in the digital age, how the speech of some people can interfere with the equal standing of others. If online platforms are amplifying bias and discrimination, if they are allowing the free speech of dominant groups to have a chilling effect on the speech of minorities, if they spoil public decision-making through distraction and disinformation, if they assist in suppressing votes of specific constituencies, then these platforms are incompatible with democracy. They are undermining the ideal of equal participation by making public life feel too exhausting, too burdensome, too risky for certain people to be involved in.
By thinking through the risks to rights and democracy, the Christchurch Principles applies a broad definition of harm. This is an important point of difference from the Christchurch Call, which focuses only on terrorist and violent extremist content. That is critical work, but there is a wider set of rights at stake in the digital revolution, including freedom of expression, anti-discrimination, equality, political participation and privacy. We are harmed when these rights are violated, but we are also harmed when inhibited from exercising our rights as a result of manipulation, deception, distraction or loss of trust.
To be sure, by casting the net more widely, the Christchurch Principles turns attention to speech that can’t, and shouldn’t, be targeted for legal sanction. For example, while the proliferation of falsehoods is undoubtedly harmful to democratic decision-making, we shouldn’t punish people for merely being wrong. But the Christchurch Principles works on the assumption that non-criminal remedies have an essential and neglected role to play. As David Kaye, the UN Special Rapporteur for Freedom of Expression, notes in a recent paper on online hate speech, states and businesses have a wide range of tools that don’t endanger freedom of expression in the same way as takedowns and laws. These include greater transparency requirements, education programmes, counter-speech and counter-narratives, de-amplification and de-monetisation of problematic speech, creating friction for sharing, deferring to independent judicial authorities, and improving the capabilities of civil society that can engage with fellow citizens in ways that states and businesses can’t.
Freedom of expression is a right that tech sector titans have an interest in prioritising – and a lucrative interest at that. But this devalues other rights, as well as the responsibilities that accompany them. Only once these companies acknowledge their wider obligations, only once they acknowledge the duty of care that derives from the trust that people place in them, will they turn from breaking things to mending things.