
Opinion | Books | July 27, 2020

On the internet, freedom for some never means freedom for all

Photo: Thomas Koehler/Photothek via Getty Images

Kathy Errington introduces a conversation with Anjum Rahman on online harm, an extract from the upcoming BWB text Shouting Zeroes and Ones, edited by Andrew Chen.

Articulating what matters when we seek to reduce online harm is becoming ever more important, as states increasingly turn to regulation to address harms caused by social media platforms that grew too big, too fast, while accepting too little responsibility.

Unfortunately, many of us have concluded that we can’t trust social media companies to properly regulate themselves. They are, after all, largely reliant on a huge volume of user-generated content, and are often unwilling to make the necessary investment into human moderators to effectively review what or whom they give a platform to. Algorithms designed to ensure users stay on the platform can create dark echo chambers in which harmful content is continually reinforced, but social media companies have become reliant on this business model and are resistant to change.

Kathy Errington: You have said that in regard to freedom of expression, freedom for some takes away freedom from others. Can you expand on that?

Anjum Rahman: What we know about hate speech is that when people are really aggressive with hate, particularly when the language is threatening and abusive, it silences other people. Hate language not only belittles the targeted group, it often spreads misinformation and ascribes false motives. The aim is to heighten negativity towards one group relative to another, thereby undermining the authority and validity of the speaker by associating them with a group or community delineated as “evil”.

The second dynamic is the sheer numbers. Those who are involved in hate will often attack groups that they perceive to have little ability to respond. In other words, the targets have little positional power within society, and they may already be viewed negatively by a significant number of people. So the hater feels secure in targeting that group, safe in the knowledge that they have numbers on their side to overwhelm existing systems which attempt to provide ‘balance’.

For example, back in the day when letters to the editor were more of a thing than social media, I could write one letter trying to defend a position, and then have to deal with 20 to 30 letters in response. And the way it worked was that I only got 200 words, I only got to publish a letter once every 10 days or so, and therefore my ability to respond back, just with the sheer numbers, was so limited.

In the online world, an additional factor is the high number of bots and paid trolls, with a lot of resourcing going into campaigns targeting particular communities. As a person being targeted, when people are threatening you with rape, or death, or harm, when they are being really abusive in their language, and when they are spreading a lot of misinformation or one-sided information, when there are literally hundreds of accounts going at you, it restricts your freedom.

The response is to just shut down your social media accounts, to get offline, which means you’ve lost your freedom to speak. We’ve seen it with celebrities, we’ve seen it with all sorts of people. The result is that the dominant groups’ freedoms are protected and advanced, while the marginalised groups lose their freedom. They’re silent because they don’t have the capacity to withstand the harassment and abuse, and they don’t have the ability to respond to all those accounts. Countering the hate is not viable, so the response is to hunker down and to be silent, which is what the haters were wanting in the first place; they want to silence the voices of people they don’t like.

That is a long answer to how freedom for some takes away the freedom of others. If you don’t balance different types of freedoms – from discrimination, from harm – then you’re actually only promoting the freedoms of dominant groups. You’re not promoting freedom for all people.

KE: How do we get regulation right? We seem caught in a conversation where any effort to improve life online is immediately attacked as government overreach. And governments, even democratic ones, can respond in heavy-handed ways.

AR: I am not opposed to regulation, but we do need to be really careful. What we are hearing from an organisation in the United States is that, in their experience, regulation sought by marginalised groups can be used against them by a subsequent administration. Putting in regulation without also dealing with power and the way society is structured will not achieve anything. I think that we have to be really careful around legislation and we have to design the systems that administer those regulations knowing that, at some point, a hostile government will be enforcing them. What will protect marginalised groups in that situation? That has to be really seriously tested and factored into the way we regulate.

Another problem raised by civil society organisations overseas is that many videos depicting violence which were taken down by YouTube were evidence of crimes by regimes, or evidence that could be used in court cases. So simply removing content has consequences that will be harmful to those who are suffering abuse. We need to think about how that content is archived. Who can have access to it? How can it be protected so that it can be accessed when it’s needed? All of this has to be factored into regulation, which is again to say, I do believe that we should have regulation. I think there are things we can do, but when it comes to implementation and enforcement, you really have to be careful.

Shouting Zeroes and Ones will be available at Unity Books from August 10.

David Hall, Curtis Barnes, Anjum Rahman, Kathy Errington, and Donna Cormack discuss the spread of disinformation, reducing online harm, and Māori data sovereignty as part of Techweek on Wednesday, July 29. Streaming details here.
