
Society | March 16, 2019

The atrocity profits

Deans Avenue near the Al Noor Mosque in Christchurch at dawn. Photo by Fiona Goodall/Getty Images

New Zealanders were furious with news organisations that broadcast the video and manifesto related to the atrocity in Christchurch. But what about the online giants which made it all accessible?

I’m sitting in a hotel room in Singapore, having just left Facebook’s APAC HQ, about to attend a news conference organised by Google, thinking about Christchurch. News broke here around 10am local time, a little over an hour before I was due to meet with our new account manager at the social network.

I spent the hour scrolling numbly, vainly trying, like everyone else, to process what it all meant. Casualties weren’t yet known, but it was clear something awful was unfolding. I wanted to cancel the meeting with Facebook, but I also wanted to walk into the belly of this company at this horrific moment and get a sense of how it operates. Because while this appalling act of terror was committed by humans who bear the vast majority of the associated culpability, I think it’s worth contemplating the role played by the two tech giants. And the law which enabled their existence and extreme growth, and continues to insulate them from any reckoning regarding their proximity to incidents like this.

Forty people are confirmed dead, and my Twitter feed has lit up with furious, anguished condemnation of New Zealand news organisations’ decisions to host images from the terrorist’s video, his manifesto, or both. This condemnation is absolutely correct – while making calls like this in the heat of an evolving situation is incredibly complex, this should not have happened.

Yet there was far less condemnation of Google or Facebook for their roles in disseminating this material. This is not a long bow to draw: prior to attacks, communities of hatred are facilitated by Facebook, and the forums on which they propagate their ideology are indexed and made accessible by Google.

During the attack the terrorist streamed footage of it in real time on Facebook, and its reproduction was accessible through Google parent company Alphabet’s YouTube platform immediately afterwards. As long as eight hours after the incident, the video was still being successfully uploaded to YouTube.

It’s far from the first time this has happened: in 2017 a 74-year-old Cleveland man was murdered live on the internet. A Facebook spokesperson said afterwards: “this is a horrific crime and we do not allow this kind of content on Facebook.”

That same year, though, Facebook’s Mark Zuckerberg said “we don’t check what people say before they say it, and frankly, I don’t think society should want us to.” It reflects his and his company’s fundamental worldview: that information wants to be free, and the benefits outweigh the costs.

Nearly two years have passed since Cleveland, and whatever steps Facebook may have taken to prevent such an incident from occurring again, they didn’t stop this. As the Human Rights Commission’s Ryan Mearns pointed out, publication of the video by news organisations may well be illegal. Yet no statute I am aware of will touch either of the tech giants.

This is because of a foundational protection in law for platforms known colloquially as ‘safe harbour’. Put simply, it means that tech platforms which publish text, audio and video are exempted from, or significantly protected against, the likes of defamation, copyright violation and hate speech laws – laws which cover both news organisations and the public at large.

These laws were passed in the mid-’90s, before social media, before surveillance capitalism, before we knew what it was we were creating. And now, a quarter century on, with the world radically reconfigured by these companies, the law remains largely as it was.

It’s impossible to know how to weigh the different forces acting on the terrorist at the centre of this atrocity. Yet it is already clear that a desire for recognition, and for his actions to be not just known but viewed around the world, was part of the matrix of decision-making which drove him. The New York Times headlined an opinion piece ‘The New Zealand Massacre Was Made to Go Viral’, writing that “the killer wanted the world’s attention, and by committing an act of mass terror, he was able to get it.” We’ll never know whether access to hate forums and the ability to stream his actions pushed him over the edge, but it was manifestly in his mind.

It’s a little surreal being this close to the human faces of these normally remote organisations. We forget they exist, and often that seems by design. Yesterday my new Facebook account manager took me to lunch in a giant hall, behind two sets of security checkpoints and a multi-page confidentiality agreement, where the thousands of APAC employees eat a free lunch every day. We then walked through a huge enclave for clients, groaning with state-of-the-art screens, light displays and ways of understanding Facebook’s reach and power. It mapped that reach country by country, interest group by interest group.

This ability to create communities and sell access to them is what has powered its near-unprecedented profitability. Facebook made US$6.9bn in profit on US$16.9bn in revenue in its most recent quarter, still printing money despite the way it spends it.

Stuff reports that the video continues to be posted and re-posted on both Facebook and YouTube, and its existence in perpetuity is now all but guaranteed. New Zealand’s ISPs have taken the incredibly rare step of blocking access to sites which held his manifesto, but access to the platforms which continue to host his video is near-integral to our information economy now. To be clear – scrubbing the video is not a simple task. Yet both companies are among the world’s most profitable, and manifestly could do vastly more. Not doing so is a choice they make every day.

The Facebook I saw just now is its public face – open, interested, slightly goofy. I met over a dozen people working for Facebook and Google yesterday, and liked them all: sweet, funny, earnest, smart. They are not responsible for the decisions of their executives. But behind the direct employees are the contractors who moderate the endless stream of horrors Facebook’s users disseminate. Not nearly enough of them, not appropriately compensated, working too slowly to protect anyone. Facebook still, after all this time, wants to engineer its way out of the problem, and refuses to put a human in the way of the stream. A statement emailed to The Spinoff on Saturday morning talked about advanced AI able to find ‘blood and gore’ and detect audio analogues of the recording. Yet still the video abides, and thus the appalling violence is now visible around the world. The law protects Facebook, and its executive amorality ensures that incidents like yesterday’s are viewed as a corporate communications problem and not something fundamentally rotten at the core of its system and worldview.

Today we’re right to feel anguish and mourn. The terrorist and his community should rightly take the brunt of our ire. In time though, it is incumbent upon us to ask whether we should continue to insulate these still-young, now-enormous platforms from responsibility for the behaviour of some of their users. To ask whether they’re truly doing all they can to ensure that the desire for infamy and a horrific martyrdom is not facilitated by their business models.

It’s an uncomfortable question. One day soon we deserve an answer.
