Extra! Extra! Misinformation!

Partners | April 12, 2019

This is not the internet you promised us

The livestreamed atrocity in Christchurch has put into sharp focus the pernicious potential of online media, and the ways that misinformation can erode democracy. Russell Brown explains

Four weeks on, social media has expressed the best of us. And the worst of us. On the one hand, it has provided a valuable platform for public grieving. It let us be together and connected us with vigils. It has helped amplify the remarkable response of our prime minister to a global audience, and to show all of us who our Muslim community actually is; it has brought us voices we weren’t used to hearing. At first, it seemed to speak of a rare national unity, born of revulsion at what had taken place.

But it also began to show a darker side. People who felt left out of the rhetoric of inclusion shared things that seemed to validate their feelings – a kind of content that sometimes surged past validation into something like incitement.

Misinformation about the attack, and then about the proposed restrictions on semi-automatic weapons, was shared widely on social media. Explicitly racist and Islamophobic Facebook groups, briefly quieted, lit up. There were posts not-so-coyly advocating the shooting of politicians.

In many cases, false or hateful content was removed after being reported, and sometimes the authors’ ability to post more was limited or removed. But there are, equally, many examples of users who reported such content being told it didn’t breach Facebook’s “community standards”. Everything in the paragraph above is in that category.

Similar stories played out on Twitter and YouTube. At the time of writing, the latter was still hosting Infowars videos that not only push false-flag theories, but use substantial excerpts from the killer’s livestream.

A member of Christchurch’s Muslim community stands across the road from the Deans Avenue mosque. (Photo: Marty Melville/AFP/Getty Images)

On one level, it’s not new. An ActionStation report found that around three quarters of non-white internet users in New Zealand had experienced racist abuse online and that social media was exacerbating polarisation. But there’s a troubling sense that the mediated trauma that led most of us to empathise drove others to more radical positions. And not only here. Reported anti-Muslim hate crimes soared nearly 600% in the UK in the week after the attacks.

Some of this is simply a jarring insight into who we are as humans. Some of it is the result of algorithms flooding us with what they think we want, effectively radicalising us in pursuit of clicks. And some of it is, chillingly, directed activity.

We must now accept that organised networks of bots and bad-faith human tweeters are a fact of life in our most sensitive political moments. A Scientific American story last year described directed “misinformation networks” focused on fake news and disruption. A Swansea University study of Twitter activity around Brexit and the 2016 US election concluded that “the aggressive use of Twitter bots, coupled [with] the fragmentation of social media and the role of sentiment, increases the polarisation of public opinions.”

In a remarkable interview last year, researcher Danah Boyd (who has been studying social media since MySpace) said that the lessons learned as social media marketing developed into a new discipline weren’t confined to people who wanted to sell us consumer goods and services.

The same practices worked for selling ideas, or for simply and deliberately causing social instability. (A surge of directed Twitter activity from what are now known to be Russian-linked accounts – 45,000 tweets – fell either side of the day of the Brexit vote; it was apparently aimed less at influencing the vote than at driving polarisation.)

“Many of the people who built these technologies, social media technologies, information technologies, truly imagined in the 1990s and early 2000s that if you built social technologies, you would connect people around the globe,” Boyd said. “They couldn’t fathom that their tools would be used to magnify polarisation, to magnify tribalism, and I think that’s the strange moment that we’re in today, which is everybody’s sitting here and going, ‘what have we wrought?’”

British PM Theresa May dancing robotically at the recent Conservative Party conference (no, really, she did that) (Getty Images)

A recent report by Rebecca Lewis for the Data & Society Research Institute makes a similar point, describing “political influencers who adopt the techniques of brand influencers to build audiences and ‘sell’ them on far-right ideology.”

Lewis’s conclusion emphasises “an undercurrent to this report that is worth making explicit: in many ways, YouTube is built to incentivize the behavior of these political influencers. YouTube monetizes influence for everyone, regardless of how harmful their belief systems are. The platform, and its parent company, have allowed racist, misogynist, and harassing content to remain online – and in many cases, to generate advertising revenue – as long as it does not explicitly include slurs.”

Boyd’s “what have we wrought?” sentiment isn’t limited to the US. It was palpable in the room for a panel on “keeping society safe” that I was asked to moderate at the AI Day conference in Auckland two weeks after the Christchurch attacks.

“This was an attack that happened against our way of life,” declared New Zealand-born AI expert Sean Gourley in the closing words of the panel, in response to an audience question about whether we had reached ‘Peak Social’.

“Generated by extremist actors that went on to the platforms that we’ve created, and utilised in the same way that terrorists utilised airline networks to inflict the maximum possible damage. If this isn’t the end of social, what is?”

“I feel like we’re really at a turning point with the internet,” said David Heiner, a strategic policy adviser with Microsoft in the US. “We kind of take the benefits for granted. Now we have some unintended consequences. When the technology is so pervasive, we get bad actors. Bad actors are a fact of life, and now we have to be much more deliberate about thinking about the bad actors.

“If you layer on top of that the fact that the internet is essentially unregulated – there’s an agency that handles the domain name system, but beyond that there’s nothing. And that’s always been considered a feature. There’s a libertarian ethos among tech companies that this is outside the control of any government, and that’s a good thing. And this is outside the control of any corporation. But maybe we’re coming to a point now where it’s so powerful and so pervasive that we really need to come together and envision a future that will address these concerns.”

Other panel members, including Melissa Firth, the former chief digital officer at Te Papa, expressed similar frustration. Two days after the Christchurch attacks, Firth was due to speak at Refactor, a quarterly gathering of women in tech. She focused her talk on the imperative to create “a more moral universe” in technology.

“I’ve watched it go from the utopian dream that Tim Berners-Lee had, and the whole social and democratic underpinnings of the internet, the very fact that it is both read and write, it always has felt like a real win,” says Firth.

That ended for her, she says, with the dawn of the era of fake news and Cambridge Analytica.

Facebook CEO Mark Zuckerberg testifies before a combined Senate Judiciary and Commerce committee hearing. (Photo: Alex Brandon/Pool/Getty Images)

“I feel like actually, we’ve got to this terrible place and all of us, including those of us with tech skills and in government, have a duty to upskill, move faster on policy. Designers need to understand the ethics behind what they’re doing, business needs to understand the effect of applying commercial metrics without thinking about wellbeing and humans behind it.”

Internet NZ CEO Jordan Carter acknowledges that “things have changed” since Christchurch. He’s heard senior people in his organisation talk about the problem of hateful content in a way they haven’t before.

“And I think that’s really important. Because part of what we can do with this technology is make the world a better place, and part of it is to make it worse. It’s not just neutral.”

Carter says that regulation wouldn’t necessarily breach his organisation’s long-held principles about an “open and uncapturable internet”.

“Our mandate and mission is about the internet and keeping that open and free. It’s never been about saying that all the services that go on top of that are entitled to that same openness and freedom. But it’s been elided, sometimes by organisations that are part of our constituency, into that sort of cyber-libertarian ethos: government is always bad, freedom of speech is always good, any moves to regulate content or services are always bad.

“I think that’s interesting because of what the social media platforms are. These are advertising sales machines. And they have massive impacts on media markets, on public opinion, knowledge diffusion and so on. And just as the public square and the media were always regulated, it isn’t obvious to me that these platforms should be exempt just because they’re on the internet.”

Whether the social platform owners won’t deal with bad actors or simply can’t, the proposition is on one level a simple one: nations should not continue to outsource the wellbeing of their societies to the “community standards” of platform operators whose imperatives are quite divorced from that wellbeing. It’s the kind of thing governments are supposed to address.

Exactly what governments might do about it is less clear. The underlying technologies of the internet were created within and for high-trust environments. University research departments might have had their flame wars, but they were unlikely to be the source of organised campaigns aimed at exploiting the rest of us. The internet was to be open and not to be captured. But had we anticipated spammers, scammers and phishers, we’d maybe have designed email differently and saved ourselves the ocean of junk that our email providers have to filter every day.

Things have only become more complicated. The genius of Google – creating a search engine that harnessed the wisdom of the crowd, then prioritised results in line with what we seemed to want – changed the world, and created an entirely new problem.

“It’s almost that we’ve been gifted this stream of information – more than we could ever want about more than we could ever know,” says Gourley. “That’s been amazing, the internet gave us that. But it said the only cost is that the only way you can filter this is by popularity.

“You’ve given me this gift of every bit of information, but the only thing I can do is what everyone else likes. That sucks. The algorithms are driven by popularity. That’s the single tool to filter information. Maybe there’s a little bit of demographic information and so on, but by and large it’s popularity.

“We’ve created cheap information and no way to regulate or moderate it other than popularity.”
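
To make Gourley’s point concrete, here is a minimal illustrative sketch – a toy, not any platform’s actual code – of a feed where popularity is the only ranking signal. All names and numbers here are hypothetical.

```python
# A toy feed ranked purely by engagement counts: the
# "popularity is the single filter" dynamic Gourley describes.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    likes: int
    shares: int

def rank_feed(posts: list[Post]) -> list[Post]:
    # The only signal is popularity. Nothing about accuracy,
    # quality or harm enters the ordering at all.
    return sorted(posts, key=lambda p: p.likes + 2 * p.shares, reverse=True)

feed = [
    Post("Careful, sourced explainer", likes=40, shares=5),
    Post("Outrage-bait conspiracy post", likes=900, shares=400),
]

for post in rank_feed(feed):
    print(post.title)  # the outrage-bait post always ranks first
```

Under a scheme like this, whatever attracts the most engagement – however misleading or hateful – is exactly what gets shown next.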

While our prime minister has been bagged as “soft” on Facebook for not acting as swiftly on social media as she has on the far simpler matter of semi-automatic weapons, Australia – ever swift out of the blocks with regulation – has proposed a law that makes the big platform owners criminally responsible for abhorrent violent material.

But it’s really only aimed at the very worst video content, and should be seen in light of Australia’s existing law requiring ISPs to block sites on a government list for carrying violence and unacceptable porn – a practice extended in 2015 to cover sites that might facilitate copyright infringement. (The block list is also trivially easy to get around.)

A new British government white paper takes a more sophisticated – not to say sweeping – approach. It proposes a formal duty of care to be applied to social media platforms, file hosting sites, forums, messaging services and search engines. All of these services may have to publish transparency reports about harmful content on their services, and the measures they take to combat it. Social media companies could face huge fines, and their senior executives criminal prosecution, for failing to meet standards. Some companies could be required to do proactive fact-checking during election periods.

The sheer reach of the British proposal – which may even take in dating apps that fail to prevent access by minors – could be a problem in itself. The Index on Censorship has already flagged a list of concerns with the proposals, declaring that the “wide range of different harms which the government is seeking to tackle in this policy process require different, tailored responses” and good evidential backing. It says a failure to properly define harms “risks incorporating legal speech, including political expression, expressions of religious views, expressions of sexuality and gender, and expression advocating on behalf of minority groups.”

“I’d be more interested in seeing a process that got better over time than someone who thinks that they can come up with the one true scheme that solves everything and put it in place,” says New Zealand tech commentator Nat Torkington. “I think that’s probably what’s been lacking – some visible means of improvement of the systems that Facebook and Twitter already have.

“If we tackle a less ambitious problem than getting Facebook to make every decision the right decision – picking it off piece by piece, adding religion as a protected category in New Zealand, so that hate speech against religions is covered as a reportable offence – that would be a great thing. That’s the low-hanging fruit.”

There may even be unintended consequences to publishing official standards.

“Facebook and Twitter do have review and moderation teams. But, holy shit, those teams are opaque – as are the standards, and the groups who set them,” says Torkington. “One reason for that is that these black hats operate in the grey area. Dealing with trolls is kind of like tax law, where the evaders will stick to the letter of the law but completely defeat the spirit.

“I’m not saying there aren’t mistakes made by the Facebook teams who do this stuff, but I’m also saying that one of the reasons they don’t publish their standards is because it would give some guidance on the grey areas to the people who would exploit them.”

To an extent, the very big social platforms are wealthy enough to absorb a new regulatory burden – Facebook and Google both already publish transparency reports on their own terms. But, depending on the eventual detail of the British proposals, the same might not be true of smaller content businesses.

Torkington does support the application of a greater duty of care to the largest social platforms, in recognition of their reach into our lives. But security consultant Rangi Kemara believes that leaving smaller hate sites alone would be a mistake.

“While there needs to always be pressure on these platforms to evolve with emerging threats, they are scrambling as we speak to build in mechanisms to combat what they can from their end. Social media platforms that are willing to work to reduce the use of their applications to spread hate should be supported by government in partnerships.

“Others that refuse to are another matter. 8chan, for example, refuses to take responsibility for its part in facilitating the publication of the murders of 50 Muslim members of this country – it only recently ceased hosting the murderous snuff film for distribution. Organisations like this need a different approach to educate them on how to be a functioning participant in providing platforms for members of the international community.”

On the day of the Christchurch attacks, Kemara set up the @BlocklistNZ Twitter account. It offers a helpful blocklist of mostly local users who “promote” Islamophobia, racism and misogyny – in particular, those who attack others online.

“It also has a secondary effect of reducing troll attacks from overseas bot networks,” he says. “These overseas networks appear to be monitoring and amplifying many of the accounts of a small group of these NZ Islamophobic-xenophobic activists – which in turn can lead to the horde of overseas bots and sock accounts attacking New Zealand Twitter accounts.”

In lieu of a global response that may be a long time coming, there are, as Torkington observes, smaller steps that could be taken. Extending our existing hate speech laws to recognise what’s actually happening and encompass religious vilification is one. As academic and activist Tze Ming Mok observes, the Human Rights Commission typically applies a very high bar to complaints, so robust discussion of religion wouldn’t be impeded.

One of the British white paper’s proposals – a public education programme about the techniques of misinformation and radicalisation – would be relatively uncontroversial. (And it might be useful to more than just kids: Kemara says the Twitter users he has seen become more extreme in their rhetoric over time are mostly middle-aged white men.) Social sanction has already forced a cleanup of sorts at Kiwiblog, to take one example, where the comments sections have carried hateful, racist, even genocidal sentiments for years.

News media have a role to play too. An address Danah Boyd gave to the Online News Association conference in September should be required reading for editors. It describes all the ways that bad actors work the system – from digital martyrdom to exploiting “data voids”.

But established media are prey to the same anything-for-clicks business logic on which the social media platforms are based. It was striking that the biggest audience response during the AI Day panel came in response to Gourley’s suggestion that the big platforms should be subject to a hypothecated tax that would fund genuine news operations.

It’s also somewhat fashionable to declare that one is swearing off social media altogether. That’s easier said than done. These platforms are wound into our lives in myriad ways (AUT’s World Internet Project surveys found that Māori users in particular tended to transfer entire family networks onto Facebook).

Publicly withdrawing advertising budgets from social media, however, might be more useful. I’m on the panel overseeing the NZ On Air-funded music heritage site AudioCulture, which cancelled all Facebook advertising shortly after the atrocity. Ironically, our Facebook impressions went up for the month, off the back of a post, made two hours after the attacks, linking to AudioCulture’s 2013 article on Sisters Underground’s ‘In the Neighbourhood’. It’s now our most popular post ever – people seemed to hear something in that song that comforted them at a bad time. And they found it on the same platform that was still delivering the terrorist’s snuff video.

But we do have a right to demand that these perhaps-too-big-to-fail platforms are not destructive of our societies. And there is evidence that they are: ActionStation cites 2017 research conducted by Unitec into perceptions of community safety in West Auckland, which found that “high social media use, particularly of Facebook and Neighbourly, increased Pākehā people’s fear of crime, despite being the group least affected and despite crime rates consistently decreasing.” In Germany, researchers have found a consistent correlation between higher-than-average Facebook use in regional towns and attacks on refugees. Wherever per-person Facebook use rose to one standard deviation above the national average, attacks on refugees increased by about 50%.

Protesters gather to participate in a march organized by the right-wing AfD political party as well as the right-wing Pegida and “Zukunft Heimat” movements to demonstrate against violence by refugees on September 16, 2018 in Koethen, Germany. (Photo by Carsten Koall/Getty Images)

It seems likely there will eventually be some form of global response, if only because the problem itself is global. There are risks there too. Many of the technical tools that make automated content moderation conceivable were developed by Western companies to enable China’s “Great Firewall” and, more recently, its “social credit” scheme, which monitors and scores the online actions of a vast population in the name of social cohesion.

If we’re to embrace the idea of governments enforcing a concept of social good online, can we assume that all governments are good? I’ve seen up close the way that Russia, China, Saudi Arabia, Iran and Indonesia exploit the UN drug conventions to justify repressive actions. Do we want to enable that?

But, as Mok says, “bad governments tend to do what they want anyway.”

Massey University Social Entrepreneur in Residence Thomas Nash, who served on the board of the Nobel Prize-winning International Campaign to Abolish Nuclear Weapons, sees a parallel in addressing the harmful elements of the internet, pointing out that New Zealand “has led the way before” on human rights and other issues.

“The world is not awash with normative, resolute leadership amongst the diplomatic forums that govern things. We have a huge opportunity. We’ve got the prime minister to do it and we’ve got the context now, post-Christchurch, with the anger and frustration around these social media platforms. So I hope we do.”

“One of the things that we always say is ‘we need to go and get the tech people to go and solve our problems’,” Gourley concludes. “When did democracy work like that? We’ve got to engage in a conversation about what sort of society we want and do it in a way that acknowledges the incredible technology we’ve created and decide what we want to do with it.

“And if we expect social media to do that, we’re deluding ourselves. I don’t believe social media in its current form will exist ever again, because it is too manipulable and it is too big a part of our lives to control our view into the world – on something that’s making 30 bucks a year buying and selling us to a set of algorithms. It’s ridiculous. We’ll look back at this and it’s going to be equivalent to installing bloody asbestos.”

This content was created as part of a paid partnership with ActionStation. Learn more about our partnerships here.
