Media, February 24, 2019

Machine-generated text is about to break the internet

Five years ago, Mark Rickerby crafted code to analyse the full text of the Whaleoil blog after Dirty Politics. That experience, and the unveiling this month of a language model trained on internet text that can generate startlingly coherent prose, offer a profound warning of the dangers of allowing AI innovation to be controlled by a few giant players, he writes

In August 2014, Nicky Hager’s book Dirty Politics was released, revealing how Cameron Slater’s Whaleoil blog was the centrepiece of an organised system of political attacks and smear campaigns, some funded and ghostwritten by anonymous clients. Soon after publication, I started receiving emails from New Zealand political observers asking whether machine learning could be used to independently identify anonymous authors and verify the claims made in the book.

I already had my suspicions about Whaleoil, which had led me to dabble with research into methods of identifying anonymous comment trails left by sock puppets. Realising there was something to this, I quickly crafted code to automatically download the entire Whaleoil blog, then cleaned and transformed each of the 42,000 posts into a format suitable for analysis. Fortuitously, Nicky Hager introduced me to artist and computational linguistics researcher Douglas Bagnall, who had started working on the same problem. We combined our efforts and began to dig into the text.
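The crawl itself was the straightforward part: fetch each post, strip the markup, and keep the text along with basic metadata. Below is a minimal sketch of that kind of pipeline in Python, assuming a hypothetical list of post URLs and generic WordPress-style markup; it is not the code or the selectors actually used at the time.

```python
# Minimal crawl-and-clean sketch (hypothetical URLs and selectors, illustrative only).
import json

import requests
from bs4 import BeautifulSoup

def fetch_post(url):
    """Download one post and reduce it to plain text plus basic metadata."""
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    title = soup.find("h1")
    body = soup.find("div", class_="entry-content")  # assumed WordPress-style markup
    return {
        "url": url,
        "title": title.get_text(strip=True) if title else "",
        "text": body.get_text(" ", strip=True) if body else "",
    }

def build_corpus(urls, out_path="corpus.jsonl"):
    """Write one JSON record per post, ready for downstream text analysis."""
    with open(out_path, "w", encoding="utf-8") as out:
        for url in urls:
            record = fetch_post(url)
            if record["text"]:
                out.write(json.dumps(record, ensure_ascii=False) + "\n")
```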

The research was sprawling and difficult to explain succinctly but the results Douglas achieved reinforced our suspicions that the fingerprints of multiple authors could be detected in the publicly available blog posts using machine learning, without relying on evidence from leaked emails and chat logs.

Consistent stylistic patterns emerged that were suggestive of many hands cooking the copypasta. For instance, the spelling variations of ‘tipline’, ‘tip line’ and ‘tip-line’ tended to be linked to Cameron Slater, Simon Lusk and Carrick Graham, respectively. Statistically improbable use of the word ‘troughers’ seemed to indicate that other authors were deliberately adopting Slater’s characteristic lexicon.
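Surfacing a signal like that requires nothing exotic: once the corpus is clean, you can tally each spelling variant post by post and compare the distributions. A rough illustration follows, assuming the JSON-lines corpus format sketched earlier; the real analysis relied on far more careful stylometric features than raw counts.

```python
# Tally spelling variants of a term across a corpus of posts (illustrative only).
import json
import re
from collections import Counter

VARIANTS = {
    "tipline": re.compile(r"\btipline\b", re.IGNORECASE),
    "tip line": re.compile(r"\btip line\b", re.IGNORECASE),
    "tip-line": re.compile(r"\btip-line\b", re.IGNORECASE),
}

def variant_counts(corpus_path="corpus.jsonl"):
    """Count how often each spelling variant appears, post by post and in total."""
    totals = Counter()
    per_post = []
    with open(corpus_path, encoding="utf-8") as f:
        for line in f:
            post = json.loads(line)
            counts = {name: len(rx.findall(post["text"])) for name, rx in VARIANTS.items()}
            per_post.append({"url": post["url"], **counts})
            totals.update(counts)
    return totals, per_post
```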

Douglas’s innovative approach to authorship detection using recurrent neural networks led to him winning an international software competition and contributing to the Kōkako language recognition system for Māori broadcasting. I landed a job at a Sydney startup designing systems to process large scale social media analytics. Meanwhile, the Whaleoil text analysis remained unpublished as Slater’s status as a media influencer waned and various defamation proceedings ground their way through the cogs and wheels of the New Zealand court system.

In many ways, the Dirty Politics scandal was an early warning signal of the serious threats to democracy that emerged later in the decade, most notably in the UK and the US.

While our algorithms were crawling through the Whaleoil corpus in mid-2014, Gamergate was growing from a toxic trash fire in the gutters of the internet into a blazing wildfire sweeping across popular media platforms – a rage-fuelled inferno of entitlement and bigotry that eventually metastasised and merged into the Brexit and Trump campaigns.

While much has been made of the operations of Russian-orchestrated troll farms and political data breaches over this time period, the core narrative of the Cambridge Analytica saga is extraordinarily similar to Dirty Politics: a loosely affiliated group of wealthy corporate donors and political operatives — connected to a mainstream party but operating outside its formal structures — builds a disinformation machine to identify and exploit online fracture points. The tactical competency and levels of funding might have been worlds apart, but the strategic motivations were broadly congruent.

If we accept the claims made in Dirty Politics (and this is itself an exploitable fracture point), Whaleoil pioneered a business model of delivering online attacks on behalf of anonymous clients and operated for years without significant pushback from news media or broader public scrutiny. The work on detecting anonymous authors could have been done independently at any time if someone with the right skills, knowledge and resources had the insight to do it, but none of us did. It took the exposure of internal documents and chat logs, and the focused efforts of Nicky Hager, for all this to come to light and be taken seriously.

These scandals of 2014 seem to have happened an eternity ago, given the cavalcade of extremely online horror we’ve experienced since then. Political black ops have moved from the blogs and mailing lists of the early Obama years onto the giant ad-supported social media platforms and, as a consequence, these platforms are now facing unprecedented scrutiny and criticism. Particularly relevant here is the industrial scale of misinformation and deceptive messaging that has emerged in a dynamic interplay with the past decade’s dominant tech trend of algorithmically curated newsfeeds and playlists.

When we talk about algorithms promoting misinformation, it’s easy to lose sight of the fact that there are people behind the misinformation, purposefully optimising and tweaking content to exploit how the algorithms work. Understanding the funding and intentions behind these groups matters. This in no way diminishes the responsibility of platforms to moderate harmful content and respond effectively to abuse and deception. Unfortunately, the evidence to date suggests that if platforms remain locked in an arms race with those seeking to exploit them, there’s little they can do without drastically redesigning their product architectures—a move that at worst would destroy their businesses, and at the very least would require investors and owners to willingly downscale and give up some of the power they hold. Engineers and product designers who’ve left the abuse prevention and safety teams at these companies have described themselves as haunted by their inability to counteract these problems.

My major concern is that our society has been far too slow in coming to terms with online abuse, misinformation and the problems of centralised platforms that chose scale over safety as their organising principle. As we grapple with understanding various existing threats to our democracy and sovereignty, we run the risk of our public discussions and responses becoming fixated on the problems of yesterday, while a whole new dimension of online hell is opening up that we are utterly unprepared for.

Machine-generated text is about to break the internet.

On February 15, 2019, a furore erupted across the AI, machine learning and computational creativity communities following a remarkable announcement by OpenAI — the non-profit research organisation backed by some of the most powerful people in the US tech industry, including Elon Musk, Peter Thiel and Jessica Livingston. The focus of the announcement was a demonstration of GPT-2, a language model trained on internet text that can generate writing in many different styles and answer questions given a specific prompt. The generated prose shows a level of sentence and topic coherence never before seen in systems based on machine learning.

OpenAI has an organisational mandate to share its research with the global scientific community, so it caused controversy by choosing not to release the data sets and models behind GPT-2, “due to concerns about large language models being used to generate deceptive, biased, or abusive language at scale”.

There’s a vast range of possible bad outcomes here, from existing methods of detecting spam and fraudulent profiles becoming instantly obsolete, to the weaponisation of generated text to automate political attacks and spam platforms with astroturfed campaigns at an unprecedented rate. Even the director of AI at Tesla made an alarming statement about the emerging reality of ‘deepfake’ text: “The more of your writing you put online the higher risk you’re taking on for future language models++ fine tuned on your data to impersonate you.”

Ironically, while technologists and researchers were deep in discussion of what this all means, the GPT-2 announcement and demo (with its emphasis on mitigating potential for abuse and misinformation) actually kicked off a secondary wave of misinformation and harmful speculation.

Because nobody really understands what these systems are doing or how they work, there can be an extraordinary level of cognitive dissonance in the responses to new AI technology. In a culture awash with myths about the personhood of thinking machines and a forthcoming Singularity, there’s a whole realm of AI discourse that has become profoundly unmoored from reality.

The Turing test, the trolley problem and similar philosophical thought experiments are like catnip to rationalists and techbros who are captivated by the idea that the human dimensions of AI can be reduced into a series of games and puzzles. Sure, these puzzles are interesting and we can learn a lot from them, but the situations they describe are nowhere near adequate to understand what’s happening with the actual AI systems being deployed in 2019.

It’s simply not possible to detach technology from its social context without drastically distorting our understanding of how it works. One of the main reasons why the Turing test has so little relevance to our current generation of technology is because in practice, the systems we’re talking about are always controlled in the background by puppet masters optimising and tweaking them to respond in more and more believable ways. The entire output is human-created, human-selected, and human-edited. It takes wilful effort to avoid seeing these systems as inseparable hybrids of human intention and machine capability made possible by global supply chains and big data.

Yet at the most extreme end of the spectrum, people carried away by the discourse assert that the Turing test is ipso facto valid as a consequence of how we perceive language and writing, and thus that a system like GPT-2 can think and should be granted personhood.

To understand the absurdity of this claim, we need to acknowledge that these systems are mostly enormous bags of numbers representing the probability of certain words appearing in phrases together. GPT-2 exhibits intelligence not because it thinks (whatever that means), but because it taps into a vast corpus of internet text like a mirror of collective consciousness that lays bare the human act of writing.
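Even a toy model makes the ‘bag of numbers’ point concrete: count which words follow which, turn the counts into probabilities, and sample. The sketch below is a deliberately crude bigram generator; it illustrates statistical text generation in general, not the far larger transformer model behind GPT-2.

```python
# Toy bigram "language model": word-pair counts turned into sampling weights.
# A crude illustration of statistical text generation, not how GPT-2 itself works.
import random
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which word follows which in the training text."""
    words = text.split()
    follows = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1
    return follows

def generate(follows, start, length=20):
    """Sample a continuation, weighting each next word by how often it was seen."""
    word, output = start, [start]
    for _ in range(length):
        candidates = follows.get(word)
        if not candidates:
            break
        choices, weights = zip(*candidates.items())
        word = random.choices(choices, weights=weights)[0]
        output.append(word)
    return " ".join(output)

model = train_bigrams("the cat sat on the mat and the dog slept on the mat")
print(generate(model, "the"))
```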

Statistically generated writing succeeds because of a principle known as the distributional hypothesis: words that appear in similar contexts tend to have similar meanings. Statements like “kitten is to cat as puppy is to dog” show how consistent language patterns—in this case, the relationship between young and adult animal names—can be extracted from large data sets without reference to any explicit knowledge of grammar or meaning.
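Off-the-shelf word-embedding tools show the same principle in action: relationships learned purely from co-occurrence statistics support this kind of analogy arithmetic. Here is a quick sketch using gensim and one of its publicly downloadable GloVe models; the exact ranking of results will vary with the model chosen.

```python
# Analogy arithmetic over word vectors learned from co-occurrence statistics alone:
# vector(kitten) - vector(cat) + vector(dog) should land near vector(puppy).
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-50")  # small pretrained GloVe model, downloaded on first use

result = vectors.most_similar(positive=["kitten", "dog"], negative=["cat"], topn=3)
print(result)  # 'puppy' typically appears at or near the top of the ranking
```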

The thought-experiment discourse would be mostly harmless if it weren’t directing attention away from the genuine problems. Speculating about personhood and intelligence while treating how and why the technology works as a black-box mystery simply gives more power to the handful of large organisations that have exclusive control over its use, and confounds public debate about the risks associated with this technology.

Systems that rely on statistical word distributions are uniquely susceptible to reproducing patterns of bias and abuse derived from the source text. We’re already struggling to maintain reasonable and ethical standards of behaviour in online spaces. Taking this mass of content and using it to build giant language processing and decision-making systems seems profoundly irresponsible without careful curation of inputs and an awareness of the social context of sexism, racism, and other implicit and explicit biases that will undoubtedly show up in the data. In a more subtle but equally dangerous confluence, content that was understood in its original setting as disputed and problematic could easily take on a value-free existence in a data set, leading future systems to make unfounded assumptions about its neutrality and objectivity.

These problems have already been observed in live systems and have caused harm in practice. Despite a chorus of warnings from experts and critics, many large AI systems are still going into production use without any standards or regulations around accountability and oversight, let alone analysis and research of their potential impact on society.

The repercussions of the GPT-2 announcement are not only about the risks of AI, which are well understood and widely documented. They also force us to confront complex questions about the role of writing and authorship in our culture.

Whaleoil was successful for so long in part because it subverted our basic assumptions about attribution and credit. We expected to see a partisan blog. We expected that people would publish under their own names. Maybe this was naive, lacking insight into realpolitik. But it happened, and we need to learn from this.

The two-track strategy of attacks and smear campaigns described in Dirty Politics corresponds to a two-track level of media literacy. Communications professionals, information activists and anyone else with enough knowledge and training can interpret the multiple layers of meaning and intention behind online content and are able to analyse and apply counter-intelligence techniques to media. Those without this literacy tend to look at content directly as communicating facts or arguing about a topic.

High quality, machine-generated prose will soon create another literacy gap between those who can recognise the ‘tells’ of algorithmic content and understand how to feed in parameters and control its output, and those who simply see the words.

Already, we can see this literacy gap emerging in the distinction between ‘weird Twitter’, which has carved out a space of creative resistance by adopting the poetics of glitchy generative bots and accidental text, and the vast wasteland of #MAGA profiles, where real people performatively regurgitate the language and aesthetics of automated marketing bots and astroturfed political campaigns without consciously realising it.

This is understandably frightening for writers and journalists who worry it will lead to a widespread cultural devaluing of written work. By leveraging vast data sets, these emerging systems threaten to shatter the connection between creative effort and its output.

The difficulty here is that contemporary AI has developed into a narrow, highly specialised technical discipline that is inaccessible without a mathematics or engineering background. While there are many examples of artists and writers doing fascinating things with AI, this is still the exception rather than the norm. Many people feel excluded and intimidated. Instead of seeing the incredible possibilities for AI within their own working practice, they see hostile narratives of job losses and obsolescence.

Does it matter how these systems work and why? Or do people only care about the results? With everyday access to giant data sets of human writing, will plagiarism become irrelevant? How do we think about authorship attribution and co-authorship between writers in collaboration with systems? How do we encourage and support creative and beneficial uses of this technology while avoiding potential for abuse, deception and misinformation? What are the compromises needed?

These questions are not science or engineering problems but they should be at the forefront of AI research involving generative writing.

In 2014, with techniques and algorithms that were cutting-edge at the time, it still took weeks of effort and many wrong turns to detect the outlines of multiple authors within the relatively small Whaleoil blog corpus.

I went into the Whaleoil text analysis dreaming of tools that would automatically alert journalists and political observers to the presence of problematic content and deceptive influence campaigns on blogs and social media. Now, with experience running a marketing analytics system for large commercial customers, I have a much more nuanced understanding of how difficult, time-consuming and expensive it is to get robust detection systems into production (and to keep them running).

In 2019, there is simply no way that independent researchers — let alone New Zealand news organisations and public institutions — can respond to the potential scale and reach of automated fakery and deceptive messaging, should this emerging technological capability get into the hands of those motivated to cause chaos and disrupt politics and culture.

Before we can discuss regulating and adapting to this technology, we need to reach a broad public consensus that there is nothing necessary or deterministic about the path of technological progress. Like all other products of engineering and design, AI systems reflect the social and cultural values of their creators, determined by deliberate choices at every step of development.

The current incentives of big-budget AI research are intrinsically connected to the same obsessions with growth and scale that drive the giant social media platforms. AI and machine learning research teams frantically compete to compile larger and larger data sets and construct new algorithms to pass extraordinary benchmarks of speed and accuracy.

As these capabilities expand and the complexity of these systems grows, reproducing this technology will become more and more inaccessible, to the point where we have a dangerous power imbalance between the handful of large organisations with the resources to develop and deploy this technology, and the rest of society struggling to keep up. We should be less worried about the capabilities of AI as a technology and more worried about who controls access to it and determines its scope.

None of this is inevitable. We can change it. But in order to do so, we need to rethink a lot of basic assumptions about how we regulate and develop policy around technology and how we make it accessible to people.

In New Zealand, the worst thing we could do now in terms of AI regulation (aside from doing nothing at all) is to fixate on AI as a central organising principle, with a focus on developing ‘AI policy’ or treating AI as a subset of ICT policy. The people best placed to assess the technical and economic benefits and the social consequences of AI systems are the specialists, researchers and subject-matter experts in particular sectors, whether that be agriculture, medicine, transport or journalism. Developers of technology and government policymakers alike need to be guided by this interdisciplinary expertise. Rather than introducing new regulatory bodies for AI, we should look to expand existing agencies to support these new responsibilities.

We should be wary of the easy technocratic solutions that will invariably come up. Online voting, digital identification and automated censorship systems are most likely to be promoted and sold by the same giant players and investors that allowed many of these online problems to fester in the first place.

Protecting our public sphere from organised and well-funded misinformation and attack campaigns is a difficult challenge with many different dimensions but we need to confront it. The things that will bolster our defences against abuses of AI are exactly the same as the defences against Dirty Politics: greater scrutiny of powerful organisations, transparency in campaign funding and corporate donations, more granular disclosure of special interests and lobbying, massively increased resources for journalism and investigative reporting, as well as funding to leverage emerging AI tools within news organisations. Most importantly, we need greater support for everyday participation in democracy without having to tolerate attacks and corrosive influence campaigns.

The dreams of radical transparency and freedom which characterised the early days of the World Wide Web are vanishing into the rear-view mirror of recent history, replaced by the savage memetic culture of the past decade where polarised tribes transmit amplified abuse across the thunderdome of planetary scale cloud computing.

Rather than liberate us as the techno-utopians envisaged, the internet has imprisoned us. It’s no longer a place where nobody knows you’re a dog.

We should not interpret this present cultural moment as a cue to pessimism, passively accepting an inexorable slide into digital dystopia. With the scope of these problems now in full public view, we have a huge opportunity to develop new homegrown community platforms, hold the giant social media companies more accountable and build up a creative and socially beneficial response to emerging AI technologies before they spread uncontrollably into the public sphere with no oversight.

The age of oversharing and internet empowerment is over. It’s up to us to figure out what will take its place.

