Could AI convincingly write this article?

Society, June 6, 2024

Should we be fearing or embracing AI? An argument with myself


AI might change life as we know it, or it might be profit-driven billionaire hype — maybe both. Others could easily decide for us, and we, the tired and worried, could wait it out. But should we?

Hello darkness, my old friend. I see we’re grappling with another potentially massive technological revolution. That headline almost looks like you’re asking whether artificial intelligence is good or bad. Seems unwieldy for one Spinoff article.

Ascribing moral value to technology is the source of a huge debate that has raged among technology ethicists and philosophers for years, so no, I’m not attempting that, but it’s not irrelevant in this context. “Guns don’t kill; people kill” is a prime example of an argument that says technological artefacts have no moral value in and of themselves, and I think that argument sucks. It essentially affords individual people the same power and agency as governments and the multi-billion-dollar arms industry and then asks them to resist free market forces and cope with the societal conditions that lead to the breakdown of the social contract and cause violence and harm, all by themselves. Plenty of people argue instead that technology carries the moral value of its intended use and purpose, and that it absorbs the dominant ideals about what we value.

Right, so despite saying you’re not trying to make a moral judgment about AI, you have just dropped a rather dramatic comparison with guns and suggested arguments about moral neutrality are a cop out. 

I think the comparison is useful in posing a question about exactly who is setting the line on AI’s intended purpose and use and what dominant ideals it’s carrying. What it could end up saying about us and what we actually value. As recently as yesterday, former OpenAI employees have raised the alarm about a culture of recklessness at one of the most dominant players in the AI industry and suggested there’s a 70% probability of advanced AI destroying or catastrophically harming humanity, so the gun comparison is maybe not that dramatic. The gun lobby and the companies that make billions manufacturing guns hold enormous power.

Technology companies also hold enormous power. Alphabet (Google) has a market cap of $2.155 trillion. There are only 10 countries in the entire world that have a GDP larger than that. Google is also furiously engaged in a race towards AI market dominance. Money talks, and right now, I think there’s a strong case to suggest that the free-market pursuit of profit and purported potential for radical industrial change has been one of the dominant ideals embedded within AI from the get-go. Its intended purpose and use seem very skewed towards that. It isn’t being invested in as a plaything or attention trap but as a way to drive an inordinate number of better and faster outcomes. It has utility and return on investment baked in as its core promise. That feels a bit different to the most recent wave of broadly experienced and understood technological change.

Are we not still living through an era in which our attention has been monetised, industries have been disrupted to the point of near-extinction, and entirely new ones have sprung up to replace them? Do we not have a blueprint for all this in the social media and streaming entertainment revolution?

I truly don’t think the social media era holds the blueprint for responding and making up our own minds about the purpose and intent of AI. I agree that it has hallmarks: a hype cycle, seemingly innocuous and vague promises of better lives and better societies, massive amounts of investment, concentrated power and wealth, ineffective flapping about regulation, and an opacity that hides the potential for exploitation and the remoulding of our elastic brains – but there’s one key difference.

Social media was never designed with an industrial purpose or revolutionary utility at its core. Facebook started as a site to rate college girls. Its origin story has distinct elements of “frat boys fucking around and finding out”. AI, as we understand its current explosive era of growth, is making promises that are much more concrete and much more focused on hard gains.

Social media did go on to become a multi-billion-dollar industry, and there was a while when we thought it might remodel certain aspects of business, but ultimately, I think it failed to drive big changes in the way we work. Social media largely exists as a function of distribution and marketing for most industries. It’s a standalone industry that makes its money from advertising. Its tentacles have extended into certain industries, but it hasn’t revolutionised agriculture, food production, warfare, science, medicine or engineering.

Sure, it’s wrought extraordinary amounts of change in the way we perceive ourselves, others and the world around us, and it’s brought other players within the attention economy to their knees, but ultimately social media use was democratised quite quickly and didn’t become a domain of elite or large-scale industrial pursuits. While the stakes are high at some levels, individuals can still use it in a fairly benign, social and entertaining way, at least at a surface level. Social media will be a vehicle for AI, but ultimately, I think there’s a chance we might look at it as a blip by comparison.

The potential, purpose and promise of AI feels akin to the Industrial Revolution. Optimists might align it with the Age of Enlightenment or Reason, but its intended purpose feels like a bid to revolutionise industry, business and work in ways that are far more profound than social media.

Anna Rawhiti-Connell in robot form.

Doesn’t that intended purpose and promise embedded within AI ultimately lead to broader benefit? Isn’t the promise to the everyday Joe that it will remove a heap of drudge work and allow us to focus on things that are more productive and more profitable at scale? Plenty of countries, including New Zealand, have productivity and scale issues — workers are working longer hours for less money and less output. Economies are stagnating. Wealth gaps are growing. Less drudge, more money, more time?

Possibly, and this is why my argument is not actually about whether AI is good or bad but about who is currently prescribing its value, how that value is being understood, who is not engaging with it and why. Right now, we have people atop the great mountain of techbros-cum-philosophers saying AI is the new fire (literally, it’s a comparison Google CEO Sundar Pichai made), and we have a media that is existentially threatened by it who play a role in shaping our understanding of it. Somewhere in the middle I think we have academics and public intellectuals using it and musing on its application and industries applying it in ways and at scales I can’t even contemplate.

What I don’t see is broadly democratised use cases or understanding of AI. I don’t see any real leadership at a civic level around it. I don’t see wild enthusiasm for its potential among the public. It doesn’t seem fun like social media was but instead feels cold, elite, dystopian and inaccessible. All the while, those who are privy to an understanding of its possible applications and those running at great speed to fund it, feed it, train it and use it are what? Making calls about whose drudge work is getting removed? What jobs go and what jobs stay? What climate tradeoffs are made to resource the data centres required to run the machines that train the AI? When something feels as mighty, frightening and incomprehensible as AI, the risk is we shy away and let things be done to us.

Good mention of the hype cycle. Some would argue those at both ends of the AI spectrum – those prophesying doom and those promising we can live in a promised land of less work and cured cancer – are caught up in a bubble. There’s a wonderful quote doing the rounds that says this:

 

What if it’s just hype bullshit that we’re justifiably allowed to mock because it’s the same group of people that provided the inspiration for the fantastic HBO show Silicon Valley?

That show and that quote are so great because they underpin decades of failed promises about technology and allude to what we might hold sacred. For all the streaming and the Reels and the massive amounts of information, nothing makes me feel alive like bearing witness to great art made by humans. For all the ways technological advancements have changed our lives, I still have to clean my own toilet. Meanwhile, for the wider public, beyond the halls of industry, AI is very much in its novelty era. A bunch of useless shit will be created and that puts people off.

This is real hype cycle stuff, and I don’t think we’re actually near the peak of it. AI still lacks broad consumer application. Right now, for a lot of people, using ChatGPT, Meta’s Llama or Anthropic’s Claude can feel a bit like the early days of MSN Messenger. It’s got a novelty factor: you can laugh at how dumb it might be or how cringe and rote the answers are. Sure, I can ask it to plan a trip to Queenstown or meals for the week, but it does none of the lifting around actually making that happen.

I think that’s one of the fundamental problems we have in getting people beyond the ivory towers, the boardrooms and those with platforms like me to think about it. It’s not that it isn’t possibly around the corner; it’s just that right now, use cases seem niche and, honestly, a bit boring and stupid. 

The other issue is that, unlike social media, your interactions as an individual are quite private and largely based on a fairly simple model of text input (prompting) and text, image or video output. It feels stealthy and far more opaque. We’re used to opaque algorithms on social media, but we’re often collectively experiencing and expressing things together. This is more secret and hidden. While it’s entirely possible that AI could remain within a realm of elite use cases like science, research and big industry, and you and I just live in that world, that almost makes the case for understanding it, rather than being inadvertently steamrolled by it, even stronger. There is already talk of an AI class. Don’t you think we should all be engaged in any conversation about technology that’s either about to become ubiquitous at frightening speed or potentially segregating?

I’m actually going to call you on some hypocrisy now. For two years, you’ve been telling people talk of AI leaves you cold. Bored even. Like how conversations about cryptocurrency and bitcoin left you bored. When pressed, you’ve become an exhausting proponent of faith and belief in the power of human creativity and thought. Of real-life experience and connection. Of that being irreplaceable. You lived and worked through the hype cycle of social media and ultimately penned several critiques of it. What’s changed?

Yeah, this is the bit where I confess to having spent some time wafting around the ivory towers and metaphorical boardrooms of AI education and have been doing a course on it. I’ve spent more time with ChatGPT and Claude and yes, at first, it’s very similar to the early days of MSN Messenger. But that time has also made me conscious of my own arrogance, bias and fear. There’s a hump you have to get past, and then something is unlocked. It’s confusing, existentially confronting and almost quite wondrous. There is something humbling about the sum of collective human knowledge just sitting there.

The sum of collective human knowledge? Lol. Google’s AI was recommending people put glue on pizza. 

Yes, I know. The machine makes mistakes and has hallucinations, and I reserve the right to call all of this over-hyped, but the more time you spend with an AI, the more you realise that may not be the case, or at the very least, quoting Ezra Klein quoting Marshall McLuhan, there’s a message in AI that I think is worth thinking about, especially if all the purported transformation and revolution becomes even close to reality. Klein muses that AI’s message might be that everything is derivative and that we might be derivative. I think that’s interesting and for me, gets to the heart of why we need to get our heads around it and summon some energy to think about it and use it in its current and likely fast-evolving form.

Sorry, what? Derivative? That sounds horrible and the opposite of something you, with your faith and belief in humanity and love of art and human intellect, would be interested in.  It also sounds like an over-indulgent and prosaic fear a lot of writers have.

Yes, I know, but I’ve been learning how to properly write prompts and bend the AI’s responses to be more in line with what I need and want, only to be told that the skill will likely become unnecessary in a short period of time. The AI will just know what I’m after. It is a supreme mimic. 

If AI can mimic and “perform” me as a writer or speaker, what is the value I offer? What is it that people who read The Spinoff will respond to? There’s a distillation that occurs when you contemplate that your supreme reasoning and information synthesising skills may not be that supreme after all or for much longer. What is it that I value about my own work? What cannot be mimicked? Importantly, for this industry I’m in, what is it that people value and are attracted to? 

That feels like a good and important thing to be contemplating alongside the purpose of AI. If it has an extractive, exploitative and reductive purpose as far as the things we occupy our time doing, what remains? What stands? What do we hold dear? What do we not cede? What do we give away? Those are the things we should be talking about. To return to the social media comparison, I think those are the things we did not air well or openly as we embraced its utility, convenience and addictive properties.

OK, so you’ve gone from interrogating and questioning who is deciding what AI’s ultimate purpose is and concluded it’s profit and revolution. You trekked lightly through the implications for those of us who need to work for a living, ignoring the nascent “white collar vs blue collar” worker considerations, and arrived at some kind of existential contemplation about all this?

Well, I guess that’s how I got from indifference, fear and loathing to feeling more equipped to handle whatever comes next. 

With vast unknowns about its practical applications in my life and work, thinking deeply about it, learning about it, and not being afraid is all I’ve got. What I do know is that we’ve had plenty of examples of human invention that have gone on to have cataclysmic or paradigm-breaking consequences, and we should be able to see patterns in how we react and respond. Some of us see a canary in a mine, some of us see a light at the end of a tunnel, but often we feel pressed to make our minds up quickly based on the speed of innovation, and that’s influenced by fear, bias, hype or agenda. We shut ourselves off from the possibilities and pitfalls of that invention. Alternatively, we ignore thinking about these things because we feel we lack agency or we prefer to take the utility and convenience on offer, make trade-offs later and suddenly, something is ubiquitous to the point of vast unintended consequences that are beyond our control or repair. 

But it’s still not morally neutral is it?

Look, I don’t think I am trying to reach a conclusion on the morality of AI. This is more of a time and place argument. I don’t personally know how to feel about artificial intelligence right now, and I sense others feel the same. What it has summoned is a contemplation about the interplay between technological purpose, intent and value. That, at the very least, leaves me feeling somewhat empowered in the face of bigger forces that might be steering its destiny and our fate.

As a sidebar, you know this exact conversation is a use case for AI right? This kind of prompt engineering?

Stop talking about prompt engineering. You sound like a wanker. Probably stop listening to Ezra Klein podcasts too. 

But you could prompt AI to engage in this argument, right?

I did, it’s here.

Hmmm. 1,200 words eh?

Yes, fine. This version, written first by me, the human, is too long, but you can see how the experience of reprocessing it through AI is quite humbling, right? And confronting. And lacking something. Maybe readers can decide what that is. 

Keep going!