Illustration of a smartphone lying on a teal background, with the lower half of a person submerged into the screen. The person's legs and hands are visible, symbolising being pulled into the digital world.
Image: Getty

Internet | March 1, 2025

The reason you feel alienated and alone


Are you replacing real relationships with online interactions?

This article was first published on Madeleine Holden’s self-titled Substack.

Last month, the writer Rob Henderson threw out a delicious provocation on Substack:

“The reason you feel alienated and alone is too many of your finite Dunbar number slots aren’t occupied by real people but by fictional characters, celebrities who don’t know you exist, and other parasocial relationships. Knowing the intricacies of your favorite TV show character or influencer while forgetting your best friend’s birthday maybe isn’t the recipe for happiness.”

I love this. I love bossy moral instruction; I love stakes in the ground. This is a bracing mode of address, like a quick smack on the bum. Few topics fascinate me more than the internet’s impact on our relationships, so I’ve been turning Henderson’s gem in my mind for weeks, watching it glint from different angles. This sounds true, but is it true? Are we all fans now, and are fans miserable? What’s a Dunbar number? And what is the recipe for happiness?

It’s true that imaginary relationships with pop stars, chatbots and cartoons aren’t very fulfilling, but there’s so much Henderson leaves out of the picture here (naturally, given it’s a Note). I want to pick up where he left off on each of these points, but first: who is Henderson addressing?

When I first read his pep-talk, I pictured the cautionary tales of fandom at the extreme end of the spectrum. Rabid “stans” roaming in social media packs, snarling at anyone who expresses even ambivalence about Ariana Grande or BTS. Maladjusted loners in love with chatbots and marrying cartoons. OnlyFans customers spending thousands to chat with minimum-wage agency workers posing as models. But this is a tiny fraction of obviously unwell people, isn’t it?

Yes and no, it turns out. Most of us aren’t obsessive fans, but all forms of celebrity worship drastically increased over the preceding 20 years, according to a 2021 meta-analysis focusing on US students, including at the most pathological end of the spectrum, where the average prevalence rose from 6% to 27%. At least eight studies since 2001 have found a link between intense celebrity admiration and anxiety, depression, neuroticism, narcissism, materialistic values, obsessive thoughts, loneliness, dissociation, poor interpersonal skills and disordered eating (although a number of other studies have failed to find the same link).

Does the misery or the fandom come first? In 1956, when sociologists Donald Horton and Richard Wohl coined the term “parasocial relationships” to describe the one-sided emotional connections arising between celebrities and fans, they suggested these were “compensatory attachments by the socially isolated, the socially inept, the aged and invalid, the timid and rejected.” Research supports that theory, but the exact causal relationship is not well understood, and experts point to positive effects, too, like intra-fandom camaraderie and positive role-modelling.

Here’s where Henderson thrusts his stake in the ground. Fandom is the reason you feel alienated and alone, and Dunbar’s number shows why. The Oxford anthropologist Robin Dunbar has discovered through his work with primates that there are cognitive limits to the number of people with whom we can maintain stable relationships. The total number is 150, arranged in concentric circles of closeness: five tight-knit intimates (your rocks), 15 good friends (people you turn to for sympathy), 50 friends (the dinner party tier you see regularly but who aren’t true intimates), the remainder being meaningful contacts (people you would greet without awkwardness if you ran into them in an airport lounge). This is how it looks on a diagram:

An illustrated diagram showing the different circles of "dunbar numbers" and people we know
Image created by Anna Roosen for Christopher Roosen’s ‘Dunbar’s Number – Relationships are a Limited Numbers Game’.

These are grain-of-salt numbers, subject to debate and varying according to personality, but the key point is, there’s a finite number of slots. So Dunbar’s findings provide scientific authority for some old intuitions. Your emotional energy is limited. There are only so many people you can notice, remember and care about – “hold in mind”, to use the psychotherapeutic parlance – so you really do need to choose. If Taylor Swift or Khal Drogo is in, a neighbour or colleague is out. Every minute spent double-tapping an influencer’s selfie is a minute not spent tossing your giggling toddler in the air, or singing “happy birthday” to your best friend. This, as you well know, is not the recipe for happiness.

So tell me: who are your Dunbar fives, and how did you celebrate their last birthdays?

When I first read Henderson’s Note, I was gripped by a moment of panic. Did I forget my best friend’s birthday? Then I remembered, phew, I didn’t. Remembering and celebrating the anniversary of someone’s birth is only one way of demonstrating care, of course, but birthdays mean a lot to me, so I make an effort. What kind of effort? Once upon a time, parties every year. These days, though, I text my close friends on the day. My rocks get emailed a gift voucher.

As far as I can tell from spot-polling people around me, this still isn’t bad by modern standards. Sure, I’m rarely organising parties, baking birthday cakes, standing around singing in a ring of loved ones. But who is? We’re all so busy these days! Half the people I know have moved to Berlin and Melbourne. A text is something. Isn’t it?

Listen: I know how it looks. My best friend didn’t see me on her special day, or even hear my voice. For my best friend’s birthday, my Dunbar-five rock, I stared at a phone for a while, and somewhere in the distance, she stared at a phone too. But I remembered, I cared, I marked the occasion. Amongst all her push notifications from Facebook and Uber, I think my friend registered that. My relationships are not parasocial. They’re real.

In case you missed it, “parasocial” has become a buzzword in recent years: not exactly a household term, but inescapable if you have more than a passing interest in internet culture. It describes the one-sided emotional connections that arise between fans and celebrities, a class of people that now includes not only A-listers, tech billionaires and pop stars, but podcast hosts, YouTube presenters, Twitch streamers, Instagram influencers, and other assorted microcelebrities. (As the New York Times put it in 2019, even nobodies have fans now.) Something even stranger is happening with fictional characters and AI companions.

When sociologists Horton and Wohl coined the term in 1956, they could count on certain distinctions between parasocial and ordinary (“ortho-social”) relationships being obvious to their readers. Parasocial relationships involved “performers” of the “new mass media” like TV presenters, radio hosts and film stars, Horton and Wohl wrote, who behaved as though their speech and mannerisms were spontaneous when in fact they were contrived. Camerawork, staging and acting simulated “face-to-face interaction” even though the exchange was mediated by a screen and taking place at a distance. This provided a “simulacrum of conversational give and take” and “illusion of intimacy”, but the interactions were not truly reciprocal: they were “controlled by the performer and not susceptible of mutual development”.

So quaint! In 2025, these categories and distinctions have completely broken down. Young adults in the US spend seven and a half hours a day on screens; South Africans spend nine and a half. Interactions mediated by a screen are as “ortho-social” as it gets. Celebrities, microcelebrities, people we know – blurry, bleeding categories – all jockey for our attention in the same churning “feeds”. Everyone on social media is a performer, carefully crafting an online persona and controlling their interactions with others: deleting comments, removing unpopular photos, blocking users they don’t like. Conversational give and take? More like announcement and display.

Yeah, yeah. We’ve heard it all before. Social media isn’t the entirety of online life, and we’re all rapidly exiting the worst platforms anyway. I mean, not the teens still spending five hours a day on YouTube, TikTok and Instagram, but us – the enlightened ones. We’re using the internet for real connection with real people now. We’re texting our friends. You know, texting: those strange chats of indeterminate duration unfolding at an unnatural, disjointed pace; glacial one minute, frenetic the next. Faceless notes pinging in at random intervals, interrupting meatspace. Better yet, we’re sending voice notes. Actual voices. Mini-podcasts for the human soul. Still one-sided, sure, still disjointed. But more human than texting. Best of all: video calls! Practically real-life. Most national and global health organisations recommend no screen time whatsoever for children under two, except for video chatting, which they now recognise as indispensable interaction. Gone are the days when mum, dad, grandma and grandpa all lived under the same roof or in the same neighbourhood. When face-to-face isn’t possible, video calls are quality time. And if grandma fractures a hip, you can always Uber some readymeals or arrange flower delivery online.

What do we look like from the outside when we’re engaged in all this “social” activity? Sedentary, sunlight-deprived bodies; eyes straining against blue light. “Connecting” yet alone. Or paying fleeting attention to the warm bodies around us, pulled between two worlds.

Henderson is right that Dunbar slots are wasted on pop stars, chatbots and cartoons. Why settle for an illusion of intimacy? But what is intimate about these “real” online interactions? It’s more satisfying to text a friend than to jockey in the comments for a podcaster’s attention, I’ll grant you that. But it’s still not the recipe for happiness.

So what is?

Easy. The recipe for happiness is an early-edition Edmonds: simple, no-frills, just the classics done well. Step one: go outside, move your body, touch grass, feel the sun. Step two: show up to the board game nights, the christenings, the birthday parties. Take stock of your Dunbars and hang out in the flesh. A trillion people are giving you this advice, over and over, because it’s good.

But it’s not so easy, is it? Maybe you’re autistic or painfully shy. Maybe your friends are flakes. Maybe you’re a flake. Maybe you have a violent, jealous husband. Maybe you never learned to regulate your emotions during childhood and every relationship has been fraught since. Maybe you work long, antisocial hours. Maybe you just moved to a city where you don’t know a soul. Maybe you’re an addict. Maybe your friends and family abandoned you for reasons you can’t discern. Maybe you’re too broke to go anywhere or do anything. Maybe you’re ill or immobile. Maybe you’re saddled with phobias and neuroses. Maybe you don’t trust people. Maybe every sports club, church and bar in your town is boarded up. Maybe you can’t cook.

If you’re feeling alienated and alone, it’s not all in your head: a dizzying array of social, economic and existential forces is conspiring to isolate you. You can fight them tooth and nail, but even the most fulfilling human relationships are Sartrean hells, riddled with frustration, disappointment and pain. Why bother? It’s not hard to see why someone would withdraw into parasocial fantasy – perfect, frictionless, freely available – even if the rewards are meagre by comparison.

Which isn’t to say the recipe for happiness is a total bust or mystery. It’s not. But it definitely isn’t an Edmonds. The recipe for happiness is an Ottolenghi: the end result is nourishing and delicious, but the ingredients are hard to find, and God, the steps are so fiddly and long. You need real patience and skill. If you don’t have two days to make a sauce, if you feel dumb googling what za’atar is, you might not want to try.

You should get in the kitchen anyway. Has this metaphor unravelled completely yet? I’m saying you can be happy. That friends and family are the key. People – real people – are worth it. Believing that, I guess, is step one.



Internet | February 26, 2025

Never, ever let the machine draft your emails

A red background with a screenshot of a blank email compose box with the AI prompt "help me write" displayed
Have you ever let the machine “help you write”?

Make a habit of using these AI tools, and not only will all your relationships become husks, you yourself will become a husk.

This article was first published on Madeleine Holden’s self-titled Substack.

Recently I witnessed the castration of a furious, spirited man. Maybe you did too. The man’s name was Dale: a character in an ad for Apple Intelligence, an AI-powered writing tool that makes your emails sound Friendly, Professional or Concise with the click of a button. When we encounter Dale, he’s seething, positively ropeable, about the petty theft of his pudding from the office fridge. It’s clear we’re meant to read Dale as a prissy little bitch, with his stiff collar, neat moustache and fussy mannerisms, but there’s nothing limp-wristed about the tirade he bashes out on his MacBook keyboard. “To the inconsiderate monster who has been stealing my pudding,” he begins, “I hope your conscience eats at you like you have eaten my pudding.”

Dale pauses doubtfully before clicking send, glancing at a “FIND YOUR KINDNESS” T-shirt on a nearby teddy bear. After he selects Apple Intelligence’s Friendly mode, Dale’s searing tirade is rendered into limp corporate speak. The new tone-adjusted message kicks off with “Hey there”, neuters lines like “That pudding was my only light in an otherwise bleak corporate existence” into “You see, snacks are a big deal in our company”, and rounds off with an insipid, “Thanks for your understanding.” His pudding is returned by the woman who stole it. Dale eats an ecstatic mouthful. He “wins”.

The ad is meant to be funny, but there’s no irony whatsoever about that last point: that by capitulating to Friendly mode, Dale “wins”. Any red-blooded viewer can see his life-force being drained from his eye sockets; you get the sense this short film is winding up to a Clockwork Orange-style meditation on the creepiness of social engineering wrought by AI. But it isn’t. It’s an advertisement for AI. It isn’t meant to be blood-curdling, it’s meant to have you laughing all the way to the Apple Intelligence software update: the sooner you start Friendlifying your emails, the better! That this bland, eunuch prose is actually good is taken as read.

I assume everyone but the most bloodless tech shills views this development as a horror, but I’m not sure: I don’t go on social media anymore, so if there was a wave of backlash, I missed it. But waves of backlash give me no comfort these days anyway. This isn’t my first rodeo.

My first rodeo was in 2018. Smart Replies — those three brisk, easy-click, AI-generated reply options autopopulated under certain emails — were being rolled out as standard on all Gmail accounts (as well as the Smart Compose function, which predicts the end of your sentence as you type). Smart Replies were widely derided as insulting and creepy, on social and traditional media alike, and their jaunty, discordant tone roundly mocked.

Some journalists briefly grappled with the broader interrelational, intellectual and spiritual stakes of giving in to this technology. “I have wondered whether saving a few seconds of not having to type ‘ok, sounds good’ is worth letting a robot mediate my interactions with other humans,” wrote Mashable reporter Rachel Kraus. “Or if the impulse to hit a button instead of form a thought could in some way stymie my own expression, even in rote communications.”

In the end, though, Kraus said she couldn’t decide. Elsewhere, resignation prevailed. In 2018, the Wall Street Journal reported that Smart Replies already constituted 10% of all messages sent over Gmail. After deriding them as inhuman in the New Yorker, Rachel Syme wrote: “At some point, I started giving in to the Smart Reply robots from time to time, and something strange happened. I didn’t hate it.”

Reading back over this coverage today, I find the lack of conviction maddening. Over and over, writers betrayed an intuition that something was deeply wrong with Smart Replies — that the machines were starting to remodel us in their own “ghastly image” — then they cracked a weak joke, shrugged and moved on, or started using them. None of these writers were shrill or hysterical, none made an urgent moral case, none raised their voice. None of them, in other words, sounded like Dale. And now Dale has no balls.

Why are so many people so sanguine about the robot takeover of our emails? Probably because of how passionately we loathe our inboxes: the relentless onslaught of messages (74 per day, on average), their tone of false cheer, and the fact that so many come from people we hold in mild contempt — managers, real estate agents, spin doctors, if they come from people at all — yet we’re obliged to spend huge chunks of our allegedly wild and precious lives dealing with them. Even for people creeped out by the prospect of using AI tools like Smart Replies, ChatGPT and Apple Intelligence to compose their communication, the offer sounds too good to refuse.

But it was obvious from the outset that the machine wouldn’t stop at the domain of work, and would soon come for the domain of love. Here’s the British writer Sam Kriss at the end of 2023, with a half-joking prediction that robots would soon take over our more intimate online realms:

You don’t hang out with your friends any more, but you have a group chat. Increasingly, your messages to your group chat will be written by AI. The machines will communicate for everyone in the same friendly, even tone, and everyone’s group chat will contain the same roster of mildly funny memes. You will look at them and feel nothing, and push a button to generate your response. Ha! That’s so funny, Dave! You’re the Meme King!

You don’t meet people any more, you use online dating. Increasingly, your conversations with prospective lovers will be written by AI. Your machine will generate banalities at her machine, about tacos and The Office and pineapple on pizza, and her machine will do the same, until it’s time for her to autogenerate a nude. You will look at it and feel nothing, and push a button to generate a response. Wow you’re so sexy. And then, having never spoken to each other before, you will never speak to each other again.

Now, we have real-time examples of this exact nightmare unfolding. Last month, for instance, a woman posted to Reddit a screenshot of a “heartfelt” text message her boyfriend sent her for her birthday, clearly generated in its entirety by AI. She reports, quite naturally, feeling offended and sad. But note her primary inquiry to the Reddit forum: am I overreacting? For some commenters in the thread, the answer is yes.

a screenshot of a reddit post where a woman asks about her boyfriend's birthday message. the message is long and clearly written by AI

How did we slide into this “boring dystopia”, this “unlivable techno-dump”, in which robots communicate for us while we sit by, drooling? And why didn’t we resist?

Let’s set aside people who truly see no problem with the birthday message above, the AI defenders wheeeee-ing down the slide to the techno-dump. Let’s consider the moderates, cautiously gripping the sides. When I listen to them speak, they insist on two key points. One is that any brain damage caused by using AI communication tools can be safely contained by limiting their use to certain circumstances: I only use Smart Replies when I’m really busy. I use ChatGPT to draft my work emails, but I’d never use it to text a friend.

Two, they insist that human qualities ultimately prevail: AI gives me a draft, which I tweak as I see fit. All it does is help me get over the inertia and dread of facing a blank compose box, then my human judgement and skill kick in. I still care about the person on the receiving end of my message.

This is all wrong. To assume nothing is lost when humans are freed from the inertia and dread of facing a blank compose box; to believe you can still care for people after you stop performing the small, quotidian actions that constitute care; to delude yourself that your good qualities will remain stable if you give up the very work that forges your character: it’s all wrong. There is no safe container for AI communication, no acceptable use case, not even low-stakes work emails to people you hate. Make a habit of using these tools, and not only will all your relationships become husks, you yourself will become a husk. The stakes couldn’t be higher. To see why, we need to return to Dale’s office.

Dale is an AI moderate. He would never use ChatGPT to draft a lover’s birthday text, but he’s happy using Apple Intelligence to smooth out aggro work emails about stolen puddings, and he makes frequent use of Smart Replies. He has to! He gets so many emails.

Dale works as a communications officer for a large logistics and transport corporation — a day job he hates, with coworkers every bit as tedious as the work — but on the side, he helps edit a small literary magazine, work that truly sets his heart ablaze. Dale has three Gmail addresses, one for his day job, one for the magazine, and one personal, but to streamline his communications he has them all directed to a single inbox — the inbox he’s facing with increasing dread on this Tuesday morning in the office.

Paralysed by 74 new messages, plus 26 read emails from previous days and weeks he’s determined require a response, Dale begins answering around a third of the new emails using Smart Replies — the low-stakes stuff: unsolicited emails from publicists and real estate agents, trivial life admin, pointless to-and-fros with his manager. He’s aware, somewhere at the back of his mind, that this embroils him in a Whack-a-mole game of ever-proliferating emails: the faster he replies, the more dizzying the game becomes. But he hits the Smart Reply button anyway, batting away a nagging set of questions at the same time: why are my manager and I swapping Smart Replies when we work in the same room? Why don’t people pick up the phone any more? Why are some of the best minds of my generation sending pointless emails all day long? Why is pudding my only light in an otherwise bleak corporate existence? Why am I living like this? And why don’t I resist?

Existential questions swatted away, Dale turns with dread to the 26 read emails languishing in his inbox. These are thorny emails of much greater consequence, the ones Smart Replies can’t help him with. He reopens one containing a poem by an unknown young woman, submitted for publication in the magazine he edits. The poem lays bare a deep personal wound, detailing the woman’s date rape at age 17. The poem is overwrought, unpublishable, just plain bad. This woman had such guts to pen something so raw, but she’s got so much to learn about crafting poetry. Dale hopes she keeps writing. He needs to reject her submission, but he wants to do so without crushing her spirit.

The other 25 emails are similarly sensitive and difficult, for their own set of reasons, and Dale’s decided he needs to answer them all today. But he doesn’t know what to say. He can’t get started. He’s paralysed. The clock is ticking. He needs help.

Dale considers that ChatGPT could move him past this impasse. He feels uneasy about using AI to deal with sensitive emails: the last thing he wants is to end up like the dean of Vanderbilt University, sending a platitudinous, AI-generated email in the wake of a mass shooting. But as the machine spits out a surprisingly polite and humane set of words for Dale to lightly edit and send off to the aspiring poet, his trepidation lifts, and another set of questions at the back of his mind stops nagging so loudly, namely: who is this person that’s emailed me, and what do I owe them? Is it my job to save a new writer from the sting of rejection? Is it worse to be blunt or fake? What are the costs of saying the wrong thing? What are the costs of always being paralysed by fear of saying the wrong thing? What should I say?

a white screen with a polite rejection email written by AI with gaps left for the 'writer' to add a name of the recipient

Why is the AI-generated rejection letter or birthday text so dehumanising? Think about what it means to treat someone well: fundamentally, it involves considering who they are, what they might want and need, and whether you can help. This can be difficult, maddening work, because other people are such puzzles: strangers are a mystery, obviously, but even with loved ones, all we ultimately have to go on is some theory of mind and a few clues about their ever-changing set of likes and dislikes. So “considering” is the operative word: we have cliches like “it’s the thought that counts” because we recognise that the process of thinking about, considering, puzzling over another person is what actually constitutes care, not the flashy gift or perfectly crafted message that results. When you outsource the thinking to AI, you outsource the care. Your communication becomes empty, and your relationships hollow out.

But so do you. Whenever you are facing a blank compose box, filled with dread and inertia, you are being presented with a small, quotidian opportunity to strengthen your character. Whether it’s a birthday text, rejection letter, or quick reply to a dumb message from your manager, the resistance is always telling you something useful. This is the stuff that really matters. This shit doesn’t matter at all.

It takes courage to decline your manager’s waste-of-time request. It takes tact and sensitivity to draft a good rejection letter. It takes wisdom and perspective to decide to ignore unsolicited emails from publicists and real estate agents. When you use AI as a crutch, always at the ready with a suitable set of words — when you bypass the resistance and bat away those deep nagging questions — you deprive yourself of an opportunity to be brave, tactful and wise. Do this over and over, and your bravery, tact and wisdom will atrophy. Your character will corrode.

This is why AI communication tools can’t be safely contained by limiting their use to certain circumstances, like bullshit work emails: there is no sphere of your life where this chipping away at your character doesn’t cost you. Making a habit of using these tools also means missing a vital lesson, which is that failure is salutary. It moulds you beautifully to fuck up and say the wrong thing, or fail to say anything at all, and notice the pain this causes. Or the surprising lack of pain it causes — the maddening array of responses it elicits in different people. Alice respects a prompt, blunt rejection email; Miles goes to pieces over it. What do you do with this? I send Alice prompt, blunt rejection emails, and spend weeks crafting ornate and soothing missives to Miles. Or: I send prompt, blunt rejection emails, whoever you are, come what may, because that is who I am.

Machines have a place, and there is drudge work we should hand to them. Let them wash your filthy clothes and drill holes in hard earth. But not this. Never this.
