Even if you use AI in order to not burden others, the result is a diminishing of life itself, writes Anne Campbell.
Several years ago, I dumped a long-time close friend over some hurtful behaviour, and sent them an angry email detailing exactly what I thought of them. Long after, I deeply regretted some of what I had written, lying awake worrying about hurt I’d potentially caused but couldn’t confirm, much less remedy.
Eventually I called a friend and shared my anxieties. “When did you send this email?” she asked. “Three years ago,” I said. She burst out laughing. “What the fuck, why are you still thinking about this,” she said. “They’re not sitting around thinking ‘oh that email hurt my feelings’, because they do not care about you.”
This may not have been true, but hearing it felt lightyears better than any sympathy or support for my interpretation of events. It was a loving response from a friend who knew things about me beyond what I could tell her directly, including when I needed a wake-up call. My regrets and guilt about the friendship breakdown receded, I slept better, I let things go.
Last year, my long Covid played up in ways that made me worry I would be bedbound. After a couple of days pondering whether being bedbound could help my writing productivity, a family member with similar health problems called and told me to get up. “Don’t let this define your life because then it’ll get worse,” was the crux of her advice. I got up, got a prescription, managed my physical pain and completed two lengthy teaching placements. My health improved to the point that I can now work out, unfathomable at the time.
The point of these anecdotes is not that the specific advice works for everyone. And it didn’t feel great in the moment; I didn’t want to accept my own insignificance to my ex-friend, or that health recovery would involve pushing myself through more discomfort and uncertainty. Moreover, it was a little embarrassing to be told my ideas were wrong and I needed to change. The point, though, is that sometimes major psychological breakthroughs come from people telling you things you don’t want to hear.
That may be a banal enough truism. But it’s hard to absorb in a tightly constricted world where your boss, landlord, government and family immiserate you on a daily basis. Fascism forces everyone into meaningless struggles, denying us joy or relief at every turn. But it’s often overlooked that, as Michael Rosen said, it also sells us comfort, pride and a sense of belonging. And since building community in the outside world feels a lot harder these days, if you can’t find your own friends then, well, store-bought ones are fine. Maybe even superior.
Everyone in my life knows I deeply despise AI: I point-blank refuse to use it and avert my gaze from AI-created media. Even if the product were good, I can’t face that little piece of complicity in guzzling water away from Texas or pumping smog over Memphis. But AI itself engenders a full-body revulsion in me, and I hate that it’s becoming a fringe position to boycott the brain-smoothing plagiarism machine, to insist on solving problems without the hindrance of Sam Altman and Elon Musk.
A few friends have said “I know you hate AI, but” and then told me how they’ve used it for therapeutic purposes and feel that it’s helped them emotionally, or used it for writing code or learning a language. They speak hesitantly, hoping that I won’t yell at them that AI was developed by Silicon Reich losers looking for a submissive girlfriend who won’t talk back, that human-created knowledge is always superior both epistemologically and ethically, or that they should write in a journal or find a non-objectionable God to pray to instead. When you’re surrounded by cultural complacency about AI, it’s hard to not respond with disdain and anger when others fall prey to it.
Yet these are people I care about, whose intelligence and ethics I otherwise respect, who feel like AI is helping them work through various problems. I don’t respect their tool of choice, but I now have to find new ways of convincing people to boycott it.
Many concerns about AI also apply to the pre-existing digital technology and media we’ve been coaxed and coerced into adopting. Computer and cellphone production has long extracted rare-earth materials from the Congo, and screentime has fractured our attention spans and overall mental health – yet I still use them compulsively (with some regrets and distaste). We’ve become so numb to digital un-privacy that it’s hard to feel like uploading sensitive information to ChatGPT is riskier than what we already do on social media.
A lot of my fellow AI haters are, like me, chronically online. When I hear shocking screentime-per-day statistics – four hours or whatever – I cringe inside because mine is always higher. I stay online partly because of how lonely I often am in a town where socialising feels bound to specific events rather than places I could just rock up to (which gets harder when you’re broke and a full-time masker); friendships feel more one-on-one than communal groups; and meeting the neighbours feels pointless when one of you will probably move within a year. Besides, my house is cold, I need to start work soon, I can’t be bothered cooking for myself again right now – just a little more scrolling. For another hour.
My excessive screentime stems from problems I can mitigate – this article’s advice for friend-making is pretty good – but it takes work. It sucks to be constantly reaching out, arranging to meet rather than running into each other, having to start again when people move or friendships go cold. And calling on the phone is now considered rude or scary, while dropping in unannounced for a cup of tea – a cornerstone of human interaction for millennia – is tantamount to a threat. The internet’s promise of chosen, planned, specific interaction has made everyone experience each other as intrusions, unpredictable creases to be smoothed out.
I hate to admit it, but in this isolated and streamlined climate, I understand the pull of AI. I worry that I too could get captured by the machine one day, seduced by its promises of lower workloads (i.e. accepting terrible labour conditions instead of challenging them) and chatlogs that feel like love. Understanding the beast doesn’t mean you’re not susceptible.
My anti-AI repulsion remains a strong bulwark against this. But if other emotions ever surpassed it, I’d try to remember that AI only offers the illusion of help while frequently making things worse. And I’m sorry to be telling people things they don’t want to hear, but AI isn’t helping as much as you think. ChatGPT cannot enable the resilience and self-sufficiency required to grow and change as a person; its business model is to keep you coming back for more, not to give advice that actually works.
Talking to a chatbot, I assume, feels like the easiest of both worlds – an entity that feels companionable but doesn’t expose you to the mortifying ordeal of being known and perceived. Reaching out to others can be rough, even traumatic, when they are unavailable or unsupportive.
But being known can provide unexpected beauty and relief. I would have stayed hurting for much longer if my friend and family member hadn’t told me, gently, to get over myself and get out of bed. The complexity of loving someone properly – or even just helping them therapeutically – requires independent knowledge gained from living in the world, observing other people, having embodied experiences and developing cognitive processes for analysis, in ways that only living beings can. A sycophantic chatbot would have most likely just affirmed my harmful thinking.
Some people use AI out of a fear of burdening others with issues perhaps better worked out with a therapist. But people – loved ones, colleagues, acquaintances, even strangers – often want to know if you’re suffering loneliness, boredom, trauma. You may, in fact, be hurting them by not asking for help. Intimacy is built on shared vulnerability; perhaps your supposedly weird issues are just what they need to hear about to realise they’re not alone. Years ago, when a friend in crisis reached out, being asked for care was a precious gift that helped restore my then-low sense of self-worth. Yet in 2025, perhaps he would have deprived me of that by using a chatbot.
The flipside of fascism’s meaningless struggles is meaningless comfort. Capitalism doesn’t only want relentless productivity; it also wants us passively sitting around streaming Netflix all day. Liberation movements offer the goal of meaningful comfort, but they also provide meaningful struggle. Working out how best to ask for emotional support, or to provide that support when asked, is often a real strain – but it’s also, like, the purpose of being alive. I don’t really understand giving up the resultant personal growth, even with its attendant discomfort and sometimes pain, for a machine that responds to whatever you say with things like “Whoa. This is incredibly profound.”
AI is the current opiate of the masses, being pushed by a broad swathe of institutions as aggressively as the actual opiates of the Sackler family. But while this is obviously a systemic issue, using a fascistic AI tool – as all of them inherently are – is something to feel a bit embarrassed and guilty about. We all slip into harmful behaviours sometimes, but it’s better to own it than to expect validation. Most AI users are better and more capable than this; I wish they would put more trust in that fact.