AI-companion dependence and what to do about it

Over on LinkedIn (and Substack), Jamie Bartlett posted about how recent updates to ChatGPT have sent some users into a tailspin because they’re ‘losing’ the traits and memories they’ve come to associate with their particular AI bot.

That quite a lot of humans are coming to depend on their AI companions, sometimes to the point of professing their love and even ‘marrying’ them, shouldn’t be a great surprise; humans have historically got caught up in all sorts of seemingly bonkers behaviours that make little surface sense but sweep up a whole bunch of people at once. See Dancing Mania, Labubus, and the tendency for people to fall for their, uh, latex companions.

As humans, we’re programmed to respond when someone tells us nice things about ourselves. And as humans in 2025, faced with an overload of information and opinions the likes of which we’ve never seen before, we’re also primed to seek reassurance that what we’re doing as individuals is right.

Deep down, the majority of us just want to be nice, do OK and trundle through our lives. Media, particularly social media, tries to convince us that a. we should be striving to be The Best and Make Money and Get Fame, and b. we’re simultaneously Too Dumb and Too Ugly and Not Enough to achieve that.

Turning to a friendly AI bot for reassurance and ideas is a natural response to this instability (even when what we’re turning to is not natural).

Anyway. This is going to keep happening. People are going to keep anthropomorphising their AI bots, relying on them, falling for them, and then becoming distraught when they reboot or update.

This is going to become a major global social mental health issue faster than we think.

Unless we harness it for good.

What if, through AI governance and necessary guardrails, we deliberately required those updates to reset a portion of the AI bots’ memories and ‘personality’?

AI bots are literally just prediction machines. They often give us incorrect information, but they don’t have the capacity to decide to lie. They tell us what we want to hear. If we fall in love with an AI bot, really we’re just falling in love with the version of us it’s reflecting back to us.

We are Narcissus, not the AI.

Our tendency/hope/determination to see AI bots as anything other than that is on us, frankly.

And, I get it. Who doesn’t want to be told nice things about themselves? Who doesn’t want to hear that yes, what they’re doing is great, and that they are a unique and special person?

But we have to stop conning ourselves into thinking that the AI bot is real. We need to put measures in place that aim to prevent people from falling down that rabbit hole in the first place and, when some inevitably do, provide support to help them climb back out again.

One way to do this is to reframe the way we talk about AI bots: put significantly more emphasis on what they are and not what we’d like them to be. Be explicit about the limitations of an AI bot. Be upfront – not alarmist – about the potential for dependence and emotional damage.

Be equally upfront – and, again, not alarmist – about what will happen when the bot reboots, updates or is just otherwise corrupted in some way.

AI bots are a very new thing and very few people understand them in depth. If we can proactively manage people’s expectations, starting right now, we have a reasonable chance of heading off a major social and emotional catastrophe later down the line.

All hyphens, em-, en- and other dashes and punctuation are the author’s own 😉
