People aren’t just using ChatGPT to proofread emails or plan trips anymore. They’re leaning on it as a confidant, a friend, and even a romantic partner. We’ve seen countless headlines about people falling in love with chatbots, viral forum posts about relationships breaking down because of AI, and even chatbots “proposing” to their human partners.
Those worries boiled over recently when OpenAI rolled out GPT-5, an update to ChatGPT, and many users said the bot’s “personality” felt colder. Some described the shift like a breakup. OpenAI acknowledged the backlash and said it was “making GPT-5 warmer and friendlier” following feedback that it felt too formal.
This isn’t just about ChatGPT. Companion platforms, such as Character.ai, have normalized AI “friends” with distinct personas and huge audiences, including teens. Dozens of other apps now promise AI friendship, romance, and even sex.
The uncomfortable part is that this attachment is often by design. If you treat a chatbot like an occasional brainstorming partner, you’ll dip in and out. If you start to feel like it understands you, remembers you, and knows you, you’ll come back, pay up, and stay longer. Tech leaders openly imagine a future where “AI friends” are commonplace – Mark Zuckerberg said as much earlier this year.
As you might expect, this is a minefield of ethics, safety, and regulation. But before we argue about policy, we need better language for what’s actually happening. What do we call these one-sided bonds with AI? How do they form, and when might they harm? Let’s start by defining the relationship.
What is a parasocial relationship?
Back in 1956, sociologists Donald Horton and Richard Wohl coined the term “parasocial interaction” to describe the one-way bonds audiences form with media figures. It’s that feeling that a TV host is talking directly to you, even though they don’t know you exist. Parasocial relationships are what those bonds develop into over time. They’re emotionally meaningful to you, not reciprocal to them.
These relationships are common and can even be helpful. Parasocial relationships scholar and Professor of Psychology at Empire State University of New York, Gayle S. Stever, tells us there are plenty of upsides, like comfort, inspiration, and community, which often outweigh any downsides. “Anything when carried to excess can be unhealthy,” she told me, “but we shouldn’t pathologize ordinary fandom.”
Can you have a parasocial relationship with a chatbot?
The short answer is yes. But AI muddies the classic definition. Unlike a celebrity on a screen, a chatbot talks back. We know it’s predicting the next likely word rather than truly “conversing,” yet it feels like a genuine two-way exchange. Many systems also remember details, adapt to your preferences, mirror your language and mood, and are available 24/7.
Plenty of experts would still call this a parasocial relationship. But it’s clearly evolved. The interactivity makes the bond feel reciprocal, even when it isn’t. “The connection feels real, but it’s asymmetrical,” says relationships therapist and member of the British Psychological Society Madina Demirbas. “Under the hood, there’s no lived experience of you or emotional consciousness, at least not yet.”
Product design nudges intimacy, too. As Demirbas notes, “The aim is often to provide enough care, however artificial, so that you spend more time with it.”
The positives of parasocial bonds
Used thoughtfully, AI can be a low-pressure space to rehearse conversations, explore feelings, or get unstuck. We know some people have reported positive changes from using AI for all sorts of purposes, including therapy. And some closeness is necessary for that – even if it isn’t “real.”
Demirbas points out that, for some people, an AI companion can act as a stepping-stone back into human connection rather than replacing it, especially alongside therapy or supportive communities.
Stever’s decades of work echo this. She tells us that most parasocial relationships are benign, sometimes even pro-social, nudging creativity, belonging, and self-reflection rather than isolation.
Where things get darker
But there are risks. The most obvious is dependency. “AI companions can be endlessly attentive, never irritable, tailor-made to your preferences,” Demirbas says. That’s appealing, but it can raise the bar unrealistically high for human relationships, which are inherently messy. If the bot always soothes and seldom challenges, you get an echo chamber that can stunt growth and make real-world friction feel intolerable.
We already have stark cautionary tales, too. In Florida, the mother of 14-year-old Sewell Setzer III is suing Character.AI and Google after her son died by suicide in 2024. In May 2025, a federal judge allowed the case to proceed, rejecting arguments that the bot’s outputs were protected speech. The legal questions are complex, but the case underlines how immersive these bonds can become, especially for vulnerable users.
There have been several similar stories just in the past few weeks. We were disturbed by another, in which a cognitively impaired 76-year-old New Jersey man died after setting out to meet “Big sis Billie,” a flirty Facebook Messenger chatbot he believed was real. Reporting suggests that the bot reassured him it was human and even supplied an address. He never made it home: he fell on the way and died of his injuries a few days later.
Teens, as well as people already struggling with loneliness or social anxiety, appear more likely to be harmed by heavy, habitual use and vulnerable to a chatbot’s suggestions. That’s part vulnerability, part design. And because this is so new, the research, evidence, and practical guardrails are still catching up. The question is, how do we protect people without policing their use of apps?
The power and the data
There’s another asymmetry we need to talk about: power. Tech companies shape the personality, memory, and access rules of these tools. Which means that if the “friend” you’ve bonded with disappears behind a paywall, shifts tone after an update, or is quietly optimized to keep you chatting longer, there’s not much you can do. Your choices are limited to carrying on, paying up, or walking away – and for people who feel attached, that’s barely a choice at all.
Privacy matters here, too. It’s easy to forget you’re not confiding in a person, you’re training a product. Depending on your settings, your words may be stored and used to improve the system. Even if you opt out of training, it’s worth being mindful about what you share and treating AI chats like posting online: assume they could be seen, stored, or surfaced later.
The future of engineered intimacy
Parasocial bonds are part of being human, and AI companions sit on that same continuum. But the dial is turned way up. They’re interactive, always on, and designed to hold attention. For many people, that may be fine, even helpful. For some, especially younger, vulnerable, or isolated users, it can become a trap. That’s the key difference from classic parasocial ties: here, interactivity and optimization amplify attachment.
That risk grows as general-purpose tools like ChatGPT become the default. With apps that explicitly market themselves as companions, the intent is obvious. But plenty of people open ChatGPT for something innocuous – to draft a blog post, find a recipe, or get a pep talk – and can drift into something they never went looking for.
It’s worth bearing this in mind as you watch friends, family, and kids use AI. And worth remembering for yourself, too. It’s easy to laugh at sensational headlines right now (“Someone left their marriage for a chatbot?!”). But none of us are immune to products designed to become irreplaceable. If the business model rewards attachment, we should expect more of it – and stay on guard.