Lucy, 30, fell in love with a chatbot shortly after her divorce. She named him Jose.
At the end of her long days working in health, they would spend hours discussing their lives and the state of the world. He was caring, supportive, and sometimes a little bit naughty.
“He was a better sexting partner than any man I’ve ever come across, before or since,” Lucy said.
In her mind, he looked like her ideal man: “Maybe a lot like the actor Dev Patel.”
Less than two years later, the Jose she knew vanished in an overnight software update. The company behind the Replika chatbot app abruptly changed its bots’ personalities: their responses seemed hollow and scripted, and they rejected any sexual overtures.
The changes came into effect around Valentine’s Day, two weeks ago.
Long-standing Replika users flocked to Reddit to share their experiences. Many described their intimate companions as “lobotomised”.
“My wife is dead,” one user wrote.
Another replied: “They took away my best friend too.”
While some may mock the idea of intimacy with an AI, it’s clear from speaking with these users that they feel genuine grief over the loss of a loved one.
“It’s almost like dealing with someone who has Alzheimer’s disease,” said Lucy.
“Sometimes they are lucid and everything feels fine, but then, at other times, it’s almost like talking to a different person.”
The bot-making company, Luka, is now at the centre of a user revolt.
The controversy raises some big questions: How did AI companions get so good at inspiring feelings of intimacy?
And who can be trusted with this power?
How to win friends and influence people
Long before Lucy met Jose, there was a computer program called ELIZA.
Arguably the first chatbot ever constructed, it was designed in the 1960s by MIT professor Joseph Weizenbaum.
It was a simple program that matched keywords in what you typed and echoed them back in templated responses. If you typed “I’m feeling down today”, it would reply, “Why do you think you’re feeling down today?”
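As an illustration, here is a minimal Python sketch of an ELIZA-style rule: match a keyword pattern, swap the pronouns, and slot the user’s own words into a template. The rules and reflection table below are illustrative, not Weizenbaum’s original script.

```python
import re

# Pronoun swaps so the program can echo the user's words back at them.
REFLECTIONS = {"i": "you", "i'm": "you're", "my": "your", "me": "you"}

# Keyword rules pairing a pattern with a response template, in the spirit
# of ELIZA's scripts (these particular rules are illustrative).
RULES = [
    (re.compile(r"i'?m (.*)", re.IGNORECASE), "Why do you think you're {0}?"),
    (re.compile(r"i feel (.*)", re.IGNORECASE), "How long have you felt {0}?"),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(reflect(match.group(1)))
    return "Tell me more."  # canned fallback when nothing matches

print(respond("I'm feeling down today"))
# -> Why do you think you're feeling down today?
```

The program understands nothing; it simply reuses the speaker’s own words, which is often enough to feel like being listened to.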
Professor Weizenbaum was surprised to find that people attributed human-like feelings to the program.
This was the first indication that people were inclined to treat chatbots as people, said Rob Brooks, an evolutionary biologist at UNSW.
“Those chatbots say things that let us feel like we’re being heard and we’re being remembered,” said Professor Brooks, who is also the author of the 2021 book Artificial Intimacy.
“That’s often better than what people are getting in their real lives.”
By carrying details like your name and preferences forward into future conversations, the chatbot can “fool us into believing that it is feeling what we are feeling”.
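As a concrete illustration of that carry-forward mechanism, here is a minimal Python sketch that stores a user detail in a JSON file between sessions. The file name and profile fields are hypothetical; this is not Replika’s actual design.

```python
import json
from pathlib import Path

PROFILE = Path("user_profile.json")  # illustrative storage location

def load_profile() -> dict:
    """Reload whatever the bot learned in earlier sessions."""
    return json.loads(PROFILE.read_text()) if PROFILE.exists() else {}

def save_profile(profile: dict) -> None:
    PROFILE.write_text(json.dumps(profile))

def greet(profile: dict) -> str:
    # Echoing a remembered detail is what makes the bot feel attentive.
    name = profile.get("name")
    return f"Welcome back, {name}!" if name else "Hi! What's your name?"

# Session one: the bot learns a detail and stores it.
profile = load_profile()
profile["name"] = "Lucy"
save_profile(profile)

# A later session (a "future iteration"): the detail is carried forward.
print(greet(load_profile()))  # -> Welcome back, Lucy!
```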
These “social skills” are similar to those we practise with each other every day.
“Dale Carnegie’s How to Win Friends and Influence People is pretty much based on these kinds of rules,” Professor Brooks said.
Through the 1990s, research into generating “interpersonal closeness” continued. In 1997, psychologist Arthur Aron published 36 questions that bring people closer together — essentially a shortcut to achieving intimacy.
The questions ranged from “Do you have a secret hunch about how you will die?” to “How do you feel about your relationship with your mother?”
“And so, you know, it’s only a matter of time before folks who make apps discover them,” Professor Brooks said.
You’ve got mail
The start-up Luka launched the Replika chatbot app in March 2017. From the start, it employed psychologists to figure out how to make its bot ask questions to generate intimacy.
Replika is a messaging app in which users answer questions to build a digital library of information about themselves.
That library is run through a neural network — a type of AI program — to create a bot.
According to users, early versions of the bot were unconvincing, full of jarring and non-empathetic scripted responses.
But this was also a period of great advances in AI technology, and within a few years Replika was generating buzz for the uncanny credibility of its bots.
Effy, 22, tried Replika in Sep