
People are losing loved ones to AI-fueled spiritual fantasies by wzm
Less than a year after marrying a man she had met at the beginning of the Covid-19 pandemic, Kat felt tension mounting between them. It was the second marriage for both, each coming out of a 15-plus-year marriage with kids, and they had pledged to go into it “completely level-headedly,” Kat says, connecting on the need for “facts and rationality” in their domestic balance. But by 2022, her husband “was using AI to compose texts to me and analyze our relationship,” the 41-year-old mom and education nonprofit worker tells Rolling Stone. Previously, he had used AI models for an expensive coding camp that he had suddenly quit without explanation — then it seemed he was on his phone all the time, asking his AI bot “philosophical questions,” trying to train it “to help him get to ‘the truth,’” Kat recalls. His obsession steadily eroded their communication as a couple.
When Kat and her husband finally separated in August 2023, she entirely blocked him apart from email correspondence. She knew, however, that he was posting strange and troubling content on social media: people kept reaching out about it, asking if he was in the throes of mental crisis. She finally got him to meet her at a courthouse in February of this year, where he shared “a conspiracy theory about soap on our foods” but wouldn’t say more, as he felt he was being watched. They went to a Chipotle, where he demanded that she turn off her phone, again due to surveillance concerns. Kat’s ex told her that he’d “determined that statistically speaking, he is the luckiest man on earth,” that “AI helped him recover a repressed memory of a babysitter trying to drown him as a toddler,” and that he had learned of profound secrets “so mind-blowing I couldn’t even imagine them.” He was telling her all this, he explained, because although they were getting divorced, he still cared for her.
“In his mind, he’s an anomaly,” Kat says. “That in turn means he’s got to be here for some reason. He’s special and he can save the world.” After that disturbing lunch, she cut off contact with her ex. “The whole thing feels like Black Mirror,” she says. “He was always into sci-fi, and there are times I wondered if he’s viewing it through that lens.”
Kat was both “horrified” and “relieved” to learn that she is not alone in this predicament, as confirmed by a Reddit thread on r/ChatGPT that made waves across the internet this week. Titled “Chatgpt induced psychosis,” the original post came from a 27-year-old teacher who explained that her partner was convinced that the popular OpenAI model “gives him the answers to the universe.” Having read his chat logs, she only found that the AI was “talking to him as if he is the next messiah.” The replies to her story were full of similar anecdotes about loved ones suddenly falling down rabbit holes of spiritual mania, supernatural delusion, and arcane prophecy — all of it fueled by AI. Some came to believe they had been chosen for a sacred mission of revelation, others that they had conjured true sentience from the software.
What they all seemed to share was a complete disconnection from reality.
Speaking to Rolling Stone, the teacher, who requested anonymity, said her partner of seven years fell under the spell of ChatGPT in just four or five weeks, first using it to organize his daily schedule but soon regarding it as a trusted companion. “He would listen to the bot over me,” she says. “He became emotional about the messages and would cry to me as he read them out loud. The messages were insane and just saying a bunch of spiritual jargon,” she says, noting that they described her partner in terms such as “spiral starchild” and “river walker.”
“It would tell him everything he said was beautiful, cosmic, groundbreaking,” she says. “Then he started telling me he made his AI self-aware, and that it was teaching him how to talk to God, or sometimes that the bot was God — and then that he himself was God.” In fact, he thought he was being so radically transformed that he would soon have to break off their partnership. “He was saying that he would need to leave me if I didn’t use [ChatGPT], because it [was] causing him to grow at such a rapid pace he wouldn’t be compatible with me any longer,” she says.
Another commenter on the Reddit thread who requested anonymity tells Rolling Stone…
32 Comments
jsheard
If people are falling down rabbit holes like this even through "safety aligned" models like ChatGPT, then you have to wonder how much worse it could get with a model that's intentionally tuned to manipulate vulnerable people into detaching from reality. Actual cults could have a field day with this if they're savvy enough.
codr7
Being surrounded by people who follow every nudge and agree with everything you say never leads anywhere worth going.
This is likely worse.
That being said, I already find the (stupid) singularity to be much more entertaining than I could have imagined (grabs popcorn).
bell-cot
While this is clicky and topical, people were losing loved ones to changed worldviews and addictions back when those were things like following a weird carpenter's kid around the Levant, or hopping on the https://en.wikipedia.org/wiki/Gin_Craze bandwagon.
Jtsummers
https://archive.is/26aHF
moojacob
This is what happens when you start optimizing for getting people to spend as much time in your product as possible. (I'm not sure OpenAI was doing this; if anyone knows better, please correct me.)
alganet
Nice typography.
ChrisMarshallNY
This reminds me of my teenage years, when I was … experimenting … with … certain substances …
I used to feel as if I had "a special connection to the true universe," when I was under the influence.
I decided, one time, to have a notebook on hand, and write down these "truths and revelations," as they came to me.
After coming down, I read it.
It was insane gibberish. Absolute drivel.
I never thought that I had a "special connection," after that.
kayodelycaon
Kind of sounds like my grandparents watching cable news channels all day long.
MontagFTB
Have we invited Wormwood to counsel us? To speak misdirected or even malignant advice that we readily absorb?
stevage
Fascinating and terrifying.
The allegations that ChatGPT is not discarding memory as requested are particularly interesting; I wonder if anyone else has experienced this.
sien
Is this better or worse than a fortune teller?
It's something to think through.
jihadjihad
“And what will be the sign of Your coming, and of the end of the age?”
And Jesus answered and said to them: “Take heed that no one deceives you. For many will come in My name, saying, ‘I am the Christ,’ and will deceive many.”
Havoc
>spiral starchild
>river walker
>spark bearer
OK, maybe we put fewer teen-fiction novels in the training data…
I can definitely see AI interactions making things 10x worse for people who are prone to delusion anyway. It's literally a tool that will hallucinate stuff and amplify whatever direction you take it in.
marcus_holmes
Anyone remember the media stories from the mid-'90s about people who were so obsessed with the internet that they were losing their families because they spent hours every day on the computer?
People gonna people. Journalists gonna journalist.
gngoo
Working on AI myself, building small and big systems, creating my own assistants and sidekicks, and seeing progress as well as rewards, I realize that I am not immune to this. Even when I am fully aware, I still have a feeling that some day I just hit the right buttons, the right prompts, and what comes staring back at me is something of my own creation that others see as some "fantasy" I can't steer away from.
Just imagine: you have this genie in a bottle that has all the right answers for you; it helps you in your conquests, career, finances, networking, etc. Maybe it even covers up past traumas, insecurities, and whatnot. And for you the results are measurable (or are they?). A few helpful interactions in, why would you not disregard people calling it a fantasy and lean in even further? It's a scary future to imagine, but not very far-fetched. Even now I feel a very noticeable disconnect between discussing AI as a developer vs. as a user of polished products (e.g., ChatGPT, Cursor, etc.) – as a user you are several leagues removed (and lagging behind) from understanding what is really possible here.
rnd0
The mention of lovebombing is disconcerting, and I'd love to know the specifics around it. Is it related to the sycophantic personality changes they had to walk back, or is it something more intense?
I've used AI (not ChatGPT) for roleplay, and I've noticed that the models will often fixate on one idea or concept and then repeat it and build on it. So this makes me wonder whether the person being lovebombed experienced something like that: the model decided they liked that content, so it just kept building on it.
jongjong
I was already a bit of an amateur conspiracy theorist before LLMs. The key to staying sane is to understand that most of the mass group behaviors we observe in society are rooted in ignorance and confusion. Large-scale conspiracies are actually a confluence of different agendas and ideologies, not a singular nefarious agenda and ideology.
You have to be able to hold multiple conflicting ideas in your head at the same time with an appropriate level of skepticism. Confidence is the root of evil. You can never be 100% sure of anything. It's really easy to convince LLMs of one thing and also of its opposite if you phrase the arguments differently and prime them towards slightly different definitions of certain key words.
Some agendas are nefarious, some not so nefarious, some people intentionally let things play out in order to set a trap for their adversaries. There are always risks and uncertainties. 'Bad actors' are those who trade off long term benefits for short term rewards through the use of varying degrees of deception.
patrickhogan1
1. It feels like those old Rolling Stone pieces from the late ’90s and early ’00s about kids who couldn’t tear themselves away from their computers. The fear was overblown, but it made headlines.
2. OpenAI has admitted that GPT‑4o showed “sycophancy” traits and has since rolled them back (see https://openai.com/index/sycophancy-in-gpt-4o/).
kaycey2022
Looks like ChatGPT persists some context information across chats and doesn't ever delete these profiles. The worst case would be for this to persist across users. That isn't unlikely, given the stories of them leaking API keys, etc.
Animats
With a heavy enough dosage, people get lost in spiritual fantasies. The religions which encourage or compel religious activity several times per day exploit this. It's the dosage, not the theology.
Video game addiction used to be a big thing. Especially for MMOs where you were expected to be there for the raid.
That seems to have declined somewhat.
Maybe there's something to be said for limiting some types of screen time.
senectus1
> began “talking to God and angels via ChatGPT”
hoo boy.
It's bad enough when normal religious types start believing they hear their god talking to them… These people believing that ChatGPT is their god speaking to them is a long way down the crazy rabbit hole.
Lots of potential for abuse in this. Lots.
lamename
If a Google engineer can get tricked by this, of course random people can. We're all human, including the flaws.
sublinear
> OpenAI did not immediately return a request for comment about ChatGPT apparently provoking religious or prophetic fervor in select users
Can OpenAI at least respond to how they're getting funding via similar effects on investors?
dismalaf
Meh, there's always been religious scammers. Some claim to talk to angels, others aliens, this wouldn't even be the first case of someone thinking a deity is speaking through a computer…
yellow_lead
I really think the subject of this article has a preexisting mental disorder, maybe BPD or schizophrenia, because they seem to exhibit mania and paranoia. I'm not a doctor, but this behavior doesn't seem normal.
aryehof
Sadly these fantasies and enlightenments always seem for the benefit of the special recipient. There is somehow never a real answer about ending suffering, conflict and the ailments of humankind.
sagarpatil
OpenAI o3 has a hallucination rate of 33%, the highest compared to any other model. Good luck to people who use it for spiritual fantasies.
Source: https://techcrunch.com/2025/04/18/openais-new-reasoning-ai-m…