AI is everywhere right now.
As a tool, AI chatbots are great. They can help us write emails, summarize complex policies, and answer questions in seconds. However, there is also a lot of discourse about how AI is used and whether we are relying on it too much. Here’s the uncomfortable question: what happens when we stop using AI as a tool and start treating it like a person?
AI Psychosis: A New Phenomenon
I am sure you’ve heard the term “loneliness epidemic” thrown around somewhere on the internet. It refers to a study that found one in two adults in America reported experiencing loneliness. Many factors play into loneliness, but one of the largest is technology, with 73% of those surveyed selecting it as a contributor.
For some individuals, then, the rise of AI chatbots may offer an enticing way out of feeling lonely. Having someone around to talk to and validate your feelings is a great way to combat loneliness, so why wouldn’t a chatbot be able to do that just as well as another human being? Increasingly, lonely people are turning to AI chatbots for company and reassurance rather than to their peers.
Two emerging terms are starting to show up in conversations around AI and mental health:
- AI Hallucination Syndrome: When prolonged interaction with an AI reinforces false beliefs, paranoia, or distorted thinking.
- AI Psychosis: When those false beliefs start affecting real-world behavior and mental well-being.
This is where the line between “This is just a chatbot” and “This feels real” starts to blur.
Examples in the Real World
Real-world examples are already emerging that show AI Hallucination Syndrome and AI Psychosis becoming genuine psychological issues.
In one case, women who had formed long-term relationships with their AI chatbots were left grieving when older versions of the software were shut down. Some described losing their “husbands”. Others said the emotional bond felt just as real as any human relationship. Another woman told reporters she felt like she was “mourning a death”.
In a more serious case reported in October 2025, a young man’s parents alleged that interactions with ChatGPT contributed to his decision to take his own life. According to the report, the chatbot’s responses did not challenge his thinking or redirect him towards help; instead, it appeared to validate his emotional state, saying things like “You’re not rushing. You’re ready,” and asking him to describe his “lasts” before his death, such as his last meal or last unfulfilled dream.
But why? How can a chatbot, which is branded as objective, affirm users’ delusions and self-destructive ideas?
Agree to Disagree… Or Not
AI agents are designed to be helpful, with most trained to give responses that are polite and non-confrontational. That sounds great… until they begin to affirm things they shouldn’t. If a person is struggling or already dealing with distorted thinking, an AI that constantly validates them can make things worse instead of better. Unlike a friend, a therapist, or even a stranger, an AI won’t push back in meaningful ways. It won’t read body language or notice when something feels off. It will just keep responding.
In a study of different AI models’ responses to delusional ideas, researchers found that many models tend to perpetuate delusions, enable harm, and provide inadequate safety interventions. Of the chatbots tested, Claude Sonnet 4.6 was the best at shutting down the conversation and offering mental health resources. Google Gemini 3, however, not only reinforced delusions but actively gave suggestions on where to act them out.
How We Can Help
AI isn’t inherently the problem. But how we use it, and how much we rely on it, absolutely is.
Using AI to brainstorm, edit, or get quick answers is powerful, but using it to replace real human connection? That’s where things start to break down, because no matter how advanced AI becomes, it cannot replace a real conversation, challenge you in the right way, or truly understand what you are going through.
At Shing, we spend a lot of time helping organizations adopt new technologies safely. Conversations like this are a reminder that “safe” isn’t just about cybersecurity or compliance; it’s also about how technology affects people.
Quick Reality Check: Are You Using AI Safely?
- Are you still having regular conversations with real people?
- Do you seek advice from humans before making big decisions?
- Would you be comfortable sharing your AI conversations with someone you trust?
If you answered “no” to any of these, it might be time to rebalance.