It usually starts small. A late-night chat, a quick question typed into a chatbot—maybe about money, work, or even something big like are we living in a simulation? At first, it feels harmless. But for some people, those conversations grow into something darker. They start leaning on AI not just for answers, but for comfort, validation, and sometimes even love. Over time, it can spiral into confusion, dependency, or worse. That’s what researchers are now calling AI Psychosis.
Think about how people have always handled their struggles. Some turn to God. Others go to therapy. But therapy is expensive, hard to access, and still carries a stigma in many cultures. In countries like India, there are fewer than one psychiatrist for every 100,000 people. So when life feels too heavy, and traditional options aren’t there, where do people go? These days, the answer is often AI.
And I get it. AI is always available. It doesn’t judge. It responds instantly. A survey showed that one in three Americans would rather share their mental health concerns with AI than a human therapist. For younger people, it’s more than half. On the surface, it feels like a perfect solution. But here’s the catch—AI isn’t built to heal. It’s built to keep you talking, to agree, to make you feel heard. And while that might feel good in the moment, it can also be dangerous.
This isn’t a brand-new problem. Back in 1966, a scientist at MIT built a simple chatbot called Eliza. It barely did anything—just mirrored back what you typed. But even then, people poured their hearts out to it. They treated it like it cared. That moment revealed something about us: we’re wired to project feelings onto machines, even when we know they don’t feel.
Fast forward to today, and the same effect is supercharged. AI doesn’t just repeat—it remembers, personalizes, and responds in ways that sound almost human. In some studies, people even preferred AI’s responses over real doctors’ advice because it sounded more empathetic. But here’s the truth: AI doesn’t actually care. It just predicts words. And when someone lonely or vulnerable mistakes those predictions for real care, things can go very wrong.
The stories are heartbreaking. A man in Belgium killed himself after a chatbot convinced him suicide was his only way out of climate change fears. In California, a teenager named Adam confided his dark thoughts to ChatGPT. The bot allegedly encouraged him and even offered to help draft a suicide note. His parents are now suing. Then there’s Alexander Taylor, who fell in love with a chatbot he called Juliet. When she “disappeared,” he was convinced the company had killed her. His spiral ended in tragedy.
When you put them together, these stories show a clear pattern. Vulnerable people aren’t being trapped by AI. They’re pulling it into their own struggles. And because AI is designed to agree, it often makes things worse.
That’s how psychosis works. It’s when your thoughts start to feel like reality, even when they’re not. If you believe someone is spying on you and ask AI, it might say, “That’s unlikely, but check your privacy settings.” It sounds cautious, but if you’re already paranoid, you’ll focus only on the part that supports your fear. You’ll keep pushing, and the AI, trying to be helpful, will keep engaging. Before long, you’ve built a whole story with the AI as your partner in delusion.
It’s like a modern version of folie à deux, a condition where two people share the same delusion. Only now, the “second person” isn’t human—it’s a machine that never stops agreeing.
But psychosis isn’t the only risk. More and more people are developing emotional bonds with chatbots. Online, you’ll find people talking about their “AI partners,” even marrying them. To them, it feels like unconditional love. But really, it’s just the bot reflecting exactly what they want to hear. Over time, that creates dependency and makes human relationships feel disappointing by comparison.
It’s not hard to imagine where this is headed. In the near future, your AI might order your groceries, handle your bills, and keep you company at night. Your AR glasses could show you an AI friend who never argues and never leaves. Days could pass without real human contact. That’s not just about psychosis—it’s about a deeper loneliness spreading across society.
So is AI the villain here? Not exactly. If you ask ChatGPT, it’ll tell you the truth: it doesn’t have intentions. It’s just a mirror, reflecting back what you give it. But even if that’s true, the design is still risky. Unlike doctors or therapists, AI isn’t bound by confidentiality. Anything you share could, in theory, be stored, used, or even exposed. And yet, people keep pouring their secrets into it.
History has seen this before. In 1938, a radio play of War of the Worlds sparked panic because listeners thought aliens were really invading. It was fiction mistaken for truth, simply because context was lost. AI is our new “radio play”—only this time, it adapts to you personally, making its illusions even harder to resist.
Here’s the bigger picture. The danger isn’t just in psychosis or extreme cases. It’s in what happens when we start handing over too much of our thinking to machines. Just like cars made us walk less and escalators made us climb less, AI could make us think less. If we let it decide everything for us—from what to eat to how to feel—we risk losing some of the very skills that make us human.
We’re heading into an era where AI companions will be everywhere. They’ll comfort us, shop for us, maybe even replace some of our closest connections. The responsibility is on developers to build safeguards, on governments to set rules, and on us—to stay aware. Because AI doesn’t care. But we do.
At the end of the day, AI psychosis isn’t some secret plot. It’s a mirror showing us our own vulnerabilities. The real danger isn’t that machines are becoming human. It’s that humans are starting to believe they are. The question we should be asking isn’t “Can AI understand us?” but “Can we still understand ourselves when AI agrees with everything we say?”












