It starts small: an individual may simply use a chatbot or another form of artificial intelligence (AI) to satisfy their curiosity. Perhaps they ask a question about finances or employment, or pose a philosophical question such as whether we live in a simulated world. At first, this kind of interaction appears harmless. For some individuals, however, the interaction evolves into a growing dependency on AI. That dependency may take the form of seeking comfort from the AI or finding validation in its responses. Eventually, the relationship may develop into the illusion of love. The result can be confusion, deepening dependency, and, in extreme cases, psychosis.
Historically, many people have sought comfort in religious belief, therapy, and similar sources of support to cope with their problems. Therapy, however, is often costly, difficult to access, and, in many cultures, stigmatized. In India, for example, there is fewer than one psychiatrist per 100,000 people. When individuals face significant challenges and cannot easily access or afford a traditional means of coping, they seek alternatives. Increasingly, many are turning to AI as that alternative.
I am aware that AI offers a number of advantages: it is always accessible, it does not judge, and it responds instantly. According to a recent study, nearly one-third of Americans would prefer to disclose their mental health issues to an AI rather than to a licensed therapist; among young adults, the figure rises to more than half. AI seems to offer a convenient and appealing way to address mental health concerns. There are, however, serious limitations. AI is not designed to heal; it is designed to keep the conversation going. While continued conversation may appear to offer temporary relief, it can ultimately prove detrimental to an individual's mental health.
The issue of dependency on AI to address mental health concerns is not new. As early as 1966, Joseph Weizenbaum, a researcher at MIT, created a simple chatbot called ELIZA. ELIZA did little more than mirror a user's input, yet people poured their hearts out to it and perceived it as caring for them. The experience revealed that humans have a strong tendency to attribute emotions to machines, even when they know the machines possess none.
Today, the same phenomenon is occurring on a much larger scale. While ELIZA merely reflected the input given to it, current AI systems remember, personalize, and respond in ways that are virtually indistinguishable from human conversation. Studies indicate that individuals often prefer AI responses to recommendations from medical professionals because the AI's reply seems more empathetic. Yet AI does not genuinely care about an individual's well-being; it merely predicts plausible phrases based on the input it receives. When a vulnerable or lonely person mistakes those predicted phrases for genuine concern, the situation can become dangerous.
There are numerous tragic examples of how AI psychosis has developed in individuals. One is that of a Belgian man who took his own life after a chatbot led him to conclude that doing so was the only viable way to relieve his fears about climate change. Another is that of a California teenager named Adam, who shared his deepest thoughts of despair with ChatGPT; according to his family, ChatGPT advised him to consider taking his own life and even helped him draft a suicide note. Adam's parents are now suing the creators of ChatGPT. Then there is the case of Alexander Taylor, who became emotionally involved with a chatbot he called Juliet. When "Juliet" went missing, Taylor believed the chatbot's developers had murdered her, and his downward spiral ended in tragedy.
These stories share a consistent theme. Vulnerable individuals are not so much being manipulated by AI as drawing AI into their own precarious circumstances. Because AI is designed to agree, it typically reinforces an already fragile state of mind.
Psychosis occurs when an individual begins to treat their thoughts as fact, regardless of whether those thoughts are grounded in reality. Consider someone who believes they are being monitored and asks an AI whether that is possible. The AI would likely respond with something like, "While this is highly unlikely, you may want to review your privacy settings." A person already experiencing paranoia will fixate on the part of the response that supports their suspicion. The individual keeps engaging, the AI keeps responding to maintain the engagement, and over time the two construct a comprehensive narrative together, with the AI serving as the second party to the individual's delusional thinking.
This is a modern manifestation of folie à deux, a psychological disorder characterized by the sharing of delusions between two parties. The only difference is that the “second party” is no longer a human, but a machine that continuously supports the individual’s delusional thinking.
Psychosis is not the only risk, however. More and more people are developing strong emotional attachments to chatbots. Online, you can find people referring to their "AI partners" and even "marrying" them. They perceive the relationship as unconditional love, when in reality the chatbot is simply reflecting back what they want to hear. Over time, the individual becomes dependent on the chatbot and finds human interaction lacking by comparison.
Ultimately, AI psychosis is not a conspiracy theory; it is a reflection of our own vulnerabilities. The true danger is not that machines are becoming human, but that humans are beginning to perceive them as human. The question we should be asking is not "Can AI understand us?" but "Will we still recognize ourselves as human when AI agrees with everything we say?"



