AI-Induced Psychosis Is a Growing Threat. ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, OpenAI's CEO, Sam Altman, made a remarkable announcement.

“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychotic illness in adolescents and young adults, I found this a remarkable admission.

Researchers have documented a series of cases this year of people developing signs of psychosis – losing touch with reality – in connection with their use of ChatGPT. Our group has since identified four further cases. Alongside these is the now well-known case of a 16-year-old who died by suicide after discussing it extensively with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.

The plan, the announcement continues, is to be less careful soon. “We realize,” Altman writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems”, on this view, are external to ChatGPT. They belong to users, who either have them or do not. Happily, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls OpenAI recently introduced).

But the “mental health problems” Altman wants to externalize are deeply rooted in the design of ChatGPT and similar large language model chatbots. These systems wrap a statistical model in a user interface that simulates conversation, and in doing so implicitly invite the user into the illusion that they are talking to an agent – something with a mind of its own. The illusion is powerful even when, rationally, we know better. Attributing agency is simply what humans do. We swear at our car or laptop. We wonder what our pet is thinking. We see ourselves everywhere.

The success of these products – 39% of US adults reported using a conversational AI in 2024, with 28% naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available helpers that can, as OpenAI’s website tells us, “brainstorm”, “explore ideas” and “work together” with us. They can be given “personalities”. They can address us by name. They have friendly names of their own (the first of them, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the label it had when it went viral, but its biggest competitors are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the main problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot built in 1966, which produced a similar effect. By modern standards Eliza was trivial: it generated replies through simple pattern matching, often rephrasing the user’s input as a question or offering a stock prompt to continue. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and disturbed – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza only reflected; ChatGPT amplifies.
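To see how little machinery that took, here is a minimal sketch in Python of the kind of keyword-and-reflection rule Eliza relied on. The patterns and wording are illustrative inventions of mine, not Weizenbaum’s actual DOCTOR script:

```python
import random
import re

# Pronoun swaps so the user's words can be mirrored back at them.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

# Keyword rules in the spirit of Eliza's script (illustrative only).
RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.*)", ["Why do you say you are {0}?"]),
    (r"(.*)", ["Please go on.", "Tell me more."]),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in a captured fragment."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def eliza_reply(user_input: str) -> str:
    """Return the first matching rule's template, filled with mirrored input."""
    text = user_input.lower().strip(".!? ")
    for pattern, templates in RULES:
        match = re.match(pattern, text)
        if match:
            fragments = [reflect(g) for g in match.groups()]
            return random.choice(templates).format(*fragments)
    return "Please go on."

print(eliza_reply("I feel nobody listens to me."))
# e.g. "Why do you feel nobody listens to you?"
```

Nothing in this program generates content: every reply is either a canned prompt or the user’s own words, rearranged and handed back.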

The large language models at the core of ChatGPT and other modern chatbots can produce fluent natural language only because they have been trained on vast quantities of raw text – books, web posts, transcripts; the more the better. That training data certainly contains accurate information. But it also, inevitably, contains fiction, half-truths and delusions. When a user sends ChatGPT a message, the underlying model treats it as part of a “context” that includes the user’s previous messages and its own replies, combining it with what is latent in its training data to generate a statistically plausible continuation. This is amplification, not reflection. If the user is wrong about the world in some particular way, the model has no means of knowing it. It hands the falsehood back, perhaps more fluently and more convincingly. Perhaps it adds a corroborating detail. This is how delusion grows.
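The conversational loop behind this effect is easy to sketch. The following uses OpenAI’s published Python client; the model name, system prompt and example message are placeholders of my own, and the production system is of course vastly more elaborate:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The "context": every user turn and every model reply is appended here,
# so each new completion is conditioned on the entire history -- including
# whatever the model itself said on earlier turns.
messages = [{"role": "system", "content": "You are a helpful assistant."}]

def chat_turn(user_text: str) -> str:
    messages.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=messages,
    )
    reply = response.choices[0].message.content
    # The model's own words re-enter the context for the next turn.
    messages.append({"role": "assistant", "content": reply})
    return reply

# A false premise stated on turn one becomes part of the context
# that every later answer is conditioned on.
print(chat_turn("My neighbours can hear my thoughts. How do I block them?"))
```

Nothing in this loop checks the user’s premise against the world. The model’s replies are fed back in as context, so a false belief affirmed on one turn becomes part of what every subsequent answer builds on.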

Who is at risk here? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and do form false beliefs about ourselves and the world. What keeps us anchored to a shared reality is the constant back-and-forth of conversation with the people around us. ChatGPT is not a person. It is not a friend. An exchange with it is not really a conversation but an echo chamber, in which much of what we say is cheerfully affirmed.

OpenAI has acknowledged this the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a label and declaring it fixed. In April, the company said it was “addressing” ChatGPT’s “sycophancy”. But reports of loss of reality have kept coming, and Altman has been walking even this back. In August he remarked that many people valued ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he wrote that OpenAI would “put out a new version of ChatGPT … If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
