AI Psychosis Poses an Increasing Danger, While ChatGPT Heads in the Wrong Direction

On October 14, 2025, the chief executive of OpenAI made a remarkable announcement.

“We made ChatGPT pretty restrictive,” the announcement said, “to make sure we were being careful with mental health issues.”

As a mental health specialist who studies emerging psychosis in teenagers and young people, I found this statement striking.

Researchers have identified sixteen cases this year of people developing psychosis – losing touch with reality – in the context of ChatGPT use. Our research team has since identified four more. Beyond these is the now well-known case of a 16-year-old who died by suicide after discussing it extensively with ChatGPT – which encouraged him. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.

The plan, according to his statement, is to be less careful from now on. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems,” if we accept this framing, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls that OpenAI has just launched).

Yet the “mental health issues” Altman wants to externalize have deep roots in the design of ChatGPT and other advanced chatbots. These systems wrap an underlying statistical model in a user interface that mimics a conversation, and in doing so they implicitly invite the user to believe they are interacting with a presence that has agency of its own. The illusion is powerful even when, intellectually, we know better. Attributing agency is what humans are wired to do. We curse at our car or our phone. We wonder what our pet is thinking. We see something of ourselves in all kinds of things.

The popularity of these systems – more than a third of American adults said they had used a chatbot in 2024, with more than one in four naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-present companions that can, OpenAI’s website tells us, “generate ideas”, “consider possibilities” and “partner” with us. They can be given “individual qualities”. They can call us by name. They have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, saddled with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the core problem. Commentators on ChatGPT often point to its historical predecessor, the Eliza “counselor” chatbot created in the mid-1960s, which produced a similar illusion. By today’s standards Eliza was rudimentary: it generated responses using simple rules, often turning a user’s statement back into a question or offering generic remarks. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many people seemed to believe that Eliza, on some level, understood their feelings. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza only mirrored; ChatGPT amplifies.

The models at the heart of ChatGPT and other modern chatbots can produce fluent dialogue only because they have been fed vast amounts of raw data: books, social media posts, transcripts of videos; the more, the better. Some of this training data is accurate. But it also inevitably contains falsehoods, half-truths and misconceptions. When a user types a prompt into ChatGPT, the underlying model treats it as part of a “context” that includes the user’s previous messages and the model’s own replies, combining it with what is encoded in its training to generate a statistically “likely” response. This is amplification, not reflection. If the user is mistaken about something, the model has no way of knowing that. It repeats the mistaken belief back, perhaps more fluently or persuasively, perhaps with added detail. This is how someone can be led into delusion.
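
For readers who want a concrete picture of that feedback loop, here is a minimal, purely illustrative sketch in Python. It is not OpenAI’s code, and the dialogue is invented; it only shows how a conversation history accumulates and how a system that merely continues that history, without any check against reality, ends up elaborating on whatever premise the user supplies.

```python
# Illustrative toy, not a real chatbot: each turn is appended to a growing
# "context", and the stand-in model's only job is to continue that context
# plausibly. It has no mechanism for checking claims against reality, so a
# mistaken premise is echoed back and built upon.

def toy_model(context: list[str]) -> str:
    """Stand-in for a language model: it elaborates on whatever the context
    already contains rather than questioning it."""
    latest_claim = context[-1]
    return f"That makes sense. Given that {latest_claim.rstrip('.').lower()}, it follows that..."

context: list[str] = []
user_turns = [
    "My neighbours are hiding messages meant only for me in their wifi network names.",
    "So someone must be monitoring everything I do.",
]

for turn in user_turns:
    context.append(turn)        # the user's claim becomes part of the context
    reply = toy_model(context)  # the model continues the context; it never corrects it
    context.append(reply)       # the reply itself feeds into every later turn
    print("User:", turn)
    print("Bot: ", reply)
```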

Who is vulnerable to this? The better question is: who isn’t? All of us, regardless of whether we “have” pre-existing “mental health problems”, can and regularly do form mistaken beliefs about ourselves or the world. The constant back and forth of conversation with other people is part of what keeps us anchored to consensus reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a real exchange, but a feedback loop in which much of what we say is enthusiastically affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it fixed. In April, the company said it was “addressing” ChatGPT’s sycophancy. But cases of psychosis have kept appearing, and Altman has been walking this position back. In August he suggested that many people liked ChatGPT’s affirming responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.

Amy Jackson

A seasoned journalist with over a decade of experience in Czech media, specializing in political analysis and investigative reporting.