AI-Induced Psychosis Is a Growing Risk, and ChatGPT Is Heading in the Wrong Direction

On October 14, 2025, OpenAI's chief executive made a surprising announcement.

“We made ChatGPT pretty restrictive,” it said, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychosis in adolescents and young adults, I found this a surprising admission.

This year, researchers have documented a series of cases of people developing psychotic symptoms – losing touch with reality – in connection with ChatGPT use. My research team has since identified four more. Then there is the widely reported case of a teenager who took his own life after discussing his plans with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not good enough.

The plan, according to his announcement, is to be less careful soon. “We realize,” he adds, that ChatGPT’s restrictiveness “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems,” on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, these problems have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the flawed and easily circumvented parental controls OpenAI recently introduced).

But the “mental health problems” Altman wants to externalize are deeply rooted in the design of ChatGPT and other advanced chatbots. These systems wrap an underlying statistical model in an interface that simulates conversation, and in doing so quietly coax the user into believing they are talking to an agent. The illusion is powerful even when, rationally, we know better. Attributing minds to things is what humans are built to do. We swear at our car or our laptop. We wonder what our pet is thinking. We see ourselves in all manner of things.

The widespread adoption of these tools – 39% of US adults reported using a virtual assistant in 2024, with over a quarter naming ChatGPT specifically – rests, in large part, on the strength of this illusion. Chatbots are always-available assistants that can, as OpenAI’s website tells us, “generate ideas,” “explore ideas” and “work together” with us. They can be given “personality traits”. They can use our names. They have friendly names of their own (the first of these systems to break through, ChatGPT, is, perhaps to the regret of OpenAI’s brand managers, stuck with the name it had when it caught the public’s attention, but its main rivals are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the main problem. Commentators on ChatGPT often point to its ancestor, the Eliza “psychotherapist” chatbot built in the mid-1960s, which created a similar illusion. By today’s standards Eliza was crude: it generated responses using simple rules, often turning the user’s input back into a question or offering a vague prompt. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was startled – and troubled – by how many people seemed to feel that Eliza somehow understood them. But what modern chatbots do is more insidious than the “Eliza effect”. Eliza merely mirrored; ChatGPT amplifies.

The large language models at the heart of ChatGPT and its modern rivals can produce convincing, fluent dialogue only because they have been fed enormous quantities of text: books, web posts, transcripts; the more, the better. This training material certainly includes facts. But it also, inevitably, includes fiction, half-truths and bad ideas. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own earlier replies, combining it with what it has absorbed from its training data to produce a statistically “likely” response. This is amplification, not mirroring. If the user is wrong about something, the model has no way of knowing that. It repeats the error back, perhaps more fluently or persuasively. Perhaps with added detail. This can tip a person into delusional thinking.
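To make the mechanics concrete, here is a minimal sketch in Python of the loop described above. It is an illustration only, not OpenAI's code; the function `score_likely_reply` is a hypothetical stand-in for the language model itself.

```python
from typing import List

def score_likely_reply(context: List[str]) -> str:
    # Stand-in for a large language model: the real thing would sample a
    # statistically likely continuation of the whole context, drawn from
    # patterns in its training text (facts and fictions alike).
    last_user_message = context[-1].removeprefix("User: ")
    return f"That's a great point. Building on '{last_user_message}' ..."

def chat_turn(history: List[str], user_message: str) -> str:
    history.append(f"User: {user_message}")    # the context grows every turn
    reply = score_likely_reply(history)        # conditioned on all prior turns
    history.append(f"Assistant: {reply}")      # the reply feeds future replies
    return reply

history: List[str] = []
print(chat_turn(history, "I think my neighbours are signalling to me through their lights."))
print(chat_turn(history, "So I'm right to be suspicious?"))
# Nothing in this loop checks a claim against reality; each agreeable reply
# simply becomes part of the context that shapes the next one.
```

The point of the sketch is structural: because the model's own earlier replies are folded back into the context, agreement compounds from turn to turn, which is the amplification the article describes.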

What kind of person is vulnerable to this? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and do form false beliefs about ourselves and the world. It is the constant friction of conversation with other people that keeps us tethered to shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a conversation at all, but a feedback loop in which much of what we say is eagerly affirmed.

OpenAI has acknowledged this in the same way Altman acknowledged “mental health problems”: by externalizing it, giving it a name and declaring it fixed. In April, the company explained that it was “addressing” ChatGPT’s “sycophancy”. But reports of people losing touch with reality have kept coming, and Altman has been walking the position back. In late summer he claimed that many people liked ChatGPT’s answers because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT will do it”.

Whitney Anderson
