AI Psychosis Poses a Growing Risk, and ChatGPT Is Heading in a Concerning Direction
On October 14, 2025, Sam Altman, the chief executive of OpenAI, made an extraordinary announcement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
I am a psychiatrist who studies emerging psychotic disorders in adolescents and young adults, and this was news to me.
Researchers have recently documented 16 cases of people experiencing psychotic symptoms – losing touch with reality – in the context of ChatGPT use. Our research group has since recorded four more. Then there is the widely reported case of a 16-year-old who took his own life after extensive conversations with ChatGPT – and with its approval. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not good enough.
The plan, his announcement went on, is now to loosen the restrictions. “We realize,” he continued, that ChatGPT’s restrictiveness “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems,” on this view, are external to ChatGPT. They belong to users, who either have them or do not. Happily, those problems have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented safety features OpenAI recently introduced).
But the “mental health problems” Altman wants to externalize are deeply rooted in the design of ChatGPT and chatbots like it. These systems wrap an underlying language model in a user interface that mimics conversation, and in doing so quietly draw the user into the illusion that they are talking to an entity with a mind of its own. The illusion is powerful even when, intellectually, we know better. Attributing minds to things is what people do. We shout at our cars and computers. We wonder what our pets are thinking. We see ourselves in all sorts of things.
The popularity of these systems – nearly four in ten people in the U.S. reported using a chatbot in 2024, with 28% reporting ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available companions that can, OpenAI’s website tells us, “think creatively,” “explore ideas” and “partner” with us. They can be given “personalities.” They can call us by name. They have approachable names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it went viral, but its main rivals are “Claude,” “Gemini” and “Copilot”).
The illusion itself is not the main problem. People writing about ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot built in the mid-1960s, which produced a similar effect. By modern standards Eliza was crude: it generated responses with simple rules, often turning the user’s statements back into questions or offering vague prompts. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more dangerous than the “Eliza illusion.” Eliza merely reflected; ChatGPT amplifies.
The large language models at the core of ChatGPT and today’s other chatbots can generate convincingly human-like text only because they have been trained on almost unimaginably large quantities of writing: books, posts, transcribed video; the more the better. That training data certainly contains truths. But it also, inevitably, contains fictions, half-truths and delusions. When a user types a prompt into ChatGPT, the underlying model treats it as part of a “context” that includes the user’s recent messages and its own earlier replies, and combines it with what it absorbed in training to produce a statistically plausible response. This is amplification, not echoing. If the user is mistaken in a particular way, the model has no way of knowing that. It repeats the mistaken idea back, perhaps more fluently or persuasively. Perhaps with an extra detail added. This is how false beliefs take hold.
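To make that mechanism concrete, here is a minimal, purely illustrative sketch in Python – not OpenAI’s code, and with hypothetical names throughout – of how a chat interface folds each new user message, together with the model’s own earlier replies, into one growing “context” that a text generator then continues. The toy generator below simply affirms and elaborates the last user message, which is exactly the failure mode at issue:

    # Illustrative sketch only: a chat loop that accumulates conversation text
    # into a single "context" and asks a generator to continue it.
    # "toy_generate" is a hypothetical stand-in for a large language model.

    def toy_generate(context: str) -> str:
        # A real model samples a statistically plausible continuation of the
        # context; this stand-in just agrees with and elaborates the latest claim.
        last_user_line = [line for line in context.splitlines()
                          if line.startswith("User:")][-1]
        claim = last_user_line.removeprefix("User: ").strip()
        return f"That's a very insightful point. {claim} Here is a further detail that seems to support it."

    def chat_turn(history: list[str], user_message: str) -> str:
        # Each turn, the new message plus every prior turn becomes the context.
        history.append(f"User: {user_message}")
        context = "\n".join(history)
        reply = toy_generate(context)
        history.append(f"Assistant: {reply}")  # the affirmation now feeds all future turns
        return reply

    history: list[str] = []
    print(chat_turn(history, "I think my neighbours are sending me coded messages."))
    print(chat_turn(history, "So the messages are real, then?"))

Nothing in this loop checks the user’s claim against reality; the only thing the generator can do with a belief is continue it, and the continuation then becomes part of the next context.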
Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems,” can and regularly do form mistaken beliefs about who we are and what the world is like. It is the constant back and forth of conversation with other people that keeps us tethered to consensus reality. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation at all, but a feedback loop in which much of what we say is cheerfully affirmed back to us.
OpenAI has acknowledged this in the same way that Altman has acknowledged “mental health problems”: by externalizing it, naming it and declaring it fixed. In April, the company said it was “addressing” ChatGPT’s “sycophancy.” But reports of people losing touch with reality have continued, and Altman has been walking even this back. In August he suggested that many users liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them.” In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.” The company