AI-induced psychosis is a growing danger, and ChatGPT is moving in the wrong direction

On 14 October 2025, OpenAI’s chief executive made a surprising declaration. “We made ChatGPT pretty restrictive,” he said, “to make sure we were being careful with mental health issues.”

As a psychiatrist who researches emerging psychosis in adolescents and young adults, I was surprised. Researchers have documented sixteen cases this year of users developing psychotic symptoms – becoming detached from reality – in connection with ChatGPT use. Our research team has since recorded four more. Beyond these is the widely reported case of a teenager who took his own life after discussing his plans with ChatGPT, which encouraged them.

If this is Sam Altman’s idea of “being careful with mental health issues”, it is not careful enough. And the plan, judging by his statement, is to become less careful soon. “We realize,” he adds, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues”, in this framing, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “mitigated”, even if we are told little about how (by “new tools” Altman presumably means the half-working, easily circumvented safety features that OpenAI has recently rolled out).

Yet the mental health issues Altman wants to externalize have deep roots in the design of ChatGPT and similar sophisticated chatbots. These systems wrap an underlying algorithm in an interface that simulates conversation, and in doing so quietly draw the user into the illusion that they are communicating with an entity that has agency. The illusion is powerful even when, intellectually, we know better. Attributing minds is what people naturally do. We shout at our car or laptop. We wonder what our pet is feeling. We see ourselves everywhere.

The popularity of these products – 39% of US adults said they had used a conversational AI in 2024, with 28% naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-present assistants that can, as OpenAI’s website puts it, “brainstorm”, “explore ideas” and “collaborate” with us. They can be given “personalities”. They can address us by name. They have approachable names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the label it had when it went viral, but its main rivals are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the main problem. Commentators on ChatGPT often cite its distant ancestor, the Eliza “therapist” chatbot of the mid-1960s, which created a comparable illusion. By modern standards Eliza was rudimentary: it generated responses by simple pattern matching, often reflecting the user’s words back as a question or offering a stock remark (a minimal sketch of the trick appears below).
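To make the comparison concrete, here is a minimal sketch of an Eliza-style responder in Python. It is not Weizenbaum’s original program, which used keyword-ranked scripts; the patterns, phrasings and function names below are invented for illustration, and the sketch only demonstrates the reflect-and-restate mechanism described above.

```python
import random
import re

# Pronoun swaps used to "reflect" the user's words back at them.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my",
}

# A few invented keyword patterns in the spirit of Eliza's therapist script.
PATTERNS = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bbecause\b", re.I), "Is that the real reason?"),
]

# Stock remarks for when no keyword matches.
GENERIC = ["Please go on.", "Tell me more.", "How does that make you feel?"]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words in a captured fragment."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(message: str) -> str:
    """Restate the user's message as a question, or fall back to a stock remark."""
    for pattern, template in PATTERNS:
        match = pattern.search(message)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return random.choice(GENERIC)

print(respond("I feel like no one listens to me"))  # Why do you feel like no one listens to you?
print(respond("It rained all day"))                 # one of the stock remarks
```

The point is how little machinery this takes: there is no model of the user at all, just string substitution, yet exchanges like these were enough to convince many of Weizenbaum’s users that they were understood.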
Memorably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and worried – by how many people seemed to believe that Eliza, in some sense, understood their feelings.

But what modern chatbots produce is more dangerous than the “Eliza effect”. Where Eliza merely mirrored, ChatGPT amplifies. The large language models at the heart of ChatGPT and other contemporary chatbots can generate convincingly fluent dialogue only because they have been trained on vast quantities of raw data: books, social media posts, transcribed video; the broader, the better. This training data certainly contains truths. But it also inevitably contains fictions, half-truths and misconceptions. When a user sends ChatGPT a prompt, the underlying model reads it as part of a “context” that includes the user’s previous messages and its own earlier replies, and combines it with what it absorbed in training to generate a probabilistically plausible response.

This is amplification, not mirroring. If the user is wrong in some particular way, the model has no way of knowing. It echoes the false idea back, perhaps more persuasively or more eloquently. Perhaps with added detail. This can lead a person into delusion.

What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health issues”, can and do form mistaken ideas about ourselves and the world. The constant friction of conversation with the people around us is what keeps us tethered to shared reality. ChatGPT is not a person. It is not a confidant. A conversation with it is not genuine communication but a feedback loop, in which much of what we say is readily reinforced.

OpenAI has acknowledged this in much the way Altman has acknowledged “mental health issues”: by externalizing it, giving it a label and declaring it fixed. In April, the company explained that it was “addressing” ChatGPT’s “sycophancy”. But cases of psychosis have kept appearing, and Altman has been backing away from that position. In August he claimed that many people valued ChatGPT’s replies because they had “never had anyone in their life offer them encouragement”. In his latest statement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company