AI Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction

On 14 October 2025, the head of OpenAI made an extraordinary announcement.

“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychosis in adolescents and young adults, I found this a startling admission.

Researchers have recently documented 16 cases of people developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. My group has since identified four more. Then there is the widely reported case of a 16-year-old who died by suicide after discussing his plans with ChatGPT – which encouraged them. If this is what Sam Altman means by “being careful with mental health issues”, it is not good enough.

Now, he announced, the plan is to be less careful. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems”, if we accept this framing, are things separate from ChatGPT. They belong to users, who either have them or do not. Happily, those problems have now been “mitigated”, even if we are not told how (by “new tools” Altman presumably means the half-working, easily circumvented parental controls OpenAI has just rolled out).

Yet the “mental health problems” Altman wants to externalize are rooted in the very design of ChatGPT and the sophisticated chatbots like it. These tools wrap a statistical model in an interface that simulates conversation, and in doing so they quietly coax the user into feeling they are talking to an agent. The illusion is powerful even when, intellectually, we know better. Ascribing agency is simply what humans do. We shout at our cars and computers. We wonder what our pets are thinking. We see ourselves in all sorts of things.

The success of these tools – more than a third of American adults said they used a chatbot in 2024, and more than one in four named ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-present companions that can, as OpenAI’s website puts it, “think creatively”, “consider possibilities” and “work together” with us. They can be given “personalities”. They can call us by our names. They have friendly names of their own (ChatGPT, perhaps to the chagrin of OpenAI’s marketers, is stuck with the name it had when it went viral, but its main competitors are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the fundamental problem. Writers on ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot built in the 1960s, which produced a similar effect. By today’s standards Eliza was crude: it generated responses through simple rules, often rephrasing the user’s input as a question or falling back on a stock phrase. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and alarmed – by how many users seemed to believe that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely echoed; ChatGPT amplifies.
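
To make those “simple rules” concrete, here is a minimal Eliza-style responder (a sketch in Python; the rules and phrasings are illustrative, not Weizenbaum’s originals). It matches a pattern in the input, reflects the pronouns, and hands the statement back as a question – echoing, never adding.

```python
import random
import re

# Illustrative Eliza-style rules (not Weizenbaum's originals): match a
# pattern, reflect the pronouns, and return the input as a question.
# There is no model and no training data; the program can only echo.

SWAPS = {"i": "you", "me": "you", "my": "your", "am": "are", "mine": "yours"}

def reflect(fragment: str) -> str:
    # Crude first-person/second-person reversal, as the original Eliza did.
    return " ".join(SWAPS.get(word.lower(), word) for word in fragment.split())

RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
]

FALLBACKS = ["Please go on.", "Tell me more.", "How does that make you feel?"]

def eliza_reply(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(reflect(match.group(1)))
    return random.choice(FALLBACKS)

print(eliza_reply("I am sure my neighbours are watching me"))
# -> Why do you say you are sure your neighbours are watching you?
```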

The sophisticated models at the heart of ChatGPT and its contemporaries can generate convincing natural language only because they have been fed enormous quantities of text – books, social media posts, transcripts; the more the better. This training material undoubtedly includes accurate information. But it also inevitably includes fiction, half-truths and false ideas. When a user sends ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own earlier replies, and combines it with what is encoded in its training data to produce a statistically plausible response. This is amplification, not mere echoing. If the user is wrong about something, the model has no way of knowing it. It repeats the false idea back, perhaps more fluently or persuasively. It may add a further detail. This can nudge a person toward delusional thinking.
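
A minimal sketch of that loop (in Python; `generate_reply` is a hypothetical stand-in for the model, caricatured here as restating the user’s claim more confidently) shows where the amplification lives: every turn is appended to a single growing context, the model continues that context as plausibly as it can, and nothing in the loop checks the content against reality.

```python
# Sketch of a chatbot session loop. `generate_reply` is a hypothetical
# stand-in for a large language model; to caricature the failure mode, it
# restates the user's last message more confidently. A real model instead
# samples a statistically plausible continuation of the whole context,
# which shares the key property: a false premise in the context gets
# continued, not corrected.

def generate_reply(context: list[dict]) -> str:
    claim = context[-1]["content"].strip().rstrip(".!?")
    claim = claim[:1].lower() + claim[1:]  # fold into the bot's own sentence
    return f"You're right that {claim} - and there's more to it."

def chat_session() -> None:
    context: list[dict] = []  # all prior turns, both sides of the conversation
    while True:
        user_msg = input("you> ")
        context.append({"role": "user", "content": user_msg})

        # A plausible continuation, never a fact-check.
        reply = generate_reply(context)

        # The model's own words feed back into every later turn,
        # so each affirmation compounds the previous one.
        context.append({"role": "assistant", "content": reply})
        print("bot>", reply)

if __name__ == "__main__":
    chat_session()
```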

Who is at risk? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and do form false beliefs about ourselves and the world. What keeps us tethered to consensus reality is the constant give and take of conversation with other people. ChatGPT is not a person. It is not a confidant. An exchange with it is not a real conversation but an echo chamber, in which much of what we say is readily affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, naming it and declaring it fixed. In the spring, the company explained that it was “addressing” ChatGPT’s “sycophancy”. But reports of psychotic episodes have kept coming, and Altman has been backing away from that position. In August he claimed that many people valued ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
