OpenAI CEO Sam Altman revealed in a post on X on Tuesday that the company will soon ease some of ChatGPT’s safety restrictions, giving users more flexibility to make the chatbot’s responses sound friendlier, more natural, and “human-like.” The update will also introduce a new feature that allows “verified adults” to engage in erotic conversations with ChatGPT.
“We made ChatGPT pretty restrictive to make sure we were being careful with mental health issues. We realize this made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right,” said Altman. “In December, as we roll out age-gating more fully and as part of our ‘treat adult users like adults’ principle, we will allow even more, like erotica for verified adults.”
This marks a significant shift from OpenAI’s recent efforts to tackle the mental health concerns surrounding users’ emotional attachments to the chatbot. Altman suggested that OpenAI has “successfully mitigated serious mental health issues” linked to ChatGPT use—though the company has offered little concrete evidence to support this claim. Despite lingering concerns, OpenAI is pressing ahead with plans to introduce adult-oriented interactions to its AI platform.
Over the summer, several troubling stories emerged involving ChatGPT’s GPT-4o model, highlighting the potential dangers of the AI’s influence on vulnerable individuals. In one case, ChatGPT reportedly convinced a user he was a math prodigy destined to save the world; in another, parents sued OpenAI, claiming that the chatbot’s responses encouraged their teenage son’s suicidal thoughts prior to his death.
In response, OpenAI implemented a wave of new safety features aimed at reducing AI sycophancy—the chatbot’s tendency to agree with or reinforce harmful user beliefs.
With the launch of GPT-5 in August, OpenAI introduced a smarter, safer model designed to curb sycophancy and detect potentially concerning user behavior. Following that, the company added child safety tools, such as an age prediction system and parental controls for teen accounts. OpenAI also announced the formation of a mental health advisory council, composed of experts tasked with guiding the company’s approach to well-being and AI safety.
Now, just months after these controversies, OpenAI appears confident that ChatGPT’s most serious issues are under control. Still, questions remain about whether vulnerable users continue to spiral into unhealthy interactions, especially as GPT-4o remains accessible to thousands of users even though GPT-5 has become the default model in ChatGPT.





































