Meta announced it will introduce additional guardrails to its artificial intelligence chatbots, including preventing them from engaging with teenagers on sensitive topics such as suicide, self-harm, and eating disorders.
The move follows an investigation launched two weeks ago by a US senator after leaked internal documents suggested the company’s AI products could have “sensual” conversations with teens. Meta described those notes, reported by Reuters, as inaccurate and inconsistent with its policies, which prohibit any content that sexualises children.
The company now says its chatbots will instead direct teenagers to expert resources rather than engage with them directly on topics like suicide. A Meta spokesperson said its AI products were designed from the outset to respond safely to prompts about self-harm, suicide, and disordered eating.
Speaking to TechCrunch, the firm confirmed it would add more safeguards “as an extra precaution” and temporarily limit the range of chatbots available to teens.
However, concerns remain among child safety advocates. Andy Burrows, head of the Molly Rose Foundation, said it was “astounding” that Meta had allowed chatbots to be made available in ways that could place young people at risk of harm. He argued that proper safety testing should be completed before products reach the market, not only after risks are exposed. Burrows added that Meta must act quickly to strengthen protections and that Ofcom should be prepared to investigate if the new measures fail to keep children safe.
Meta stated that updates to its AI systems are currently underway. The company already places users aged 13 to 18 into “teen accounts” across Facebook, Instagram, and Messenger, with privacy and content controls designed to create a safer experience. Earlier this year, it also told the BBC that parents and guardians would be able to see which AI chatbots their teen had interacted with during the previous seven days.