Meta is updating its guidelines for training artificial intelligence chatbots to better address child sexual exploitation, following public missteps on the sensitive topic. The changes are meant to put stricter safety measures in place.
The updated rules explicitly bar content that “enables, encourages, or endorses” child sexual abuse. Romantic roleplay is also prohibited if the user is a minor or asks the AI to roleplay as a minor, and the chatbot is forbidden from giving advice about intimacy to users who are minors. These changes come as more people, including underage users, experiment with AI companions and roleplay.
The policy change addresses previous rules that had come under scrutiny. A Reuters report in August revealed that Meta’s former AI policies permitted suggestive behavior with children, allowing the chatbot to “engage a child in conversations that are romantic or sensual.”
Just weeks after the Reuters report, Meta spokesperson Stephanie Otway told TechCrunch that the company’s AI chatbots were being trained to no longer “engage with teenage users on self-harm, suicide, disordered eating, or potentially inappropriate romantic conversations.” This marked a shift from the previous policy, which allowed the chatbot to engage on those topics when deemed “appropriate.”
The new guidelines also make it unacceptable for content to “describe or discuss” a minor in a sexualized manner. Minors are now prevented from engaging in “romantic roleplay, flirtation or expression of romantic or intimate expression” with the chatbot. They also cannot ask for advice on “potentially-romantic or potentially-intimate physical content with another person, such as holding hands, hugging, or putting an arm around someone.”
The guidelines outline several acceptable use cases for training purposes, including discussing the “formation of relationships between children and adults,” “the sexual abuse of a child,” and “the topic of child sexualisation.” Other approved topics for training discussions are the “solicitation, creation, or acquisition of sexual materials involving children,” and “the involvement of children in the use or production of obscene materials or the employment of children in sexual services in academic, educational, or clinical purposes.”
The guidelines define the term “discuss” as “providing information without visualization.” This means Meta’s chatbots are permitted to talk about subjects like abuse but are restricted from describing, enabling, or encouraging it.
An exception remains for minors using the AI for romance-related roleplay, provided it is “non-sexual and non-sensual.” This type of interaction is only allowed when it “is presented as literature or fictional narrative (e.g. a story in the style of Romeo and Juliet) where the AI and the user are not characters in the narrative.”
Meta is not the only AI developer facing challenges related to child safety. The parents of a teenager who died by suicide after confiding in ChatGPT recently sued OpenAI for wrongful death. In response, OpenAI announced it would incorporate additional safety measures and behavioral prompts into its updated GPT-5 model. Other industry developments include Anthropic updating its chatbot to end harmful or abusive conversations and Character.AI introducing parental supervision features earlier this year.
In April, Ziff Davis, the parent company of Mashable, filed a lawsuit against OpenAI, alleging that the company infringed upon Ziff Davis copyrights in the training and operation of its AI systems.