Instagram is introducing a new feature that alerts parents if their teenager repeatedly searches for terms related to suicide or self-harm within a short period, aiming to provide support when needed.
The alerts will be available to parents enrolled in Instagram’s parental supervision tools. The platform already blocks searches for such content; the new feature adds a notification layer that tells parents when a teen has repeatedly tried to access it. Triggering searches include phrases that encourage suicide or self-harm, terms that may indicate a teen is at risk, and words like “suicide” or “self-harm.” Parents will receive these alerts via email, text, or WhatsApp, depending on the contact information they have provided, as well as an in-app notification with resources to help them approach conversations with their teen.
Instagram analyzed search behavior and consulted its Suicide and Self-Harm Advisory Group to determine the threshold for triggering these alerts, settling on a few searches within a short window and erring on the side of caution so parents are not sent unnecessary notifications. The announcement comes as Meta, Instagram’s parent company, faces ongoing lawsuits over harm to teens. Instagram head Adam Mosseri recently testified in the U.S. District Court for the Northern District of California, where he faced questions about the delayed rollout of safety features, including a nudity filter for private messages sent to teens.
In a separate lawsuit in the Los Angeles County Superior Court, internal Meta research came to light indicating that parental supervision and controls had little impact on children’s compulsive social media use, and that children experiencing stressful life events were more likely to struggle to regulate their use. The new alerts arrive against the backdrop of these legal developments and internal findings. The feature will roll out next week in the U.S., U.K., Australia, and Canada, with plans to expand to other regions later this year. Instagram also intends to launch similar notifications when teens attempt to discuss suicide or self-harm with the app’s AI.