Instagram to alert parents if teens search for suicide and self-harm content

Instagram will begin notifying parents if their teenagers repeatedly search for suicide- or self-harm-related content, marking the first time its owner, Meta, has proactively flagged search behaviour rather than simply blocking it.

From next week, parents and teenagers enrolled in Instagram’s “Teen Accounts” supervision programme in the UK, US, Australia and Canada will receive alerts if a young user repeatedly searches for harmful terms within a short period. The feature will be rolled out globally at a later stage.

Previously, Instagram restricted access to certain harmful material and redirected users to support resources. The new measure goes further by directly alerting parents via email, text message, WhatsApp or within the Instagram app itself, depending on available contact details.

Meta said the alerts are designed to flag sudden changes in search patterns that may indicate distress. Notifications will be accompanied by guidance and expert-backed resources to help parents navigate what are likely to be sensitive conversations.

The move has been met with sharp criticism from the Molly Rose Foundation, established by the family of Molly Russell, who died in 2017 aged 14 after viewing self-harm and suicide content online.

Chief executive Andy Burrows described the announcement as “fraught with risk”, warning that “forced disclosures could do more harm than good”.

“Every parent would want to know if their child is struggling,” Burrows said, “but these flimsy notifications will leave parents panicked and ill-prepared to have the sensitive and difficult conversations that will follow.”

He added that the onus should be on preventing harmful content from appearing in the first place, rather than shifting responsibility onto families after the fact.

The foundation previously published research claiming Instagram was still actively recommending content related to depression, suicide and self-harm to vulnerable young people. Meta rejected those findings, saying they misrepresented its safety efforts.

Ged Flynn, chief executive of Papyrus Prevention of Young Suicide, welcomed the attempt to increase transparency but argued that it did not address deeper systemic issues.

“Parents contact us every day to say how worried they are about their children online,” he said. “They don’t want to be warned after their children search for harmful content; they don’t want it spoon-fed to them by unthinking algorithms.”

‘Erring on the side of caution’

Meta said the system is designed to “err on the side of caution” and acknowledged that parents may occasionally receive alerts even when there is no serious cause for concern.

The company said the feature builds on broader Teen Account protections, which include automatically limiting exposure to sensitive material, restricting who can contact teens, and blocking certain harmful searches outright.

Two in-app screenshots released by Meta show alerts titled “Alert about your teen’s safety” followed by a screen offering advice on “How you can support your teen”.

Sameer Hinduja, co-director of the Cyberbullying Research Center, said the impact of the new feature would depend heavily on the quality of guidance provided alongside the alert.

“You can’t drop a notification on a parent and leave them on their own,” he said. “What matters is the immediate support and context that follows.”

Meta also confirmed that it plans to introduce similar parental alerts in the coming months if teenagers discuss self-harm or suicide with Instagram’s AI chatbot. The company said young people are increasingly turning to AI tools for advice and emotional support.

The expansion comes amid heightened scrutiny of social media companies’ impact on children’s mental health.

Australia recently passed legislation banning social media access for under-16s, while policymakers in Spain, France and the UK are considering similar measures. In the US, Meta chief executive Mark Zuckerberg and Instagram head Adam Mosseri have faced legal challenges and congressional hearings over allegations the company’s platforms were designed to attract and retain younger users.

For now, Instagram’s new alert system represents a shift in Meta’s child-safety strategy, from passive content restriction to active parental notification. Whether that approach proves protective or problematic will likely depend on how families, regulators and mental health experts respond in the months ahead.


Jamie Young





