Meta is introducing Instagram alerts that notify parents when teens search for self-harm content, a move aimed at increasing supervision but condemned by some child safety advocates as a “risky” and “clumsy” shift of responsibility onto parents.
Meta is set to launch a new feature on Instagram that proactively notifies parents if their teenager repeatedly searches for suicide- or self-harm-related terms, marking the first time the company will alert guardians rather than simply block the content.
While Meta contends that these notifications will be paired with expert resources to help families “navigate difficult conversations,” the move has drawn sharp criticism from the Molly Rose Foundation, which warns that “forced disclosures could do more harm than good.”

The foundation’s chief executive, Andy Burrows, described the alerts as “flimsy notifications” that may leave parents “panicked and ill-prepared,” while Ian Russell, father of the late Molly Russell, questioned the wisdom of delivering such distressing news via automated messages.

Despite the backlash and accusations from charities that the platform is “neglecting the real issue” of harmful algorithms, Meta plans to roll out the alerts in the UK, US, Australia, and Canada next week, with intentions to expand the system to its AI chatbot features in the coming months.