In a significant shift toward proactive safety enforcement, Meta has announced that Instagram’s supervision tools will now send automated alerts to parents if a teenager repeatedly searches for suicide or self-harm-related content. The move goes beyond the traditional “block and redirect” model: it marks the first time the tech giant will actively disclose specific search behaviors to guardians. While the rollout begins next week in the UK, US, Australia, and Canada, the feature has already drawn sharp criticism from the Molly Rose Foundation, which warns that such surveillance measures could inadvertently jeopardize teen safety by eroding trust.

In the Indian context, this feature would stand in direct conflict with Section 9(3) of the Digital Personal Data Protection (DPDP) Act, 2023, which strictly prohibits “Data Fiduciaries” (here, Meta) from undertaking any “tracking or behavioural monitoring” of children. To send a parent an alert about “repeated searches,” Meta must, by definition, track and monitor a child’s search behavior over time.

But the crucial question is whether courts would allow any flexibility in interpreting “tracking for safety.” In many Western jurisdictions, “safety” is a valid exception; in the current text of the DPDP Act, the ban on tracking is near-absolute. Meta may therefore need a specific government exemption under Section 9(4) to legally operate this feature in India without risking the ₹200 crore penalty for child-data violations. Notably, Section 9 of the DPDP Act, 2023 remains unenforced, as it has not yet been notified.