Instagram has unveiled a new PG-13-style content rating system for teens, positioning it as a powerful shield against harmful material. Critics, however, question whether this is a genuine safety upgrade or another public-relations exercise from parent company Meta.
The system automatically places all users under 18 into a more restrictive “13+” setting. This default mode filters out strong language, risky stunts, and content that promotes harmful behaviors; to leave the protected setting, a teen must get a parent’s permission.
On the surface, it looks like a robust solution. But the announcement follows a damning independent report which found that the majority of Instagram’s existing safety tools were ineffective. That history of unkept promises has bred deep skepticism among child-safety advocates.
Campaigners such as the Molly Rose Foundation argue that Meta’s announcements often sound good on paper but fail to deliver meaningful protection in practice. They are demanding that the company open its new system to independent testing to verify its effectiveness, a step Meta has not yet committed to.
As the feature rolls out in the US, the UK, and other countries, the debate continues: is Meta finally providing the tools needed to keep teens safe, or is this another carefully crafted response to bad press that will ultimately fall short of its promises?
