Meta Rolls Out New Teen Safety Features, Parental Controls
Meta has announced upgrades to its safety features aimed at protecting teenage users across Instagram, Facebook, and Messenger, with improved safeguards in direct messaging and stronger protections for adult-run accounts that feature children.

When teens start a conversation with a new person via DMs, they’ll now see clear safety alerts. These include a quick reminder urging them to review the other user’s account, visible details like the month and year that account was created, and tips on safe messaging.
To simplify reporting, Meta combined its “block” and “report” functions into one streamlined option. According to data shared by Meta, these tools have already had considerable impact: in June alone, teens removed one million dangerous accounts and reported another million after seeing safety prompts.
Adding to this, Meta introduced a Location Notice for conversations between users in different countries. It appeared over a million times last month, with 10 percent of recipients tapping for more info. This feature is explicitly designed to warn teens about potential scams such as sextortion, where individuals might misrepresent their location.
Meta also reinforced its nudity protection feature within DMs. Blurring images flagged as inappropriate remains active by default—and recent usage data suggests it’s working: globally, 99% of users have kept nudity protection enabled, and in June, over 40% of blurred images stayed concealed until the recipient chose to view them.
The social media giant has also extended Teen Account protections to adult-managed accounts that prominently feature children. This includes parent-run pages or child influencer profiles. These accounts are automatically set to the strictest messaging settings, with comment filters active to minimize harassment.

These updates come amid intensified scrutiny—from global regulators, lawmakers, and child safety advocates—over how social media platforms handle content involving minors.
Meta’s crackdown last month resulted in the removal of nearly 135,000 Instagram accounts linked to sexualized comments or requests on child-focused pages, plus another 500,000 associated accounts across Facebook and Instagram.