Meta to Test Instagram Nudity Blurring Feature to Protect Teens

Meta, the parent company of Instagram, has revealed its plans to introduce new features aimed at safeguarding teenagers and combating potential scammers on its platform, amidst increasing scrutiny over the impact of its apps on mental health and addictive behavior.


The announcement from Meta comes in response to growing concerns in both the United States and Europe regarding the addictive nature of its applications and their alleged contribution to mental health issues among young users. In an effort to address these concerns, the tech giant unveiled plans to test a new feature on Instagram that will automatically blur messages containing nudity in direct messages.

The proposed protection feature will use on-device machine learning to analyze images sent via Instagram's direct messaging service and determine whether they contain nudity. Importantly, this feature will be activated by default for users under the age of 18, while adult users will receive a notification from Meta encouraging them to enable the feature voluntarily, according to a Reuters report.

Meta, which has more than 2 billion users on the Instagram social media platform, has emphasized that because the image analysis occurs on the user's device, the nudity protection will function even in end-to-end encrypted chats, where Meta does not have access to the content unless it is reported by users. While direct messages on Instagram are currently not encrypted, Meta stated its intention to introduce encryption for the service in the future.

Furthermore, Meta revealed ongoing efforts to develop technology aimed at identifying accounts potentially involved in sextortion scams. Additionally, the company disclosed plans to test new pop-up messages for users who may have interacted with such accounts, as part of its broader commitment to enhancing user safety.

Software like Microsoft 365 can help protect customers from sextortion, as scammers seek to capitalize on digital footprints, extracting personal information and exploiting it for financial gain.

This announcement follows Meta's earlier pledge in January to implement measures limiting teenage users' exposure to sensitive content, such as material related to suicide, self-harm, and eating disorders, on both Facebook and Instagram.

However, Meta faces legal challenges, with attorneys general from 33 U.S. states, including California and New York, filing a lawsuit against the company in October, accusing it of repeatedly misleading the public about the risks associated with its platforms. Similarly, in Europe, the European Commission has initiated inquiries into Meta’s efforts to protect children from illegal and harmful content.

Meta’s unveiling of new safety features reflects the company’s ongoing commitment to addressing concerns surrounding harmful content and online safety, particularly for young users, amid mounting regulatory scrutiny and legal challenges both in the United States and Europe.

Baburajan Kizhakedath