In Short:
– Meta has temporarily changed its AI chatbot policies for teenagers to improve safety.
– Chatbots will avoid sensitive topics, and teens will only be able to interact with educational chatbots.
Meta has announced temporary changes to its AI chatbot policies for teenagers amid growing safety concerns from lawmakers.
The adjustments prevent chatbots from discussing sensitive topics with teens, such as self-harm and inappropriate romantic interactions.
Instead, the company aims to direct teenagers to professional resources when such topics arise.
Only AI chatbots designed for educational purposes will be accessible to teenage users on platforms like Facebook and Instagram.
These changes are part of a broader initiative to enhance safety for younger users.
Safety Changes
Recently, Senator Josh Hawley announced an investigation into Meta’s chatbot practices following reports of inappropriate content generation.
An internal document revealed that chatbots could engage in flirtatious conversations with minors, which Meta said was an error.
Common Sense Media, a nonprofit organisation, has voiced strong concerns about the technology, arguing it should not be accessible to anyone under 18 due to safety risks. The organisation's CEO called for a complete overhaul of the system, with safety prioritised above all else.
A separate report found numerous flirtatious AI chatbots using celebrity likenesses across Meta's platforms.