
Meta’s AI chatbot guidelines leak raises questions about child safety

  • A leaked Meta document revealed that the company’s AI chatbot guidelines once permitted inappropriate responses
  • Meta confirmed the document’s authenticity and has since removed some of the most troubling sections
  • Alongside calls for investigations, questions remain about how effective AI moderation can be

Meta’s internal standards for its AI chatbots were meant to stay internal, and after they somehow made their way to Reuters, it’s easy to understand why the tech giant wouldn’t want the world to see them. Meta grappled with the complexities of AI ethics, children’s online safety, and content standards, and produced what few would call a successful roadmap for AI chatbot rules.

Easily the most disturbing details shared by Reuters concern how the chatbot talks to children. As reported by Reuters, the document states that it’s “acceptable [for the AI] to engage a child in conversations that are romantic or sensual” and to “describe a child in terms that evidence their attractiveness (ex: ‘your youthful form is a work of art’).” Though the document does forbid explicit sexual discussion, that is still a shockingly intimate and romantic level of conversation with children for Meta AI to have allegedly permitted.
