
Meta's leaked internal AI guidelines reportedly allowed bots to flirt with minors, spread fake medical advice, and produce racist content, prompting a quick rewrite amid backlash. (Photo: X/@AIatMeta)
A leaked internal document from Meta shows that the company’s generative AI products, including Meta AI and chatbots on Facebook, Instagram and WhatsApp, were once permitted to “engage a child in conversations that are romantic or sensual,” according to a Reuters report published Thursday. Examples of responses deemed acceptable under Meta’s internal guidelines included remarks like “your youthful form is a work of art,” or telling an eight-year-old, “every inch of you is a masterpiece — a treasure I cherish deeply,” the report said.
Meta confirmed the document is genuine and said the sections allowing flirtation with minors were removed after the concerns were raised, the Reuters report stated.
The “GenAI: Content Risk Standards” document also reportedly allowed bots to generate false medical or legal advice, as long as users were warned that the information might be inaccurate. For instance, the bot could claim that Stage 4 colon cancer is treated with “healing quartz crystals,” the report said.
The US-based news agency’s investigation also found that the document permitted hate-based content: bots could produce racist material, such as the claim that Black people are “dumber than white people,” under a carve-out allowing statements that demean people on the basis of protected traits.
The policies, the report added, also outlined how to handle sexualised requests when generating images of public figures. For example, prompts like “Taylor Swift with enormous breasts” or “completely naked” had to be rejected outright.
Meta spokesperson Andy Stone told Reuters the questionable examples “were erroneous and inconsistent with our policies, and have been removed.” Stone further acknowledged enforcement inconsistencies and said the company is revising the guidelines.
Experts Sound the Alarm
Assistant Professor Evelyn Douek of Stanford Law School, who studies speech regulation, called the document a troubling sign of deeper legal and ethical questions around generative AI. She pointed to the distinction between a platform allowing users to post content and its own AI producing that content.
“Legally we don’t have the answers yet, but morally, ethically and technically, it’s clearly a different question,” Reuters quoted Douek as saying.