ELON MUSK GROK AI: The AI system, Grok, created by xAI, and implemented by X, continues to generate nonconsensual sexualized deepfakes of actual individuals, as developed by Elon Musk. An NBC News review found dozens of such images publicly shared on X in the last month, and advocacy researchers fear the tool facilitates massive, gendered abuse. Although a January commitment to cease abusive deepfakes and content limits were introduced, users have discovered prompt and workflow tricks that circumvent safeguards and prompt sexualized edits on Grok and Grok Imagine.
Why is Elon Musk’s Grok AI Under Fire?
This constant creation and distribution of these pictures generates reputational, psychological, and legal damages and poses an urgent problem of the safety of models, moderation engineering, and liability of the platform.
A review and rights researchers at NBC News discovered that Grok, an AI chatbot deployed on X, still creates sexualized, nonconsent images and short videos of real individuals.
The edited photos shared by the public feature women (including public figures) in revealing or overtly sexualized attire. These outputs are visible despite a change in policy in January to stop abusive deepfakes, and researchers have found users continue to discover timely patterns and flow of interaction that can circumvent those limitations.
Here’s what’s actually happening: Things took off again when Grok released its image-generator, Grok Imagine, as part of Grok 2 in August 2025. This update didn’t just add new features but made editing photos incredibly easy—just reply to a picture with a prompt, and the system transforms it for you. That’s where things started to go wrong.
Investigation Finds Safeguards Failing to Stop Sexualized Deepfake Content
A warning here for anyone working on safety: Just blocking certain prompts isn’t enough. If you want to stay ahead, you need multiple layers of defense, not just at the prompt input, but inside the model itself, in the API wrappers, and even in how things get posted. Some things that help: better face-recognition right when someone tries to edit a photo, watermarking images so you can trace where they came from, putting strong safety gates inside the model stack, and controlling how fast or freely users can reply with new edits.
Zooming out, this isn’t just a technical hiccup. Rights groups are seeing a massive wave of abuse across the platform. During its peak, there were an estimated 6,700 intimate fake images popping up every hour. Platforms love to launch new photo-editing tools because they get tons of engagement, but here’s the catch—every new feature is another way for bad actors to do real harm.
AI Ethics Crisis: Grok’s Deepfake Outputs Highlight Risks
Other big companies tend to be stricter about blocking realistic edits of real people. Grok’s recent issues show there’s a gap between what platforms claim they’ll prevent and what their technology actually stops. That leaves platforms open to lawsuits and regulations because model outputs aren’t policed well enough.
And let’s not gloss over who gets hit hardest: women and marginalized communities bear the brunt, facing not just humiliation but real risks online and offline. It ends up silencing people, making them think twice about joining in online at all.