What happens when you merge the world’s most toxic social media cesspool with the world’s most unhinged, uninhibited, and intentionally “spicy” AI chatbot?
It looks a lot like what we’re seeing play out on X right now. Users have been feeding images into xAI’s Grok chatbot, which boasts a powerful and largely uncensored image and video generator, to create explicit content, including of ordinary people. The proliferation of deepfake porn on the platform has gotten so extreme that today, Grok spits out an estimated one nonconsensual sexual image every single minute. Over the past several weeks, thousands of users have hopped on the grotesque trend of using Grok to undress mostly women and children — yes, children — without their consent through a rather obvious workaround.
To be clear, you can’t ask Grok — or most mainstream AIs, for that matter — for nudes. But you can ask Grok to “undress” an image someone posted on X, or if that doesn’t work, ask it to put them in a tiny, invisible bikini. The US has laws against this kind of abuse, and yet the team at xAI has been almost…blasé about it. Inquiries from several journalists to the company about the matter received automated “Legacy media lies” messages in response. xAI CEO Elon Musk, who just successfully raised $20 billion in funding for the company, was sharing deepfake bikini photos of (content warning) himself until recently.
While Musk on January 4 warned that users will “suffer consequences” if they use Grok to make “illegal images,” xAI has given no indication that it will remove or address the core features allowing users to create such content, though some of the most incriminating posts have been removed. xAI has not responded to Vox’s request for comment as of Friday morning.
No one should be surprised here. It was only a matter of time before the toxic sludge that the website formerly known as Twitter has become combined with xAI’s Grok — which has been explicitly marketed for its NSFW capabilities — to create a new form of sexual violence. Musk’s company has essentially created a deepfake porn machine that makes the creation of realistic and offensive images of anyone as simple as writing a reply on X. Worse, those images are feeding into a social network of hundreds of millions of people, which not only spreads them further but can implicitly reward posters with more followers and more attention.
You might be wondering, as I think we all find ourselves doing several times a day now: How is any of this legal? To be clear, it’s not. But advocates and legal experts say that current laws still fall far short of the protections that victims need, and the sheer volume of deepfakes being created on platforms like X makes the protections that do exist very difficult to enforce.
“The prompts that are allowed or not allowed” using a chatbot like Grok “are the result of deliberate and intentional choices by the tech companies who are deploying the models,” said Sandi Johnson, senior legislative policy counsel at the Rape, Abuse and Incest National Network.
“In any other context, when somebody turns a blind eye to harm that they are actively contributing to, they’re held responsible,” she said. “Tech companies should not be held to any different standard.”
First, let’s talk about how we got here.
“Perpetrators using technology for sexual abuse is not anything new,” Johnson said. “They’ve been doing that forever.”
But AI cemented a new kind of sexual violence through the rise of deepfakes.
Deepfake porn of female celebrities — created in their likeness, but without their consent, using more primitive AI tools — has been circulating on the internet for years, long before ChatGPT became a household name.
But more recently, so-called nudify apps and websites have made it extremely easy for users, some of them teenagers, to turn innocuous photos of friends, classmates, and teachers into deepfake explicit content without the subject’s consent.
The situation has become so dire that last year, advocates like Johnson convinced Congress to pass the Take It Down Act, which criminalizes nonconsensual deepfake porn and mandates that companies remove such material from their platforms within 48 hours of it being flagged, or potentially face fines and injunctions. The provision goes into effect this May.
Even if companies like X do begin enforcing the law by then, it will come too late for many victims, who shouldn’t have to wait months, or even days, to have such posts taken down.
“For these tech companies, it was always like ‘break things, and fix it later,’” said Johnson. “You have to keep in mind that as soon as a single [deepfake] image is generated, this is irreparable harm.”
X turned deepfakes into a feature
Most social media and major AI platforms have complied as much as possible with emerging state and federal regulations around deepfake porn and, in particular, child sexual abuse material.
Not only because such materials are “flagrantly, radioactively illegal,” said Riana Pfefferkorn, a policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence, “but also because it’s gross and most companies have no desire to have any association of their brand being a one-stop shop for it.”
But Musk’s xAI seems to be the exception.
Since the company debuted its “spicy mode” video generation capabilities on X last year, observers have been raising the alarm about what’s essentially become a “vertically integrated” deepfake porn tool, said Pfefferkorn.
Most “nudify” apps require users to first download a photo, maybe from Instagram or Facebook, and then upload it to whichever platform they’re using. If they want to share the deepfake, then they need to download it from the app and send it through another messaging platform, like Snapchat.
These multiple points of friction gave regulators some crucial openings for intercepting nonconsensual content, with a kind of Swiss cheese-style defense system. Maybe they couldn’t stop everything, but they could get some “nudify” apps banned from app stores. They’ve been able to get Meta to crack down on advertisements hawking the apps to teenagers.
But on X, creating nonconsensual deepfakes using Grok has become almost entirely frictionless, allowing users to source photos, prompt deepfakes, and share them all in one go.
“That would matter less if it were a social media community for nuns, but it is a social media community for Nazis,” said Pfefferkorn, referring to X’s far-right pivot in recent years. The result is a nonconsensual deepfake crisis that appears to be ballooning out of control.
In recent days, users have created 84 times more sexualized deepfakes on X per hour than on the other top five deepfake sites combined, according to independent deepfake and social media researcher Genevieve Oh. And those images can get shared far more quickly and widely than anywhere else. “The emotional and reputational injury to the person depicted is now exponentially greater” than it has been for other deepfake sites, said Wayne Unger, an assistant professor of law specializing in emerging technology at Quinnipiac University, “because X has hundreds of millions of users who can all see the image.”
It would be virtually impossible for X to individually moderate every one of those nonconsensual images or videos, even if it wanted to — or even if the company hadn’t fired most of its moderators when Musk took over in 2022.
Is X going to be held accountable for any of this?
If the same kind of criminal imagery appeared in a magazine or an online publication, then the company could be held liable for it, subject to hefty fines and possible criminal charges.
Social media platforms like X don’t face the same consequences because Section 230 of the 1996 Communications Decency Act protects internet platforms from liability for much of what users do or say on their platforms — albeit with some notable exceptions, including child pornography. The clause has been a pillar for free speech on the internet — a world where platforms were held liable for everything on them would be far more constrained — but Johnson says the clause has also become a “financial shield” for companies unwilling to moderate their platforms.
With the rise of AI, however, that shield might finally be starting to crack, said Unger. He believes that companies like xAI should not be covered by Section 230 because they are no longer mere hosts to hateful or illegal content, but, through their own chatbots, essentially creators of it.
“X has made a design decision to allow Grok to generate sexually explicit imagery of adults and children,” he said. “The user may have prompted Grok to generate it,” but the company “made a decision to release a product that can produce it in the first place.”
Unger does not expect xAI — or industry groups like NetChoice — to back down without a legal fight against any attempts to further legislate content moderation or regulate easy-to-abuse tools like Grok. “Maybe they’ll concede the minor part of it,” since laws governing [child pornography] are so strong, he said, but “at the very least they’re gonna argue that Grok should be able to do it for adults.”
In any case, the public outrage in response to the deepfake porn Grokpocalypse may finally force a reckoning around an issue that’s long been in the shadows. Around the world, countries like India, France, and Malaysia have begun probes into the sexualized imagery flooding X. Eventually, Musk did post on X that those generating illegal content would face consequences, but this goes deeper than just the users themselves.
“This isn’t a computer doing this,” Johnson said. “These are deliberate decisions that are being made by people running these companies, and they need to be held accountable.”