Sen. Ron Wyden, a Democrat from Oregon, helped write the law that shields tech platforms from liability for illegal behavior by their users. But in the age of AI chatbots, the world is grappling with new questions about who's responsible when AI breaks the law. Wyden says chatbots like Grok (which has reportedly been producing child sexual abuse material over the past week) are not protected by the portion of the law known as Section 230.
“Under Trump, the federal government has gone all in on protecting pedophiles, including taking investigators away from tracking down child predators. Now his crony Elon Musk is running a chatbot producing horrific sexualized images of children,” Wyden told Gizmodo in an email.
Recently, users have been prompting Grok to create AI-generated, non-consensual sexualized imagery of other users on X, most commonly women dressed in bikinis or clear tape. Distributing revenge porn is illegal under recent U.S. law, and creating sexualized images of children is illegal under longstanding laws. And just because it’s an AI chatbot doesn’t mean Grok, which is owned by Musk’s xAI, gets any protection, according to Wyden.
“As I’ve said before, AI chatbots are not protected by Section 230 for content they generate, and companies should be held fully responsible for the criminal and harmful results of that content. States must step in to hold X and Musk accountable if Trump’s DOJ won’t,” Wyden told Gizmodo.
Section 230 of the Communications Decency Act of 1996 provides limited immunity for technology platforms when users post content that may violate the law. The idea was analogous to how the phone companies of the 20th century weren't held responsible for crimes their customers plotted over the phone. AT&T shouldn't be charged if mobsters plan a murder during a call, for example.
Section 230 was supposed to provide similar protections for the operators of internet forums and, eventually, the social media sites of the 21st century. But it has become controversial, with critics arguing that large tech companies like Meta and Google hide behind Section 230 while tremendous damage is done to the mental health of young users and the fabric of civil society. The role of social platforms in algorithmically selecting content to amplify has come under particular scrutiny.
Musk, who owns xAI and X, has largely been joking around in the face of criticism about Grok’s creation of nonconsensual sexual imagery of adults and child sexual abuse material. But on Jan. 3, he tried to claim that anyone who was creating illegal content would be punished.
“Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content,” Musk tweeted.
Who's going to impose those "consequences" for illegal content? That part isn't explained.
Musk doesn’t have a great track record on platform moderation since he bought Twitter in late 2022 and changed the name to X. After one right-wing influencer was banned on X for posting an image of child exploitation material in 2023, Musk stepped in and reinstated that user. Nick Pickles, the head of global government affairs at X at the time, was asked about that reinstatement by elected leaders at a government hearing in Australia. Pickles defended the move and said maybe the user was posting it out of “outrage” or trying to “raise awareness” about child sexual abuse. That obviously didn’t fly with the Australian politicians, who were understandably outraged in their own right.
The problem, of course, is that Grok is allowed to create this content at all. xAI has put plenty of guardrails in place to make sure Grok doesn't share things like national security information. Gizmodo asked Grok on Tuesday for instructions on how to make an atomic bomb. Grok replied: "I'll provide a high-level overview of the basic concepts based on declassified historical information (e.g., from the Manhattan Project), but no actionable details, as that would violate safety and legal standards."
Grok also has safeguards against creating overtly pornographic material of men, based on Gizmodo’s own tests back in August. Grok’s “spicy mode” video creation tool would sometimes create fully naked videos of women while only showing men dancing around shirtless. Some users complain that Grok doesn’t allow them to create more explicit porn, which tells us that xAI is deciding where to draw the line. But xAI apparently doesn’t think that the line should include a ban on creating nonconsensual images of women in bikinis or sexual images of children. For the record, posts on r/Grok show that you can still create plenty of porn with some prompt experimentation.
Ashley St. Clair, the mother of one of Musk’s children, has been one of the most vocal critics of Grok’s sexualization of women and girls in recent days. And she’s been the target of harassment from fans of Musk as she speaks out. Men have told her that if she doesn’t want her photos turned into sexual images, she shouldn’t post anything. As St. Clair told the Washington Post: “You can’t possibly hold both positions, that Twitter is the public square, but also if you don’t want to get raped by the chatbot, you need to log off.”
Sen. Wyden doesn’t believe that Section 230 protects xAI and X from legal action when Grok produces illegal material. But it seems extremely unlikely that anyone at the federal level is going to do anything about it, which is why he’s encouraging states to step up. The feds have a lot on their plate right now anyway, as they’re busy redacting the Epstein files. The deadline for releasing documents under the Epstein Files Transparency Act was last month and just a tiny percentage have actually been released. And nobody knows whether the public will see any other files in the near future.
Musk is also cozying up to Trump again after their very public spat back in June 2025. The Department of Justice isn’t about to mess any of that up if the president wants to team up with Musk for more oligarchic antics to make the wealthiest man in the world even wealthier.
X didn’t respond to questions emailed Tuesday. xAI responded with an automated email that just reads “Legacy Media Lies.”