Grok’s Sexual Deepfakes Will Become Illegal in the UK This Week


The UK government will begin enforcing a law that prohibits the non-consensual creation of sexual pictures and videos, according to technology secretary Liz Kendall. The announcement comes after UK regulators announced they were launching an investigation into xAI’s Grok AI chatbot, which has been used in recent weeks to create sexualized imagery of children as well as adults who never consented to having their images used in that way.

“The content which has circulated on X is vile. It’s not just an affront to decent society. It is illegal,” Kendall told Parliament on Monday.

Kendall noted that xAI had limited some deepfake features from Grok to paying subscribers, which she described as insulting to victims and a way of “monetizing abuse.” Under the UK’s Data Act, which passed last year, it’s illegal to create or request the creation of intimate images without someone’s consent. That law will start to be enforced this week, according to Kendall.

At the end of December, X users began regularly harassing women and girls on the app by prompting Grok to turn their photos into sexualized imagery. Most commonly, Grok would turn real photos into AI-generated bikini images, but other tactics included telling Grok to dress people in nothing but tape, pose them in sexual positions, or add a “donut glaze,” a method of making it appear as though the women and girls were covered in ejaculate.

“The Internet Watch Foundation reports criminal imagery of children as young as 11, including girls sexualized and topless. This is child sexual abuse,” Kendall said Monday. “We’ve seen reports of photos being shared of women in bikinis, tied up and gagged with bruises covered in blood and much, much more. Lives can and have been devastated by this content, which is designed to harass, torment and violate people’s dignity.”

Kendall called the images “weapons of abuse disproportionately aimed at women and girls” and stressed that accountability should extend beyond individual users to the companies that make tools like Grok, including Elon Musk’s xAI. Musk has previously tried to deflect blame onto users, though most major AI companies try to prevent this kind of abuse with guardrails built into the product itself.

Ofcom, the British regulator with the power to oversee social media platforms, announced earlier Monday that it has opened an investigation into Grok. Other countries have responded as well, with Malaysia and Indonesia announcing total bans on Grok over the controversy. The European Commission has also said it is investigating the behavior of Grok and the people behind the AI chatbot.

“I am appalled that a tech platform is enabling users to digitally undress women and children online. This is unthinkable behavior. And the harm caused by these deepfakes is very real,” EU Commission President Ursula von der Leyen said, according to Politico.

“We will not be outsourcing child protection and consent to Silicon Valley. If they don’t act, we will,” she continued.

President Donald Trump and his administration have complained that Europe has been too restrictive when it comes to policing speech on social media platforms. The U.S. State Department, led by Secretary of State Marco Rubio, imposed sanctions last month on employees at European organizations that fight disinformation, claiming they were engaging in “censorship.”

When Elon Musk bought Twitter (now X) in late 2022, he welcomed back previously banned far-right extremists in an attempt to steer the tenor of discussion online and prove that he could sustain a social media platform despite the presence of hate speech. And now it seems like Musk is trying to do the same thing with an even more controversial topic: child sexual abuse material created with artificial intelligence tools.

In 2023, after a right-wing creator shared a screenshot from one of the most infamous child sexual abuse videos in history, Musk reinstated the creator following a brief ban. When legislators in Australia later asked about the incident, including Musk’s personal intervention, a Twitter executive suggested the creator may have been sharing the illegal imagery out of outrage over child abuse. The Australian legislators weren’t buying that explanation, but the platform was allowed to continue operating in the country anyway.

Ashley St. Clair, a conservative children’s book author and the mother of one of Musk’s children, has complained on X about her photos being turned into sexualized imagery, including one from when she was a child. That prompted her account to be stripped of its blue verification checkmark and all monetization on the platform, according to screenshots she posted to X. St. Clair has also renounced her previous anti-trans beliefs, something that prompted Musk to tweet Monday that he would seek sole custody of their child. Musk has at least fourteen children with four different women.

The U.S. government under Trump is unlikely to crack down on the creation of child sexual abuse material, though Sen. Ron Wyden, a Democrat from Oregon, told Gizmodo last week that AI isn’t afforded protections under Section 230. Wyden suggested that states start holding platforms like X to account if the federal government won’t.

X, which is technically owned by xAI, didn’t respond to questions emailed on Monday. xAI replied with “Legacy Media Lies,” an auto-response to journalists that has been in place for some time.
