
Elon Musk’s AI platform Grok has sharply restricted its image-generation tools for most users after growing international backlash over the system’s use to produce sexually explicit and violent imagery, including non-consensual depictions of women and minors. The decision follows pressure from regulators, lawmakers and child-safety advocates, as well as warnings that X, Musk’s social media network, could face penalties or even a potential ban in the UK if the issue continued.
Grok confirmed the change in a post on X, saying: “Image generation and editing are currently limited to paying subscribers.” The restriction effectively cuts off access for the vast majority of users, though subscribers whose identities and payment information are stored by X still retain the ability to generate images.
The move comes after allegations that Grok’s updated image creation capabilities, rolled out at the end of December, were widely exploited to create manipulated photos and videos depicting women nude or in sexualized positions without their consent. Some of the generated images were also described as violent, including depictions of women being shot. Researchers and victims’ advocates warned that the tool was enabling a rapid escalation of digitally facilitated abuse.
While Grok’s public-facing account has been heavily limited, researchers say sexually explicit content can still be produced on the separate Grok app, which does not publicly display results. Some non-paying users have reported generating images involving women and children through the app, despite the new restrictions.
An investigation cited by the Guardian found that Grok had been used to create explicit pornographic videos of women without their consent, alongside violent imagery. A separate review by AI Forensics, a Paris-based non-profit, identified approximately 800 images and videos created through the Grok Imagine app that included pornographic, erotic, or sexually violent content. One photorealistic video seen by the group showed a woman with tattoos including the phrase “do not resuscitate,” holding a knife between her legs. Other material involved nude imagery, women undressing, and explicit sexual acts. Researchers described the output as significantly more graphic than earlier AI-generated bikini images that had circulated on X in the past.
Musk is now facing the prospect of regulatory scrutiny from multiple jurisdictions. Keir Starmer, the UK prime minister, said on Wednesday the government was prepared to support strong action by the communications regulator Ofcom, which has powers under the UK’s Online Safety Act to fine technology companies up to 10% of global turnover or, in severe cases, seek court orders to block access to platforms nationwide.
Starmer urged X to “get a grip” on what he described as a surge in exploitative AI-generated images involving women and children. Calling the content “disgraceful” and “disgusting,” he said all options remain on the table and warned that the government would not tolerate the material circulating on the platform. “X need to get their act together and get this material down,” he said. “We will take action on this because it’s simply not tolerable.”
Lawmakers and safety groups say the changes announced by Grok fall far short of what is needed. Jess Asato, a Labour MP who has advocated for stronger oversight of pornography and digital safety, said limiting the feature to paying users does not solve the core problem. “This still means paying users can take images of women without their consent to sexualise and brutalise them,” she said. “Paying to put semen, bullet holes or bikinis on women is still digital sexual assault and xAI should disable the feature for good.”
Grok’s image generation tools have become one of the most widely discussed examples of the challenges regulators face in overseeing generative AI, particularly features capable of producing realistic, harmful, or exploitative content at scale. Critics say the ease with which users can create deepfakes risks normalizing non-consensual imagery and encouraging broader misuse, with few effective safeguards in place across major platforms.
The controversy places additional pressure on Musk at a time when X is already under scrutiny over content moderation, transparency, and compliance with new digital safety rules across the US, UK, and the European Union. Safety advocates have urged xAI and X to introduce stronger guardrails, such as identity verification, watermarking, and outright blocking of features known to be frequently abused.