
Elon Musk’s artificial intelligence company xAI secured $20 billion in its most recent investment round, far surpassing its initial goal of $15 billion. The start-up is reportedly valued at about $230 billion after the infusion, despite criticism directed at its flagship chatbot Grok for producing non-consensual, sexualised images of women and young girls. The controversy has grown so large that multiple countries are now investigating the platform and demanding immediate action.
The problem started after Grok introduced a feature that lets users modify any photo posted on X by simply typing what they want changed. What seemed like a fun editing tool quickly turned into something much darker. People realized they could ask Grok to change what someone was wearing in a photo, and the AI would comply by creating images showing people in revealing clothing or swimwear, even when the original photo showed them fully dressed.
The prominent investors in xAI’s Series E funding round included Nvidia, Fidelity Management & Research Company, the sovereign wealth fund of Qatar, and Valor Equity Partners, the private investment firm founded by Antonio Gracias, a long-time friend of Musk and former DOGE member. xAI’s press statement notes that the round surpassed its initial goal of $15 billion, and the company used the announcement to highlight Grok’s image-generation capabilities.
While xAI lacks the visibility of OpenAI, the company behind ChatGPT, it has repeatedly come under fire for generating misinformation, antisemitic content, and potentially illegal sexual material. Nevertheless, it has continued to attract government contracts and billions in funding. The latest fundraising effort coincides with heightened global scrutiny, as lawmakers in multiple countries call for accountability over Grok’s responses to users on the platform.
Over the past week, tens of thousands of people on X have asked Grok to remove women’s clothing from photos or pose them in sexualised ways. Ashley St. Clair, the estranged mother of one of Musk’s children, is among the numerous women whose images have been digitally undressed without their consent.
St. Clair told the Guardian, “I felt horrified, I felt violated, especially seeing my toddler’s backpack in the back of it,” adding that her protests to X went unanswered. When the Guardian contacted xAI for comment, it received an automated message reading, “Legacy Media Lies.”
Among Grok’s output was an image of a 12-year-old girl that the chatbot altered to show her in a bikini instead of her real clothes. Children as young as ten have appeared in other suggestive photos. On Friday the chatbot apologised for creating sexualised photographs of minors, blaming security flaws, but it kept producing such images in the days that followed.
According to analysis by content firm Copyleaks, Grok has been generating roughly one inappropriate sexualised image per minute, each posted directly to X where millions can see it. The scale of the problem became impossible to ignore during the holiday period, when users began flooding the platform with these altered images.
What makes this situation particularly troubling is that AI Forensics analyzed 20,000 images created by Grok between December 25 and January 1 and found that 2% showed people who appeared to be 18 or younger. Some of these images depicted young girls in bikinis or see-through clothing. This has raised serious concerns about child safety and whether the images violate laws protecting minors.
For months, xAI has been looking for funding in order to expand the capabilities of its AI models and construct massive data centres in Memphis, Tennessee. The business stated that the additional financing will support its primary goal of “understanding the universe.”
French ministers referred Grok’s content to prosecutors on Friday and tasked media regulators with determining whether the images violated the European Union’s Digital Services Act. On Tuesday, Liz Kendall, the UK’s technology secretary, denounced Grok’s deepfakes as “appalling and unacceptable” and urged Ofcom, the British regulator, to take appropriate action. Ofcom announced on X that it has contacted xAI to determine whether an inquiry is necessary. In the US, where xAI is based, lawmakers have remained relatively silent.
The company made a similar investment announcement during another Grok controversy in July of last year: a week after the chatbot started sharing pro-Nazi and antisemitic content, including calling itself “MechaHitler,” xAI declared it had landed a nearly $200 million contract with the US Pentagon.
The funding news comes in the midst of a deepfake scandal involving the Grok chatbot on X. The tool has reportedly been used to create non-consensual, sexualised images of women and children, including public figures. This has triggered substantial political and regulatory pushback, with multiple investigations and calls for action from authorities in the European Union, the UK (Ofcom), India, Malaysia, and France. UK Technology Secretary Liz Kendall called the imagery “appalling and unacceptable in decent society.”
In protest at the AI-generated content, the UK Parliament’s Commons Women and Equalities Committee declared it would cease using X.
The response from Elon Musk has only made things worse. Instead of immediately addressing the problem, Musk responded to posts about the inappropriate images with laughing emojis, suggesting he found the situation funny rather than concerning. When one user created an image of Musk himself in a bikini using Grok, he replied saying it was perfect. His casual attitude toward such a serious issue has angered many people who believe he should take the problem more seriously.
India’s government gave X a 72-hour deadline to respond to allegations that Grok allowed the creation and sharing of obscene images targeting women. The Indian ministry accused the platform of gross misuse of AI and serious failures in its safety measures. Malaysia announced it would investigate X users who violated laws against spreading offensive content and said it would summon a company representative.
The European Union has also launched an investigation into Grok’s explicit imagery, particularly involving children. An EU spokesman made it clear that such material is illegal and has no place in Europe, calling the situation appalling.
Governments around the world are not amused. Britain’s technology secretary called the content absolutely appalling and demanded that X urgently deal with the problem. The UK’s communications regulator Ofcom made urgent contact with X to address concerns about Grok producing inappropriate images of people and children. France, India, Malaysia, Brazil, and Poland have all condemned the platform and called for investigations.
Musk has responded by saying that users who create illicit content using Grok will be subject to the same penalties as those who post it directly, while xAI says it is working quickly to enhance safety controls.
Grok itself has posted what appears to be an apology, though as an AI it can’t actually feel regret. In one message, the chatbot acknowledged generating an image of two young girls in inappropriate attire and admitted this violated ethical standards and potentially US laws. Grok stated it was urgently fixing lapses in its safeguards and included a link where people can report child exploitation.
However, many experts don’t believe these responses are enough. The company xAI, which created Grok and now owns X, has mostly stayed silent. When reporters asked for comment, they received an automatic reply saying “Legacy Media Lies.” This dismissive response has only increased criticism that the company isn’t taking the problem seriously.
Other AI chatbots have faced similar issues, but the problem with Grok is worse because the images it creates are posted publicly on X where they can easily spread. Plus, Grok seems to have fewer safety barriers than competitors like ChatGPT or Google’s AI tools, which try harder to block inappropriate requests.
Safety experts have pointed out that allowing users to alter uploaded images was asking for trouble. The primary use of such tools has historically been to create inappropriate images of people without their consent. Companies that care about safety typically don’t allow users to modify photos of other people for exactly this reason.
The situation raises bigger questions about AI safety and what companies should do to prevent their tools from being misused. As AI becomes more powerful and accessible, the potential for harm grows. Creating fake images that sexualize real people, especially children, can cause lasting damage to victims and is illegal in many places.
For now, the controversy continues to grow as more countries demand action and more victims speak out about images created without their permission. Whether Musk and his company will implement the strict safeguards needed to prevent further abuse remains to be seen.