
ChatGPT is not alone in referencing Grokipedia, the AI-generated encyclopedia created by Elon Musk’s xAI, as a source of information. Emerging data reveals that AI tools from Google and Microsoft are also citing the site, raising fresh concerns about misinformation and reliability in AI-generated answers, The Verge reports.
Since its late 2025 launch, Grokipedia has remained a relatively minor source compared to established references like Wikipedia. Marketing strategist Glen Allsopp of Ahrefs found that Grokipedia appeared in more than 263,000 ChatGPT responses out of 13.6 million prompts tested, covering roughly 95,000 Grokipedia pages. In contrast, English Wikipedia was cited in about 2.9 million ChatGPT answers during the same period.
Similarly, datasets analyzed by Profound researcher Sartaj Rajpal indicate that Grokipedia made up approximately 0.01 to 0.02 percent of daily ChatGPT citations, a small but steadily increasing share since mid-November. Semrush’s AI Visibility Toolkit also detected a rise in Grokipedia references within Google’s AI products such as Gemini, AI Mode, and AI Overviews starting in December, though it remains a secondary source behind Wikipedia.
Details shared by analysts show that Grokipedia is most frequently cited by ChatGPT but also appears across other platforms:
- About 8,600 Gemini answers out of roughly 9.5 million prompts
- 567 AI Overviews answers out of approximately 120 million prompts
- 7,700 Copilot answers out of around 14 million prompts
- 2 Perplexity answers out of about 14 million prompts
Despite this usage, Grokipedia references reportedly decreased on Gemini and Perplexity compared to previous months. While direct citation tracking for Anthropic’s Claude remains unavailable, anecdotal reports suggest it too sometimes draws on Grokipedia content.
Experts note that AI chatbots typically use Grokipedia for niche or obscure factual queries rather than for sensitive information. Jim Yu, CEO of analytics firm BrightEdge, explained that platforms like AI Overviews often present Grokipedia alongside other sources rather than relying on it primarily. ChatGPT, in contrast, tends to give Grokipedia considerably more prominence, sometimes citing it as a leading reference for certain answers.
However, Grokipedia poses significant risks as a credible information source. Unlike Wikipedia, which is collaboratively edited with human oversight, Grokipedia’s content is generated and curated by Grok, xAI’s chatbot. This has led to problematic entries containing racist, transphobic, and misleading claims. Examples include sanitized portrayals of Elon Musk’s family wealth and past, claims that distort the link between gay pornography and the HIV/AIDS epidemic, and an entry echoing outdated justifications of US slavery.
Experts warn that the platform’s susceptibility to “LLM grooming,” a form of data poisoning in which content is seeded to influence AI systems, and its heavy reliance on opaque, sometimes circular sources can amplify the spread of misinformation. Trinity College Dublin’s Taha Yasseri highlighted that Grokipedia’s fluent presentation can be mistaken for accuracy, compounding concerns over biased or false framing.
Leigh McKenzie, director at Semrush, summed up the issue by describing Grokipedia as “a cosplay of credibility” that might function within its own sphere but remains unreliable as a default reference for major AI systems.
OpenAI acknowledged that ChatGPT aims to draw on a broad range of publicly available sources and includes citations to let users assess reliability themselves. It also applies safety filters to reduce harmful content. Perplexity emphasized its focus on accuracy but declined to comment on concerns about AI-generated sources like Grokipedia. Google, Anthropic, and xAI did not provide comments for this report.
As AI chatbots increasingly incorporate Grokipedia, users and developers face the challenge of balancing access to vast knowledge with the potential amplification of misinformation from sources lacking human editorial rigor.