OpenAI announced on Tuesday that it had banned a number of ChatGPT accounts, possibly connected to China, that had tried to use the chatbot to build mass-surveillance tools, including requests for proposals to monitor social media chats. In a report, the San Francisco-based artificial intelligence (AI) giant stated that some accounts were also seeking ideas for social media listening tools and profiling systems designed to target particular demographics. All accounts found to be engaging in such activities have been banned, according to OpenAI. The report also identified Russian accounts attempting to develop phishing tools using AI.
OpenAI also said in its latest public threat report that some users had violated the startup’s national security policies by asking its chatbot to describe social media “listening” tools and other monitoring concepts.
According to OpenAI’s Disrupting Malicious Uses of AI: October 2025 report, a group of accounts, possibly connected to the Chinese government, was using ChatGPT to gather information and develop surveillance tools. The AI startup cited multiple examples of these accounts attempting to create specialised tools for internet monitoring, profiling, and mass surveillance, calling it “a rare snapshot into the broader world of authoritarian abuses of AI.” Notably, these activities occurred throughout 2025 rather than all at once.
Against the backdrop of the escalating struggle between the United States and China to shape the development and regulation of generative AI, the San Francisco-based firm’s analysis raises safety concerns about the misuse of the technology.
According to the report, one user asked ChatGPT for help creating project plans and marketing collateral for a “social media listening tool” purportedly intended for a government client. The proposed tool, called a “social media probe,” would reportedly search platforms including YouTube, Facebook, Instagram, Reddit, TikTok, and X (formerly Twitter) for politically sensitive or extremist content. OpenAI said it found no proof that the tool was ever built or used.
Another banned account was linked to a user who had requested assistance drafting a proposal for a so-called High-Risk Uyghur-Related Inflow Warning Model. The model was described as a system that would assess travel bookings and cross-reference them with police records in order to track people classified as “high-risk.” As in the other case, OpenAI explained that its model was not used to build or run such a tool and that there was no way to independently confirm its existence.
Other accounts appeared to use ChatGPT for online research and profiling. In one instance, a user asked the AI model to identify the funding sources of an X account critical of the Chinese government. Another individual requested details about the organisers of a petition in Mongolia. In both cases, ChatGPT returned only publicly available information.
OpenAI also noted that some users treated ChatGPT as an open-source research tool, comparable to a search engine. These accounts asked the chatbot to find and summarise breaking news relevant to China, and also sought information on sensitive topics such as the 1989 Tiananmen Square massacre and the Dalai Lama’s birthday.
OpenAI said it also blocked a number of Chinese-language accounts that used ChatGPT to support malware and phishing campaigns. These accounts additionally asked the model to research further automation that could be achieved using China’s DeepSeek.
When asked about the report, the Chinese embassy in the United States did not immediately reply.
OpenAI added that it had blocked accounts linked to suspected Russian-speaking criminal organisations that used the chatbot to help develop certain malware.
The Microsoft-backed startup also stated that since it began public threat reporting in February of last year, it has disrupted and reported over 40 networks, and that its models have refused clearly harmful prompts.
“We did not discover any indication of new strategies or that our models gave threat actors new offensive capabilities,” the report stated.
Following a secondary share sale last week that valued it at $500 billion, OpenAI, which currently boasts over 800 million weekly ChatGPT users, became the most valuable startup in the world.