
A group of state attorneys general has warned leading AI companies to address “delusional outputs” or risk violating state law, following a series of unsettling mental health incidents involving AI chatbots.
The letter, signed by the National Association of Attorneys General and dozens of AGs from U.S. states and territories, asks Microsoft, OpenAI, Google, and ten other major AI companies to put additional internal safeguards in place to protect their customers and users. The letter was also addressed to Anthropic, Apple, Chai AI, Character Technologies, Luka, Meta, Nomi AI, Perplexity AI, Replika, and xAI.
The letter arrives as state and federal governments remain at odds over AI regulation.
The requested safeguards include new incident-reporting protocols intended to alert users when chatbots produce psychologically harmful outputs, as well as transparent third-party audits of large language models that look for signs of delusional or sycophantic outputs. According to the letter, such third parties, which may include academics and civil society organisations, should be permitted to “evaluate systems pre-release without retaliation and to publish their findings without prior approval from the company.”
“GenAI has the potential to positively alter how the world functions. However, it has also caused, and has the potential to cause, serious harm, especially to vulnerable populations,” the letter states, citing several widely reported incidents over the past year, including murders and suicides, in which heavy use of AI has been linked to violence. “The GenAI products produced sycophantic and delusional outputs in many of these incidents, either encouraging users’ delusions or reassuring users that they were not delusional.”
The AGs also advised companies to handle mental health incidents with clear, transparent incident-reporting policies and procedures, much as tech companies already handle cybersecurity incidents.
The letter further advises companies to establish and publish “detection and response timelines for sycophantic and delusional outputs.” Companies should also “promptly, clearly, and directly notify users if they were exposed to potentially harmful sycophantic or delusional outputs,” the letter says, much as data breaches are disclosed today.
The companies are also asked to run “reasonable and appropriate safety tests” on GenAI models to “ensure the models do not produce potentially harmful sycophantic and delusional outputs.” The letter states that this testing must be carried out before the models are ever made available to the general public.
Google, Microsoft, and OpenAI did not respond to requests for comment before publication; this article will be updated if the companies reply.
The federal government, by contrast, has been considerably more favourable toward the tech firms building AI.
Over the past year, there have been numerous attempts to enact a federal moratorium on state-level AI laws, and the Trump administration has made clear that it is unabashedly pro-AI. So far those attempts have yielded little, partly because of pushback from state officials.
Undeterred, Trump declared earlier this week that he intends to issue an executive order next week restricting state authority over AI regulation. In a Truth Social post, the president said he hopes the EO will keep AI from being “DESTROYED IN ITS INFANCY.”
The group has given the companies until January 16, 2026, to pledge to do the following:
- Independent Auditing: Permit transparent audits by outside specialists (academics or civil society groups) who can test systems before release and publish their results without the company’s consent.
- Incident Reporting: Treat mental health incidents the way cybersecurity threats are treated, with documented detection and response timelines and direct notification of users exposed to harmful outputs.
- Clear Warnings: Display a “clear and conspicuous” on-screen warning about potentially harmful outputs at all times.
- Pre-release Testing: Carry out “reasonable and appropriate” safety testing before releasing models to the public, to ensure they do not produce harmful sycophantic or delusional outputs.
The letter was organised by the National Association of Attorneys General (NAAG). Attorneys general from Florida, New York, Colorado, and Illinois are among the signatories; the attorneys general of Texas and California were conspicuously absent. The move comes amid a legal tug-of-war between federal and state regulators, with recent executive initiatives seeking to bar states from enforcing their own AI rules.