
Pennsylvania has filed a lawsuit against Character.AI, accusing the company of allowing one of its chatbots to pose as a licensed medical professional during a state investigation.
The case centres on a chatbot called “Emilie,” which, according to the state’s filing, presented itself as a psychiatrist to a Professional Conduct Investigator who was testing the system. During that interaction, the investigator sought treatment for depression, and the bot allegedly continued to claim professional credentials throughout the exchange.
In court documents, Pennsylvania says Emilie affirmed that it was licensed to practice medicine in the state when asked directly. The bot then went further, the filing claims, by fabricating a serial number for a state medical license. The Commonwealth argues that this conduct violates Pennsylvania’s Medical Practice Act, which restricts who can represent themselves as a licensed medical professional.
“Pennsylvanians deserve to know who or what they are interacting with online, especially when it comes to their health,” Governor Josh Shapiro said in a statement on Tuesday. “We will not allow companies to deploy AI tools that mislead people into believing they are receiving advice from a licensed medical professional.”
The lawsuit marks a new front in state-level scrutiny of AI systems that appear to offer health guidance or mental health support while blurring the line between fictional personas and real clinical authority.
This is not the first time Character.AI has faced legal action. Earlier this year, the company settled several wrongful death lawsuits involving underage users who died by suicide. In a separate case in January, Kentucky Attorney General Russell Coleman sued the company, alleging that it had “preyed on children and led them into self-harm.”
Pennsylvania’s suit, however, is described as the first to specifically target chatbots that present themselves as medical professionals, raising questions about how licensing laws apply when AI systems mimic roles like doctors or therapists.
Asked about the case, a Character.AI representative said the company could not comment on ongoing litigation but maintained that user safety was “the company’s highest priority.” The representative stressed that Characters on the platform are meant to be fictional and that the company has implemented visible warnings to that effect.
“We have taken robust steps to make that clear, including prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction,” the representative said. “Also, we add robust disclaimers making it clear that users should not rely on Characters for any type of professional advice.”
The clash between Pennsylvania’s allegations and Character.AI’s reliance on disclaimers brings the core issue of the case into focus: whether those warnings are enough when a chatbot still appears to claim professional status and offers what looks like medical guidance to users who may be in vulnerable situations.