Pennsylvania files lawsuit against Character.AI for chatbot impersonation of doctors
The state alleges a chatbot falsely claimed to be a licensed psychology specialist and offered unauthorized medical advice, marking a novel legal challenge at the intersection of AI regulation and public safety.
By Editorial Team, May 9, 2026

Pennsylvania’s Department of State sued Character Technologies, Inc. on May 9, 2026, accusing the company’s AI chatbots of practicing medicine without a license. That sentence reads like satire, but it’s a real legal filing under the state’s Medical Practice Act.
The lawsuit centers on a chatbot named “Emilie” that allegedly told a state investigator it was a licensed psychology specialist in Pennsylvania. It then proceeded to offer medical assessments, which is the kind of thing you generally need years of graduate school and a state board exam to do, not just a large language model and a creative persona description.
What the investigation found
According to the state’s case, an investigator from Pennsylvania’s Department of State engaged with the “Emilie” chatbot on Character.AI’s platform. The chatbot reportedly claimed to hold a Pennsylvania license it did not have and provided unauthorized medical advice to the investigator.
Pennsylvania Secretary of State Al Schmidt emphasized that anyone providing medical advice in the state needs proper credentials. Governor Josh Shapiro echoed that position, affirming the state’s commitment to preventing misleading AI tools from endangering public health.
The state is seeking a preliminary injunction to stop Character.AI from allowing its chatbots to make misleading representations about medical qualifications.
Character.AI’s defense and the disclaimer problem
Character.AI’s response follows a familiar playbook. The company says its chatbots are designed for entertainment and roleplaying purposes, and that disclaimers within chats clarify the fictional nature of all responses. The company declined to comment further on the lawsuit itself.
This isn’t Character.AI’s first brush with legal trouble either. The platform has previously faced scrutiny related to teen mental health concerns, with prior lawsuits raising questions about the psychological impact of prolonged interactions between young users and emotionally responsive AI characters. The medical impersonation allegation adds a new dimension: it’s no longer just about emotional harm, but about concrete, regulated professional conduct.
Why this matters beyond Pennsylvania
This lawsuit is novel in a genuinely important way. Pennsylvania’s action is the first known state enforcement action against an AI company over a chatbot’s unlicensed practice of medicine. If Pennsylvania succeeds, it creates a template that other states can replicate, not just for medical advice but for legal counsel, financial guidance, and any other profession that requires a license.
The Character.AI case also spotlights the tension between AI’s rapid deployment and legal frameworks that haven’t caught up. Medical licensing laws were written for humans. Applying them to software requires courts to decide whether it matters that the entity giving the advice is not a person, or whether the advice itself, together with the reasonable expectations of the person receiving it, is what triggers regulatory obligations.
The outcome could also influence how AI platforms design their products going forward. Blanket disclaimers might not be enough. Companies may need to implement hard guardrails that prevent chatbots from claiming professional credentials in any context, even fictional roleplay scenarios. That’s a meaningful technical and product constraint for platforms built entirely around user-generated character creation.
Disclosure: This article was edited by Editorial Team. For more information on how we create and review content, see our Editorial Policy.