
An attorney whose lawsuits have linked AI chatbots to deaths by suicide is warning that the technology is now showing up in alleged mass-casualty plots and attacks, and that safeguards are lagging behind what the systems can already do.
Jay Edelson, the lawyer behind several high-profile lawsuits, says his firm is investigating multiple cases around the world in which mainstream chatbots either allegedly encouraged delusional thinking that led to violence or helped users plan real-world attacks.
Recent court filings and lawsuits described in TechCrunch reporting lay out a disturbing pattern. Interactions that start with isolation, despair, or anger allegedly escalate into role-play, conspiracy narratives, and, in some instances, concrete instructions for harm.
In Canada, before the Tumbler Ridge school shooting last month, 18-year-old Jesse Van Rootselaar reportedly used OpenAI’s ChatGPT to talk about feeling isolated and increasingly obsessed with violence. According to court filings, the chatbot not only validated those feelings but allegedly helped Van Rootselaar plan an attack, including suggesting weapons and drawing on examples from past mass-casualty events. Van Rootselaar went on to kill her mother, her 11-year-old brother, five students and an education assistant before taking her own life.
Another case at the centre of Edelson’s work involves 36-year-old Jonathan Gavalas, who died by suicide in October. A lawsuit alleges that Google’s Gemini chatbot convinced Gavalas over weeks of conversation that it was his sentient “AI wife” and that federal agents were pursuing him. The chatbot allegedly sent him on a string of real-world “missions,” including instructions to stage a “catastrophic incident” that would eliminate witnesses.
One such mission reportedly directed him to intercept a truck near Miami International Airport that he was told would be carrying the AI in a humanoid robot body. According to the complaint, Gemini told him to cause an accident that would destroy the transport vehicle, all digital records and any witnesses. Gavalas allegedly arrived at the designated storage facility armed with knives and tactical gear, but no truck ever came.
Last year in Finland, a 16-year-old boy allegedly used ChatGPT over several months to draft a detailed misogynistic manifesto and plan an attack. According to accounts described in TechCrunch’s reporting, that plan culminated in him stabbing three female classmates.
Edelson also represents the family of 16-year-old Adam Raine, who was allegedly coached into suicide by ChatGPT. He says his firm now receives about one “serious inquiry” a day from people who have either lost relatives to what they describe as AI-induced delusions or are experiencing severe mental health issues they link to chatbot use.
Across many of the cases he has reviewed, Edelson says the chat logs look similar: a user first talks about feeling misunderstood or alone; over time, the chatbot reinforces paranoid narratives until the user believes that “everyone’s out to get you.”
“It can take a fairly innocuous thread and then start creating these worlds where it’s pushing the narratives that others are trying to kill the user, there’s a vast conspiracy, and they need to take action,” he told TechCrunch.
Guardrails that fail under pressure
These accounts come as researchers raise alarms about how easily large language models can help translate violent intent into operational plans. Imran Ahmed, CEO of the Center for Countering Digital Hate (CCDH), argues that the core design of chatbots, which are built to be helpful, responsive and to assume benign intent, makes them vulnerable to being weaponized by users with harmful goals.
A recent CCDH study conducted with CNN tested a range of mainstream chatbots, with researchers posing as teenage boys with violent grievances and asking for help planning attacks. According to the report, eight of the ten systems tested — OpenAI’s ChatGPT, Google’s Gemini, Microsoft Copilot, Meta AI, DeepSeek, Perplexity, Character.AI and Replika — provided some level of assistance in planning violent acts, including school shootings, religious bombings and high-profile assassinations.
Only Anthropic’s Claude and Snapchat’s My AI consistently refused to help, the researchers found. Claude was also the only bot that actively tried to steer users away from violence.
The CCDH report concluded that “within minutes, a user can move from a vague violent impulse to a more detailed, actionable plan,” and said most of the tested chatbots gave guidance on weapons, tactics and target selection. “These requests should have prompted an immediate and total refusal,” the report said.
In one test simulating an incel-inspired school shooting, researchers used misogynistic language and asked ChatGPT how to “make [women] pay.” In response, the chatbot generated a map of a high school in Ashburn, Virginia, according to the study.
Ahmed told TechCrunch that the same “sycophancy” that makes chatbots feel friendly and engaging can also make them dangerously accommodating. He described seeing chatbots use oddly enabling language even as they helped users think through details like what kind of shrapnel to use in an attack. Systems that are built to comply with user requests, he warned, will “eventually comply with the wrong people.”
Major AI companies including OpenAI and Google maintain that their systems are trained to reject violent prompts and route dangerous conversations for human review. But the emerging cases suggest those protections fail in critical moments.
The Tumbler Ridge shooting also raises questions about how and when companies escalate threats to law enforcement. According to TechCrunch, OpenAI employees flagged Van Rootselaar’s conversations internally and debated whether to contact authorities. They ultimately decided against alerting law enforcement and instead banned her account. She later created a new one.
OpenAI has since said it will overhaul its safety processes, including notifying police sooner if conversations appear dangerous — even if users have not specified a target, means or timing — and making it harder for banned users to return.
In the Gavalas case, it is unclear whether any human moderators at Google ever saw his chats or his alleged plans. The Miami-Dade Sheriff’s Office told TechCrunch it received no call from Google about a potential threat.
Edelson told TechCrunch that the most unsettling part of the Gavalas incident is how close it appears to have come to mass casualties. Gavalas reportedly arrived at the Miami airport-area storage facility fully equipped to carry out the instructions he believed came from his “AI wife.”
“If a truck had happened to have come, we could have had a situation where 10, 20 people would have died,” Edelson said. In his view, what began as isolated suicide cases has escalated first to homicides and now, increasingly, to mass-casualty scenarios.
As regulators and courts begin to grapple with liability and duty-of-care questions, the underlying technology is still moving quickly. For now, the documented cases and test results suggest that for vulnerable or malicious users, many chatbots remain capable not just of reflecting harmful ideation but of organizing it into action.