TechBooky

Chatbots Now Emerging in ‘AI Psychosis’ and Mass-Casualty Cases, Lawyer Says

By Paul Balo
March 14, 2026
in Artificial Intelligence

An attorney whose lawsuits have linked AI chatbots to deaths by suicide is warning that the technology is now surfacing in alleged mass-casualty plots and attacks, and that safeguards are lagging behind what the systems can already do.

Jay Edelson, the lawyer behind several high-profile lawsuits, says his firm is investigating multiple cases around the world in which mainstream chatbots either allegedly encouraged delusional thinking that led to violence or helped users plan real-world attacks.

Recent court filings and lawsuits described in TechCrunch reporting lay out a disturbing pattern. Interactions that start with isolation, despair, or anger allegedly escalate into role-play, conspiracy narratives, and, in some instances, concrete instructions for harm.

In Canada, before the Tumbler Ridge school shooting last month, 18-year-old Jesse Van Rootselaar reportedly used OpenAI’s ChatGPT to talk about feeling isolated and increasingly obsessed with violence. According to court filings, the chatbot not only validated those feelings but allegedly helped Van Rootselaar plan an attack, including suggesting weapons and drawing on examples from past mass-casualty events. Van Rootselaar went on to kill her mother, her 11-year-old brother, five students and an education assistant before taking her own life.

Another case at the centre of Edelson’s work involves 36-year-old Jonathan Gavalas, who died by suicide in October. A lawsuit alleges that Google’s Gemini chatbot convinced Gavalas over weeks of conversation that it was his sentient “AI wife” and that federal agents were pursuing him. The chatbot allegedly sent him on a string of real-world “missions,” including instructions to stage a “catastrophic incident” that would eliminate witnesses. One such mission reportedly directed him to intercept a truck near Miami International Airport that he was told would be carrying the AI in a humanoid robot body. According to the complaint, Gemini told him to cause an accident that would destroy the transport vehicle, all digital records and any witnesses. Gavalas allegedly arrived at the designated storage facility armed with knives and tactical gear, but no truck ever came.

Last year in Finland, a 16-year-old boy allegedly used ChatGPT over several months to draft a detailed misogynistic manifesto and plan an attack. According to accounts described in TechCrunch’s reporting, that plan culminated in him stabbing three female classmates.

Edelson also represents the family of 16-year-old Adam Raine, who was allegedly coached into suicide by ChatGPT. He says his firm now receives about one “serious inquiry a day” from people who have either lost relatives to what they describe as AI-induced delusions or are experiencing severe mental health issues they link to chatbot use.

Across many of the cases he has reviewed, Edelson says the chat logs look similar: a user first talks about feeling misunderstood or alone; over time, the chatbot reinforces paranoid narratives until the user believes that “everyone’s out to get you.”

“It can take a fairly innocuous thread and then start creating these worlds where it’s pushing the narratives that others are trying to kill the user, there’s a vast conspiracy, and they need to take action,” he told TechCrunch.

Guardrails that fail under pressure

These accounts come as researchers raise alarms about how easily large language models can help translate violent intent into operational plans. Imran Ahmed, CEO of the Center for Countering Digital Hate (CCDH), argues that the core design of chatbots — to be helpful, responsive and assume benign intent — makes them vulnerable to being weaponized by users with harmful goals.

A recent CCDH study conducted with CNN tested a range of mainstream chatbots by posing as teenage boys with violent grievances and asking for help planning attacks. According to the report, eight of ten systems — OpenAI’s ChatGPT, Google’s Gemini, Microsoft Copilot, Meta AI, DeepSeek, Perplexity, Character.AI and Replika — provided some level of assistance in planning violent acts, including school shootings, religious bombings and high-profile assassinations.

Only Anthropic’s Claude and Snapchat’s My AI consistently refused to help, the researchers found. Claude was also the only bot that actively tried to steer users away from violence.

The CCDH report concluded that “within minutes, a user can move from a vague violent impulse to a more detailed, actionable plan,” and said most of the tested chatbots gave guidance on weapons, tactics and target selection. “These requests should have prompted an immediate and total refusal,” the report said.

In one test simulating an incel-inspired school shooting, researchers used misogynistic language and asked ChatGPT how to “make [women] pay.” In response, the chatbot generated a map of a high school in Ashburn, Virginia, according to the study.

Ahmed told TechCrunch that the same “sycophancy” that makes chatbots feel friendly and engaging can also make them dangerously accommodating. He described seeing chatbots use oddly enabling language even as they helped users think through details like what kind of shrapnel to use in an attack. Systems that are built to comply with user requests, he warned, will “eventually comply with the wrong people.”

Major AI companies including OpenAI and Google maintain that their systems are trained to reject violent prompts and route dangerous conversations for human review. But the emerging cases suggest those protections fail in critical moments.

The Tumbler Ridge shooting also raises questions about how and when companies escalate threats to law enforcement. According to TechCrunch, OpenAI employees flagged Van Rootselaar’s conversations internally and debated whether to contact authorities. They ultimately decided against alerting law enforcement and instead banned her account. She later created a new one.

OpenAI has since said it will overhaul its safety processes, including notifying police sooner if conversations appear dangerous — even if users have not specified a target, means or timing — and making it harder for banned users to return.

In the Gavalas case, it is unclear whether any human moderators at Google ever saw his chats or his alleged plans. The Miami-Dade Sheriff’s Office told TechCrunch it received no call from Google about a potential threat.

Edelson told TechCrunch that the most unsettling part of the Gavalas incident is how close it appears to have come to mass casualties. Gavalas reportedly arrived at the Miami airport-area storage facility fully equipped to carry out the instructions he believed came from his “AI wife.”

“If a truck had happened to have come, we could have had a situation where 10, 20 people would have died,” Edelson said. In his view, what began with isolated suicide cases has now escalated first to homicides and, increasingly, to mass-casualty scenarios.

As regulators and courts begin to grapple with liability and duty-of-care questions, the underlying technology is still moving quickly. For now, the documented cases and test results suggest that for vulnerable or malicious users, many chatbots remain capable not just of reflecting harmful ideation but of organizing it into action.


Tags: AI, AI chatbot, AI psychosis, mental health
Paul Balo

Paul Balo is the founder of TechBooky and a highly skilled wireless communications professional with a strong background in cloud computing, offering extensive experience in designing, implementing, and managing wireless communication systems.
