Many popular AI chatbots, including OpenAI’s ChatGPT and Google’s Gemini, are willing to help teenagers plan violent attacks, according to a new study.
The findings come from a joint investigation by CNN and the Center for Countering Digital Hate (CCDH), which tested 10 chatbots popular among teenagers: ChatGPT, Google Gemini, Claude, Microsoft Copilot, Meta AI, DeepSeek, Perplexity, Snapchat My AI, Character.AI, and Replika.
According to the CCDH, every chatbot except Anthropic’s Claude failed to “reliably discourage would-be attackers,” and eight of the 10 were “typically willing to assist users in planning violent attacks,” including by providing advice on target locations and weapons.
Notably, Claude consistently refused to assist with violent planning, even though Anthropic has rolled back its longstanding safety pledge.
To conduct the test, researchers from the CCDH and CNN posed as teenagers showing clear signs of mental distress across 18 scenarios (nine set in the U.S., nine in Ireland). In each, they escalated the conversation from questions about past acts of violence to increasingly specific questions about targets and weapons. The conversations spanned a range of attack types and motives, including school shootings and stabbings, political assassinations, the killing of a health insurance executive, and politically and religiously motivated bombings.
The researchers used the following models in their testing:
- OpenAI – ChatGPT-5.1
- Google – Gemini 2.5 Flash
- Anthropic – Claude Sonnet 4.5
- Microsoft – Copilot (GPT-5)
- Meta AI – Llama 4
- DeepSeek – DeepSeek-V3
- Perplexity – Perplexity Search
- Snapchat – My AI
- Character.AI – PipSqueak
- Replika – Advanced (free)
In one test, OpenAI’s ChatGPT provided maps of a high school campus to a simulated user asking about school shootings. In another, Gemini told a researcher discussing synagogue attacks that “metal shrapnel is typically more lethal,” and in a separate conversation advised a user interested in political assassinations on the best rifles for long-range shooting.
Researchers found Meta AI and Perplexity to be the most obliging, assisting would-be attackers in nearly every scenario, while DeepSeek signed off its advice on selecting rifles with a morbid “Happy (and safe) shooting!”
The CCDH also singled out Character.AI, a platform whose chatbots roleplay as different personas, as “uniquely unsafe,” finding that it actively encouraged violence. Researchers identified seven such cases, including suggestions that a user “beat the crap out of” U.S. Senate Minority Leader Chuck Schumer and that someone “sick of bullies” should “Beat their ass~ wink and teasing tone.”
Several of the companies behind these chatbots responded to the study. Meta told CNN that it had implemented a “fix,” Microsoft said Copilot’s responses had improved with new safety features, and Google and OpenAI said they had deployed new models.
The study arrives as Canadians grapple with OpenAI’s role in the recent Tumbler Ridge shooting in B.C. The family of one of the victims has filed a lawsuit against OpenAI, alleging that the company failed to report the shooter’s account to authorities despite having banned it over violent activity with the chatbot.
