OpenAI has promised to strengthen its safety protocols after failing to notify police about a concerning ChatGPT account belonging to the perpetrator of the Feb. 10 mass shooting in Tumbler Ridge, B.C.
In an open letter to the Canadian government (via Politico), Ann O’Leary, vice president of global policy at OpenAI, acknowledged shortcomings in how it handles law enforcement reports and outlined changes it will make accordingly.
These steps include establishing a point of contact so Canadian police can easily exchange information about potentially dangerous users, and adopting a policy to notify authorities when it detects “imminent and credible” threats in ChatGPT conversations, even if the user doesn’t mention “a target, means, and timing of planned violence.”
These announcements come after OpenAI admitted to discovering a second ChatGPT account belonging to the Tumbler Ridge shooter, 18-year-old Jesse Van Rootselaar. After her first account was banned last June due to prompts related to gun violence, Van Rootselaar made an alternate account, which OpenAI says wasn’t detected until after police revealed her identity. However, OpenAI drew intense criticism from the Canadian government for opting not to report the first account last year, even though employees reportedly urged it to do so.
The company had defended this decision by arguing that Van Rootselaar’s account hadn’t demonstrated a credible risk of serious harm to others. It also says there’s a danger of “over-enforcement” in reporting cases, which could distress young people and even amount to an invasion of privacy.
Following government discussions and threats to regulate chatbots, though, OpenAI is now promising to take a tighter stance on all of this.
“With the benefit of our continued learnings, under our enhanced law enforcement referral protocol, we would refer the account banned in June 2025 to law enforcement if it were discovered today,” wrote O’Leary in the letter.
