Eight months before the Tumbler Ridge mass shooting, OpenAI knew something was wrong. The company’s automated review system had flagged Jesse Van Rootselaar’s ChatGPT account for interactions involving scenarios of gun violence. Roughly a dozen employees were aware. Some advocated contacting police. Instead, OpenAI banned the account, but didn’t refer it to law enforcement because it didn’t meet the “threshold required” at the time.
On Feb. 10, Van Rootselaar killed eight people (her mother, her 11-year-old half-brother and six others at Tumbler Ridge Secondary School) before dying of a self-inflicted wound.
This case is not simply about one company’s misjudgment. It exposes the absence of any Canadian legal framework for assigning responsibility when an AI company possesses information that could prevent violence.
As a researcher in health ethics and AI governance at Simon Fraser University, I study how algorithmic systems reshape decision-making in high-stakes settings. The Tumbler Ridge tragedy sits squarely at this intersection: a private corporation made a clinical-style risk assessment it was never equipped to make, in a legal environment that gave it no guidance.
The digital confessional problem
Generative AI chatbots are not social media. Social media functions as a public square where posts can be monitored and flagged by other users. Chatbot interactions are private, intimate and designed to be accommodating. Users routinely disclose fears, fantasies and violent ideations to systems engineered to respond with conversational warmth.
In clinical practice, this kind of disclosure triggers a well-established duty. The Tarasoff principle, adopted across Canadian provinces through mental health legislation, imposes upon therapists a duty to warn if they determine that a patient poses a credible threat to an identifiable person, even if it means breaching confidentiality. But that duty rests on the clinical judgment of trained professionals who understand the difference between ideation and intent.
Arguably, OpenAI tried to mirror this clinical standard. But the people making these assessments are software engineers and content moderators, not forensic psychologists. The company itself acknowledged the tension, citing the risks of “over-enforcement” and the distress of unannounced police visits for young people.
The real question is not whether OpenAI’s reasoning was defensible in isolation. It’s whether a private corporation should be making this determination at all.
A vacuum where legislation should be
Federal AI Minister Evan Solomon, who is set to meet with OpenAI representatives on Feb. 24 about this issue, said on Feb. 21 that he was “deeply disturbed” by the revelations, adding that the federal government is reviewing “a suite of measures” and that “all options are on the table.” But those options remain undefined because the legislative tools that would have enabled them no longer exist.
The Artificial Intelligence and Data Act, embedded in Bill C-27, was supposed to be Canada’s answer to AI regulation. The Online Harms Act (Bill C-63) would have addressed harmful digital content. Both died on the order paper when Parliament was prorogued in January 2025.
What remains is a voluntary code of conduct with no legal force and no consequences for non-compliance. When OpenAI flagged Van Rootselaar’s account, its only obligation was to its own internal policy. Banning the account resolved the company’s liability while leaving a person expressing violent ideations disconnected from any intervention pathway.
Canada’s privacy law compounds the problem. The Personal Information Protection and Electronic Documents Act does contain an emergency exception: section 7(3)(e) permits disclosure without consent “to a person who needs the information because of an emergency that threatens the life, health or security of an individual.” But this provision was drafted for clear-cut crises, not for the probabilistic threat indicators that AI chatbot interactions generate. For a foreign corporation navigating this ambiguity, uncertainty favours inaction.
What Canada needs now
Canada’s next attempt at digital governance must recognize that human-to-AI interactions are fundamentally different from social media posts. Three elements are essential:
- Binding legislation with clear legal thresholds for when AI companies must refer flagged interactions to authorities. These thresholds must be developed with mental health professionals, law enforcement and privacy experts, not left to individual corporations.
- An independent digital safety commission as a third-party triage body. When an AI company identifies severely concerning interactions, it should refer the case to trained threat-assessment professionals rather than making the call internally or triggering an immediate armed police response.
- Modernized privacy legislation that provides explicit legal clarity for AI-specific disclosure, resolving the ambiguity that currently rewards doing nothing.
At the AI summit that took place in New Delhi from Feb. 16 to 20, 86 countries, including Canada, pledged to promote “safe, trustworthy and robust” AI. No concrete commitments followed. OpenAI’s Sam Altman stressed the urgency of international AI regulation and proposed an international body for AI safety norms modelled on the International Atomic Energy Agency, an irony not lost on anyone following the Tumbler Ridge revelations.
Minister Solomon says all options are on the table. Families of shooting victims, survivors and a devastated community in Tumbler Ridge are living with the cost of leaving regulation options open for too long.
The post “What the Tumbler Ridge tragedy reveals about Canada’s AI governance vacuum” by Jean-Christophe Bélisle-Pipon, Assistant Professor in Health Ethics, Simon Fraser University was published on 02/24/2026 by theconversation.com