Tumbler Ridge, a remote British Columbia town of just 2,400 people, remains in shock after one of Canada’s deadliest mass shootings on February 10, 2026. Eighteen-year-old Jesse Van Rootselaar killed eight people before dying by suicide, devastating the tight-knit community.

The tragedy unfolded in two stages. First, Van Rootselaar shot and killed her 39-year-old mother, Jennifer Jacobs (also reported as Jennifer Strang), and 11-year-old half-brother, Emmett Jacobs, at their family home. She then drove to Tumbler Ridge Secondary School — which she had briefly attended years earlier — armed with two firearms. Inside the school, she fatally shot five students (three girls and two boys aged 12–13) and 39-year-old education assistant Shannda Aviugana-Durand. At least 27 others were injured. Van Rootselaar then turned the gun on herself. RCMP have confirmed she acted alone; the motive is still under investigation.

The suspect had a documented history of mental-health issues. Police had prior contact with her family, including firearms that were seized years earlier and later returned. Online activity reportedly included Roblox shooting simulations and posts about mass shooters.

What has stunned many is OpenAI’s involvement. In June 2025 — eight months before the attack — the company’s automated abuse-detection systems flagged an account linked to Van Rootselaar. Over several days, the user prompted ChatGPT with scenarios involving gun violence and fictional violent situations, including references that raised internal red flags.

According to reports from The Wall Street Journal and Bloomberg, the activity triggered a review involving roughly a dozen OpenAI employees. Some staff advocated alerting Canadian authorities (RCMP), citing possible real-world risk. Ultimately, OpenAI banned the account for violating its strict usage policy against misuse “in furtherance of violent activities.” However, the company decided against contacting police, determining the content did not meet its high threshold for reporting: a “credible and imminent risk of serious physical harm.”

OpenAI contacted the RCMP only after the shooting became public, sharing details on its own initiative and pledging full cooperation. A spokesperson said: “Our thoughts are with everyone affected by the Tumbler Ridge tragedy. We proactively reached out to the Royal Canadian Mounted Police with information on the individual and their use of ChatGPT.”

The case has ignited fresh debate about the responsibilities of AI companies. OpenAI maintains that lowering its reporting bar could flood authorities with false positives, erode user privacy, and discourage legitimate creative or therapeutic conversations. Critics argue that repeated violent ideation — especially from a young user with known mental-health flags — warranted earlier intervention.

While AI moderation tools are improving, this tragedy underscores their limits. Prevention ultimately depends on coordinated mental-health support, law enforcement, and community vigilance. As the RCMP investigation continues and vigils honour the victims, questions linger about whether tech platforms can — or should — do more when warning signs appear in private chats.