OpenAI Commits to Stronger AI Safety Standards for Canada
The Canadian government is demanding more transparency and stricter safety rules from artificial intelligence companies after a horrific mass shooting in Tumbler Ridge, British Columbia, according to Global News.
On Wednesday, Federal AI Minister Evan Solomon met with OpenAI CEO Sam Altman to discuss the role the company's popular chatbot, ChatGPT, played in the lead-up to the attack. The meeting follows the discovery that the shooter, 18-year-old Jesse Van Rootselaar, had been interacting with ChatGPT for months before killing eight people and herself on February 10.
While OpenAI had flagged and banned the shooter’s account in June 2025 for “violent” activity, the company did not alert the Royal Canadian Mounted Police at the time. OpenAI later admitted that its systems failed to prevent the shooter from simply opening a second account to continue using the service.
During the virtual meeting with Minister Solomon, Altman agreed to a series of concrete steps to ensure such an oversight does not happen again. OpenAI has committed to providing a detailed report on new systems designed to identify high-risk users. The company will also establish a direct point of contact with the RCMP to speed up the reporting of potential threats.
Solomon noted that the tragedy “demands answers and stronger safeguards when powerful AI technologies are involved.” As part of the new agreement, the Canadian AI Safety Institute will examine OpenAI’s models and provide technical advice to the government. Furthermore, OpenAI has promised to apply its new, stricter safety standards retroactively. This means the company will go back and review previously flagged cases to see if any other potential threats were missed.
One of the most concerning revelations in this case was the shooter’s ability to evade a permanent ban. Even after her first account was disabled, Van Rootselaar successfully opened a second account, which OpenAI discovered only after police released her name.
To fix this, OpenAI says it is strengthening its detection systems to stop people from evading safeguards. The company is also working with mental health and law enforcement experts to better understand the Canadian context of these interactions.
