Canada's Call for AI Accountability: The OpenAI Controversy

Canadian ministers have threatened OpenAI with legislative action unless it strengthens its safety protocols, after a school shooting was linked to a suspect whose account the company had banned. Ottawa is demanding stronger safeguards to prevent future tragedies and is pressing AI companies like OpenAI to act proactively.


Devdiscourse News Desk | Updated: 25-02-2026 23:42 IST | Created: 25-02-2026 23:42 IST

Canadian officials have issued a stark warning to OpenAI, urging the company to immediately bolster its safety measures after a school shooting was linked to a user account banned for policy violations.

Emphasizing the urgency, the ministers warned that legislative intervention will follow if OpenAI does not implement changes swiftly, according to a statement by Justice Minister Sean Fraser. The federal government aims to prevent tragedies similar to the one involving Jesse Van Rootselaar, the suspect in a mass shooting in British Columbia.

OpenAI, the company behind ChatGPT, faces scrutiny as Ottawa seeks concrete action to tackle online hate. With Prime Minister Mark Carney stressing a thorough exploration of preventative measures, the push for more targeted legislation continues, highlighting the delicate balance between technology regulation and public safety.