Canada Confronts OpenAI Over ChatGPT Safety Protocols Amid Mass Shooting Concerns
The Canadian government plans to question OpenAI about its safety protocols after it emerged that the company had not alerted authorities about an account linked to a suspected mass shooter. OpenAI had banned the account for policy violations but judged that the breaches did not meet its criteria for notifying law enforcement.
Federal Minister Evan Solomon is convening OpenAI's top safety officials for a meeting in Ottawa to gain a clearer understanding of the company's protocols and how it handles potential threats. Solomon said he is seeking transparency to reassure Canadians about the safety measures governing AI technologies.
The move is part of a broader effort by the Liberal government to address online harms. An earlier attempt to legislate against digital hate was shelved amid criticism that it was overly broad, but the government plans to renew those efforts this year. Solomon said regulation of AI chatbots could take a range of forms.