Canada has formally summoned senior representatives of OpenAI to Ottawa following revelations that the company did not escalate internal safety concerns relating to an individual who later carried out a fatal school shooting, according to Artificial Intelligence Minister Evan Solomon.

Earlier this month, an 18-year-old with reported mental health issues shot eight people in a western Canadian town before taking his own life. OpenAI confirmed that it had banned the individual's ChatGPT account last year for policy violations. However, the company said the conduct in question did not meet its internal threshold for referral to law enforcement.

Minister Solomon said he has called on OpenAI's senior safety team in the United States to attend meetings in Ottawa and provide a detailed account of the company's safety protocols, risk assessment standards and escalation procedures. He indicated that the Canadian government intends to scrutinise whether existing corporate safeguards adequately protect public safety as artificial intelligence tools evolve rapidly.

When questioned about possible regulatory consequences, Solomon stated that all options remain under consideration, without elaborating further.

The development places renewed focus on the legal responsibilities of artificial intelligence providers, particularly concerning duty of care, reporting thresholds and cross-border cooperation with domestic authorities. It also underscores mounting international pressure on technology companies to demonstrate governance frameworks robust enough to mitigate foreseeable harm linked to online platforms.