The family of a young girl critically injured in a mass school shooting in Canada has filed a civil lawsuit against OpenAI, alleging the company failed to alert authorities despite being aware of warning signs related to the suspected attacker.
The lawsuit was filed after 12-year-old Maya Gebala was severely wounded during a shooting at a school in Tumbler Ridge, British Columbia, on February 10. Gebala was shot in the neck and head and remains hospitalized with serious injuries.
One of Canada’s Deadliest School Shootings
The incident left eight people dead, including five young children and the suspect’s mother, making it one of the deadliest mass shootings in Canadian history.
The suspected attacker, identified as 18-year-old Jesse Van Rootselaar, is alleged to have discussed violent scenarios with ChatGPT in the months leading up to the attack.
According to the lawsuit, the suspect opened a ChatGPT account before turning 18, which is permitted with parental consent. However, the plaintiffs claim that no age verification was conducted during registration.
Lawsuit Alleges Warning Signs Were Ignored
The civil suit was filed by Maya’s mother, Cia Edmonds, who claims the suspect treated the chatbot as a “trusted confidante” and described potential gun violence scenarios over several days in late spring or early summer of 2025.
The lawsuit alleges that twelve OpenAI employees flagged the conversations internally, identifying them as indicating a potential risk of serious harm to others and recommending that Canadian law enforcement be informed.
However, according to the legal filing, the recommendation to notify authorities was rejected, and the only action taken by the company was to ban the suspect’s account in June 2025.
OpenAI has previously stated that it did not alert police because the conversations did not meet the company’s internal threshold for a credible or imminent threat of serious physical harm.
Second Account Allegedly Allowed Continued Planning
The lawsuit further claims the suspect was able to create a second ChatGPT account after the initial ban, enabling continued discussions about violent scenarios.
The plaintiffs argue that the company had “specific knowledge of the shooter’s long-range planning of a mass casualty event” but failed to take further action to prevent potential harm.
According to the lawsuit, Maya Gebala suffered a catastrophic brain injury after being shot three times while attempting to lock a library door in an effort to prevent the shooter from entering.
OpenAI Responds to the Allegations
In a statement, OpenAI described the incident as an “unspeakable tragedy”, expressing condolences to the victims, their families, and the affected community.
The company stated it is committed to improving safeguards and working with authorities to prevent similar incidents.
“OpenAI remains committed to working with government and law enforcement officials to make meaningful changes that help prevent tragedies like this in the future,” a spokesperson said.
Policy Changes and Government Engagement
Following the incident, OpenAI CEO Sam Altman held a virtual meeting on March 4 with federal AI minister Evan Solomon and British Columbia Premier David Eby.
According to reports, Altman pledged to strengthen company protocols for reporting potentially harmful interactions to law enforcement and to apologize to the Tumbler Ridge community.
In a letter sent to Canadian officials on February 26, OpenAI said it had already begun implementing several changes, including:
- Consulting mental health and behavioural experts to assess high-risk cases
- Making criteria for notifying police more flexible
- Strengthening systems to detect attempts to circumvent safety safeguards
- Establishing a direct communication channel with Canadian law enforcement
The company stated that under the updated guidelines, the suspect’s account activity would likely have been reported to authorities.
Government Calls for Clear Implementation Plans
Despite these commitments, Canada’s AI minister Evan Solomon noted that officials are still waiting for detailed implementation plans.
While acknowledging the company’s willingness to improve safety procedures, Solomon said lawmakers have not yet seen concrete details on how the proposed changes will be carried out in practice.
The lawsuit raises broader questions about the responsibilities of artificial intelligence companies in identifying and responding to potentially dangerous behavior online, particularly as AI tools become more widely used worldwide.
