Canadian officials claim OpenAI violated federal and provincial privacy laws

Engadget · May 6, 2026, 9:24 PM

Key takeaways

  • Philippe Dufresne, the Privacy Commissioner of Canada, has found that OpenAI was "not compliant with" Canadian federal and provincial privacy laws in the training of its AI models.
  • Warnings in ChatGPT note that interactions with the AI can be used in training, but third-party data OpenAI has purchased or scraped also includes personal details people are likely unaware of.
  • OpenAI has agreed to make its data export tools easier to understand and use, and to better explain how users can challenge the accuracy of the information ChatGPT provides.

Philippe Dufresne, the Privacy Commissioner of Canada, has found OpenAI was "not compliant with" Canadian federal and provincial privacy laws in the training of its AI models. Following an investigation, Dufresne and his counterparts in Alberta, Quebec and British Columbia say OpenAI's approach to things like data collection and consent ran afoul of multiple laws, including Canada's Personal Information Protection and Electronic Documents Act (PIPEDA), which governs how companies collect and use personal information during the normal course of business.

The commissioners participating in the investigation identified multiple privacy issues with OpenAI's approach, including that the company "gathered vast amounts of personal information without adequate safeguards to prevent use of that information to train its models," and that it failed to obtain consent to collect and use that personal information in the first place. Warnings in ChatGPT note that interactions with the AI can be used in training, but third-party data OpenAI has purchased or scraped also includes personal details people are likely unaware of. According to a summary of the investigation's findings, the commissioners also took issue with the fact that ChatGPT users have no way to access, correct or delete that data, and with OpenAI's lackluster attempts to acknowledge the inaccuracy of some of ChatGPT's responses.

Canada's Privacy Commissioner says OpenAI was open and responsive during the investigation, and has already committed to making multiple changes to ChatGPT to comply with Canadian privacy laws. OpenAI has retired earlier models that violated Canadian privacy regulations, and now uses "a filtering tool to detect and mask personal information (such as names or phone numbers) in publicly accessible internet data and licensed datasets used to train its models," the Commissioner says. The company has also agreed, within the next three months, to add a new notice to the signed-out version of ChatGPT explaining that chats can be used for training and that sensitive information shouldn't be shared; and, within the next six months, to make its data export tools easier to understand and use, and to better explain how users can challenge the accuracy of the information ChatGPT provides.
