OpenAI Got Caught Collecting Too Much Data on Canadians — Here’s What Had to Change
OpenAI has made significant changes to how it handles personal data in Canada after a joint investigation by federal and provincial privacy regulators found that the original launch of ChatGPT broke several privacy laws.
The probe involved the federal privacy watchdog along with regulators from Quebec, British Columbia, and Alberta. They found that OpenAI was collecting more personal information than necessary, wasn’t being upfront about how that data was used to train its models, and wasn’t doing enough to address factual errors in its responses.
As part of the resolution, OpenAI has pulled back on the amount of personal and sensitive data used to train its newer models, made it easier for Canadians to find, correct, or delete their information, and committed to being clearer about the privacy risks of using the chatbot.
Privacy Commissioner Philippe Dufresne said the original complaint was justified but considers the matter resolved given the changes OpenAI has made. He framed the investigation as a signal to the broader industry that privacy can’t be an afterthought when launching AI products.
“Addressing the privacy impacts of technologies, such as artificial intelligence, is of utmost importance,” Dufresne said on Wednesday, adding that Canadians shouldn’t have to give up their fundamental right to privacy to benefit from new technology.
The Commissioner’s office said it will keep an eye on OpenAI to make sure the company follows through on its commitments. Regulators also used the occasion to flag what they see as a growing problem: Canada’s privacy laws are overdue for an update and weren’t built with AI in mind.
