The New York Times Just Made ChatGPT Less Private for Everyone

The New York Times is forcing OpenAI to store all ChatGPT user data going forward as the publication escalates its copyright infringement lawsuit against the AI giant.
Back in late 2023, The New York Times sued OpenAI for allegedly using the publication’s content to train its AI technologies without consent. Now, OpenAI is being forced by court order to retain all consumer and enterprise ChatGPT data, including users’ conversations with the popular chatbot, indefinitely as part of the legal proceedings.
“This fundamentally conflicts with the privacy commitments we have made to our users. It abandons long-standing privacy norms and weakens privacy protections,” said OpenAI COO Brad Lightcap.
“We strongly believe this is an overreach by the New York Times. We’re continuing to appeal this order so we can keep putting your trust and privacy first.”
The order covers “all output data” from both ChatGPT and OpenAI API users. According to OpenAI, anyone with a ChatGPT Free, Plus, Pro, or Team subscription and users of the OpenAI API (without a Zero Data Retention agreement) will be affected. OpenAI is even being forced to store users’ deleted ChatGPT conversations and API content that would normally be automatically deleted within 30 days, per the company.
The New York Times hopes that ChatGPT user data will support its claims that the AI chatbot was trained on its copyrighted content. OpenAI explained that any data stored as part of the order will be under legal hold, meaning it can only be used to comply with legal obligations and nothing else.
OpenAI is actively fighting the data preservation order, asking the Magistrate Judge to reconsider it and appealing it before the District Court Judge. Check out OpenAI’s complete response to the order for more information on what’s going on with the lawsuit and how the proceedings may impact ChatGPT users.
NYT is not forcing them to do anything. They do not possess such power. It’s the judge and the courts who are forcing OpenAI to do this.
What an incredible spin you put on this. It's like saying the police didn't charge anyone with a crime; it was the judge.
I am stating literally what is happening. The only spin here is being put by OpenAI and the author of this article on iPhone in Canada.
Neither the police nor judges charge people with crimes nor do they have the power to. The crown is the only body with the power to charge people with crimes, and often the police and judges are not happy with the crown’s decision to prosecute or not to prosecute, but they can’t do anything about it because they don’t have that power.
It’s a legal hold. I am fine with it if it means we will get to the bottom of the question if ChatGPT and others stole copyrighted content.
Who's "we"? Ren and Stimpy?
I mean, we already know they did. They have brazenly admitted as much. Now it’s only a question as to what extent and how extensive the damages are and should be.
There should be no damages. It's absurd to consider this material stolen. Their whole argument relies on judges not understanding that AI training is no different than a person reading something.
Copyrights do more harm than good.
They did, it's not even a question 🤦♂️ Facebook got caught red-handed stealing terabytes of torrented material.
Good for them. Most copyrights are absurd. The whole process should be abolished and reworked so that it actually achieves its intended goals.
It's absurd to pretend anything was stolen. Reading publicly available material is not theft of that material. AI should be trained on copyrighted material, and copyright holders should be ignored.
There is no good reason to consider training an AI any different than a person reading.
If an artificial intelligence reads the New York Times, has "opinions" about its content, and may quote articles sometimes… what does that really answer? If a school class is trained on New York Times articles and then has to write a report on the content, is there a difference?
Well yes… but SHUSH! It would be fun to see the lawyers make the connection.
I mean, it’s a data harvesting, surveillance company. No different than Google or Facebook. No one made it less private. It’s definitionally a company and product without privacy.