US Court Orders OpenAI to Preserve All ChatGPT User Chats in NYT Lawsuit

A U.S. court has ordered OpenAI to preserve all ChatGPT user conversations, including deleted ones, amid a copyright lawsuit by The New York Times. OpenAI opposes this as a "privacy nightmare," arguing it erodes user trust and complicates data policies. The ruling could set precedents for AI data handling worldwide.
Written by Tim Toole

In a landmark ruling that underscores the growing tensions between artificial intelligence innovation and legal accountability, a U.S. federal court has mandated that OpenAI preserve all user conversations with its ChatGPT platform, including those previously deleted or marked as temporary. This order stems from an ongoing copyright infringement lawsuit filed by The New York Times against OpenAI and Microsoft, accusing the AI company of using copyrighted material to train its models without permission. The directive, issued by Magistrate Judge Ona T. Wang in the Southern District of New York, requires OpenAI to “preserve and segregate all output log data” from ChatGPT, effectively overriding user deletion requests and standard data retention policies.

The implications for user privacy are profound, as millions of ChatGPT interactions—ranging from casual queries to sensitive personal disclosures—must now be retained indefinitely. OpenAI, which processes billions of interactions monthly, had previously assured users that conversations in “temporary chat” mode or those explicitly deleted would be purged after 30 days. However, the court’s order, detailed in an Ars Technica report from June 2025, forces the company to maintain these logs for potential evidentiary use in the litigation, raising alarms about data security and consent.

The Privacy Nightmare Unfolding

OpenAI has vehemently opposed the ruling, arguing in court filings that it constitutes a “privacy nightmare” for its hundreds of millions of users. Company executives, including CEO Sam Altman, have publicly warned that treating AI chats like public therapy sessions erodes trust in the technology. In a recent interview highlighted by Hindustan Times, Altman compared current AI privacy standards to “having therapy sessions in public,” emphasizing that deleted chats could still be accessed for legal reasons. This stance aligns with broader industry concerns, as developers building on OpenAI’s API now face challenges in honoring their own data deletion promises.

The order extends to all tiers of ChatGPT users—free, Plus, Pro, and Team—meaning even anonymized or API-driven interactions are subject to preservation. As reported in a HackerNoon analysis, this could jeopardize compliance with global regulations like the EU’s GDPR, which mandates data minimization and user rights to erasure. Legal experts suggest the ruling sets a precedent for how courts might treat AI-generated data in future disputes, potentially forcing other tech giants to rethink their logging practices.

Industry Ripples and User Backlash

Public sentiment, as gauged from posts on X (formerly Twitter), reflects widespread unease. Users and developers have expressed frustration over the inability to truly delete sensitive data, with some likening it to permanent digital surveillance. One prominent post noted that apps using OpenAI’s API “simply cannot honor” retention policies anymore, amplifying fears of data breaches or misuse in unrelated legal contexts. This backlash is echoed in a TechRadar piece from June 2025, which advises treating ChatGPT “less like a therapist and more like a coworker who might be wearing a wire.”

For industry insiders, the ruling highlights a critical vulnerability in AI deployment: the clash between scalable data practices and litigation demands. OpenAI’s fight against the order, as covered by ADWEEK, underscores the company’s push for more granular controls, such as toggles for anonymous modes or alerts for preserved conversations. Yet, without appellate relief, this could chill innovation, as startups hesitate to integrate AI amid fears of indefinite data hoarding.

Looking Ahead: Legal and Ethical Horizons

As the lawsuit progresses—now over 17 months old, per insights from Mashable—analysts predict ripple effects across the sector. A Medium post from July 2025 details how the May 13 ruling mandates segregation of logs, potentially straining OpenAI’s infrastructure and costs. Ethically, it prompts questions about informed consent: Should users be notified upfront that their “deleted” chats might endure for court purposes?

Ultimately, this case may catalyze regulatory reforms, pushing for AI-specific privacy laws that balance innovation with user rights. For now, OpenAI’s predicament serves as a cautionary tale, reminding the tech world that in the age of AI, no conversation is truly ephemeral when the courts come calling.
