OpenAI Adds Privacy Controls to ChatGPT
Written by Staff

OpenAI has added privacy controls to ChatGPT as the company faces increased scrutiny over its use of user data.

Large-scale AI models rely on massive quantities of data for training and fine-tuning. That reliance has raised questions about how user data is handled, with Italy's data protection authority going so far as to temporarily ban ChatGPT until its concerns are addressed. The ban could serve as a template for the rest of the EU, making it a major obstacle for OpenAI to overcome.

OpenAI has now responded by adding privacy controls to its AI chatbot, announcing the changes in a blog post:

    We’ve introduced the ability to turn off chat history in ChatGPT. Conversations that are started when chat history is disabled won’t be used to train and improve our models, and won’t appear in the history sidebar. These controls, which are rolling out to all users starting today, can be found in ChatGPT’s settings and can be changed at any time. We hope this provides an easier way to manage your data than our existing opt-out process. When chat history is disabled, we will retain new conversations for 30 days and review them only when needed to monitor for abuse, before permanently deleting.

The company says it is also working on a ChatGPT Business subscription that will give businesses even more control over how their data is used:

    We are also working on a new ChatGPT Business subscription for professionals who need more control over their data as well as enterprises seeking to manage their end users. ChatGPT Business will follow our API’s data usage policies, which means that end users’ data won’t be used to train our models by default. We plan to make ChatGPT Business available in the coming months.
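
For readers curious what following "our API's data usage policies" means in practice, below is a minimal sketch (not from OpenAI's announcement) of a request made through the openai Python package as it existed in early 2023. Under the policy quoted above, data submitted through the API is not used for model training by default; the environment variable name, model choice, and prompt here are illustrative.

    import os
    import openai

    # Read the API key from an environment variable (variable name is illustrative).
    openai.api_key = os.environ["OPENAI_API_KEY"]

    # A standard chat completion request. Per the policy quoted above, data sent
    # through the API is not used to train OpenAI's models by default.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # model choice is illustrative
        messages=[{"role": "user", "content": "Summarize these meeting notes."}],
    )

    print(response["choices"][0]["message"]["content"])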

This business tier is particularly important if ChatGPT is to keep being used in business and enterprise settings. There have already been incidents of sensitive data leaking via ChatGPT, such as when a Samsung employee entered proprietary semiconductor data into the chatbot.

OpenAI says it will also roll out an export function that will give users insight into exactly how their data is being used.
