API Data Privacy: Protecting User Data Is a Fundamental Mission

API data privacy
At OpenAI, protecting user data is fundamental to our mission. We do not train our models on inputs and outputs through our API.
OpenAI offers its models in two different ways:

(a) First-party consumer applications, like the ChatGPT app
(b) A robust API platform for developers and businesses that includes our most powerful models (GPT-4, GPT-3.5 Turbo, embeddings, fine-tuning, etc.), enabling organizations everywhere to incorporate OpenAI models directly into their products, applications, and services
This page describes our data privacy policies for (b)—our developer API platform and future business service offerings—not for the consumer applications described in (a), which include Plugins designed to connect the ChatGPT web app to third-party applications.

Our commitment
At OpenAI, protecting customer and user data is fundamental to our service offering and mission as a company. Our platform upholds the rigorous data security and privacy standards that organizations expect.

Our commitment:

We do not train on any user data or metadata submitted through our API, unless you as a user explicitly opt in.
We strive to be transparent about our data handling practices and policies.
We are dedicated to maintaining robust enterprise-grade security, and to continually improving our security measures.

Inputs and outputs to our API (directly via API call or via Playground) for model inference do not become part of our models.

Specifically:

We make our models available through our API after they finish training.
Models deployed to the API are statically versioned: they are not retrained or updated in real time with API requests.
Your API inputs and outputs do not become part of the training data unless you explicitly opt in.

Sources of data for training our models may include (A):

Publicly available data.
Data licensed from third-party providers.
Data created by human reviewers.
Data submitted to the OpenAI API before March 1, 2023 (unless organizations opted out).

Our models are not trained on (B):

Data submitted through the OpenAI API after March 1, 2023 (unless explicitly opted in), including inputs, outputs, and file uploads.

Fine-tuning

Fine-tuning a model allows you to adapt it to more specific tasks by training it further on a specialized dataset. Our API allows you to fine-tune certain models using prompt–completion pairs uploaded through training files.

The training file uploaded by your organization is used solely to fine-tune a model for your organization’s use. It is not used by OpenAI, or any other organization, to train other models.
Inputs and outputs for inference by the fine-tuned models are not used to train your models or anyone else’s.
While OpenAI retains ownership of fine-tuned models, only your organization can access the models fine-tuned with your data.
You can delete your organization’s training file or fine-tuned models at any time via an API call.
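As an illustrative sketch of the prompt–completion training-file format described above, a training file can be assembled as JSONL, one JSON object per line. The example pairs and the `to_jsonl` helper below are hypothetical, for illustration only, and are not taken from OpenAI's documentation:

```python
import json

# Hypothetical prompt–completion pairs for a translation task.
examples = [
    {"prompt": "Translate to French: Hello", "completion": " Bonjour"},
    {"prompt": "Translate to French: Goodbye", "completion": " Au revoir"},
]

def to_jsonl(records):
    """Serialize records into JSONL text: one JSON object per line."""
    return "\n".join(json.dumps(r) for r in records)

jsonl_text = to_jsonl(examples)

# Sanity check: each line must parse back into an object
# containing exactly the prompt and completion keys.
for line in jsonl_text.splitlines():
    record = json.loads(line)
    assert set(record) == {"prompt", "completion"}
```

A file in this format would then be uploaded as a training file for fine-tuning (for example, with purpose "fine-tune" via the Files API), and, as noted above, your organization can later delete it via an API call.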

Model outputs
The output generated by our models is a prediction based on patterns learned during training. These outputs are not directly lifted from the training data.

As between OpenAI and its customers, customers retain all rights to their inputs and own all outputs. We don’t restrict customer use of outputs as long as they comply with the OpenAI Usage Policies and the OpenAI Terms of Use.

Compliance
We maintain a security program that implements enterprise-grade security across our products and services. Our compliance coverage includes:

SOC 2 Type 2 compliance against the Trust Services Criteria for security
GDPR
CCPA and other state privacy laws
HIPAA (via Business Associate Agreement). We are able to sign Business Associate Agreements in support of customers’ compliance with the Health Insurance Portability and Accountability Act (HIPAA). Please reach out to our sales team if you have a qualifying use-case.

How can I get a Data Processing Addendum (DPA)?

Complete our DPA form to execute our Data Processing Addendum.

Addressing policy violations

If we identify violations of our Usage Policies, we may ask the customer to make necessary changes. Repeated or serious violations may result in further action, including suspending or terminating the customer’s account.

Footnotes
A. Sources of data for training our models may include (1) data submitted through our first-party ChatGPT web or iOS app (unless individual users turn off chat history) and (2) data submitted through our first-party DALL·E app (unless individual users opt out).

B. Models are also not trained on data submitted through our first-party ChatGPT web and iOS apps when chat history is turned off.
