OpenAI’s Reinforced Commitment to Data Privacy: Insights from Legal Interact
Legal Interact, a proud partner harnessing OpenAI’s technology, stands at the forefront of innovative solutions in the legal landscape. Today, we aim to shed light on OpenAI’s recently updated data privacy policies and what they mean for our users.
OpenAI’s Two Offerings
1. First-party consumer applications, like the ChatGPT app, and
2. A robust API platform for developers and businesses. This includes powerful models such as GPT-4 and GPT-3.5 Turbo, which businesses worldwide can embed into their services.
Today, we focus on the latter, OpenAI’s developer API platform, and its implications for data privacy.
OpenAI has built its reputation on a bedrock of trust. Their policy is clear: they do not train on any user data or metadata sent through their APIs unless users explicitly opt in. This transparency, coupled with OpenAI’s ceaseless pursuit of fortified security measures, has always given partners like us at Legal Interact a sense of assurance.
Notably, data sent to and received from OpenAI’s API, whether directly or via the Playground, does not contribute to the models’ learning. OpenAI’s models are statically versioned, meaning they are neither retrained nor updated in real time based on API requests.
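To make this concrete, here is a minimal sketch (in Python) of the body of a Chat Completions API request; the model snapshot, prompts, and use case are illustrative. Two points stand out: the request contains no training opt-in field at all, because API data is excluded from training by default, and pinning a dated model snapshot reflects the static versioning described above.

```python
import json

# Sketch of a Chat Completions request body (illustrative values).
# There is no field here to opt in to training: API traffic is
# excluded from training by default, and opting in is a separate,
# explicit account-level choice.
request_body = {
    # Pinning a dated snapshot (e.g. "gpt-3.5-turbo-0613") reflects
    # static versioning: the model is not updated by API requests.
    "model": "gpt-3.5-turbo-0613",
    "messages": [
        {"role": "system", "content": "You are a legal research assistant."},
        {"role": "user", "content": "Summarise the key consent requirements."},
    ],
}

print(json.dumps(request_body, indent=2))
```

Sending this body to the API (with an API key) returns a completion generated by that fixed snapshot; the request itself changes nothing about the model.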
OpenAI CEO, Sam Altman, reinforced this sentiment:
seeing a lot of confusion about this, so for clarity:
openai never trains on anything ever submitted to the api or uses that data to improve our models in any way.
— Sam Altman (@sama) August 15, 2023
The post on X (previously known as Twitter) unequivocally dispels these misconceptions, underscoring the company’s unwavering commitment to data privacy.
Further drilling into the data training aspect: OpenAI sources training data from multiple channels, including publicly available data, licensed data, human reviewers, and data from the OpenAI API until March 1, 2023. It’s important to note that no data submitted after this date becomes part of their training set unless there’s an explicit opt-in.
As a unique offering, OpenAI provides a fine-tuning mechanism. This means organisations can adapt models to more specific tasks. The data used for fine-tuning remains exclusive to that particular organisation and is not employed by OpenAI for training other models. This sort of delineation fosters a sense of proprietary ownership, crucial for businesses like Legal Interact.
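As a sketch of what that fine-tuning mechanism involves in practice, the snippet below prepares training examples in the chat-format JSONL that OpenAI’s fine-tuning endpoint expects; the file name and the example conversation are illustrative. A file uploaded this way stays with the uploading organisation and, per the policy above, is not used to train other models.

```python
import json

# Illustrative fine-tuning examples in chat-format JSONL:
# each line is one JSON object holding a full example conversation.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You are a contract-review assistant."},
            {"role": "user", "content": "Flag the risks in this indemnity clause."},
            {"role": "assistant", "content": "The indemnity is uncapped, which..."},
        ]
    },
]

# Write one JSON object per line -- the JSONL layout the
# fine-tuning file upload expects.
with open("training_data.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")
```

The resulting file would then be uploaded and referenced when creating a fine-tuning job; the customised model that results is available only to the organisation that created it.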
Model outputs, the predictions made based on prior training, are not extracts from the training data. Customers retain rights over these outputs, keeping in line with OpenAI’s Usage Policies and Terms of Use.
Addressing the elephant in the room, data retention: OpenAI retains API inputs and outputs for up to 30 days (for abuse and misuse monitoring), after which they are permanently deleted, barring any legal requirements to retain them.
To conclude, at Legal Interact, our collaboration with OpenAI isn’t just technical but also hinges on the shared values of transparency, integrity, and commitment to data privacy. OpenAI’s reinforced policies bolster our confidence and reflect our mutual goal: to serve our users with utmost trustworthiness.
Find out more about this by calling us on +27 11 719 2000, or emailing us at in**@le***********.com.