OpenAI Opt-In Rules, New Practices for Data Privacy


ChatGPT and natural language processing AIs are the talk of the workplace. You can generate emails, blog posts, even PowerPoint presentations. A major barrier to enterprise use has been OpenAI’s Terms of Use. A major shift in their terms today (the OpenAI opt-in scheme) should remove that barrier for many businesses, leaving behind only those with the most stringent requirements.

Privacy Concerns

When ChatGPT was released, some employers quickly banned it from the workplace. There is certainly some unease and distrust of AI, and there is no doubt that its answers aren’t always correct. From a lawyer’s point of view, though, the genuine concern is data privacy and protection.

Broadly speaking, a business holds a lot of sensitive and confidential information. This is not legal advice, and your organization may be different, but I see three types of legal obligations that can prevent sharing information with OpenAI:

  • Confidentiality
    • When you work with other businesses, or even some discerning customers, you likely agree to keep certain things confidential. For example, you might sign a non-disclosure agreement, or a general contract may include a confidentiality provision. The covered information can be broad: marketing plans, business strategies, contact information, new designs, and so on.
  • Privacy Laws
    • Privacy law is mostly concerned with “personally identifiable information”: full names, email or physical addresses, and any other information that can identify a person. This area is heavily regulated, particularly in the European Union. Your company needs to be up front about its practices in a privacy policy, put in place a variety of controls to ensure security, and follow other procedural requirements.
  • Transparency and Expectations
    • Another regulated area of the law is consumer protection. Consumers are entitled to expect some baseline security practices. For example, if I share my data with a company, I should be able to expect (unless told otherwise) that they will not broadcast my data publicly. Consumers can also expect that the promises you make will be kept, and most major businesses have a privacy policy promising that their use of data is limited in some fashion.

If you look at OpenAI’s previous terms on the Wayback Machine, the biggest gripe is embedded in Section 3(c), which states:

To help OpenAI provide and maintain the Services, you agree and instruct that we may use Content to develop and improve the Services.

Section 3(c) of OpenAI’s Terms of Use, dated February 27, 2023

Let me spell out the problem. A user may enter just about anything into OpenAI’s chatbot. They could feed it a contract with confidential terms and pricing; they could paste in an email that mentions full names and phone numbers; they may even throw in a top-secret document hoping to summarize it or run a question-and-answer session on it. Those are all useful tasks, but you’ve just shared the document with a third party.

In the modern age, this is perfectly natural. Your business can’t run everything itself, so your emails and documents are likely already in the hands of Google or Microsoft, who provide your word processing and email software. Contractors or consultants may get their hands on the data too. Long story short, chances are that third parties already have access to it. However, those third parties are bound by the same legal obligations, and they are only authorized to use the information to help provide their services, whatever those may be.

This is where OpenAI’s terms caused issues. By using OpenAI, you granted OpenAI the right to use data for its purposes, not yours. If OpenAI takes your data and improves its service, that doesn’t help your clients at all. It is also a very nebulous grant that gives OpenAI plenty of room to pass your data to quality assurance staff, run it through other models, send it to third-party developers, and so on. Long story short, OpenAI is a potential leak in your business that can breach all three of the considerations above. Needless to say, the ChatGPT bans start to make some sense.

OpenAI opt-in, better, but not perfect

Today, we have new OpenAI opt-in rules. If we look at the new Terms of Use:

We do not use Content that you provide to or receive from our API (“API Content”) to develop and improve our Services. API Content is only used to provide and maintain our API Services. We may use Content from Services other than our API (“Non-API Content”) to help develop and improve our Services…

If you do not want your Non-API Content used to improve Services, you can opt out by filling out this form

Section 3(c) of OpenAI’s Terms of Use, dated March 1, 2023

OpenAI removed their ability to develop and improve their services using content sent through the API. We’ll talk about the exceptions in a moment, but this plugs the leak described above. OpenAI is suddenly just another third-party provider like Microsoft or Google. Piecing this together with their privacy policy, they will use the data to provide the services, prevent fraud and security issues, comply with the law, and meet other generic obligations.

Limits of OpenAI opt-in, ChatGPT is still a risk

Their privacy policy hasn’t been updated yet, and I have some lingering concerns based on Section 2. I suspect I’m publishing this post before they’ve had the chance to update it, and they’ll do better soon.

  • To provide, administer, maintain, improve and/or analyze the Services;
  • To conduct research, which may remain internal or may be shared with third parties, published or made generally available;
Section 2 of OpenAI’s Privacy Policy, dated September 22, 2022

Let’s come back to the exceptions. Non-API Content includes content from DALL-E and ChatGPT. These are user-friendly applications that non-programmers can use without coding, like talking to a chatbot. ChatGPT is NOT covered by the new API protection, which means that content submitted through ChatGPT can still be used to improve OpenAI’s services. It still presents a risk to your business, at least until you opt out.

ChatGPT’s user-friendliness is what made it accessible to the masses. The API, however, covers basically everything else. OpenAI is likely making the calculation that regular ChatGPT users will give it enough material to train on.

If your company has built an in-house tool, or if you are calling the API from the command line, those uses are covered. In fact, any large-scale business use likely goes through the API anyway for better automation. The real silver lining is that virtually all of the spin-offs you see on the internet that help with small tasks will be acceptable to use in your business. For example, I run a side project at PolitePost.net. It is narrower than an all-purpose chatbot, but suddenly it is far more appropriate to use in the workplace. The reference.legal demo AI contract tool is also under that same umbrella.
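
To make the API/ChatGPT distinction concrete, here is a minimal sketch of what “using it through the API” looks like, based on the openai Python package as it existed at the time of writing; the model name, prompt, and environment variable are illustrative assumptions, not a prescription for any particular tool.

```python
# A minimal sketch of API-based use (the path covered by the new API terms).
# Assumes the `openai` Python package (pre-1.0 interface) and an API key in the
# OPENAI_API_KEY environment variable; model name and prompt are examples only.
import os

import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # example model; use whatever your tool is built on
    messages=[
        {"role": "system", "content": "You rewrite workplace emails politely."},
        {"role": "user", "content": "Rewrite this email so it sounds professional: ..."},
    ],
)

print(response["choices"][0]["message"]["content"])
```

Requests like this fall under the API terms quoted above, whereas the same prompt typed into the ChatGPT web interface does not.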

Actually no, ChatGPT is fine, sort of

Prior to all of this, I was already looking at ways to protect reference.legal users from legal jeopardy. While everyone was talking about the dangers of using ChatGPT, OpenAI already provided an “opt-out” program that virtually no one talked about.

The opt-out form now only applies to ChatGPT and DALL-E: https://docs.google.com/forms/d/e/1FAIpQLScrnC-_A7JFs4LbIuzevQ_78hVERlNqqCPCt3d8XqnKOfdRdQ/viewform. You can read more about it in their FAQ.

This is really important. If you want to use ChatGPT in your organization, you should fill out the opt-out form, because the new OpenAI opt-in rules only apply to the API.

Even so, the process is imperfect. When I opted out for reference.legal, the process felt extremely manual, and you only get an email confirmation. That might be fine for a small business like mine, but it is certainly not great for a large organization. I imagine my colleagues in the European Union would want something much more substantial, but it is a good step nonetheless.

OpenAI opt-out confirmation email

Conclusion

OpenAI’s previous Terms of Use set the bar low for data privacy compliance and created risks for organizations that chose to use ChatGPT. As of March 1st, the new OpenAI opt-in rules help limit that risk, but only if you’ve opted out. Even then, the arrangement is imperfect.

If you are a large organization, I highly recommend looking at Microsoft Azure OpenAI Service, which gives your business access to OpenAI through Microsoft. Microsoft will have enterprise-ready agreements that are much friendlier to large organizations, while OpenAI is doing the bare minimum to keep up. Check out our earlier article on how to apply for Microsoft Azure OpenAI Service.
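
To illustrate how little changes on the technical side, here is a hedged sketch of pointing the same openai Python package at an Azure OpenAI resource instead of OpenAI directly; the resource name, deployment name, and API version below are placeholders, and the actual values depend on what Microsoft provisions once your application is approved.

```python
# A sketch of routing a similar request through Microsoft Azure OpenAI Service
# instead of OpenAI directly, using the same `openai` package (pre-1.0 interface).
# The resource name, deployment name, and API version are placeholders; use the
# values Microsoft provisions for your approved Azure OpenAI resource.
import os

import openai

openai.api_type = "azure"
openai.api_base = "https://YOUR-RESOURCE-NAME.openai.azure.com/"  # placeholder
openai.api_version = "2022-12-01"  # placeholder; check your Azure resource docs
openai.api_key = os.environ["AZURE_OPENAI_API_KEY"]

response = openai.Completion.create(
    engine="YOUR-DEPLOYMENT-NAME",  # Azure uses a deployment you create, not a raw model name
    prompt="Rewrite this email so it sounds professional: ...",
    max_tokens=200,
)

print(response["choices"][0]["text"])
```

The code is nearly identical; what changes is the contract behind it, which is exactly the point for large organizations.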

