ChatGPT made AI accessible to the masses, but what about an enterprise ChatGPT? Aside from skepticism, the other major hurdle to enterprise-wide adoption has been OpenAI’s privacy policy. Microsoft has a solution.
Almost immediately after ChatGPT’s release, tech-savvy folks began using it in their organizations and jobs. It’s far from perfect, but if you want it to rewrite an email or sketch a first draft, ChatGPT can be a time saver in any white-collar job. It is a tool organizations should embrace, but they should also be aware of the risks of using ChatGPT in the workplace.
Overview of Microsoft’s Enterprise ChatGPT
In a previous post, I walked through the application process to get access to Microsoft Azure OpenAI Service. This time, we’re going to focus on why it’s better, primarily its terms and conditions. We can summarize the benefits as:
- Microsoft’s ability to use your data is far more restricted than OpenAI’s, with an opt-out to remove even those limited permitted uses
- Azure is designed to work in specific regions, so you have the option to restrict cross-border transfers of your data
- You can leverage your existing relationship with Microsoft
Keep in mind, however, that the two services, while sharing the same features, are not the same. OpenAI makes it incredibly easy for businesses and individuals to engage with ChatGPT: you just need a web browser, and you can begin chatting with an AI. Azure OpenAI Service instead exposes an application programming interface (API). You will need to develop your own tools and build your own webpage to run your own enterprise ChatGPT. While not overly difficult, this requires someone familiar with the Azure Portal and programmers who know how to work with an API. This solution is really geared toward enterprises that can spare those resources.
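To make “interacting with the API” concrete, here is a minimal sketch of the REST request an in-house chat front end would assemble. The resource name, deployment name, and API version are hypothetical placeholders; substitute the values from your own Azure OpenAI resource and consult Microsoft’s documentation for current API versions.

```python
import json

# Hypothetical placeholders -- use your own Azure OpenAI resource values.
RESOURCE = "my-company-openai"      # name of your Azure OpenAI resource
DEPLOYMENT = "gpt-35-turbo"         # name you gave your model deployment
API_VERSION = "2023-03-15-preview"  # check Azure docs for current versions


def build_chat_request(messages, api_key):
    """Assemble the URL, headers, and JSON body for a chat completion call.

    This only builds the request (no network call), to show the shape of
    what your tooling needs to send to the Azure endpoint.
    """
    url = (
        f"https://{RESOURCE}.openai.azure.com/openai/deployments/"
        f"{DEPLOYMENT}/chat/completions?api-version={API_VERSION}"
    )
    headers = {"api-key": api_key, "Content-Type": "application/json"}
    body = json.dumps({"messages": messages})
    return url, headers, body


url, headers, body = build_chat_request(
    [{"role": "user", "content": "Rewrite this email to be more formal."}],
    api_key="YOUR-KEY",
)
```

Your developers would send this request with any HTTP client and wrap it in whatever internal chat interface suits your organization.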
This is NOT legal advice. We’re going to walk through some broad strokes of Microsoft’s privacy practices (as they currently exist), but you should consult a professional to determine whether Microsoft Azure OpenAI is suitable for you.
The Documents
Microsoft does not make it easy to find its OpenAI terms. After poking around the internet and reading through various Microsoft blog posts, I’ve consolidated them into five key documents:
- Informal Overview: https://learn.microsoft.com/en-us/legal/cognitive-services/openai/data-privacy
- General Privacy Practices (Data Protection Addendum, frequently updated): https://www.microsoft.com/licensing/docs/view/Microsoft-Products-and-Services-Data-Protection-Addendum-DPA
- OpenAI Specific Terms (scroll down to Azure OpenAI Service): https://www.microsoft.com/licensing/terms/productoffering/MicrosoftAzure/MCA#ServiceSpecificTerms
- General Terms (important, but not for this discussion): https://azure.microsoft.com/en-ca/support/legal/subscription-agreement/
- Order Form (the document your business signed, which may contain terms specific to your business): no URL
What does Microsoft do differently?
Existing relationship
Microsoft provides Windows and Office to some of the most regulated industries. There’s a good chance your organization already has an Order Form with Microsoft that links to its General Terms and General Privacy Practices. While Azure OpenAI Service falls outside some of those General Terms because it is classified as a “Limited Access Service”, your existing relationship should make Microsoft more attractive than OpenAI. For example, you may have an enhanced support package that gives you 24/7 access to live support staff. If you have negotiated specific privacy practices, that is even better.
Permitted use of your data
Unlike OpenAI, Microsoft has clearly stated its privacy practices. The Informal Overview is helpful, but most of our discussion centers on the OpenAI Specific Terms, which provide the contractual language Microsoft is bound by. To begin, Microsoft can only make limited use of your data:
Microsoft will process and store Customer Data submitted to the service, as well as Output Content, for purposes of (1) monitoring for and preventing abusive or harmful uses or outputs of the service; and (2) developing, testing, and improving capabilities designed to prevent abusive use of and/or harmful outputs from the service.
Azure OpenAI Service Specific Terms, retrieved March 28, 2023
OpenAI has only recently pivoted away from using customer data to improve its products. In an earlier post about OpenAI’s terms, our major concern was that your business’s confidential information might be used to train OpenAI’s model and could (theoretically) reappear in someone else’s response. In a worst-case scenario, this could become a very serious breach. Microsoft has ruled that out: Customer Data is only used to monitor the service and to improve systems that prevent abuse. In short, Microsoft uses it purely for content moderation. Out of the box, your exposure to a privacy breach is far lower.
Data sovereignty
Microsoft also takes care of data sovereignty issues, where you are obligated to keep your data within a particular geographic region. While spinning up an Azure OpenAI resource, you are given the option to select the region where your data is kept. Furthermore, Microsoft specifically calls out its practices in the European Economic Area: Customer Data will remain within the European Economic Area even for content moderation purposes.
In both cases, for customers who have deployed Azure OpenAI service in the European Economic Area, the authorized Microsoft employees will be located in the European Economic Area.
Azure OpenAI Service Specific Terms, retrieved March 28, 2023
Opt-out of any use by Microsoft
Even better, Microsoft offers a Modified Content Filtering and/or Abuse Monitoring option. Though the terms themselves are not specific, the Informal Overview explains how a business can completely opt out of any use of its data by Microsoft:
Customers who meet Microsoft’s Limited Access eligibility criteria and have a low-risk use case can apply for the ability to opt-out of both data logging and human review process.
Data, privacy, and security for Azure OpenAI Service, dated Jan 30, 2023
You can initiate the process at: https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xURE01NDY1OUhBRzQ3MkQxMUhZSE1ZUlJKTiQlQCN0PWcu.
How is this different from OpenAI?
OpenAI’s offering is simply less mature. It’s geared toward small businesses and individuals. If you aren’t prepared to set up your own Azure OpenAI Service resource and don’t have IT staff familiar with the Azure Portal, you will end up using something like ChatGPT.
OpenAI updated its privacy policy in March 2023, proudly stating that it will no longer use customer data submitted through its API to train its models. The big caveat is that this exclusion does not apply to ChatGPT or DALL·E. I suspect most businesses opting for OpenAI will be using ChatGPT, not the APIs, so the risk that your confidential information may show up in someone else’s output remains a possibility.
To be fair, OpenAI has an opt-out process as well: https://docs.google.com/forms/d/e/1FAIpQLScrnC-_A7JFs4LbIuzevQ_78hVERlNqqCPCt3d8XqnKOfdRdQ/viewform. Having gone through the process myself, I don’t think it is well documented. If your business is audited or accredited, OpenAI’s incredibly informal email response is flimsy proof that may not satisfy your auditors.
OpenAI doesn’t provide any assurances around data sovereignty either, and it doesn’t have the same support or sophistication as Microsoft.
What other risks are there for enterprise ChatGPT?
In my opinion, Microsoft does a great job of addressing the major concerns from a privacy perspective. The caveat in its OpenAI Specific Terms is the following provision:
Customer is responsible for responding to any third-party claims regarding Customer’s use of the Azure OpenAI Service in compliance with applicable laws (including, but not limited to, copyright infringement or other claims relating to Output Content output during Customer’s use of the Azure OpenAI Service).
Azure OpenAI Service Specific Terms, retrieved March 28, 2023
What this provision says is that Microsoft isn’t taking any responsibility for the output of GPT. For example, if GPT responds with an excerpt from Harry Potter, Microsoft is saying that is the customer’s responsibility. Since GPT can output factually incorrect or copyrighted information, you should pay attention and screen its responses. The risk is highly dependent on your usage: if you use GPT to rewrite some emails, the exposure is limited; if you publish the content on your company blog, that is an entirely different matter.
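One practical way to act on that responsibility is to gate model output behind a human-review step before anything is published. The sketch below is a hypothetical, minimal example (the denylist and length threshold are placeholder heuristics, not a real compliance tool); a production pipeline would use proper plagiarism and fact-checking tooling.

```python
# Minimal sketch of a pre-publication review gate for model output.
# BLOCKED_PHRASES and the length threshold are hypothetical examples --
# a real pipeline would use dedicated plagiarism/moderation tooling.

BLOCKED_PHRASES = ["Harry Potter"]  # hypothetical denylist of risky content


def needs_human_review(output_text: str) -> bool:
    """Return True if the output should go to a human before publication."""
    lowered = output_text.lower()
    if any(phrase.lower() in lowered for phrase in BLOCKED_PHRASES):
        return True
    # Longer outputs are more likely to embed copied passages verbatim,
    # so route them to a reviewer as well.
    return len(output_text.split()) > 500
```

The point is not the specific checks but the workflow: since the contract puts output risk on you, low-stakes uses (rewriting an email) can pass straight through, while anything destined for publication gets flagged for a human.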
Conclusion
We’re still in the early days of artificial intelligence. I think it’s an amazing technology that businesses should adopt immediately. However, it isn’t without legal risk. Microsoft may not offer exactly the same product as OpenAI, but it understands the needs of enterprises and has taken some major steps to address them. If your business is considering using GPT, you should take a look at both companies’ offerings and their terms. Do you need something easy to use, or do you want an enterprise ChatGPT?