Microsoft 365 Copilot is Here. What Are the Legal Risks of Using It?
Wednesday, January 3rd, 2024
Microsoft 365 Copilot is here. Will it be a big timesaver for your business? Does using it present significant legal risks?
Copilot adds generative AI capability to core Microsoft Office applications, such as Word, Outlook, Excel, Teams, and PowerPoint. It can be used to create, summarize, and analyze content in those applications. For example, it purportedly can summarize what was said in a Teams video conference or in an Outlook email chain.
It became available on November 1, but only to large enterprise customers – those purchasing a minimum of 300 individual Copilot licenses. It operates on top of Microsoft 365. Microsoft charges $30 per user per month for adding Copilot, so the 300-license minimum works out to at least $9,000 per month. Microsoft says it intends to roll out Copilot to smaller Microsoft 365 customers in 2024 but hasn’t set a schedule.
Because my firm is small, I have not been able to try it yet. Still, in theory, it could be a powerful productivity booster for businesses using Microsoft.
To generate output for an individual user, Copilot can draw upon everything in the company’s Microsoft ecosystem to which that user has at least viewing rights. It might do what generally available AI chatbots cannot: produce output that draws upon your own materials. For example, let’s say I want an AI to draft a contract. It would be a game changer if a generative AI could consider contracts I previously drafted in producing its output.
What are the legal risks of using Copilot?
The biggest concern is confidentiality. With many generally available generative AIs, such as ChatGPT, anything you put in a prompt may be used in the AI’s training. That creates a risk that your input could appear in someone else’s output. Also, the AI provider can see your input and output.
Microsoft promises that, with Copilot, your inputs and outputs are kept confidential. It says it will not use your input or output to train its AI for its other customers, and your input will not show up in the output of other Copilot users (at least outside of your company).
But there is a major catch: Microsoft says it captures and may access your Copilot prompts and outputs for 30 days. It operates an abuse monitoring system to review that material for violations of its code of conduct and “other applicable product terms.” Microsoft says customers with special needs involving sensitive, confidential, or legally regulated data can apply to Microsoft for an exemption from this abuse monitoring.
There are two other important caveats. Microsoft says your data may leave the Microsoft 365 service boundary when you allow Copilot to reference public web content via Bing. Microsoft also says your data may leave the service boundary when using plugins with Copilot. Be sure to check with your techies on those issues.
Microsoft also offers copyright-infringement coverage for the output from Copilot, with certain qualifications. The concern is that the AI might produce an output highly similar to something it ingested in training, such as something scraped off the Internet without the content creator’s permission. If that happened, the output might be a copyright infringement.
There are important limitations on the copyright-infringement coverage Microsoft provides for Copilot. Have your legal counsel and technology team check them.
There are other risks to consider. Here are the tips of some of those icebergs:
• Many companies are far too permissive, often unintentionally, in granting Microsoft 365 access privileges to individual employees. Because of this, Copilot could pull into an employee’s output information that the company didn’t intend for that employee to be able to access.
• Carefully review all output to ensure it doesn’t violate confidentiality obligations. For example, Copilot might pull data from documents about Client A when building a sales pitch for Client B.
• While the law is still taking shape, the early returns are that material generated by an AI cannot be anyone’s copyright property. If it’s important to your company that it own the copyright to the output, using Copilot may undercut that ownership.
• There will still be a problem with hallucinations, which occur when an AI states something in its output that sounds authoritative but is wrong.
• Bias or discrimination claims could also arise from using Copilot. All sorts of statutes, regulations, and case law create the basis for such claims in employment and public accommodation settings, such as job applicant screening. Using an AI such as Copilot in those circumstances must conform to those laws.
My biggest concern is whether smaller businesses will have the technical and legal support needed to use Copilot without undue risk. It’s a lot to manage properly.
Written on December 19, 2023
by John B. Farmer
© 2023 Leading-Edge Law Group, PLC. All rights reserved.