Published on November 16, 2023, 4:56 pm

Microsoft Expands Policy to Protect Customers from Copyright Infringement Lawsuits Arising from the Use of Generative AI

Microsoft has announced an expansion of its policy covering copyright infringement lawsuits linked to the use of generative AI, in an effort to give its commercial customers greater protection. For Azure OpenAI Service, the company’s fully managed service that adds governance layers on top of OpenAI’s models, Microsoft has pledged to defend customers and pay for any “adverse judgements” resulting from copyright infringement claims arising from their use of the service or the outputs it generates.

Generative AI models like ChatGPT and DALL-E 3 are trained on vast amounts of data, including e-books, artwork, emails, songs, audio clips, voice recordings, and more. Much of this training data is scraped from public websites; some of it is in the public domain, but other material is under licenses that require attribution or specific forms of compensation. Whether vendors can legally train on data without permission is still being litigated in the courts. A more immediate concern for generative AI users, however, is when a model produces a near-exact replica, or “regurgitation,” of a training example.

It’s important to note that Microsoft’s expanded policy will not automatically apply to all Azure OpenAI Service customers. To qualify for the enhanced protections, subscribers must implement “technical measures” and comply with specific documentation requirements aimed at mitigating the risk of generating infringing content with OpenAI models. Microsoft did not detail those measures ahead of the announcement at Ignite; TechCrunch asked the company for specifics.

Additionally, it remains uncertain whether these protections extend to products still in preview such as GPT-4 Turbo with Vision within Azure OpenAI Service. Furthermore, it is unclear if Microsoft offers indemnity against claims pertaining to the training data utilized by customers when fine-tuning OpenAI models; further clarification has been requested.

Responding via email, a Microsoft spokesperson said that the expanded policy applies to all Microsoft products, including those in paid preview, but not to a customer’s training data.

This newest policy follows Microsoft’s September announcement that it would pay legal damages on behalf of customers using some of its AI products if they were sued for copyright infringement. As with the Azure OpenAI Service protections, customers must use the “guardrails and content filters” built into Microsoft’s AI offerings to remain eligible for coverage.
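
As a rough sketch of what relying on those built-in filters looks like in practice, the snippet below uses the openai Python SDK to call an Azure OpenAI Service chat deployment and then inspects the content-filter annotations the service attaches to its responses. The endpoint, API version, and deployment name are placeholders, and nothing here should be read as the specific eligibility criteria Microsoft requires.

```python
# Minimal sketch: call an Azure OpenAI Service chat deployment and inspect
# the content-filter annotations Azure attaches to the response.
# Endpoint, API version, and deployment name are placeholders, not values
# specified in Microsoft's announcement.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # e.g. https://<resource>.openai.azure.com
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # placeholder; use the version your resource supports
)

response = client.chat.completions.create(
    model="my-gpt-4-deployment",  # hypothetical deployment name
    messages=[{"role": "user", "content": "Summarize the plot of a public-domain novel."}],
)

print(response.choices[0].message.content)

# Azure returns content-filter results as extra fields on the response;
# dumping the raw payload shows which filter categories were evaluated.
raw = response.model_dump()
print(raw.get("prompt_filter_results"))
print(raw["choices"][0].get("content_filter_results"))
```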

Notably, OpenAI itself recently said it would cover the costs of customers facing IP claims over work generated by its tools, and Microsoft’s expanded Azure OpenAI Service protections closely mirror that commitment.

In addition to indemnity policies, one possible solution to address the regurgitation issue involves granting content creators the ability to remove their data from training sets used for generative models or alternatively providing adequate credit and compensation. OpenAI has expressed its intention to explore this approach in future text-to-image models as part of a potential follow-up to DALL-E 3. However, Microsoft has not committed explicitly to opt-out or compensation strategies. Instead, the company has reportedly developed technology capable of identifying when AI models generate material leveraging third-party intellectual property and content. This particular feature is part of Microsoft’s Azure AI Content Safety tool, which is currently available in preview.
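
The article does not describe the API behind that preview feature, but the sketch below shows the adjacent tooling: a call to the Azure AI Content Safety Python SDK that screens a piece of model output against the service’s harm categories. The endpoint and key names are placeholders, it assumes the generally available version of the azure-ai-contentsafety package, and it illustrates general content screening rather than the third-party IP detection feature itself.

```python
# Minimal sketch: screen model output with the Azure AI Content Safety SDK
# (pip install azure-ai-contentsafety). This is the general analyze-text call;
# the third-party IP detection feature mentioned in the article was in preview
# and its API is not described there.
import os
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],  # placeholder resource endpoint
    credential=AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"]),
)

model_output = "Text produced by a generative model, screened before it is used."
result = client.analyze_text(AnalyzeTextOptions(text=model_output))

# Each entry reports a harm category and a severity score for the analyzed text.
for item in result.categories_analysis:
    print(item.category, item.severity)
```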

Microsoft did not disclose specifics about how this IP-identifying technology works in response to initial inquiries, instead pointing to a forthcoming high-level blog post, announced at Ignite, that outlines the process.
