Published on November 16, 2023, 6:18 pm

Ed Newton-Rex, the head of the audio team at Stability AI, recently made headlines by resigning from his position. The reason behind his departure is a disagreement over the application of ‘fair use’ in training generative AI models.

In a public statement, Newton-Rex shared his concerns about the company’s stance that training generative AI models on copyrighted works qualifies as ‘fair use’, a viewpoint he firmly stated he does not agree with. While acknowledging Stability AI’s thoughtful approach and support in developing Stable Audio, an AI music generator built on licensed training data that shares revenue with rights holders, Newton-Rex emphasized that this did not change his disagreement with the company’s broader stance on fair use.

The issue of fair use in generative AI gained attention when the US Copyright Office sought public comments on this subject. Among many companies that responded, Stability AI submitted a 23-page document stating their belief that AI development constitutes acceptable and transformative use protected by fair use.

Fair use is a legal doctrine allowing limited use of copyrighted material without obtaining permission from rights holders. Newton-Rex points to one of the factors US copyright law uses to determine whether copying qualifies as fair use: “the effect of the use upon the potential market for or value of the copyrighted work.” He argues that today’s generative AI models can generate works that compete directly with the copyrighted works they were trained on, which challenges the notion that training these models qualifies as fair use.

In addition to concerns about fair use, Newton-Rex firmly believes that training generative AI models without permission is morally wrong. He expressed his distress over billion-dollar companies exploiting creators’ works by training AI models without their consent, potentially undermining their livelihoods.

Despite disagreeing with Stability AI’s position on fair use, Newton-Rex remains an advocate for generative AI, having worked in the field for 13 years. However, he makes it clear that he can only support generative AI that does not exploit creators by training models on their work without permission.

Newton-Rex hopes that others within generative AI companies will also voice their concerns about fair use and push for a change in how creators are treated in the development of generative AI technology.

Stability AI is not the only company to submit comments on fair use in generative AI. Meta, Google, and OpenAI have also shared their thoughts with the US Copyright Office. They argue that training AI models with copyrighted material falls under fair use and does not infringe copyright holders’ rights. Meta even compared generative AI to other transformative technologies like printing presses, cameras, and computers, warning against high licensing fees that could hinder generative AI’s progress.

Google and OpenAI advocate for a flexible interpretation of fair use and caution against premature legislation that may stifle innovation and limit the potential of AI technology.

As the debate over fair use in generative AI continues, it is clear that there are differing opinions within the industry. While some believe training models on copyrighted works falls under fair use, others like Newton-Rex emphasize the need for permission and ethical practices when utilizing creators’ works in generative AI models.
