Published on November 8, 2023, 4:11 pm
Professional artists and photographers are expressing frustration with generative AI firms that use their work without consent or compensation to train their technology. These firms, such as OpenAI with its ChatGPT chatbot, rely on large amounts of data scraped from the web to train their models. Similarly, generative AI tools that produce images from text prompts also rely on scraping published online images for training.
To address this issue, a team of researchers has developed Nightshade, a tool that confuses training models by adding invisible pixels to artwork before it is uploaded online. This “poisons” the training data and can disrupt the output of image-generating AI models, rendering their results inaccurate. The research behind Nightshade has been submitted for peer review and has the potential to empower artists and photographers by deterring tech firms from ignoring copyright and intellectual property rights.
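To give a sense of the general idea, the sketch below shows how an image's pixels can be shifted by amounts too small for a human to notice. This is a toy illustration only: Nightshade's actual perturbations are targeted and optimization-based, and the function name and noise strength here are purely hypothetical.

```python
import numpy as np

def add_imperceptible_perturbation(image: np.ndarray, strength: int = 2,
                                   seed: int = 0) -> np.ndarray:
    """Return a copy of `image` with tiny pseudo-random pixel shifts.

    `image` is an HxWx3 uint8 array; `strength` caps the per-channel
    change so the edit stays invisible to a human viewer.
    NOTE: illustrative only -- not Nightshade's actual algorithm.
    """
    rng = np.random.default_rng(seed)
    # Per-pixel, per-channel offsets in [-strength, strength].
    noise = rng.integers(-strength, strength + 1, size=image.shape)
    # Widen to int16 to avoid uint8 wraparound, then clip back to range.
    perturbed = image.astype(np.int16) + noise
    return np.clip(perturbed, 0, 255).astype(np.uint8)

# A viewer sees an apparently unchanged picture, while a model trained
# on many such images ingests subtly altered pixel statistics.
original = np.full((4, 4, 3), 128, dtype=np.uint8)
poisoned = add_imperceptible_perturbation(original)
```

The key property is that every channel changes by at most `strength` levels out of 255, far below the threshold of human perception, while the data a scraper collects is no longer a faithful copy of the original work.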
University of Chicago professor Ben Zhao, who led the research team behind Nightshade, believes the tool can shift the balance of power back to content creators by giving them a defense against those who disregard their rights. The team plans to release Nightshade as open source so that others can improve upon it.
OpenAI has recently responded to artists' concerns by letting them remove their work from its training data, though the opt-out process demands considerable effort from creators. That burden may do little to discourage them from turning instead to tools like Nightshade, which could create further challenges for OpenAI and other companies in the long run.
In other news related to AI development, OpenAI envisions its GPT-4 model taking over as an online moderator across forums and social networks. By leveraging artificial intelligence instead of human moderators, OpenAI expects faster policy changes and more consistent labeling on digital platforms. Additionally, OpenAI appears poised to release DALL-E 3, the next version of its text-to-image generator, which has surfaced in a series of leaked alpha tests.
However, not all developments have been successful. OpenAI recently discontinued its AI Classifier, a tool designed to detect content created by AI rather than humans, due to its low accuracy. ChatGPT and similar services have also faced backlash over potential misuse, such as students passing off AI-generated essays and assignments as their own work.
As generative AI continues to advance, it is critical to find a balance that respects the rights of content creators while exploring the possibilities of this innovative technology.