Published on November 4, 2023, 5:42 am

TLDR: OpenAI's new image AI, DALL-E 3, has improved its ability to generate images in different styles but has raised concerns among artists about the future of their profession. OpenAI offers an opt-out option for artists, but it may not effectively protect their rights. Artists are exploring methods like Glaze and Nightshade to combat generative AI and protect their work from unauthorized scraping. However, these methods can undermine the reliability of models and may not have a substantial impact. Model makers have greater resources to counter such efforts. Future advancements in AI training may provide alternative solutions for licensing and copyright disputes. The future of AI models depends on technological advancements and striking a balance between artists' interests and model makers' goals.

OpenAI’s latest image AI, called DALL-E 3, has made significant strides in the field of generative AI. This new system has improved its ability to follow prompts and produce images that align well with various styles. While this development is remarkable, it has also amplified concerns among graphic designers and artists about the future of their profession.

OpenAI does provide an option for artists to have their images and graphics removed from the training material. However, the removal only takes effect for the training of the next model. For this measure to meaningfully protect artists’ rights, a large number of artists would need to withhold their work from training, enough to cause a noticeable decline in the quality of the technology.

Reports have surfaced about artists finding the opt-out process cumbersome and ineffective, deeming it more of a “charade.” OpenAI has not disclosed how many complaints they have received so far but assures that they are actively gathering feedback and working towards improving the process.

For artists seeking ways to combat generative AI for images, there are currently two options available. First, they can hope that international courts will recognize their copyright claims and hold model providers accountable. However, legal proceedings like these can often be protracted, lasting several years before any resolution is reached. The outcome of such cases also remains uncertain.

Another approach gaining attention is sabotaging AI models using techniques like Glaze. These methods add imperceptible pixel-level perturbations to original images so that AI systems misinterpret the style. With Glaze applied, a hand-drawn illustration may register to the model as, say, a 3D rendering, while looking unchanged to human viewers.
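Glaze’s actual optimization procedure is more involved, but the core idea — a change bounded tightly enough to be invisible to humans — can be sketched in a few lines of numpy. The perturbation here is random noise purely for illustration; a real style cloak is optimized against a feature extractor, not sampled at random:

```python
import numpy as np

def cloak(image: np.ndarray, epsilon: float = 2.0, seed: int = 0) -> np.ndarray:
    """Add a perturbation bounded by +/- epsilon (out of 255) to each pixel.

    Tools like Glaze optimize this perturbation so that a model misreads
    the artwork's style; here it is random noise, purely to show the
    small "invisible change" budget such tools work within.
    """
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-epsilon, epsilon, size=image.shape)
    return np.clip(image.astype(np.float64) + noise, 0, 255).astype(np.uint8)

image = np.full((64, 64, 3), 128, dtype=np.uint8)  # stand-in for an artwork
cloaked = cloak(image)

# The per-pixel change is bounded, so the result looks identical to a human.
print(np.abs(cloaked.astype(int) - image.astype(int)).max())  # at most 2
```

The key property is the bound: the larger epsilon is allowed to be, the more room an attack has to mislead the model, but the more visible the artifacts become to viewers.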

Nightshade is another interesting tool that works by manipulating pixels within an image with the intention of confusing and damaging AI models. For example, instead of recognizing a train, an AI system manipulated with Nightshade would perceive a car instead. A mere collection of fewer than 100 “poisoned” images can corrupt an image AI model like Stable Diffusion XL. The Nightshade team intends to incorporate this tool into Glaze, considering it the “last defense” against web scrapers that disregard scraping restrictions.
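The poisoning mechanism behind tools like Nightshade — mislabeled samples pulling a learned concept toward another — can be illustrated with a toy prototype learner. The two-dimensional “features” and the 10% poison proportion below are invented for illustration; real attacks optimize the poisoned images so that far fewer samples suffice against a real model:

```python
import numpy as np

def train_prototypes(pairs):
    """Learn one prototype vector per caption by averaging its features.

    A crude stand-in for how a text-to-image model associates a word
    ("train") with the visual features it sees under that word.
    """
    sums, counts = {}, {}
    for caption, feat in pairs:
        sums[caption] = sums.get(caption, 0.0) + feat
        counts[caption] = counts.get(caption, 0) + 1
    return {c: sums[c] / counts[c] for c in sums}

car, train = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # toy feature vectors

clean = [("train", train) for _ in range(90)]
# Poisoned samples: captioned "train" but carrying car-like features.
poisoned = [("train", car) for _ in range(10)]

protos = train_prototypes(clean + poisoned)
print(protos["train"])  # ~[0.1, 0.9]: the "train" concept drifts toward "car"
```

Even this crude averaging shows why poisoning scales badly for defenders: every poisoned sample that survives data cleaning shifts the learned association, and the attacker only needs to move it far enough to produce wrong outputs.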

Although Nightshade currently exists only as a research project, it offers content owners a means to protect their intellectual property from unauthorized scrapers who ignore copyright notices and scraping/crawling instructions. Movie studios, book publishers, game producers, and artists may find systems like Nightshade useful in deterring unauthorized scraping.

However, it’s important to acknowledge that the use of Nightshade and similar tools could have negative consequences. These methods have the potential to undermine the reliability of generative models and hinder their ability to produce meaningful images.

Despite hypothetical scenarios where artists collectively sabotage training data, it is unlikely that such efforts would have a substantial impact. Model makers possess the ability to employ countermeasures, such as filtering out corrupted files or developing more sophisticated technology. In most cases, model makers have greater resources and flexibility compared to artists or researchers aligned with artists’ interests, including the creators of Glaze and Nightshade.
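One such countermeasure, filtering suspicious samples out of the training set, can be sketched briefly. The features, threshold, and poison proportion below are invented for illustration; a real pipeline would operate on learned embeddings at far larger scale:

```python
import numpy as np

def filter_outliers(pairs, threshold=0.5):
    """Drop samples whose features lie far from their caption's centroid.

    A minimal, invented stand-in for the kind of data cleaning a model
    maker could run to strip suspected poisoned samples before training.
    """
    by_caption = {}
    for caption, feat in pairs:
        by_caption.setdefault(caption, []).append(feat)
    centroids = {c: np.mean(feats, axis=0) for c, feats in by_caption.items()}
    return [(c, f) for c, f in pairs
            if np.linalg.norm(f - centroids[c]) <= threshold]

car, train = np.array([1.0, 0.0]), np.array([0.0, 1.0])  # toy feature vectors
dataset = [("train", train)] * 90 + [("train", car)] * 10  # 10% poisoned

kept = filter_outliers(dataset)
print(len(kept))  # 90: the clean samples survive, the poisoned ones are dropped
```

This illustrates the resource asymmetry the paragraph describes: a crude distance check already removes obvious poison, and model makers can afford far more sophisticated detectors, while attackers must craft poison subtle enough to evade them yet strong enough to still do damage.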

Additionally, future advancements in AI model training are likely to result in increased efficiency and improved generalization abilities. As highly capable AI systems learn to generate higher-quality images from less data, their output could itself serve as synthetic training material. This potential development could provide model providers with an alternative solution for licensing and copyright disputes.

In conclusion, OpenAI’s latest image AI, DALL-E 3, has shown remarkable progress in generating well-matched images across various styles. However, concerns remain among artists about protecting their intellectual property rights and maintaining the integrity of their creative work within generative AI systems. Various strategies such as opting out of training material and employing tools like Glaze or Nightshade are being explored by both artists and researchers alike. Nevertheless, the future of AI models will likely depend on advancements in technology, legal developments, and the ability to strike a balance between the interests of artists and model makers.

