Artists Push Back Against AI Scraping as Dall-E 3 Leaps Ahead
OpenAI has developed Dall-E 3, a cutting-edge image generator capable of depicting a wide range of subjects. With minimal input, the program can create visuals such as a watercolor of a mermaid, a customized birthday message, or even a simulated photograph of Spider-Man eating a slice of pizza.
The new version of the tool, released in September, represents a “leap forward” in AI-generated imagery, OpenAI says. Dall-E 3 renders finer detail and handles text more reliably. It has also deepened illustrators’ fear of being replaced by a computer program imitating their work.
Rapid improvements in image generation have spurred artists to push back against generative AI companies, which ingest vast amounts of internet data to produce content such as images or text. It hasn’t helped that OpenAI’s new process for artists who want to withdraw their work from its systems is time-consuming and complicated. Some artists have sued generative AI companies. Others have turned to a growing number of digital tools that let them check whether AI has exploited their work. And still others have resorted to mild sabotage.
The goal is to avoid losing business and commissions to machines that copy their work, a sentiment now common in the art world. “My art has been grossly insulted,” said illustrator and watercolorist Kelly McKernan. “And I know so many artists who feel the same way.”
‘Feels like an adventure’
Some artists have found they have little say in how AI systems use their work. McKernan is part of a trio of visual artists suing the image-generation startups Stability AI, Midjourney, and DeviantArt, all of which, like Dall-E 3, produce detailed and often beautiful images. The lawsuit claims their work was used to train AI image generators without permission or payment. The companies have denied wrongdoing, and AI developers have traditionally argued that scraping online content to train software is protected under the fair use doctrine of US copyright law. In late October, the judge in the case dismissed several of the artists’ claims but allowed a core copyright infringement claim to move forward.
The lawsuits have raised the legal risks facing AI companies, but the matter will likely take years to resolve.
In the meantime, artists concerned about their material being used to train Dall-E 3 can follow a process outlined by OpenAI itself: filling out a form asking the company to exclude their images from its datasets so they won’t be used to train future AI systems.
The recently introduced opt-out process has sparked controversy because it can be time-consuming and cumbersome to use, and it doesn’t necessarily prevent the software from imitating an artist’s style. When testing Dall-E 3 through ChatGPT Plus, Bloomberg News found that the software refused to produce images for a prompt containing copyrighted characters, instead offering to create a more generic alternative, one that could still look like the copyrighted character.
For example, ChatGPT refuses to use Dall-E 3 to create an image of Spider-Man. But at Bloomberg’s request, it offered to create a character that looks very similar: a “spider-based superhero” wearing a red and blue suit. Similarly, although the tool will not create images in the style of living artists, detailed descriptions can still produce images that evoke a particular artist’s style.
“It feels scary, the surface-level way it seems to be doing the right thing,” said Reid Southen, a concept artist and illustrator who has worked on films such as The Hunger Games and The Matrix Resurrections.
Southen said he will not go through the opt-out process because he estimates it would take him months to complete. The system asks artists to upload to OpenAI every image they want excluded from future training, along with a description of each piece. To Southen, it seems designed to discourage people from removing their data from the company’s training processes.
Asking people to hand over copies of their work so that OpenAI can avoid training on it in the future is absurd, said Calli Schroeder, senior counsel at the Electronic Privacy Information Center, or EPIC. She also doubts artists trust the company to keep its word. “Because they’re the ones who benefit from all this information, they need to make sure that they can actually legally and ethically use this information for their training sets,” Schroeder said.
Reached for comment, an OpenAI spokesperson said the company is still evaluating the process to give people more control over how their data is used, and did not say how many people have completed the opt-out process so far. “It’s early days, but we’re trying to gather feedback and want to improve the experience,” the spokesperson said.
The poison pill
For artists who are not satisfied with official channels, there are other options. One company, Spawning Inc., created a tool called “Have I Been Trained” that allows artists to see whether their work has been used to train certain AI models and aims to help them opt out of future datasets. Another service, Glaze, alters an image’s pixels ever so slightly so that, to a computer, it appears to be in a different art style. Released in August, Glaze has been downloaded 1.5 million times (the invite-only online version also has 2,300 accounts).
Glaze was created by Ben Zhao, a professor at the University of Chicago, and his next project goes even further. In the coming weeks, Zhao plans to introduce a new tool called Nightshade, a kind of poison pill that he hopes artists will use to protect their work while degrading AI models that exploit such data.
Nightshade works by subtly modifying an image so that an AI system sees it as something completely different. For example, a picture of a castle whose pixels have been adjusted by Nightshade still looks like a castle to a human viewer, but an AI system trained on the image may classify it as something else, such as a truck. The hope is to deter rampant digital scraping by making some images harmful to the model instead of helpful.
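Nightshade’s own algorithm isn’t described in detail here, but the broad family of techniques it belongs to, imperceptible adversarial perturbations, can be sketched in a few lines. The toy example below is a generic targeted FGSM-style attack in PyTorch; the model choice, function name, and parameters are illustrative assumptions, not anything taken from Nightshade, and it perturbs a single image at inference time rather than poisoning a training set.

```python
# A minimal sketch of a targeted adversarial perturbation (FGSM-style).
# This is NOT Nightshade's actual algorithm (Nightshade poisons training
# data for generative models); it only illustrates the core idea that
# pixel changes too small for a person to notice can make a model
# "see" a completely different object.
import torch
import torch.nn.functional as F
from torchvision import models

# Any pretrained classifier works for the demonstration.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def nudge_toward_label(image, target_class, epsilon=0.003, steps=10):
    """Shift `image` slightly so the classifier favors `target_class`.

    image:        float tensor of shape (1, 3, H, W), values in [0, 1]
                  (a real pipeline would also apply ImageNet normalization)
    target_class: the label the attacker wants, e.g. "truck" for a castle
    epsilon:      per-step pixel budget, kept tiny so the change stays
                  invisible to a human viewer
    """
    adv = image.clone().detach()
    target = torch.tensor([target_class])
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), target)
        loss.backward()
        with torch.no_grad():
            # Step *down* the loss gradient: pull the prediction toward
            # the target class, changing each pixel by at most epsilon.
            adv = (adv - epsilon * adv.grad.sign()).clamp(0.0, 1.0)
        adv = adv.detach()
    return adv
```

To a human viewer the returned image is indistinguishable from the original, but the classifier’s top prediction drifts toward the chosen label. Nightshade applies the same general principle to images before they are scraped, so the damage lands during training rather than at inference.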
Zhao doesn’t think Nightshade is the solution to artists’ problems, but he hopes it can give them a sense of control over their work online and change the way AI companies collect training data.
“I’m not particularly malicious; I don’t want to do harm to any company,” Zhao said. “I believe that good things are being done in many places. But it’s about coexistence and good behavior.”