Artists are teaming up with university researchers to protect their work from AI copycat activity. (AFP)

Artists employ technological tools to combat AI imitators

University researchers have joined forces with artists targeted by artificial intelligence (AI) models that analyze their work and imitate their styles, in an effort to combat the replication.

US illustrator Paloma McClain went on the defensive after learning that several AI models had been “trained” using her art, with no credit or compensation sent her way.

“It bothered me,” McClain told AFP.

“I believe that truly meaningful technological development happens ethically and uplifts all people, rather than working at the expense of others.”

The artist turned to free software called Glaze, created by researchers at the University of Chicago.

Glaze essentially outthinks AI models as they train, adjusting pixels in ways humans can’t perceive but that make the digitized artwork appear dramatically different to the AI.
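For readers curious about the mechanics, the general idea, adding a tiny, bounded pixel perturbation that steers a feature extractor toward a decoy style, can be sketched in a few lines. The snippet below is a rough illustration only, not Glaze’s actual algorithm; the `cloak` function, the untrained ResNet-18 stand-in extractor and every parameter value are assumptions for demonstration.

```python
# A minimal sketch of style "cloaking" (NOT Glaze's algorithm): nudge pixels
# within an invisibility budget so a feature extractor "sees" a different
# image, while a human sees almost no change.
import torch
import torchvision.models as models

def cloak(image, decoy, extractor, epsilon=4 / 255, steps=50, lr=0.01):
    """Drift `image`'s features toward `decoy`'s, capping each pixel change
    at `epsilon` so the edit stays imperceptible.

    image, decoy: float tensors of shape (1, 3, H, W) with values in [0, 1].
    extractor: any differentiable model mapping images to feature vectors.
    """
    delta = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    with torch.no_grad():
        decoy_features = extractor(decoy)
    for _ in range(steps):
        optimizer.zero_grad()
        features = extractor((image + delta).clamp(0, 1))
        # Pull the cloaked image's features toward the decoy style ...
        loss = torch.nn.functional.mse_loss(features, decoy_features)
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            delta.clamp_(-epsilon, epsilon)  # ... while staying invisible
    return (image + delta).detach().clamp(0, 1)

# Stand-in pieces: random images and an untrained ResNet as the extractor.
extractor = models.resnet18(weights=None).eval()
artwork = torch.rand(1, 3, 224, 224)      # the artist's piece (placeholder)
decoy_style = torch.rand(1, 3, 224, 224)  # a decoy style image (placeholder)
protected = cloak(artwork, decoy_style, extractor)
```

A model trained on the cloaked output would, in principle, learn the decoy’s features rather than the artist’s style.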

“We’re basically providing technical tools to help protect human creators from invasive and abusive AI models,” said computer science professor Ben Zhao of the Glaze team.

Created in just four months, Glaze built on technology the team had developed to disrupt facial recognition systems.

“We worked at a super-fast speed because we knew the problem was serious,” Zhao said of the rush to defend artists from software imitators.

“A lot of people were in pain.”

Generative AI giants have agreements to use data for training in some cases, but most of the digital images, audio and text used to shape the way the supersmart software thinks has been scraped from the Internet without express permission.

Since its release in March 2023, Glaze has been downloaded more than 1.6 million times, according to Zhao.

Zhao’s team is working on a Glaze enhancement called Nightshade, which steps up the defense by confusing the AI, for example making it interpret an image of a dog as a cat.

“I think Nightshade will have a significant impact if enough artists use it and put enough poisoned images into the wild,” McClain said, meaning images readily available online.

“According to Nightshade’s research, it doesn’t require as many poisoned images as you might think.”
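Nightshade’s published approach relies on carefully optimized image perturbations; a far cruder relative of the same poisoning idea, pairing images of one concept with captions for another, is sketched below purely as an illustration. The `poison_dataset` helper, the file names and the default fraction are hypothetical.

```python
# A minimal sketch of label-flip data poisoning (a much simpler relative of
# Nightshade's method): a small fraction of training pairs carry captions
# for the wrong concept, pulling "dog" and "cat" together in the model.
import random

def poison_dataset(pairs, source="dog", target="cat", fraction=0.05, seed=0):
    """pairs: list of (image_path, caption) tuples.

    Returns a copy in which roughly `fraction` of the captions mentioning
    `target` have that word swapped for `source`, so cat pictures end up
    carrying "dog" captions and the two concepts blur during training.
    """
    rng = random.Random(seed)
    poisoned = []
    for image_path, caption in pairs:
        if target in caption and rng.random() < fraction:
            caption = caption.replace(target, source)  # cat image, "dog" caption
        poisoned.append((image_path, caption))
    return poisoned

clean = [("cat_001.jpg", "a fluffy cat on a sofa"),
         ("dog_001.jpg", "a dog playing fetch")]
print(poison_dataset(clean, fraction=1.0))
# [('cat_001.jpg', 'a fluffy dog on a sofa'), ('dog_001.jpg', 'a dog playing fetch')]
```

Nightshade itself uses far subtler, optimized images, which is why its poisoned samples can survive human review where crude label flips would not.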

Several companies interested in using Nightshade have approached Zhao’s team, according to the Chicago academic.

“The goal is for people to be able to protect their content, whether it’s individual artists or companies with a lot of intellectual property,” said Zhao.

Viva voce

Startup Spawning has developed Kudurru, software that detects attempts to collect large numbers of images from an online location.

According to Spawning founder Jordan Meyer, the artist can then block access or send back images that don’t match what’s being requested, tainting the scraped data.

More than a thousand websites have already been integrated into the Kudurru network.
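Spawning has not published Kudurru’s internals, but the behavior described, spotting bulk downloads and then blocking them or answering with mismatched images, can be approximated with ordinary web tooling. The sketch below is an assumption-laden toy using Flask, with a made-up rate threshold and decoy file; it is not Spawning’s code.

```python
# A toy sketch of the Kudurru idea (NOT Spawning's implementation): spot
# clients pulling images at scraper-like rates, then refuse them or serve
# a decoy image instead of the real file.
from collections import defaultdict
from time import time

from flask import Flask, abort, request, send_file

app = Flask(__name__)
hits = defaultdict(list)   # client IP -> timestamps of recent requests
RATE_LIMIT = 30            # assumed threshold: 30 images per minute
SERVE_DECOYS = True        # either block outright, or poison the scrape

@app.route("/art/<name>")
def art(name):
    now = time()
    recent = [t for t in hits[request.remote_addr] if now - t < 60]
    recent.append(now)
    hits[request.remote_addr] = recent
    if len(recent) > RATE_LIMIT:           # looks like bulk harvesting
        if SERVE_DECOYS:
            return send_file("decoy.jpg")  # mismatched image taints the dataset
        abort(403)                         # or simply refuse access
    return send_file(f"images/{name}")     # normal visitors are unaffected
```

A production system would need sturdier fingerprinting than raw IP rate counting, but the block-or-decoy fork is the essence of what Meyer describes.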

Spawning has also launched haveibeentrained.com, which includes an online tool to determine whether digitized works have been fed into an AI model and allows artists to opt out of such use in the future.

As protection for images ramps up, researchers at Washington University in Missouri have developed AntiFake software to prevent artificial intelligence from copying voices.

AntiFake enriches digital recordings of people speaking, adding sounds that humans can’t hear but that make it “impossible to synthesize the human voice,” said Zhiyuan Yu, the doctoral student behind the project.
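In spirit this resembles the adversarial perturbations used on images: a small, bounded noise signal is optimized so a speaker-recognition model no longer matches the voice. The sketch below is illustrative only, not AntiFake’s method; the `protect_voice` function, the tiny untrained stand-in encoder and all parameter values are invented for the example.

```python
# A minimal sketch of voice protection (NOT AntiFake's code): add a small,
# near-inaudible perturbation that pushes a clip away from its original
# speaker embedding, so cloning models latch onto the wrong characteristics.
import torch
import torch.nn as nn

def protect_voice(waveform, speaker_encoder, epsilon=0.002, steps=100, lr=1e-3):
    """waveform: float tensor (1, 1, num_samples) with values in [-1, 1].
    speaker_encoder: any model mapping audio to a speaker embedding (assumed).
    """
    # Start from small random noise so the initial gradient is nonzero.
    delta = torch.empty_like(waveform).uniform_(-epsilon, epsilon)
    delta.requires_grad_(True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    with torch.no_grad():
        true_embedding = speaker_encoder(waveform)
    for _ in range(steps):
        optimizer.zero_grad()
        embedding = speaker_encoder((waveform + delta).clamp(-1, 1))
        # Push the embedding AWAY from the real speaker (hence the minus) ...
        loss = -torch.nn.functional.mse_loss(embedding, true_embedding)
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            delta.clamp_(-epsilon, epsilon)  # ... keeping the noise quiet
    return (waveform + delta).detach().clamp(-1, 1)

# Stand-in encoder: a tiny untrained conv net in place of a real speaker model.
speaker_encoder = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=64, stride=16), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
).eval()
clip = torch.rand(1, 1, 16000) * 2 - 1  # one second of fake 16 kHz audio
safe_clip = protect_voice(clip, speaker_encoder)
```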

The program aims to go beyond stopping the unauthorized training of artificial intelligence to prevent the creation of “deepfakes” – fake soundtracks or videos of celebrities, politicians, relatives or others that make them appear to be doing or saying something they never did.

A popular podcast recently contacted AntiFake’s team for help in preventing its productions from being hijacked, Zhiyuan Yu said.

The freely available software has so far been used to protect recorded speech, but it can also be applied to songs, the researcher said.

“The best solution would be a world where all data used for AI is subject to consent and payment,” Meyer argued.

“We hope to nudge developers in that direction.”
