The technique introduces nearly invisible "perturbations" to throw off algorithmic models.

MIT Develops ‘PhotoGuard’ to Shield Photos from AI Manipulation

The emergence of generative AI systems like Dall-E and Stable Diffusion has sparked a wave of innovation in the chatbot industry, with companies such as Shutterstock and Adobe now equipping their chatbots with the ability not only to create images but also to edit them. However, this advance also revives well-known problems, such as the unauthorized alteration or outright theft of existing online artwork and images. Watermarking techniques can help deter theft, while MIT CSAIL has developed a new technique called "PhotoGuard" that aims to prevent unauthorized manipulation in the first place.

PhotoGuard works by altering select pixels in an image so that they disrupt the AI's ability to understand what the image is. These "perturbations," as the research team calls them, are invisible to the human eye but easily read by machines. The "encoder" attack method for introducing them targets the algorithmic model's latent representation of the image – the complex mathematics describing the position and color of every pixel – essentially preventing the AI from understanding what it is looking at.
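Conceptually, this resembles a projected-gradient perturbation computed against an image encoder. The sketch below is only an illustration of that idea, not PhotoGuard's actual code: the `encode` function stands in for whatever latent encoder is being targeted (for example, Stable Diffusion's VAE), the gray-image target echoes Salman's example quoted later, and the step count and perturbation budget are assumed values.

```python
import torch
import torch.nn.functional as F

def encoder_attack(image, encode, steps=200, eps=8 / 255, step_size=1 / 255):
    """Nudge the image's pixels so the encoder maps it to the latent of a
    plain gray image, while keeping every change within an invisible budget."""
    # Latent the optimization steers toward: a uniform gray picture.
    target_latent = encode(torch.full_like(image, 0.5)).detach()

    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        latent = encode((image + delta).clamp(0, 1))
        loss = F.mse_loss(latent, target_latent)
        loss.backward()
        with torch.no_grad():
            # Signed gradient descent on the distance to the target latent,
            # projected back into an L-infinity ball of radius eps.
            delta -= step_size * delta.grad.sign()
            delta.clamp_(-eps, eps)
            delta.grad.zero_()
    return (image + delta).clamp(0, 1).detach()
```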

A more advanced and computationally intensive "diffusion" attack method disguises the image as a different image in the eyes of the AI. It defines a target image and optimizes the perturbations so that the protected image resembles that target to the model. Any edits the AI then tries to make to these "immunized" images are effectively applied to the fake "target" image instead, resulting in an unrealistic-looking generated image.
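The same optimization pattern can sketch the diffusion attack, except the perturbation is pushed through the whole editing pipeline rather than just the encoder, which is what makes it so much more expensive. Again, this is a hypothetical illustration rather than PhotoGuard's API: `edit` stands in for a differentiable end-to-end diffusion editing pass, and `target` is the decoy image the output is steered toward.

```python
import torch
import torch.nn.functional as F

def diffusion_attack(image, edit, target, steps=50, eps=8 / 255, step_size=1 / 255):
    """Optimize an invisible perturbation so that whatever the editing
    pipeline produces looks like the decoy target instead of a real edit."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        # `edit` represents the full image-editing diffusion pass; every
        # step backpropagates through it, hence the heavy compute cost.
        output = edit((image + delta).clamp(0, 1))
        loss = F.mse_loss(output, target)
        loss.backward()
        with torch.no_grad():
            delta -= step_size * delta.grad.sign()
            delta.clamp_(-eps, eps)
            delta.grad.zero_()
    return (image + delta).clamp(0, 1).detach()
```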

"An encoder attack makes the model think that the input image (to be edited) is some other image (e.g. a gray image)," MIT PhD student and lead author of the paper Hadi Salman told ReturnByte. "Whereas a diffusion attack forces the diffusion model to make edits towards some target image (which can also be some gray or random image)." The technique is not foolproof, though; malicious actors could try to reverse-engineer a protected image, possibly by adding digital noise to it, cropping it, or rotating it.

“Collaboration involving model developers, social media platforms, and policy makers provides robust protection against unauthorized image manipulation. Working on this pressing issue is extremely important today,” Salman said in the release. “And while I’m happy to contribute to this solution, a lot of work is needed to make this protection practical. Companies developing these models need to invest in robust immunizations against threats posed by these AI tools.”
