Learn about PhotoGuard, the tool created by MIT researchers to combat deepfakes and AI manipulation
A tool developed by researchers at MIT could help address the growing problem of deepfakes, one of the most prominent misuses of artificial intelligence. Using AI editing tools, malicious actors have been generating counterfeit images of people and organizations. In some reported cases, criminals have created fake explicit images of individuals and then blackmailed them, threatening to publish the photos online. MIT researchers have now introduced a tool designed to tackle this problem.
According to a report in MIT Technology Review, the researchers have built a tool called PhotoGuard that subtly alters images to protect them from manipulation by artificial intelligence systems. Hadi Salman, a researcher at MIT, said: "Right now, anyone can take our image, edit it however they want, put us in very bad-looking situations and blackmail us… [PhotoGuard is] trying to solve the problem of these models manipulating our images harmfully."
A special technique to protect photos from artificial intelligence
Traditional protections are not enough, because they usually sit on top of the image like a visible stamp and can easily be cropped or edited out.
The new technique instead adds an invisible layer of changes to the image itself. These changes persist even when the photo is cropped, edited or run through filters. They do not visibly alter the image, but they stop bad actors from manipulating it to create deepfakes or other deceptive versions.
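The article does not go into implementation detail, but the published PhotoGuard research describes this "invisible layer" as a small adversarial perturbation that confuses the editing model's own image encoder. The sketch below is a minimal, hypothetical illustration of that general idea, not the MIT team's actual code: it assumes a PyTorch setup and uses a toy stand-in encoder, nudging pixels within an imperceptible budget so the image's latent representation drifts toward that of a blank gray picture.

```python
# Minimal sketch (NOT the PhotoGuard implementation): add a tiny, bounded pixel
# perturbation that pushes the image's latent representation toward a useless
# target, so a generative editor can no longer work with the photo as intended.
import torch
import torch.nn as nn

# Placeholder encoder; a real attack would target the editing model's own encoder.
encoder = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1),
)

def immunize(image, steps=50, eps=8 / 255, step_size=1 / 255):
    """Projected gradient descent: keep the perturbation imperceptible
    (L-infinity norm <= eps) while driving the encoding toward a gray image's."""
    target = encoder(torch.full_like(image, 0.5)).detach()  # latent of a blank gray image
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        loss = ((encoder(image + delta) - target) ** 2).mean()
        loss.backward()
        with torch.no_grad():
            delta -= step_size * delta.grad.sign()            # step toward the target latent
            delta.clamp_(-eps, eps)                           # keep the change invisible
            delta.copy_((image + delta).clamp(0, 1) - image)  # keep pixels in valid range
        delta.grad.zero_()
    return (image + delta).detach()

protected = immunize(torch.rand(1, 3, 64, 64))  # demo on a random stand-in image
```

In this sketch the perturbation budget `eps`, the step size and the gray-image target are illustrative choices; the point is only that the protection lives in the pixels themselves rather than in a removable overlay.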
It should be noted that while dedicated watermarking techniques also exist, this approach is different because it uses pixel-level manipulation to protect images. Watermarking lets users detect AI-generated or altered images after the fact with detection tools, whereas this technology prevents people from using artificial intelligence tools to tamper with the image in the first place.
Interestingly, Google’s DeepMind division has also built a watermarking tool for AI imagery. In August, the company launched SynthID, a tool for watermarking and identifying images created by artificial intelligence. The technology embeds a digital watermark directly into the pixels of the image, making it imperceptible to the human eye but detectable by software.
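DeepMind has not published SynthID's actual algorithm, and it is far more robust than anything this simple. Purely as a rough illustration of the concept of hiding an identifier directly in pixel values, here is a toy least-significant-bit watermark in Python; the function names and the 128-bit identifier are hypothetical.

```python
# Toy illustration only: NOT SynthID's method, which is proprietary and more robust.
# It hides a bit pattern in the least-significant bits of pixel values, a change
# invisible to the eye, and reads it back out for identification.
import numpy as np

def embed(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Write one watermark bit into the lowest bit of each of the first len(bits) pixels."""
    flat = pixels.flatten().copy()
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # overwrite the LSB
    return flat.reshape(pixels.shape)

def extract(pixels: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the watermark bits back out of the lowest bits."""
    return pixels.flatten()[:n_bits] & 1

image = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)  # stand-in image
watermark = np.random.randint(0, 2, 128, dtype=np.uint8)        # hypothetical 128-bit ID
marked = embed(image, watermark)
assert np.array_equal(extract(marked, watermark.size), watermark)
```

A naive scheme like this is easily destroyed by compression or resizing, which is exactly why systems such as SynthID aim for watermarks that survive common edits while remaining invisible.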