This AI image watermark is said to help with the issue of deepfakes. (Google DeepMind)

Google Introduces Unremovable Watermark for AI-Generated Images

On Tuesday, August 29, Google DeepMind, the company's AI division, unveiled a tool called SynthID that both watermarks AI-generated images and identifies images that carry the watermark. The development matters in the fight against deepfakes, where distinguishing real photographs from artificially generated images can be difficult. With SynthID, people have a way to detect fake images and avoid falling victim to cybercriminals who use them.

In announcing the tool, the DeepMind team said in a blog post: “Today, in collaboration with Google Cloud, we are releasing the beta version of SynthID, a tool for watermarking and identifying images created by artificial intelligence. This technology embeds a digital watermark directly into the pixels of the image, making it invisible to the human eye, but recognizable.”

Because it’s still in beta testing, the tool is rolling out to a limited number of Google Cloud Vertex AI customers using Imagen, the company’s text-to-image AI model.

Google fights deepfakes with SynthID

Traditional watermarks are not enough to identify images created by artificial intelligence, because they are typically stamped onto the image and can easily be cropped or edited out.

SynthID's watermark, by contrast, is embedded invisibly in the image's pixels. It persists through cropping, editing, and even the application of filters. And although it does not change how the image looks, it remains visible to detection tools.

The best way to understand it is to think of the lamination on a physical photograph: it doesn't get in the way of viewing the photo, and you can't crop or edit it out. SynthID essentially creates a digital version of that lamination.
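Google has not published SynthID's actual algorithm, but the general idea of a pixel-domain watermark can be sketched with a toy example. In the sketch below (all names and parameters are illustrative assumptions, not anything from SynthID), a secret pseudo-random pattern is faintly added to the pixel values, and a detector that knows the secret seed recovers it by correlation, even after a mild edit such as a noise filter:

```python
# Illustrative toy only -- NOT SynthID's actual algorithm, which is not
# public. A secret +/-1 pattern is faintly mixed into the pixels; a
# detector holding the same seed finds it by correlation.
import numpy as np

SEED = 42        # hypothetical secret shared by embedder and detector
STRENGTH = 8.0   # pattern amplitude; small relative to the 0-255 range

def watermark_pattern(shape, seed=SEED):
    """Secret +/-1 pattern derived from the seed."""
    rng = np.random.default_rng(seed)
    return rng.choice([-1.0, 1.0], size=shape)

def embed(image, strength=STRENGTH):
    """Add the faint pattern to the pixel values."""
    return np.clip(image + strength * watermark_pattern(image.shape), 0, 255)

def detect(image, threshold=0.05):
    """Correlate the image with the secret pattern; a watermarked image
    scores far above the chance level of an unmarked one."""
    pattern = watermark_pattern(image.shape)
    centered = image - image.mean()
    score = float(np.sum(centered * pattern)
                  / (np.linalg.norm(centered) * np.linalg.norm(pattern)))
    return score > threshold, score

# Demo on a random stand-in "photo"
rng = np.random.default_rng(0)
photo = rng.uniform(0, 255, size=(128, 128))
marked = embed(photo)

# Simulate a light edit (a noise filter), then check both images
edited = np.clip(marked + rng.normal(0, 2.0, size=marked.shape), 0, 255)
print("clean image watermarked?", detect(photo))
print("edited image watermarked?", detect(edited))
```

Note that this toy pattern is locked to pixel positions, so unlike SynthID it would not survive cropping; production schemes spread the signal so that detection still works on fragments of the image.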

“While generative AI can unlock enormous creative potential, it also comes with new risks, such as enabling content creators to spread misinformation – either intentionally or unintentionally. Identifying AI-generated content is critical to informing people when they interact with generated media, and help prevent the spread of misinformation,” the post added.
