The Invisible Ink of AI: What is Google’s SynthID


Author: NarativAI 

How hard is it today to tell whether a video or a photo is real or fake? If you're being honest, you'd probably admit it is very hard, and you're not alone. Anyone with a smartphone can now create a photo, video, or audio clip that looks completely real but was never captured in the real world.

AI-generated images look like they were taken with a professional camera, and deepfake voices copy the way someone speaks. Even a short video can be fully artificial and still go viral as "hard news."

Journalists, editors, and fact-checkers face the same challenge. The line between real and artificial content has never been thinner. That is why Google DeepMind introduced SynthID, a new system that tries to make digital content more trustworthy.

“This was created by AI”

SynthID is a tool that adds an invisible watermark to AI-generated photos, videos, and audio. The watermark cannot be seen or heard by people (yes, heard; we will explain why below). It doesn't change the image or the sound, but special software can detect it. Think of it as a hidden signature inside the file, a signal that says: "This was created by AI."

How Does It Work?

When an AI model generates an image or an audio clip, SynthID quietly embeds a pattern inside the file. You can’t see it. You can’t hear it. But it’s there. Later, if someone uploads that file to social media or sends it to a newsroom, another tool can scan it and show one of three results:

  • AI-generated
  • Probably AI-generated
  • No AI watermark found


It works even after the file is resized, compressed, cropped, or screenshotted. For images and videos, SynthID doesn't just overlay a logo: it imprints a signal directly into the pixels of an image or the frames of a video. For audio, the technology converts the sound wave into a visual map of sound and embeds the hidden signature there. It is also used for text. Because large language models (LLMs) generate text by predicting the next likely word, SynthID subtly nudges those predictions, favoring certain word choices over others in a pattern that is extremely unlikely to occur by chance, yet invisible to the reader.
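SynthID's exact text method is proprietary, but the statistical idea behind this family of watermarks can be sketched in a few lines. In the toy example below (purely illustrative: the tiny vocabulary, bias strength, and score thresholds are invented for this sketch and are not SynthID's), each previous word deterministically marks half the vocabulary "green," the generator quietly prefers green words, and a detector who knows the rule counts how often consecutive words land in the green half. Unwatermarked text scores near 50%.

```python
import hashlib
import random

# Toy illustration of statistical text watermarking. This is NOT
# SynthID's actual algorithm; vocabulary, bias strength, and
# thresholds are invented for demonstration only.

VOCAB = ["quick", "fast", "rapid", "swift", "speedy",
         "brisk", "hasty", "fleet", "nimble", "prompt"]

def green_list(prev_word):
    """Deterministically mark half the vocabulary 'green', keyed on the previous word."""
    seed = int(hashlib.sha256(prev_word.encode()).hexdigest(), 16)
    return frozenset(random.Random(seed).sample(VOCAB, len(VOCAB) // 2))

def watermarked_next(prev_word, rng):
    """Generate the next word, strongly favoring the green list (the 'nudge')."""
    green = sorted(green_list(prev_word))
    return rng.choice(green) if rng.random() < 0.9 else rng.choice(VOCAB)

def green_fraction(words):
    """Fraction of consecutive pairs whose second word is green for the first."""
    pairs = list(zip(words, words[1:]))
    hits = sum(1 for prev, cur in pairs if cur in green_list(prev))
    return hits / max(1, len(pairs))

def classify(words):
    """Mirror the detector's three outcomes; unwatermarked text scores ~0.5."""
    f = green_fraction(words)
    if f > 0.75:
        return "AI-generated"
    if f > 0.6:
        return "Probably AI-generated"
    return "No AI watermark found"

# Generate a watermarked sequence and an unwatermarked one for comparison.
rng = random.Random(0)
marked = ["quick"]
for _ in range(300):
    marked.append(watermarked_next(marked[-1], rng))
plain = [rng.choice(VOCAB) for _ in range(500)]
```

The watermarked sequence scores far above the roughly 50% baseline of ordinary text. Because the bias is spread thinly across every word choice, no single edit removes it, yet a reader sees only normal words, which is the property the article describes.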

Why Does This Matter?

Because every day, fake videos and photos spread faster than real news. A single deepfake can mislead millions before fact-checkers catch up. Imagine the harm that can do during election campaigns, protests, wars, or celebrity scandals.

SynthID gives journalists, researchers, and the general public a way to quickly check where a file came from. It doesn't solve the whole problem, but it helps restore some trust.

What Can SynthID Mark Today?

Right now, it works with:

  • AI-generated images
  • AI-generated audio
  • Some forms of AI-generated video


Google says more formats are coming and hopes SynthID will become an industry standard. SynthID is an important tool, but it is not perfect. It works only on content created by models that use it, so people using other AI tools can still make fake content without any watermark, and skilled attackers might find ways to hide or break the watermark in the future. Still, experts say having a watermark is better than having nothing.

What This Means for Newsrooms

For journalists, SynthID could become part of the daily workflow:

  • Checking suspicious videos sent by readers
  • Verifying photos during breaking news
  • Knowing whether a viral audio clip is real or generated
  • Stopping deepfakes before they go mainstream

As one editor put it: “We used to ask who filmed something. Now we must ask what created it.”

A Step Toward More Honest AI

While SynthID cannot stop misinformation on its own, it gives us something we need right now: a simple way to check whether a digital file is authentic. In a world where anything can be faked, even this small layer of transparency is a big deal. 

(AI assistance was used in developing portions of this text. Our team reviewed, edited, and approved all content.)