How to Check If an Image Is Real
- Hamburg, Germany
This article is also available in German.
Three years ago, many things were simpler. Today we face problems that were unthinkable back then, or at least not taken seriously: in 2022, the idea that someone could generate a photorealistic image with just a few words was still science fiction. Now it’s everyday life.
AI-generated images initially had a certain look: fingers bending the wrong way, eyes that didn’t match, text that made no sense. Midjourney version 1 was impressive and at the same time obviously synthetic. Since then the models have gotten better, significantly better, and the obvious errors are disappearing.
Most people still rely on their intuition: they look at an image and decide by gut feeling whether it looks real. That worked as long as AI images had that synthetic, slightly waxy look, the somewhat too-perfect lighting, the strangely smooth skin textures. But exactly that is disappearing now.
There’s now a way to check whether an image was generated by Google AI [1]. Since late 2025, the Gemini app has had a feature based on SynthID, Google’s watermarking technology that embeds invisible signals in AI-generated content. You upload an image, ask “Was this created with Google AI?” or simply type @synthid, and Gemini checks whether the watermark is present.
This also works with videos: you upload a file (maximum 100 MB, no longer than 90 seconds), and Gemini scans both the video and audio tracks [2]. For videos, it even shows which segments contain AI-generated elements.
How to check an image with SynthID:
1. Upload the image to the Gemini app.
2. Type @synthid or ask “Was this created with Google AI?”
3. Gemini provides an answer.
How SynthID Works
SynthID is not a retroactive marking like a stamp or metadata that can be easily removed, but a watermark embedded directly into the content during generation [3].
For images, two neural networks work together: the first modifies individual color values so minimally that the human eye sees no difference, and the second recognizes these changes even after typical edits such as cropping, compression or screenshots [4].
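Google hasn’t published the internals of those networks, but the underlying principle can be sketched with a classical keyed-pattern watermark: add a pixel perturbation far below the visual threshold, then look for it again by correlation. Everything below (the pattern, the strength value, the random “photo”) is illustrative, not SynthID’s actual method.

```python
# Toy spread-spectrum watermark: imperceptible pixel changes that a
# matched detector can still find. SynthID uses learned networks instead.
import numpy as np

rng = np.random.default_rng(seed=42)             # stands in for the secret key
H, W = 256, 256
pattern = rng.choice([-1.0, 1.0], size=(H, W))   # keyed +/-1 pixel pattern

def embed(image, strength=1.0):
    """Add the keyed pattern at an amplitude below the visual threshold."""
    return np.clip(image + strength * pattern, 0, 255)

def detect(image):
    """Correlate the image with the keyed pattern; a high score means marked."""
    centered = image - image.mean()
    return float((centered * pattern).mean())

photo = rng.uniform(0, 255, size=(H, W))         # stand-in for a real photo
marked = embed(photo)

print(round(detect(photo), 3))    # near 0.0: no watermark
print(round(detect(marked), 3))   # near 1.0: watermark found
```

The design point this illustrates: because the detector knows the key, the perturbation can be tiny and still stand out statistically, which is why mild edits don’t necessarily destroy it.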
For text, it works differently. Language models generate words one at a time, and each possible next word has a probability. In a sentence like “My favorite fruits are mango and…”, “bananas” has a higher probability than “airplanes”, and SynthID adjusts exactly these probabilities so that a pattern emerges: invisible to readers but recognizable to a trained detector [5].
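As a rough illustration, the widely published “green list” approach nudges a keyed, context-dependent subset of the vocabulary before sampling. SynthID-Text actually uses a different mechanism (tournament sampling, per the Nature paper [4]), so treat the vocabulary, bias value and scoring below as assumptions made for the sketch, not Google’s implementation.

```python
# Sketch of probability-based text watermarking: bias a keyed subset of
# next-token logits, then detect by counting how often tokens land in it.
import hashlib
import numpy as np

VOCAB = ["bananas", "apples", "airplanes", "mango", "and", "are"]  # toy vocabulary

def favored(prev_token, fraction=0.5):
    """Derive a pseudo-random 'favored' subset of the vocabulary from context."""
    seed = int.from_bytes(hashlib.sha256(prev_token.encode()).digest()[:4], "big")
    rng = np.random.default_rng(seed)
    return set(int(i) for i in rng.permutation(len(VOCAB))[: int(fraction * len(VOCAB))])

def watermark_logits(logits, prev_token, bias=2.0):
    """Nudge favored tokens up slightly before sampling the next word."""
    logits = logits.copy()
    for i in favored(prev_token):
        logits[i] += bias
    return logits

def score(tokens):
    """Detector: fraction of tokens that landed in the favored subset."""
    hits = sum(1 for prev, tok in zip(tokens, tokens[1:])
               if VOCAB.index(tok) in favored(prev))
    return hits / max(len(tokens) - 1, 1)

print(watermark_logits(np.zeros(len(VOCAB)), "and"))  # favored entries get +2.0
```

Watermarked text scores well above the chance rate (here 0.5), which is what makes the pattern statistically detectable without being visible to a reader.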
For videos, each individual frame is treated like an image; for audio, the waveform is converted into a spectrogram and marked there, inaudible to human ears and resistant to noise, compression or speed changes [6].
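How the audio mark is written into the spectrogram isn’t public; the sketch below only shows the transform the paragraph describes, turning a waveform into the time-frequency representation that gets marked (scipy’s STFT, with a synthetic tone standing in for real audio).

```python
# Waveform -> spectrogram: the representation in which an audio mark lives.
import numpy as np
from scipy.signal import stft

sample_rate = 16_000
t = np.arange(sample_rate) / sample_rate        # one second of audio
waveform = 0.5 * np.sin(2 * np.pi * 440 * t)    # synthetic 440 Hz tone

# Short-time Fourier transform: each column is the spectrum of one window,
# so the result can be treated like an image and watermarked accordingly.
freqs, times, Z = stft(waveform, fs=sample_rate, nperseg=512)
spectrogram = np.abs(Z)

print(spectrogram.shape)  # (frequency bins, time frames)
```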
Detection is probabilistic. The detector gives one of three answers: watermark present, no watermark, or uncertain [7].
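Since the detector produces a confidence rather than a hard yes/no, the three answers presumably fall out of two thresholds on that score; the cutoffs below are invented purely for illustration.

```python
# Hypothetical mapping from a detector confidence to the three answers;
# the real thresholds are not public.
def verdict(score: float, lo: float = 0.2, hi: float = 0.8) -> str:
    if score >= hi:
        return "watermark present"
    if score <= lo:
        return "no watermark"
    return "uncertain"
```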
The most important limitation is obvious: SynthID only recognizes content created with Google AI. An image from Midjourney, DALL-E or Stable Diffusion won’t be detected [8]. The system doesn’t guess whether something looks AI-generated; it searches exclusively for its own watermark.
This sounds like a big gap, and it is. But it’s a start: Google plans to extend the verification to C2PA [9], an open standard that works across providers.
Despite this limitation, the watermarks are robust against cropping, compression and filters [10], which are exactly the edits that happen in everyday use. For the image someone shares in a WhatsApp group, or the one that appears in an article, there is now for the first time a quick way to verify it. The question “Is this real?” won’t disappear, but it is becoming increasingly answerable.
What Is C2PA?
C2PA stands for Coalition for Content Provenance and Authenticity, a consortium of Adobe, Microsoft, the BBC, Intel and others [11]. The standard solves a problem that SynthID cannot: provenance verification that works independently of the provider.
The idea behind it: every image, video or audio file carries cryptographically signed metadata, so-called Content Credentials, documenting where the content comes from, which tool was used to create it and how it was edited. The signature is tamper-evident because it’s based on public-key cryptography.
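The real standard wraps this in JUMBF/CBOR manifests and certificate chains, but the core mechanism is an ordinary digital signature over the metadata. A minimal sketch with an Ed25519 key follows; the field names and the ExampleCamera record are made up.

```python
# Why signed provenance metadata is tamper-evident: any edit breaks the signature.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical provenance record; the real C2PA manifest format differs.
credentials = {
    "claim_generator": "ExampleCamera/1.0",
    "assertions": [{"action": "c2pa.created", "when": "2025-01-01T12:00:00Z"}],
}

private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

payload = json.dumps(credentials, sort_keys=True).encode()
signature = private_key.sign(payload)

# Flip one field and the verification fails.
tampered = payload.replace(b"ExampleCamera", b"FakeCamera")
for data in (payload, tampered):
    try:
        public_key.verify(signature, data)
        print("signature valid")
    except InvalidSignature:
        print("signature INVALID: metadata was altered")
```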
The difference from SynthID is that C2PA is an open standard, not a proprietary watermark, and any manufacturer can implement it. Adobe Photoshop, Lightroom and Firefly already support it, Microsoft has integrated it into Bing Image Creator, and camera manufacturers like Leica and Sony are working on generating the signature directly in the camera [12].
The weakness is that C2PA metadata can be removed. A screenshot or a re-upload to social media, and the credentials are gone. SynthID survives such edits in many cases because the watermark is embedded in the pixels themselves. The two approaches complement each other.
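Reusing the toy sketches above, the asymmetry is easy to see: copying the pixels keeps the embedded pattern detectable, while anything stored as attached metadata simply doesn’t travel with the new file.

```python
# Continuing the image-watermark toy from the SynthID section above
# (idealized: a real screenshot also rescales and recompresses).
screenshot = marked.copy()    # copying pixels drops any attached metadata ...
print(detect(screenshot))     # ... but the in-pixel pattern is still found
# The signed credentials dict from the C2PA sketch, by contrast, lives
# outside the pixel data and is gone the moment only pixels are copied.
```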
1. Google, “How we’re bringing AI image verification to the Gemini app,” 2025.
2. Google, “You can now verify Google AI-generated videos in the Gemini app,” 2025.
3. Google DeepMind, “Identifying AI-generated images with SynthID,” 2023.
4. Dathathri, S. et al., “Scalable watermarking for identifying large language model outputs,” Nature, 2024.
5. Hugging Face, “Introducing SynthID Text,” 2024.
6. Tech Buzz, “Google Gemini Can Now Spot AI Fakes - But Only Its Own,” 2025.
7. Google, “How we’re bringing AI image verification to the Gemini app,” 2025.
8. Webpronews, “Google Gemini App Verifies AI Images with SynthID, But Limits Spark Criticism,” 2025.
9. C2PA, “Coalition for Content Provenance and Authenticity,” 2024.