Image Verifier & Forensics — Authenticity Check
Forensic image analysis — verify file format vs declared extension, extract full EXIF / XMP / ICC / IPTC metadata, run Error Level Analysis (ELA), histogram, pixel-anomaly detection, structural artifact check, JPEG quantization fingerprinting, C2PA / AI-generator signature scan, and compression history. 100% client-side.
How to Use
- Drag and drop an image, or click to select from your device.
- Every check runs locally in your browser — your image is never uploaded.
- Read the verdict at the top, then drill into each analysis tab for details.
What each test means
Limitations & honest caveats
- ELA only works on JPEG. PNG / WebP / lossless formats won't produce useful ELA output.
- AI-detection is heuristic. Models change weekly; detection lags. A green "no signature" result does NOT prove an image is real — only that the operator didn't leave a trail.
- PRNU matching requires a reference set from the same camera. This tool reports sensor-noise uniformity only — useful as a tampering hint, not as identity verification.
- Deepfakes / face-swaps need ML models too heavy to ship as a web tool. Use a dedicated forensics service for that level of analysis.
- The verdict is a guideline. Don't use it as the sole basis for a legal / journalistic claim. Reproduce findings in another tool before publishing.
About
The history of image forensics traces back to the 1990s, when courts started accepting digital photos as evidence. Hany Farid at Dartmouth (now Berkeley) is generally credited with formalizing the field — his lab built early copy-move and re-sampling detection tools in the 2000s, while Error Level Analysis was popularized by Neal Krawetz's FotoForensics. The C2PA standard (2021) is the industry response to AI-generated content: a cryptographically signed provenance manifest that travels with the file across crops and re-saves.
This tool is not a replacement for professional forensics software (FotoForensics, Amped, or law-enforcement-grade Cellebrite). It's a fast, private first-pass that runs entirely in your browser, surfaces the obvious red flags, and gives you ground truth on what the file actually is — extension, format, embedded data, basic statistics — before you make any claims about it.
Frequently Asked Questions
What does "claimed format vs actual format" mean?
Every file format has a unique signature in its first bytes (magic bytes). PNG starts with <code>89 50 4E 47</code>, JPEG with <code>FF D8 FF</code>, etc. If a file is renamed (cat.jpg → cat.png) or generated by software that lies about format, the magic bytes won't match the claimed extension. This tool reads the actual bytes and reports any mismatch.
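The check above can be sketched in a few lines. This is an illustrative sniffer, not the tool's actual signature table (which covers many more formats); the function names are hypothetical:

```javascript
// Minimal magic-byte sniffer — compares a file's first bytes against known signatures.
const SIGNATURES = [
  { format: "png",  bytes: [0x89, 0x50, 0x4e, 0x47] },
  { format: "jpeg", bytes: [0xff, 0xd8, 0xff] },
  { format: "gif",  bytes: [0x47, 0x49, 0x46, 0x38] }, // "GIF8"
];

function sniffFormat(u8) {
  const match = SIGNATURES.find(sig =>
    sig.bytes.every((b, i) => u8[i] === b)
  );
  return match ? match.format : "unknown";
}

function checkExtension(filename, u8) {
  const ext = filename.split(".").pop().toLowerCase();
  const claimed = ext === "jpg" ? "jpeg" : ext; // normalize the common alias
  const actual = sniffFormat(u8);
  return { claimed, actual, mismatch: actual !== "unknown" && actual !== claimed };
}
```

A renamed file (`cat.jpg` containing PNG bytes) would come back with `mismatch: true`.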
What is Error Level Analysis (ELA)?
ELA re-saves a JPEG at a known quality level, then compares it pixel-by-pixel against the original. Areas with a consistent compression history appear uniform; regions that were recently pasted, edited, or compressed differently show up as noticeably brighter. It was devised specifically to spot photo manipulation. Effective on JPEG only — PNG / lossless formats yield no signal.
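The comparison step reduces to an amplified per-channel difference. A minimal sketch, assuming both images are already decoded to RGBA arrays (in the browser the re-saved copy would come from something like <code>canvas.toBlob(..., "image/jpeg", 0.95)</code> decoded back to pixels); the amplification factor is an illustrative choice:

```javascript
// Core of ELA: amplified absolute difference between the original pixels and the
// same image after one re-save at a fixed JPEG quality. Bright output pixels mark
// regions whose compression history differs from the rest of the image.
function elaDiff(original, resaved, scale = 20) {
  const out = new Uint8ClampedArray(original.length);
  for (let i = 0; i < original.length; i += 4) {
    // Amplify the error so subtle differences become visible; the clamped
    // array caps values at 255 automatically.
    out[i]     = Math.abs(original[i]     - resaved[i])     * scale;
    out[i + 1] = Math.abs(original[i + 1] - resaved[i + 1]) * scale;
    out[i + 2] = Math.abs(original[i + 2] - resaved[i + 2]) * scale;
    out[i + 3] = 255; // keep the result fully opaque
  }
  return out;
}
```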
Can it detect AI-generated images?
Sort of. The tool looks for: (1) C2PA / Content Credentials manifests embedded by Adobe Firefly, OpenAI, Microsoft, etc.; (2) XMP / EXIF software fields naming AI tools; (3) PNG tEXt / iTXt chunks containing Stable Diffusion-style prompts and parameters; (4) statistical signatures common to current generative models. Detection is improving but never definitive — a careful operator can strip every signature.
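Check (3) is the most mechanical of the four: Stable Diffusion WebUI, for example, stores its prompt in a tEXt chunk under the keyword <code>parameters</code>. A simplified chunk walker, assuming that convention — no CRC validation, no iTXt decompression:

```javascript
// Walk a PNG's chunk list and collect tEXt entries (keyword NUL value).
function pngTextChunks(u8) {
  const texts = [];
  let off = 8; // skip the 8-byte PNG signature
  while (off + 8 <= u8.length) {
    const len = (u8[off] << 24) | (u8[off + 1] << 16) | (u8[off + 2] << 8) | u8[off + 3];
    const type = String.fromCharCode(u8[off + 4], u8[off + 5], u8[off + 6], u8[off + 7]);
    if (type === "tEXt") {
      const data = u8.subarray(off + 8, off + 8 + len);
      const nul = data.indexOf(0); // keyword and value are NUL-separated
      texts.push({
        keyword: String.fromCharCode(...data.subarray(0, nul)),
        value: String.fromCharCode(...data.subarray(nul + 1)),
      });
    }
    if (type === "IEND") break;
    off += 12 + len; // 4 length + 4 type + data + 4 CRC
  }
  return texts;
}

function hasGeneratorPrompt(u8) {
  return pngTextChunks(u8).some(t => t.keyword === "parameters");
}
```

A positive hit here is strong evidence of AI generation; a miss, as noted above, proves nothing.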
What is PRNU?
Photo-Response Non-Uniformity — the unique noise fingerprint a camera sensor leaves on every image it captures. PRNU matching needs <em>reference</em> images from the same camera, so a single-image tool can't verify "this is from camera X" without that database. What this tool reports is sensor-noise <em>uniformity</em>: whether the noise pattern is consistent (likely a real photo) or has discontinuities (likely composite or AI).
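The uniformity check can be approximated with a crude high-pass filter and per-block statistics. A sketch under simplifying assumptions (grayscale input, 4-neighbour mean as the high-pass, coefficient of variation as the score — real PRNU work uses far more careful denoising):

```javascript
// Per-block noise-residual variance: high-pass each interior pixel (value minus
// local mean), then measure residual variance inside each block. A region pasted
// from another source tends to produce outlier blocks.
function blockNoiseVariances(gray, width, height, block = 8) {
  const variances = [];
  for (let by = 0; by + block <= height; by += block) {
    for (let bx = 0; bx + block <= width; bx += block) {
      let sum = 0, sumSq = 0, n = 0;
      for (let y = by + 1; y < by + block - 1; y++) {
        for (let x = bx + 1; x < bx + block - 1; x++) {
          const i = y * width + x;
          const mean = (gray[i - 1] + gray[i + 1] + gray[i - width] + gray[i + width]) / 4;
          const r = gray[i] - mean;
          sum += r; sumSq += r * r; n++;
        }
      }
      variances.push(sumSq / n - (sum / n) ** 2);
    }
  }
  return variances;
}

// Coefficient of variation across blocks: low = uniform noise, high = suspicious.
function noiseUniformity(variances) {
  const mean = variances.reduce((a, b) => a + b, 0) / variances.length;
  const sd = Math.sqrt(variances.reduce((a, v) => a + (v - mean) ** 2, 0) / variances.length);
  return mean === 0 ? 0 : sd / mean;
}
```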
What about deepfakes / face-swap detection?
That requires ML models — convolutional networks trained on millions of fake images. They're too heavy to ship in a single web tool. This tool focuses on signal-processing forensics: structural patterns that surface when an image has been edited or generated, regardless of subject matter.
Are my images uploaded?
No. Everything runs in JavaScript in your browser. The image never leaves your device. The page works offline once cached.
Common Use Cases
News / journalism fact-check
Check whether a viral photo has been edited. ELA spotlights pasted regions; metadata reveals if it was processed.
Catfish / dating-app vetting
See if a profile photo was downloaded from elsewhere (compression history) or AI-generated (signature scan).
Insurance / fraud claim review
Identify staged or manipulated damage photos via ELA and metadata cross-checks.
Stock photo authenticity
Confirm a "real photo" claim by checking camera EXIF and absence of AI signatures.
Forensic chain-of-custody
Document an image's declared format, hash, dimensions, EXIF, and compression history for legal records.
Last updated: