Former Google Brain and NVIDIA Researchers Want to Make Photos Provably Real

Lidia Yadlos · Apr 28, 2026

The internet is rapidly losing its ability to tell what’s real.

Just days after OpenAI’s latest image generation systems made AI-created photos nearly indistinguishable from real ones, Succinct has launched a new app designed to attack the problem from an entirely different angle: proving authenticity at the moment a photo is captured.

The company today introduced ZCAM, a free iOS camera app and SDK that uses cryptographic proofs to verify that an image was taken by a real device, at a real time and place, without modification.

Instead of trying to detect fake images after they spread online, ZCAM focuses on proving what is real from the start.

Built by Researchers Who Helped Create the Problem

Succinct was founded by former Google Brain researcher Uma Roy and former NVIDIA engineer John Guibas — both of whom previously worked on the AI systems now powering today’s generative media boom.

Roy helped train large language and generative AI models at Google Brain, while Guibas worked on foundation models at NVIDIA.

Now they’re trying to build infrastructure for a world where synthetic media has become nearly impossible to distinguish from reality.

“Right now, a war correspondent and a teenager with a prompt can produce the same JPEG,” Roy said. “Nothing in the file tells you which is which.”

Detection Alone Isn’t Working

The deepfake detection industry has exploded over the last several years as AI-generated content has flooded the internet.

But according to Succinct, detection systems are already failing under even minor image modifications.

The company says its internal AdversIm benchmark tested seven major commercial AI detection tools across more than 15,000 images.

While many systems initially identified AI-generated images with over 90% accuracy, performance reportedly collapsed after simple edits like compression, blur, or added noise — in some cases falling as low as 11%.

A recent New York Times investigation reached a similar conclusion: detection tools may raise suspicion, but they cannot reliably prove authenticity.

Succinct argues that the industry’s current model is fundamentally backwards.

Instead of asking whether an image looks fake, the company believes platforms need a way to mathematically prove authenticity directly inside the content itself.

How ZCAM Works

ZCAM uses the Secure Enclave hardware already built into modern iPhones to generate a cryptographic proof when a photo is captured.

That proof verifies:

  • The image came from a specific device

  • The time and location of capture

  • That the image has not been modified since capture
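Succinct has not published ZCAM's proof format, but the general idea behind capture-time attestation can be sketched. The following is a minimal illustration, not ZCAM's implementation: it uses a software Ed25519 key (via the `cryptography` library) as a stand-in for a hardware-backed Secure Enclave key, and the claim fields and function names are invented for this example.

```python
import hashlib
import json
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Stand-in for a per-device key; in a real system this would live in the
# Secure Enclave, and only the public half would ever leave the device.
device_key = Ed25519PrivateKey.generate()
device_pub = device_key.public_key()

def attest_capture(image_bytes: bytes, lat: float, lon: float) -> dict:
    """Sign the image hash plus capture context at the moment of capture."""
    claim = {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "captured_at": int(time.time()),
        "location": [lat, lon],
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    return {"claim": claim, "signature": device_key.sign(payload)}

def verify_capture(image_bytes: bytes, proof: dict) -> bool:
    """Anyone holding the device's public key can check the proof."""
    if hashlib.sha256(image_bytes).hexdigest() != proof["claim"]["image_sha256"]:
        return False  # image bytes changed since capture
    payload = json.dumps(proof["claim"], sort_keys=True).encode()
    try:
        device_pub.verify(proof["signature"], payload)
        return True
    except InvalidSignature:
        return False

photo = b"\xff\xd8...raw jpeg bytes..."
proof = attest_capture(photo, 40.7128, -74.0060)
assert verify_capture(photo, proof)             # untouched image verifies
assert not verify_capture(photo + b"x", proof)  # any byte-level edit fails
```

Note the limitation: a plain signature like this breaks on any re-encode or crop. Surviving edits, screenshots, and resharing, as Succinct claims ZCAM's proofs do, is a much harder problem that this sketch does not attempt to solve.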

Importantly, Succinct says the proof remains attached to the image even after editing, screenshotting, or resharing.

The company describes the system as complementary to the growing C2PA provenance standard backed by companies including Adobe, Google, Samsung, and the BBC.

While C2PA largely relies on metadata — which can often be stripped or altered when files move across platforms — ZCAM adds a cryptographic verification layer designed to survive distribution across the internet.
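The difference between the two approaches comes down to what the proof is bound to. A toy illustration of why metadata-based provenance is fragile while a content-bound hash is not; the container format here is invented purely for demonstration and resembles neither C2PA nor ZCAM:

```python
import hashlib

# Toy "file": a metadata header, a separator, then pixel data.
MARKER = b"\x00PIXELS\x00"

def make_file(metadata: bytes, pixels: bytes) -> bytes:
    return metadata + MARKER + pixels

def strip_metadata(file_bytes: bytes) -> bytes:
    """What many platforms effectively do on upload: drop the header."""
    return MARKER + file_bytes.split(MARKER, 1)[1]

def content_hash(file_bytes: bytes) -> str:
    """Bind the proof to the pixel content, not to the container."""
    pixels = file_bytes.split(MARKER, 1)[1]
    return hashlib.sha256(pixels).hexdigest()

original = make_file(b'{"claim": "captured 2026-04-28"}', b"...pixels...")
reuploaded = strip_metadata(original)

# The metadata claim is gone after reupload...
assert b"claim" not in reuploaded
# ...but a hash over the pixel content still matches, so a signature
# computed over that hash would still verify.
assert content_hash(original) == content_hash(reuploaded)
```

In practice, platforms often recompress pixels as well as stripping metadata, which would change even a content hash; surviving that kind of transformation is precisely the harder problem Succinct's cryptographic verification layer is aimed at.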

The verification process itself does not require trust in a centralized authority.

Why Platforms May Need This Soon

The launch comes as regulators and major platforms increasingly acknowledge that AI-generated media is becoming a systemic problem.

Instagram head Adam Mosseri has publicly argued that cryptographic content signing will likely become necessary for social platforms.

Meanwhile, Europe’s upcoming AI regulations are moving in the same direction.

The EU AI Act’s draft Code of Practice — expected to become enforceable later this year — specifically argues that no single labeling system is sufficient on its own and points toward multilayered approaches that include cryptographic proofs embedded directly into media.

Beyond Social Media

Succinct is also positioning ZCAM far beyond content authenticity on social networks.

The company says the SDK can be integrated into systems where photos function as evidence or proof of action, including:

  • Insurance claims

  • Delivery confirmations

  • Expense reimbursements

  • Marketplace disputes

  • Identity verification

  • Newsroom sourcing

One recent example cited by the company involved a delivery driver allegedly using AI-generated images to fake a completed delivery.

As AI-generated content becomes cheaper and easier to create, systems that rely on photographic evidence may increasingly need ways to verify not just what an image shows — but whether it was ever real in the first place.

For Succinct, that future likely requires cryptography, not detection algorithms.

And in a world where synthetic media is becoming effectively infinite, proving reality may become more valuable than spotting fakes.