The new forensic science of proving what’s real
As deepfakes blur the line between truth and fiction, we’ll need a new class of forensic experts to determine what’s real, what’s fake and what can be proved in court
January 24, 2026
By Deni Ellis Béchard, edited by Eric Sullivan
Imagine this scenario. The year is 2030; deepfakes and artificial-intelligence-generated content are everywhere, and you are a member of a new profession—a reality notary. From your office, clients ask you to verify the authenticity of photos, videos, e-mails, contracts, screenshots, audio recordings, text message threads, social media posts and biometric records. People arrive desperate to protect their money, reputation and sanity—and also their freedom.
All four are at stake on a rainy Monday when an elderly woman tells you her son has been accused of murder. She carries the evidence against him: a USB flash drive containing surveillance footage of the shooting. It is sealed in a plastic bag stapled to an affidavit, which explains that the drive contains evidence the prosecution intends to use. At the bottom is a string of numbers and letters: a cryptographic hash.
The Sterile Lab
Your first step isn’t to look at the video—that would be like traipsing through a crime scene. Instead you connect the drive to an offline computer with a write blocker, a hardware device that prevents any data from being written back to the drive. This is like bringing evidence into a sterile lab. The computer is where you hash the file. Cryptographic hashing, an integrity check in digital forensics, has an “avalanche effect” so that any tiny change—a deleted pixel or audio adjustment—results in an entirely different code. If you open the drive without protecting it, your computer could quietly modify metadata—information about the file—and you won’t know whether the file you received was the same one that the prosecution intends to present. When you hash the video, you get the same string of numbers and letters printed on the affidavit.
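The avalanche effect described above is easy to see in a few lines of Python. This sketch uses the standard library's SHA-256; the byte strings are stand-ins for real file contents, and the single flipped byte plays the role of a "deleted pixel or audio adjustment."

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    """Return the SHA-256 digest of the data as a hex string."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical file contents; the second differs by a single byte.
original = b"surveillance_footage_frame_data"
tampered = b"surveillance_footage_frame_datA"

h1 = sha256_hex(original)
h2 = sha256_hex(tampered)

print(h1)
print(h2)
# The two 64-character digests share no visible relationship,
# even though the inputs differ in one byte: the avalanche effect.
```

Because the digest is deterministic, anyone who hashes the same file gets the same string, which is why a hash printed on an affidavit can vouch for a file received later on a flash drive.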
Next you create a copy and hash it, checking that the codes match. Then you lock the original in a secure archive. You move the copy to a forensic workstation, where you watch the video—what appears to be security camera footage showing the woman’s adult son approaching a man in an alley, lifting a pistol and firing a shot. The video is convincing because it’s boring—no cinematic angles, no dramatic lighting. You’ve actually seen it before—it recently began circulating online, weeks after the murder. The affidavit notes the exact time the police downloaded it from a social platform.
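The copy-and-verify step can be sketched the same way. This is an illustration, not a forensic tool: the function names are invented, and a real workstation would also log each digest in a chain-of-custody record. Hashing in chunks means a multi-gigabyte video never has to fit in memory.

```python
import hashlib
from pathlib import Path

def hash_file(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash a file in 1 MB chunks so large videos needn't fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_copy(original: Path, working_copy: Path, affidavit_hash: str) -> bool:
    """The working copy is trustworthy only if all three digests agree:
    the original drive, the forensic copy, and the hash on the affidavit."""
    return hash_file(original) == hash_file(working_copy) == affidavit_hash.lower()
```

Only after `verify_copy` returns true would the examiner archive the original and work exclusively from the copy.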
Watching the grainy footage, you remember why you do this. You were still at university in the mid-2020s when deepfakes went from novelty to big business. Verification firms reported a 10-fold jump in deepfakes between 2022 and 2023, and face-swap attacks surged by more than 700 percent in just six months. By 2024 a deepfake fraud attempt occurred every five minutes. You had friends whose bank accounts were emptied, and your grandparents wired thousands to a virtual-kidnapping scammer after receiving altered photos of your cousin while she traveled through Europe. You entered this profession because you saw how a single fabrication could ruin a life.
