Courts aren't ready for AI-generated evidence

July 25, 2025

Photos, videos and audio — once courtroom gold — are becoming less reliable as deepfakes spread, digital forensics experts warn.

Why it matters: Courts can't keep up, and there aren't enough forensic analysts to verify whether evidence has been faked with AI.

The big picture: AI-generated evidence can take many forms. Consider the following hypotheticals:

  • A divorce case where one parent edits a photo's background to imply the child was in an unsafe environment.
  • A murder investigation where a deepfake video falsely puts an innocent person at the scene of the crime.
  • A wrongful termination lawsuit built around a deepfake audio recording in which a co-worker appears to make fireable comments (something similar has already happened in Baltimore County).

Between the lines: As AI tools improve, proving that a photo, video or audio snippet was manipulated will get more challenging, if not impossible.

  • As with financial fraud cases that rely on expert testimony to unpack accounting records, courts will now depend on digital forensics experts to spot the telltale signs of tampered media.
  • But even expert analysis may not be enough to persuade a jury beyond a reasonable doubt.

Threat level: Photos, videos and audio have long been the gold standard for evidence in any legal case.

  • "Everybody has images, everybody has voice recordings, CCTV cameras, police body cameras, dash cams," Hany Farid, co-founder and chief science officer at GetReal Security, told Axios. "It's just everything. It's bonkers."

The other side: Defendants will also face challenges in proving their media hasn't been altered by AI.

  • For example, lawyers could argue that body camera footage from a police officer was tampered with, even if it wasn't.
  • "In parts of the country where the police are not trusted, that could gain some traction," Farid said.

The intrigue: This isn't the first time courts have had to adapt to new technology, Farid added.

  • When photo and video evidence first emerged, judges often required negatives or scrutinized timestamps.
  • But today's legal standards haven't caught up to the speed and sophistication of deepfake tools, which are evolving far faster than past media forms.

Zoom in: Even for forensics investigators, the technology isn't there yet to help track the chain of custody for deepfakes, Joseph Pochron, managing director of digital investigations and cyber risk at Nardello & Co., said at the Deepfake Resilience Symposium in San Francisco last week.

  • Every AI verification tool on the market is a black box: it reports what percentage of a piece of content was AI-generated without explaining how it reached that number, which leaves room for manipulation and misinterpretation of images, videos and audio, Pochron said.
  • Now, investigators have to get creative with how they prove something is or is not AI-manipulated.
  • Pochron's team has begun transcribing audio and analyzing sentence structure for patterns common in AI-generated text (a rough sketch of the idea follows this list). But even that method will be moot within a year as deepfakes become more humanlike.
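
For illustration only, here is a minimal sketch of that style of check in Python, assuming the openai-whisper package for transcription. The phrase list, the uniformity threshold and the file name are hypothetical stand-ins, not Pochron's actual indicators.

```python
# Illustrative sketch, not a production forensic tool: transcribe a clip,
# then run crude stylometric checks for wording patterns associated with
# AI-generated text. The phrase list and threshold below are assumptions.
import re
import statistics

import whisper  # openai-whisper speech-to-text package, assumed installed

# Hypothetical indicator set; real investigators use far richer signals.
AI_ASSOCIATED_PHRASES = [
    "it is important to note",
    "in conclusion",
    "as an ai",
]


def transcribe(path: str) -> str:
    """Convert the audio file at `path` to text with Whisper."""
    model = whisper.load_model("base")
    return model.transcribe(path)["text"]


def stylometric_flags(text: str) -> dict:
    """Return weak signals: phrase hits and sentence-length uniformity.

    Human speech tends to vary sentence length; unusually uniform lengths
    can be one faint hint of machine-generated wording.
    """
    lowered = text.lower()
    phrase_hits = [p for p in AI_ASSOCIATED_PHRASES if p in lowered]
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    spread = statistics.pstdev(lengths) if len(lengths) > 1 else 0.0
    return {
        "phrase_hits": phrase_hits,
        "sentence_count": len(sentences),
        "length_spread": spread,  # low spread = suspiciously uniform
        "suspiciously_uniform": len(sentences) >= 5 and spread < 3.0,
    }


if __name__ == "__main__":
    print(stylometric_flags(transcribe("recording.wav")))  # placeholder file
```

Signals like these are weak hints at best, which is why, as Pochron notes above, the approach has a short shelf life.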

The bottom line: Experts urge people to preserve as many original files (voicemails, text messages, photos) on their devices as possible in case they're needed in court; a sketch of the kind of intake record that helps follows below.

  • "We've had a couple [cases] where it's an email that's been emailed again, but where's the original?" Pochron said. "The metadata or any other supporting artifacts may be crucial to help us figure this out."

What's next: AI tools have already perfected the art of deepfake images and audio, and it will be less than two years until they fine-tune fake videos, too, Farid said.