We’re in the Age of Fake Photos and Videos—Here’s How to Spot Them
Photo by Jason Michael, via Twitter.
After Hurricane Harvey hit in August, images and video of the storm’s destruction began to emerge online. The world saw photos of stricken residents surveying their flooded homes, rescuers canoeing down suburban streets, and families camped out in shelters as the tragedy unfolded along the coast.
Then, there was the shark photo, which supposedly showed the animal calmly swimming down a flooded Texas highway. The image is a Photoshop job, and something of a “greatest hit” in the fake photo world, popping up with regularity whenever a major flood strikes. Amassing over 89,000 retweets and a reference on Fox News during Harvey’s aftermath, it is a prime example of the fictitious content that builds a false, parallel version of natural disasters, presidential elections, and other current events. These fake stories are partially told by photos—and, increasingly, video footage—that were either doctored or had nothing to do with the event they purport to illustrate.
The public’s appetite for fake news—like stories erroneously claiming the Pope endorsed Donald Trump’s candidacy, among countless others—is still strong. For others, though, the sheer volume of inaccurate information has meant trusting images more than ever. After all: You can believe what you see with your own eyes, right? The problem is that humans are naturally bad at identifying fake images, and the technology for detecting doctored photos can’t keep up with the increasingly easy-to-use tools available to create them.
To complicate matters, the next frontier is already here: fake videos, where you can make the world leader or celebrity of your choice say anything, with just a webcam and some technical know-how. The rise of fake video has many experts and media insiders profoundly concerned that trust in the media, already polling as low as 32 percent, could erode completely.
Humans Can’t See the (Single) Light
“Here’s the thing about looking at photographs and trying to determine authenticity: It’s really hard,” said Hany Farid, a professor of computer science at Dartmouth College who has worked extensively on digital forensic techniques.
While people are generally good at recognizing faces or egregious Photoshops, overall our visual systems didn’t evolve to look at flat images. As a result, we’re “remarkably inept” at detecting relatively simple geometric inconsistencies in shadows, reflections, and image distortion, according to a 2010 study co-authored by Farid that tested human perception. In one experiment, 20 participants were given 140 different image renderings showing cones and their shadows. They were asked to determine if the shadows in a given image were “consistent with a single light.” Only when the cone’s shadows pointed in the opposite direction were responses consistently correct.
There are some relatively simple tricks for determining the geometric accuracy of images. Light travels in a straight line, for example, so a line drawn from a point on a shadow through the corresponding point on the object that cast it can be traced back toward the light source—and in a genuine photo, every such line should converge on the same spot. But Farid cautions against “armchair forensics.” Determining image authenticity is an extremely difficult process, he notes, akin to DNA fingerprinting.
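To make that rule concrete, here is a minimal sketch of the check (my own simplified illustration, not the algorithm from Farid’s published work): mark pairs of points, one on a shadow and one on the part of the object that cast it, then test whether the lines through those pairs converge on a single spot. The function name and sample coordinates below are hypothetical.

```python
import numpy as np

def light_source_residuals(pairs):
    """Given (shadow_point, object_point) pairs in pixel coordinates, estimate the
    single point all shadow lines should pass through and report how far off each line is."""
    A, b = [], []
    for shadow, obj in pairs:
        shadow, obj = np.asarray(shadow, float), np.asarray(obj, float)
        direction = obj - shadow
        normal = np.array([-direction[1], direction[0]])  # perpendicular to the shadow line
        normal /= np.linalg.norm(normal)
        A.append(normal)
        b.append(normal @ shadow)                         # line equation: normal . x = normal . shadow
    A, b = np.array(A), np.array(b)
    source, *_ = np.linalg.lstsq(A, b, rcond=None)        # best common intersection point
    return source, np.abs(A @ source - b)                 # distance of that point from each line, in pixels

# Hypothetical hand-picked points: tip of each shadow and tip of the object casting it.
source, errors = light_source_residuals([((150, 420), (188, 327)),
                                          ((600, 430), (570, 336)),
                                          ((350, 440), (358, 344))])
# If any distance is large relative to the image size, the shadows don't share one light source.
```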
Photo by Mark Makela/Getty Images.
Not only that, but an image doesn’t have to be Photoshopped to mislead. It can simply be taken out of context and attached to an event it has no relation to. One widely circulated news image shows Hillary Clinton innocuously stumbling while walking up a flight of stairs. That shot, taken by a Getty photographer, was used to falsely claim that the then-presidential candidate was in poor health.
Why There’s No App for That
Suspicious about a photo’s veracity? Rather than going DIY CSI, put the image to a common-sense sniff test: Was it posted by a reputable news outlet, for example, or by a site like the trickily named Abcnews.com.co?
Farid suggests running a reverse image search using Google or TinEye. Upload the Houston shark picture to TinEye, for instance, and the first link is to an article on the Verge about fake photos. Plug in a picture purporting to show planes underwater in Houston and you find it’s actually a digital mockup of what flooding could do to New York’s LaGuardia Airport (the skyline should have been a dead giveaway).
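Under the hood, a reverse image search is a matching problem: recycled fakes are usually near-duplicates of an older picture. As a rough local stand-in for that idea, here is a sketch using the Pillow and imagehash Python packages (the filenames are hypothetical); a small perceptual-hash distance means two files are almost certainly the same photo, even if one was resized or recompressed.

```python
from PIL import Image
import imagehash

def near_duplicate(path_a, path_b, max_distance=8):
    """True if the two images are perceptually the same picture,
    even after resizing, recompression, or small crops."""
    hash_a = imagehash.phash(Image.open(path_a))
    hash_b = imagehash.phash(Image.open(path_b))
    return (hash_a - hash_b) <= max_distance  # Hamming distance between 64-bit perceptual hashes

# Hypothetical files: a suspicious social-media image vs. a known archive photo.
print(near_duplicate("harvey_shark_tweet.jpg", "old_shark_photo_2011.jpg"))
```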
There is also Izitru, founded by Farid, which can tell you whether an image is a camera original; that is, whether it has been altered at all since it left the camera. (A failed test doesn’t always mean an image is fake, however; the file may simply have been converted to a different format.)
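Izitru’s tests aren’t public, but one weak clue anyone can check is a file’s metadata: camera originals usually carry EXIF data, and files re-saved by an editor often record the software that touched them. The sketch below, using the Pillow library, only illustrates that idea; metadata is trivial to strip or forge, so it is a hint, never proof.

```python
from PIL import Image

def editing_clues(path):
    """Collect weak, easily spoofed hints that an image was re-saved after capture."""
    exif = Image.open(path).getexif()
    clues = {}
    if not exif:
        clues["no_exif"] = True       # camera originals usually carry metadata
    software = exif.get(305)          # EXIF/TIFF tag 305 = "Software"
    if software:
        clues["saved_by"] = software  # an editor's name here suggests the file was re-saved
    return clues

# Hypothetical file name:
print(editing_clues("suspicious_photo.jpg"))
```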
So humans are lousy at detecting fakes. But surely computers, which created the doctored images to begin with, should be able to detect their own handiwork? They can, says Farid, but not in a way that is scalable, easy, or perfect. Such analysis, which interrogates both image data (in order to reveal modifications) and image geometry, usually requires the input of a human analyst who can assist the program in, say, specifying what is a shadow and what is a person.
To create a simple app that can easily detect fake images would require removing this human from the loop, Farid said. That would mean solving longstanding problems related to computer vision—essentially automating human visual systems. That isn’t easy. On top of that, image forensic techniques are scientific processes, dealing with physics, geometry, optics, and compression. The conclusion isn’t a simple thumbs up or thumbs down, real or fake. These results “require interpretation” from someone with expertise in digital photography, said Farid.
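For a flavor of the compression side of that analysis (and only a flavor; this is not how Farid’s tools work, and the output is easy to over-interpret), a crude technique called error level analysis re-saves a JPEG at a known quality and looks at what changes. Regions pasted in from another file sometimes re-compress differently from their surroundings. A minimal sketch, assuming Pillow and a hypothetical input file:

```python
from PIL import Image, ImageChops

def error_level_analysis(path, quality=90, tmp="_resaved.jpg"):
    """Re-save a JPEG and return the amplified per-pixel difference image.
    Regions that compress very differently from the rest may have been edited."""
    original = Image.open(path).convert("RGB")
    original.save(tmp, "JPEG", quality=quality)
    diff = ImageChops.difference(original, Image.open(tmp))
    max_diff = max(high for _, high in diff.getextrema()) or 1
    return diff.point(lambda value: value * (255.0 / max_diff))  # stretch faint differences for visibility

# Hypothetical file name; inspect the result by eye.
error_level_analysis("flooded_highway_shark.jpg").show()
```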
On top of that, Adobe—the creator of Photoshop—is a multibillion-dollar company with a financial incentive to create increasingly easy-to-use image modification tools that can achieve increasingly complex effects. Meanwhile, those working on the other end of things, creating applications and algorithms that detect fakes, hail mainly from academia. This imbalance exacerbates the already asymmetric nature of fake image detection.
“When you’re creating the fake, you only have to get it right once and it gets through,” said Farid. “I have to get it right every single time.”
Not Just Sharks Anymore
But what happens when fake video and audio become as pervasive as fake images—when there’s a Photoshop for moving images and sound? These “incipient technologies appear likely to soon obliterate the line between real and fake,” wrote Nick Bilton in Vanity Fair. “Advancements in audio and video technology are becoming so sophisticated that they will be able to replicate real news—real TV broadcasts, for instance, or radio interviews—in unprecedented, and truly indecipherable, ways.”
Essentially, the technology uses audio and video capture to let users make a world leader, or anyone else, believably appear to say anything. In a recent episode exploring the issue, Radiolab took the tech for a test spin. While it is far from perfect (simulating conversations remains particularly challenging), it is impressive.
Working with researchers from the University of Southern California, Radiolab reporter Simon Adler created a fake video in which President Obama appears to speak about golfing (in reality, it’s Adler doing the talking). Conservatives have long criticized the former president for golfing instead of doing his job; one can imagine how such a fake video would serve to affirm that viewpoint, especially for an audience that already wants to believe it (though belief in fake news is certainly a bipartisan problem).
“That’s going to raise the whole issue of ‘fake news’ to an incredibly complex and terrifying new level,” said Farid. “This is not just fake shark images anymore.”
An image from a study by Eric Kee, James O'Brien, and Hany Farid called "Exposing Photo Manipulation with Inconsistent Shadows." Their algorithm reveals that the shadows and light source in this particular photo are consistent. Original image by NASA. Courtesy of Hany Farid.
This image shows a shadow analysis of the viral video “Golden Eagle Snatches Kid” (45 million views). The analysis reveals that the shadows for the eagle and baby (magenta) are inconsistent with the shadows (blue) in the rest of the scene. Courtesy of Hany Farid.
However, there is some heartening news. In his article, Bilton points to Rogue One: A Star Wars Story (2016), a film which featured a computer-generated version of the actor Peter Cushing, who died in 1994. “Go and watch the latest Star Wars to see if you can tell which actors are real and which are computer-generated. I bet you can’t tell the difference,” writes Bilton.
The resemblance is uncanny, but to my eyes at least, the computer-generated actor still stood out like a sore thumb. And a 2017 study co-authored by Farid found that when people are trained and have incentive (for example, they are told their determination has consequences in the real world), they can “reliably distinguish between computer-generated and photographic portraits.”
Still, as is noted on Radiolab, people don’t even need to believe that a given video is real for it to undermine trust; excess skepticism can do that on its own. People only need to believe that an audio clip or a piece of footage could be faked—that the technology out there is good enough to fool them—to let their biases write off image-based evidence that conflicts with their views. Fact-checking doesn’t stop politicians from spouting lies, but in a world where anything can be faked, it will be easier than ever to choose one’s own version of the truth.
