AI Will Have the Biggest Impact on Photography since the Digital Camera
Sometime in the not-too-distant future, anyone will be able to take a picture without a camera. Instead, we will be able to generate photographs, indistinguishable from those made by a camera, using artificial intelligence (AI) software. You will be able to create an image by simply typing out a description of the scene, or describing it to (presumably) Siri. “Siri,” you’ll say. “I’d like an image of a red-haired woman walking through a park in autumn, the breeze blowing red, orange, and yellow leaves around her.” And—though it may require more detail than that—presto! Your phone will provide various options on the screen to choose from.
That is the future that Alex Savsunenko, head of Skylum AI Lab, described at a recent panel with Richard Carriere, senior vice president of CyberLink, and Greg Scoblete, technology editor of PDN, about the role AI will play within imaging. Scoblete began the talk by declaring that AI is the most important advancement in imaging technology since the digital camera. It is a bold statement, considering that digital imaging catalyzed the vast number of images we produce and entirely shifted our relationship to imagery. (Editor’s note: Scoblete is a former colleague of the writer.)
But we are entering another shift: already, we see the implications of AI-powered fake videos (known as “deepfakes”), facial recognition, and algorithm-generated art. This fall, MIT released Deep Angel, a tool that, like the memory-doctoring technology in Eternal Sunshine of the Spotless Mind, can erase anything or anyone from your photos. Researchers are already using powerful computers to generate fake images from scratch, as a team at NVIDIA did last year when it created a library of portraits of celebrities who do not actually exist. Eventually, when our devices catch up in storage and processing power, we, too, will be able to do the same, forever changing what it means for a photograph to serve as proof.
A brief history of AI
Backing up a bit, this isn’t the first time that AI technology has caused a flurry of media coverage. We have become starry-eyed over AI’s capabilities in cyclical fashion, starting with its inception at a 1956 conference at Dartmouth, where the term “artificial intelligence” was coined by computer scientist John McCarthy. The basic idea hasn’t changed much since then: to be able to teach software to make decisions on its own, and to eventually reach such lofty goals as replicating the human brain. In 1970, cognitive scientist Marvin Minsky told Life magazine that “[in] three to eight years we will have a machine with the general intelligence of an average human being.”
Obviously, that didn’t happen. In the 1980s, however, there were significant new advancements, including the proliferation of expert systems (computers that could reason through problems within a narrow domain) and the birth of deep learning, in which a computer improves at a task through repeated training. Again, the hype built up and died down.
So what causes these cyclical waves of progress? Generally, it comes down to computational power: researchers hit the limits of what AI can be applied to, and then wait until processing speeds and storage catch up.
More recently, since 2012, there has been a boom in deep-learning applications, thanks largely to computer scientist Alex Krizhevsky, who used neural networks (layered systems loosely modeled on the human brain) as the basis for an algorithm that could perform object detection and image classification at an enormous scale. Neural nets have since led to rapid advances in facial-recognition and image-editing software. When successfully trained, they can recognize and classify subjects much faster than humans can, especially when it comes to trickier classifications like dog breeds or age.
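The core idea of such a network can be seen in miniature: pixel values pass through layers of weighted sums and nonlinearities, and the output scores pick a class. Below is a toy, hand-weighted sketch (not Krizhevsky’s actual model, which learned millions of parameters from data) that classifies 4-pixel “images” as dark or bright.

```python
import numpy as np

def relu(x):
    # Nonlinearity: pass positive values, zero out negatives.
    return np.maximum(0, x)

def softmax(x):
    # Turn raw class scores into probabilities.
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy "images": 4-pixel grayscale patches, flattened to vectors.
dark = np.array([0.1, 0.2, 0.1, 0.0])
bright = np.array([0.9, 0.8, 1.0, 0.9])

# Hand-set weights stand in for what training would normally learn.
W1 = np.array([[ 1.0,  1.0,  1.0,  1.0],
               [-1.0, -1.0, -1.0, -1.0]])
b1 = np.array([0.0, 2.0])
W2 = np.array([[-1.0,  1.0],
               [ 1.0, -1.0]])
b2 = np.zeros(2)

def classify(image):
    hidden = relu(W1 @ image + b1)   # layer 1: weighted sum + nonlinearity
    scores = W2 @ hidden + b2        # layer 2: class scores
    return int(np.argmax(softmax(scores)))  # 0 = dark, 1 = bright

print(classify(dark))    # → 0
print(classify(bright))  # → 1
```

Real networks differ only in scale and in how the weights are found: instead of being set by hand, they are adjusted over millions of labeled images until the classifications come out right.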
While all deep-learning neural nets share the same underlying framework, Carriere explained that their applications vary widely, from restoring resolution in your images to identifying citizens for China’s new social-credit system.
How AI might affect photographers
In photography and video, AI tools are being incorporated into user-friendly software. CyberLink’s PhotoDirector, for example, can apply artistic styles to your images and videos.
When the technology advances further, AI tools may begin to replace the technical skills photography requires. Google has released an AI-powered camera, Google Clips, that judges when the composition or lighting is aesthetically pleasing. And though we are probably a long way from a fully automated photographer (reviews suggest Google Clips leaves much to be desired), the seed has been planted. As image-generation technology hurtles forward, we can also imagine agencies generating their own images instead of hiring an advertising photographer. Could that become a reality?
Scoblete raised this question in the panel, but neither Savsunenko nor Carriere seemed to think AI posed a serious threat to photographers. They believe that AI will be used to enhance, not replace, traditional imaging. Savsunenko said it will be some time before AI images match the quality of in-camera images, though he said a photographer’s role could eventually shift, with less emphasis on technical prowess. “It’ll be a requirement to make an idea, to be creative,” he offered.
Savsunenko also believes that a computer will never truly understand the human experience, and thus, “it will never create a masterpiece.”
What AI images will mean for us
As humans, we add our own interpretations, and our biases, to images, without necessarily understanding the context. For nearly as long as images have been made, they have also been manipulated, dating as far back as 1860, when studio photographer Mathew Brady “artificially enlarged” then-presidential candidate Abraham Lincoln’s collar to assuage criticism of the Republican’s long neck. Photoshop was not the first tool used to doctor photos, though it has made doctoring easier for those with the technical skill. AI will lower that technical bar entirely, but it will also give us the ability to better detect the very manipulated images it is helping to create.
Adobe is straddling both sides of the AI fence. On one hand, it is incorporating AI tools into Creative Cloud (which includes Photoshop and Lightroom) that speed up and improve image-editing and organization; on the other, it’s researching ways to easily detect manipulated images, as part of the broader Media Forensics program by U.S. government agency DARPA. Vlad Morariu, a senior research scientist at Adobe, has been part of the program since 2016. On Adobe’s blog, he outlined tools that can be used to defend against image manipulation (such as metadata, forensic tools, and watermarks), but his research involves finding quicker and easier ways to prove an image’s authenticity, allowing someone to determine in “seconds” whether an image has been doctored.
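Of the defenses Morariu lists, metadata is the simplest to check. The sketch below uses the Pillow library to read an image’s Exif “Software” field, a weak but common trace left by editing programs; the filename and software string are illustrative, and metadata is easy to strip or forge, which is why forensic research also examines the pixels themselves.

```python
from PIL import Image

SOFTWARE_TAG = 0x0131  # standard Exif tag for the software that saved the file

def editing_traces(path):
    """Return the Exif 'Software' field if present, else None.

    This is only a first-pass signal for a fact-checker, not proof
    of manipulation: metadata can be removed or faked.
    """
    with Image.open(path) as img:
        exif = img.getexif()
        return exif.get(SOFTWARE_TAG)

# Demo: build a tiny image and stamp it as if an editor had saved it.
img = Image.new("RGB", (8, 8), "gray")
exif = Image.Exif()
exif[SOFTWARE_TAG] = "Adobe Photoshop CC 2018"  # hypothetical value
img.save("edited.jpg", exif=exif)

print(editing_traces("edited.jpg"))
```

A camera-original file would typically carry the camera firmware’s name in this field, or nothing at all, which is why a mismatch can prompt a closer forensic look.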
Those tools will be helpful for anyone who cares to know whether an image is authentic, but it is unlikely that everyone will opt to use them. A doctored image of Parkland survivor Emma González ripping up the Constitution spread like wildfire in March, despite how easy it was to find the source, a Teen Vogue video that, in reality, showed her ripping a shooting-range target in half. Real photos can be used to mislead, as well; in these past few weeks, misattributed images have been shared on social media in an attempt to paint a caravan of asylum-seekers approaching the U.S.–Mexico border as violent and dangerous. Perhaps social media sites could eventually warn users that the image they are looking at may be fake, as Facebook now does with news, but detecting fakes at scale will be a much larger challenge than creating, posting, and spreading them.
Savsunenko doesn’t think that such warnings will be necessary. He presented a different view: that the proliferation of AI-powered image-generating software might be “good” because of the fact that everyone will have access to it, instead of a select few. “Imagine a world where you can make fake images and fake videos, instantly on demand, with zero technical skills,” Savsunenko mused over the phone. “The credibility of any image will go down, right? So you will doubt any image that you see on the internet—and that is why it will be harder for professionals to manipulate public opinion, because there will be less credibility to every image.” Proof and credibility will have to be presented in other ways, he believes—“so there’s no definite proof that it’s going to be bad.”
That seems like a very positive, even idealistic, view: a world in which everyone does their due diligence; in which people respond to the threat of fake images with more skepticism and deeper critical analysis.
It’s a big ask right now, when, in the U.S., we have seen critical reasoning decline when it comes to judging what counts as a responsible news source. For many, on both sides of the aisle, blogs and Twitter accounts are accepted as viable media outlets, or are valued above publications with credible fact-checking teams and rigorous journalistic standards. When we must eventually judge all images by where they came from, rather than by their contents, what will happen?
AI-generated images are coming at a time when we may not be ready for them, and when we have few defenses in place. But we still have time to reframe how we view images before the next AI breakthrough hits. Whether or not we will rise to the task remains to be seen.
Jacqui Palumbo is Artsy’s Visual Culture Editor.