Savsunenko doesn’t think that such warnings will be necessary. He presented a different view: that the proliferation of AI-powered image-generating software might be “good” because everyone will have access to it, instead of a select few. “Imagine a world where you can make fake images and fake videos, instantly on demand, with zero technical skills,” Savsunenko mused over the phone. “The credibility of any image will go down, right? So you will doubt any image that you see on the internet—and that is why it will be harder for professionals to manipulate public opinion, because there will be less credibility to every image.” Proof and credibility will have to be presented in other ways, he believes—“so there’s no definite proof that it’s going to be bad.”
That seems like a very positive—if not idealistic—view: a world in which everyone will do their due diligence; in which people respond to the threat of fake images with more skepticism and deeper critical analysis.
It’s a big ask right now, when, in the U.S., we’ve seen a decline in critical reasoning about what counts as a responsible news source. For many, on both sides of the aisle, blogs and Twitter accounts have been accepted as viable media outlets, or valued more than publications with credible fact-checking teams and rigorous journalistic standards. When we must eventually judge all images based on where they came from, instead of their contents, what will happen?
AI-generated images are coming at a time when we may not be ready for them, and when we have few defenses in place. But we still have time to reframe how we view images before the next AI breakthrough hits. Whether or not we will rise to the task remains to be seen.