
It’s Getting Hard to Tell If a Painting Was Made by a Computer or a Human

Rene Chun
Sep 21, 2017 2:58PM

Example of images generated by CAN, included in “CAN: Creative Adversarial Networks Generating ‘Art’ by Learning About Styles and Deviating from Style Norms.” Courtesy of Ahmed Elgammal.

Cultural pundits can close the book on 2017: The biggest artistic achievement of the year has already taken place. It didn’t happen in a paint-splattered studio on the outskirts of Beijing, Singapore, or Berlin. It didn’t happen at the Venice Biennale. It happened in New Brunswick, New Jersey, just off Exit 9 on the Turnpike.

That’s the home of the main campus of Rutgers University, all four square miles and 640 buildings of it, including the school’s Art and Artificial Intelligence Lab (AAIL). Nobody would mistake this place for an incubator of fine art. It looks like a bootstrap startup, all cubicles and gray carpet, with lots of cheap Dell monitors and cork boards filled with tech gibberish.

It was there, on February 14th of this year, that Professor Ahmed Elgammal ran a new art-generating algorithm and watched as it spat out a series of startling images that took his breath away. Two weeks later, Elgammal conducted a special Turing test to see how his digital art stacked up against dozens of museum-grade canvases.

In a randomized, controlled, double-blind study, subjects were unable to distinguish the computer art from two sample sets of acclaimed work created by flesh-and-blood artists (one culled from the canon of Abstract Expressionist paintings, the other from works shown at the 2016 edition of Art Basel in Hong Kong). In fact, the computer-made pictures were often rated by subjects as more “novel” and “aesthetically appealing” than the human-generated art. The ensuing peer-reviewed paper sparked an unsettling art world rumor: Watson had learned how to paint like Picasso.

Programming a computer to make unique and appealing art that people would hang on their walls is the culmination of an impressive body of work that stretches back to 2012, when the Rutgers Department of Computer Science launched the AAIL. The lab’s mission statement is simple: “We are focused on developing artificial intelligence and computer vision algorithms in the domain of art.” Over the years, the lab has developed several innovative algorithms that have piqued the interest of everyone from curators and historians to authenticators and auction houses. One algorithm, which incorporates the elements of novelty and influence, is used to measure artistic creativity. Another analyzes paintings and classifies them according to artist, period, and genre, similar to a Shazam for art. There’s even a forensics algorithm in the AAIL pipeline that identifies the subtle but distinct variations in the brushstrokes of different artists. In a business where forgeries are increasingly difficult to spot, that’s the kind of digital litmus test that insurance carriers, collectors, and galleries will beat a path to your lab door for.

The next step was obvious: a program that didn’t copy old art, but actually created new compositions. Elgammal “trained” his algorithm by feeding it over 80,000 digitized images of Western paintings culled from a timeline that stretched from the 15th to the 20th century. Using this immense corpus as the programming source material, he went about the task of creating a variation of the artificial intelligence system known as Generative Adversarial Networks. These so-called “GANs” are great at generating images of handbags and shoes, but not so great at generating original visual art. So Elgammal came up with his own proprietary image-generating system: Creative Adversarial Networks (CANs).

Reduced to the most elementary definition, a GAN is emulative and a CAN, as its name suggests, is creative. “The images generated by CAN do not look like traditional art, in terms of standard genres,” Elgammal wrote in his June 2017 paper, “CAN: Creative Adversarial Networks Generating ‘Art’ by Learning About Styles and Deviating from Style Norms.” “We also do not see any recognizable figures. Many of the images seem abstract. Is that simply because it fails to emulate the art distribution, or is it because it tries to generate novel images? Is it at all creative?”
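For the technically curious, the difference can be made concrete in a few lines of code. Below is a minimal, hypothetical sketch, in PyTorch-style Python, of what a CAN-like generator objective might look like: it keeps the standard GAN goal of being judged “real art” while adding a second term that rewards confusing the discriminator’s style classifier. The function name, tensor shapes, and weighting are assumptions made for illustration, not code from the Rutgers lab.

```python
# Hypothetical sketch of a CAN-style generator objective (not the authors' code).
import torch
import torch.nn.functional as F

def can_generator_loss(real_fake_logits, style_logits, ambiguity_weight=1.0):
    """real_fake_logits: discriminator's "is this art?" logits for generated images, shape (N, 1).
    style_logits: the discriminator's style-class logits for the same images, shape (N, K),
    where K is the number of art-style classes the model was trained to recognize."""
    # 1) Standard GAN term: the generator wants its images judged as real art.
    adversarial = F.binary_cross_entropy_with_logits(
        real_fake_logits, torch.ones_like(real_fake_logits))
    # 2) "Style ambiguity" term: push the style classifier toward a uniform
    #    distribution over styles -- i.e., belong to no established style norm.
    log_probs = F.log_softmax(style_logits, dim=1)
    ambiguity = -log_probs.mean(dim=1).mean()  # cross-entropy against uniform targets
    return adversarial + ambiguity_weight * ambiguity
```

A plain GAN would keep only the first term, which is why it excels at reproducing handbags and shoes but struggles to produce anything that reads as genuinely new.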

When asked a similar question several months later, Elgammal no longer harbors any doubt. “The machine developed an aesthetic sense,” he says bluntly. “It learned how to paint.”



Like most splashy technological breakthroughs, the Rutgers art algorithm was actually born of thousands of hours of tedious lab work. During the three weeks leading up to this pivotal moment, Elgammal and his two assistants made numerous tweaks to their finely calibrated algorithm, trying to coax the stubborn binary code into creating art that looked more human. Despite all the hard work, the 45-year-old AAIL director was initially frustrated. The pictures were neither good nor bad; they occupied the dreaded midpoint on the creativity bell curve.

The team got over this suboptimal hump by introducing more “stylistic ambiguity” and “deviations from style norms” into the algorithm, Elgammal explains. It’s a delicate balancing act. Stray too far from established painting styles and the resulting images will strike viewers as bizarre. Conversely, hew too closely to the traditional art canon and the computer will churn out lackluster pictures that are derivative and familiar, the computer equivalent of paint-by-numbers.
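In terms of the hypothetical loss sketched above, that balancing act comes down to how heavily the ambiguity term is weighted. A simple way to picture it (an illustrative formulation, not the paper’s exact equation) is

$$\mathcal{L}_{G} \;=\; \mathcal{L}_{\text{adv}} \;+\; \lambda\,\mathcal{L}_{\text{ambiguity}},$$

where cranking λ too high yields images viewers read as bizarre, and dialing it too low collapses the output back toward derivative, paint-by-numbers pastiche.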

After writing some more style patches, Elgammal ran the algorithm one more time. “I was expecting to see images that were fuzzy and not clear, weird faces and weird objects,” he says. Surprisingly, though, that didn’t happen. Instead, the AAIL team absolutely nailed the formulation. “The compositions and colors were very nice,” says Elgammal, relishing the memory of that eureka moment. “We said, ‘Wow! If this was in a museum, you would love it.’”

Pressed for a reason why the algorithm generated abstract art instead of, say, portraits and still lifes, Elgammal stresses the evolutionary nature of the Creative Adversarial Network. “It makes sense,” he says matter-of-factly. “If you feed the machine art history from the Renaissance to the present and ask it to generate something that fits into a style, the natural progression would be something along abstract aesthetic lines.” The A.I. art guru is on a roll: “Since the algorithm works by trying to deviate from style norms, it seems that it found the answer in more and more abstraction. That’s quite interesting, because that tells us that the algorithm successfully catches the progression in art history and chose to generate more abstract works as the solution. So abstraction is the natural progress in art history.”

In other words, the algorithm did exactly what many human artists would do given the same circumstances: It produced the kind of arresting images that would have a shot at catching the jaded eye of a Larry Gagosian or Charles Saatchi. Turning out art that smacked of Dutch Masters wouldn’t tickle the brain’s neural network enough, resulting in “habituation,” or a decreased arousal in response to repetitions of a stimulus. Simply put, art collectors are a bit like drug addicts: The visual stimuli the artwork projects must have enough “arousal potential” to trigger what psychologists refer to as the “hedonic response.”

The theories of psychologist Colin Martindale in this regard figure prominently in the DNA of Elgammal’s art algorithm. In his most popular text, The Clockwork Muse: The Predictability of Artistic Change (1990), Martindale suggests that successful artists incorporate novelty in their work. He hypothesized that this increase in arousal potential counteracts the viewer’s inevitable habituation response. The bump in creative novelty, however, must be kept small to avoid provoking a negative reaction in the viewer. Martindale also believed that artists embraced “style breaks” (a succession of distinct artistic phases) as a tool to make their work less predictable and more attractive to viewers over an extended period of time. It’s exactly the kind of thing an art algorithm could be based upon. “Among theories that try to explain progress in art,” Elgammal notes in his paper, “we find Martindale’s theory to be computationally feasible.”
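Martindale’s argument is often pictured as the Wundt curve, an inverted U relating a work’s arousal potential to the pleasure it produces. As a rough illustration (my simplification, not Martindale’s own formula), hedonic value h can be treated as a quadratic in arousal potential A,

$$h(A) \;\approx\; aA - bA^{2}, \qquad a, b > 0,$$

which rises up to a peak at A = a/(2b) and falls off beyond it. The algorithm’s job, in effect, is to land near that peak: novel enough to counteract habituation, not so novel that it tips into the off-putting.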

Desmond Paul Henry, 1962. Courtesy of the D.P. Henry Archive.

The history of computer art, which constitutes the bedrock of the burgeoning academic field known on college campuses as the “digital humanities,” dates back to Desmond Paul Henry’s “Drawing Machines” of the early 1960s. The design of these contraptions was based on salvaged bombsight computers used by pilots during World War II to deliver munitions with pinpoint accuracy. The images Henry’s machines generated were abstract, curvilinear, and decidedly complex.

The computer art movement of the 1960s spawned more machine-made pictures, ranging from Alfons Schilling’s low-tech “spin art” (think of dripping paint on canvas attached to a giant potter’s wheel, which long predated Damien Hirst’s cynical “spin paintings”) to the early digital designs and animation produced at the legendary Bell Telephone Labs in Murray Hill, New Jersey. Founded in 1966 by Bell engineers Billy Klüver and Fred Waldhauer and artists Robert Rauschenberg and Robert Whitman, Experiments in Art and Technology (E.A.T.) was the seminal Bell Labs project upon which all of today’s computer-generated art is founded. The creative process was extremely arduous. The art programs and data, for instance, had to be rendered via old-school keypunch. Those punch cards were fed into a room-sized computer, one by one. The resulting still images then had to be manually transferred to a visual output medium, such as a pen or microfilm plotter, a line printer, or an alphanumeric printout.

As new computer technology was introduced, new machine-made art quickly followed: dot-matrix printer art (1970s), video game art (2000s), 3D-printed art (2010s). What makes Elgammal’s computer images unique is that this marks the first time that A.I. has completely expunged humans from the real-time creative loop.

Unlike DeepDream, Google’s much-hyped 2015 bot-art project, the Rutgers AAIL machine requires no human intervention. Elgammal just turns on the computer and the algorithm does its thing. In stark contrast, DeepDream requires the human touch; Google programmers start with an image and apply texture (a.k.a. “style”) to it. This means that the DeepDream composition is actually dictated by the input image or photo selected by a human.

Having an autonomous art algorithm seems to make all the difference. “The scores indicate that the subjects regard these paintings not just as art but as something appealing,” says Professor Elgammal. The scores that he’s referring to were tabulated after human subjects were asked to rate how intentional, communicative, visually structured, and inspiring the paintings were. The data revealed that subjects “rated the images generated by [the computer] higher than those created by real artists, whether in the Abstract Expressionism set or in the Art Basel set.”

The numbers went far beyond mere statistical significance. When asked to guess the authorship of actual artworks shown at Art Basel in Hong Kong in 2016, 59 percent of respondents incorrectly guessed that they were made by machines. In another portion of the survey, 75 percent of respondents assumed that paintings made by the algorithm were actually generated by humans. The computer-generated paintings squared off against comparable works by artists like Leonardo Drew, Andy Warhol, Heimo Zobernig, and Ma Kelu. Most of the contemporary artists whose work was used in the AAIL experiment declined to comment on Elgammal’s research paper, with one exception: Panos Tsagaris.

Panos Tsagaris, Untitled, 2016. Galerie Kornfeld.

Panos Tsagaris, Untitled, 2015. Courtesy of the artist and Kalfayan Galleries Athens-Thessaloniki.

An untitled 2015 work by the Greek artist, a mixed-media canvas tinged with gold leaf, was shown by Kalfayan Galleries at Art Basel 2016 in Hong Kong, and was included as a sample image in the AAIL tests. Tsagaris finds A.I. art “fascinating,” and considers the algorithm more of a peer than a disruptive threat. “I’m curious to see how this project will progress as the technology develops further,” he says. “What human-made paintings generated by a machine look like is one thing; bringing the A.I. artist to the level where it can create a concept, a series of emotions upon which it will base the painting that it will create, is a whole other level.” He sounds more like Philip K. Dick than Clement Greenberg: “I want to see art that was generated in the mind and heart of the A.I. artist.”

Art historian and critic James Elkins is less sanguine. “This is annoying because [algorithms] are made by people who think that styles are what matter in art as opposed to social contexts, meaning, and expressive purpose,” he says. “One consequence of that narrow sense of what’s interesting is that it implies that a painting’s style is sufficient to make it a masterpiece.” Elkins doesn’t believe artists will go the way of cobblers and cabbies anytime soon either. “If human artists were to stop making art,” he argues, “so would the computers.”

Michael Connor, the artistic director of Rhizome, a non-profit that provides a platform for digital art, agrees. He describes the gap between silicon- and carbon-based artists as wide and deep: “Making art is not the sole role of being an artist. It’s also about creating a body of work, teaching, activism, using social media, building a brand.” He suggests that the picture Elgammal's algorithm generates is art in the same way that what a Monet forger paints is art: “This kind of algorithm art is like a counterfeit. It’s a weird copy of the human culture that the machine is learning about.” He adds that this isn’t necessarily a bad thing: “Like the Roman statues, which are copies of the original Greek figures, even copies can develop an intrinsic value over time.”

Elgammal is quick to point out that the learning curve of his algorithm perfectly conforms to the maturation process of the human artist. “In the beginning of their careers artists like Picasso and Cézanne imitated or followed the style of painters they were exposed to, either consciously or unconsciously. Then, at some point, they broke out of this phase of imitations and explored new things and new ideas,” he says. “They went from traditional portraits to Cubism and Fauvism. This is exactly what we tried to implement into the machine-learning algorithm.”  

And, just like a real emerging artist, the algorithm is about to have its first one-machine show. “Unhuman: Art in the Age of A.I.,” an exhibition in Los Angeles this October, will feature 12 of the original A.I.-produced pieces used in the Rutgers study. And after this debut, Elgammal’s algorithm has plenty of room for career growth. That’s because the coders in the Rutgers lab haven’t yet exploited all the “collative variables” that can be used to jack up the “arousal potential” of the images the algorithm generates. The higher the arousal potential (up to a point), the more pleasing the A.I. art is to humans (and the more likely they are to buy it, presumably).

Despite all the A.I. art naysayers, here’s the thing that should make painters and the dealers who represent them nervous: Elgammal claims that the images his computer code generates will only get better over time. “By digging deep into art history, we will be able to write code that pushes the algorithm to explore new elements of art,” he says confidently. “We will refine the formulations and emphasize the most important arousal-raising properties for aesthetics: novelty, surprisingness, complexity, and puzzlingness.”

Surprisingness and puzzlingness—not exactly Artforum buzzwords. But allow the algorithm time to improve and compile a body of work, and they might be. Elgammal insists this technology is no one-hit wonder. He envisions an entire infrastructure developing to support his arousal-inducing digital art: galleries, agents, online auctions, even authenticators (a service that will undoubtedly be rendered by yet another AAIL algorithm).

Heimo Zobernig, Untitled (HZ 2015-080), 2015. Galería Juana de Aizpuru.
Ma Kelu, Nature-Abstract No.1, 1984. Boers-Li Gallery.

But before selling all your Warhols and investing heavily in an algorithm-generated art portfolio, consider this history lesson. In 1964, A. Michael Noll, an engineer and early computer pioneer at Bell Labs, ran his own art Turing test. He programmed an IBM computer and a General Dynamics microfilm plotter to generate an algorithmic riff on the Piet Mondrian masterpiece Composition with Lines (1917). The digital image was displayed on a cathode-ray tube and photographed with a 35mm camera. A copy of that print, which Noll cheekily titled Computer Composition with Lines, was shown to 100 subjects next to a reproduction of the Mondrian painting. Only 28 percent of the subjects were able to correctly identify the IBM mimic. Even more stunning, 59 percent of the subjects preferred the computer image over the Mondrian original.

The following year, a collection of Noll’s digital art was exhibited at the Howard Wise Gallery in New York, marking the first time that computer-generated art was featured in an American art gallery. The New York Times gave the groundbreaking exhibition a rave review. According to Noll, though, the public response was “disappointing.” Not a single image from the show was sold.

That failed exhibition did nothing to diminish Noll’s optimism about the future of digital art. “The computer may be potentially as valuable a tool to the arts as it has already proven itself to be in the sciences,” he wrote in 1967. In the half-century since those words first appeared in print, that prophecy has yet to come true. But what should we make of a new algorithm that’s not so much a “tool” to assist artists, as it is a machine to replace them? It's the hoary cyberpunk plot unspooling in real life: Mad scientist invents a machine that becomes more human than humans.

Anyone who follows the contemporary art market will notice an additional wrinkle here. For a moment, the prevailing style—from art schools to the gallery circuit and the auction houses—was a breed of abstract painting that critics dubbed “Zombie Formalism” (aka Neo-Modernism, MFA Abstraction, and, more derisively, Crapstraction). Clinical, derivative, pretentious, and vertically formatted for convenient Instagram posting, this new genre, which is frequently digitized, filtered, and presented through a computer, is human art masquerading as algorithm art.

It’s the kind of exquisite irony that sparks conversations about creeping dystopia and the decline of culture: To regain their edge and pull higher scores on Professor Elgammal’s next Turing test, humans might have to start painting more like robots. If budding crapstractionists followed the lead of artificial intelligence—“deviating from the norm” and injecting a touch of “style ambiguity” into their work—their painting might actually improve.


Rene Chun