In the 1960s, revered British painter Harold Cohen swapped his brushes for computers, ultimately landing a spot as a visiting scholar at Stanford University’s Artificial Intelligence Laboratory in 1971. There, he invented AARON, one of the world’s first art-making software programs. Over the next four decades, AARON became progressively more autonomous in its ability to generate images. This quickly raised questions around authorship: Who is the artist, AARON or Cohen, and can artificial intelligence (AI) ever be creatively independent? Cohen, who died at age 87 in April, was skeptical that we would see entirely machine-created art in this century. But research into whether artificial intelligence can function creatively forges ahead.
On Wednesday, Google launched Magenta, a new initiative by the Google Brain team (the company’s machine intelligence arm) that will tackle whether machines can make music and art. The research group will produce open-source models and tools to allow individuals—artists, musicians, researchers, coders—to test the creative capacity of AI. While the project’s central goal is to create algorithms that can generate art and music, the group also hopes to assemble an active community that will push developments in AI creativity forward.
This project is the latest in a slew of recent AI initiatives enabled by software developers open-sourcing (or freely sharing with the developer community) their code. In November, Google released TensorFlow, the open-source machine-learning library on which Magenta is built. That same week, Microsoft open-sourced its own machine-learning toolkit, DMTK. In December, Tesla’s Elon Musk and Y Combinator’s Sam Altman unveiled OpenAI, a billion-dollar artificial intelligence nonprofit that open-sources AI research to ensure that projects will benefit (and not threaten) humanity. That same month, Facebook open-sourced the blueprints for Big Sur, the server it uses to train AI software. (In early 2015, Facebook had released a number of its AI tools on Torch, a machine-learning library also used by Google and Twitter.)
Thanks to these initiatives, AI research that was once esoteric and confined to labs is now developing at an exponential rate. “There’s been a big shift in terms of the accessibility of the research to people outside of research institutions,” said artist and programmer Gene Kogan, who works with emerging technologies in artistic contexts and who recently helped organize alt-AI, a conference exploring artistic research into AI. “There’s been a shortening of the pipeline between research and application for people like me. There’s a lot more dialogue between academics and artists and engineers than there used to be a couple of years ago; people can share code with each other instead of just ideas.”
Because of this, we’ve seen a sudden swell of artworks created using AI—and thanks to Google’s open-sourced Magenta project we can expect to see a lot more. Magenta was first announced at May’s Moogfest art, music, and tech conference in Durham, North Carolina. It was inspired by earlier initiatives from Google Brain like DeepDream, a tool that uses artificial neural networks—machine-learning models loosely inspired by the networks of neurons in the brain—to recognize patterns and “fill in the gaps” between existing images. (You may remember the morphing, psychedelic dogs and bizarre, hallucinogenic landscapes it produced.) “We believe this area is in its infancy, and expect to see fast progress here,” writes research scientist Douglas Eck. “But there remain a number of interesting questions: How can we make models like these truly generative? How can we better take advantage of user feedback?”
Taking a similar tack to DeepDream, Magenta’s first public release will reportedly allow musicians and researchers to upload MIDI music files into TensorFlow. The hope is that arming machines with musical knowledge will allow them to create their own songs. (On Wednesday, the group shared an early sample: a 90-second melody from Google software engineer Elliot Waite, in which a program, fed the four notes C, C, G, G—like “twin-kle, twin-kle”—returns a machine-generated melody.) Though the group is beginning with audio, it will be updating its GitHub repository regularly, so watch later releases for image and video tools. Within the primary goal of generating art and music, Eck points to major challenges, including capturing attention and surprise (“Art is dynamic!” he says) and creating long-term narratives.
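To give a flavor of what “continuing a seed melody” means, here is a deliberately toy sketch—not Magenta’s actual approach, which trains neural networks on MIDI data. This hypothetical example learns a first-order Markov chain (which note tends to follow which) from a tiny hand-written tune, then extends the four-note “twinkle, twinkle” seed:

```python
import random

# Toy "corpus": the opening of "Twinkle, Twinkle, Little Star".
# A real system would learn from large collections of MIDI files.
CORPUS = ["C", "C", "G", "G", "A", "A", "G",
          "F", "F", "E", "E", "D", "D", "C"]

def train(notes):
    """Build a transition table: for each note, which notes follow it."""
    table = {}
    for prev, nxt in zip(notes, notes[1:]):
        table.setdefault(prev, []).append(nxt)
    return table

def generate(table, seed, length, rng):
    """Extend the seed melody by sampling successors from the table."""
    melody = list(seed)
    for _ in range(length):
        # Fall back to any known note if the last one has no successors.
        options = table.get(melody[-1], list(table))
        melody.append(rng.choice(options))
    return melody

if __name__ == "__main__":
    table = train(CORPUS)
    print(generate(table, ["C", "C", "G", "G"], 8, random.Random(0)))
```

This captures only the crudest statistical structure of a melody; the open questions Eck raises—long-term narrative, surprise—are exactly what such a memoryless model cannot handle.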
“We don’t know what artists and musicians will do with these new tools, but we’re excited to find out,” Eck continues, pointing to the long history of creative, and unexpected, uses of technology in artistic practice. “Daguerre and later Eastman didn’t imagine what Annie Leibovitz or Richard Avedon would accomplish in photography. Surely Rickenbacker and Gibson didn’t have Jimi Hendrix or St. Vincent in mind.” If Magenta successfully fosters a community built on open source, we could live in a world where wunderkind AIs are opening solo exhibitions at MoMA or selling out shows at Madison Square Garden. But can machines tap into the cognitive and physiological creative aspects heretofore unique to humans? Time will tell.
Regardless, some artists are already using AIs in lieu of studio assistants. “Rather than offloading what we call creativity to a machine, I’m interested in how we can use it to augment our own creativity and be in dialogue with the software,” said Kogan. This means allowing AIs to step in for time-consuming tasks, like organizing data, and skipping straight to the good part—making art.