A writer explores what happens to art when our muses become mechanical, when inspiration is not divine but digital.
IN SEPTEMBER OF LAST YEAR, a startling headline appeared on the Guardian’s website: “A robot wrote this entire article. Are you scared yet, human?” The accompanying piece was written by GPT-3, or Generative Pre-trained Transformer 3, a language-generating program from San Francisco–based OpenAI, an artificial intelligence research company whose founders include Tesla billionaire Elon Musk and Berkeley Ph.D. John Schulman. “The mission for this op-ed is perfectly clear,” the robotic author explained to readers. “I am to convince as many human beings as possible not to be afraid of me. Stephen Hawking has warned that AI could ‘spell the end of the human race.’ I am here to convince you not to worry. Artificial intelligence will not destroy humans. Believe me.
“I taught myself everything I know just by reading the internet, and now I can write this column,” it continued. “My brain is boiling with ideas!” It ended loftily, quoting Gandhi: “‘A small body of determined spirits fired by an unquenchable faith in their mission can alter the course of history.’ So can I.”
Public reaction was mixed. Some expressed excitement at this advancement in technology. Others were wary. “Bet the Guardian’s myriad lifestyle columnists are pretty nervous right now,” one commenter wrote. I didn’t know about the Guardian’s columnists, but I sure was nervous. How could a machine have written this well, and how long before my obsolescence? Scared yet? Absolutely.
But many of those who had a greater understanding of natural language processing systems argued that my fear was unfounded. To say that GPT-3 wrote the op-ed was misleading, they insisted: An article like this required an enormous amount of human labor, both to build the program itself and to stitch together the outputs. “Attributing [the Guardian article] to AI is sort of like attributing the pyramids to the Pharaoh,” programmer-poet Allison Parrish ’03 told my cohost Leah Worthington and me in a recent episode of California magazine’s podcast, The Edge. “Pharaoh didn’t do that. The workers did.”
And indeed, a few days after the original op-ed, the Guardian acknowledged as much in a follow-up letter called “A human wrote this article. You shouldn’t be scared of GPT-3.” The author, Albert Fox Cahn, argued that while GPT-3 is “quite impressive … it is useless without human inputs and edits.”
Liam Porr wouldn’t argue with that. Porr was the Berkeley undergraduate in computer science who fed GPT-3 the prompts necessary to generate the Guardian piece. As he explained on our podcast, “All the content in the op-ed was taken from output of GPT-3, but not verbatim. It generated several outputs. And then the Guardian editors took the best outputs and spliced them together into this one large op-ed.” Still, according to Porr, the Guardian editors reported that the process was easier than, or at least comparable to, working with a human writer.
Regardless of who put in the elbow grease, AI has brought about a new frontier of language generation, as many of us are aware. As I write this article, phantom text appears ahead of my cursor like a spouse completing my sentences. Often it is right. Which, like the spouse, is annoying. This is now standard, everyday word processing. But in recent years, some have been taking it further, actively using language-generating AI to craft literature. In 2016, a team of Berkeley graduate students created a sonnet-generating algorithm called Pythonic Poet and won second place in Dartmouth’s PoetiX competition, in which poems are judged by how “human” they seem. At the time, poet and Cal grad Matthew Zapruder rejected the idea that a machine could write great poetry. He told California, “You can teach a computer to be a bad poet, doing things you expect to be done. But a good poet breaks the rules and makes comparisons you didn’t know could be made.”
But Parrish, an assistant arts professor at NYU who uses AI to craft verse, argues that computer-generated poetry is a new frontier in literature, allowing for serendipitous connections beyond anything human brains can create.
It was an intriguing idea, but what happens to art, I wondered, when our muses become mechanical, when inspiration is not divine, but digital?
PARRISH HAS BEEN PROGRAMMING roughly since she was in kindergarten. In the third grade, her father gave her The Hobbit, by J.R.R. Tolkien. Tolkien was a philologist who invented a series of Elvish languages, which were constructed with aesthetic pleasure in mind, rather than just function. Parrish’s love of language blossomed under Tolkien’s influence. “I think sort of inevitably, as my interest in computer programming deepened, and as my interest in language deepened, they kind of came together to form this Voltron of being interested in computer-generated poetry,” she said. (Voltron, incidentally, is an animated robotic superhero, “loved by good, feared by evil,” that gains strength by combining with other robots.)
Now she creates poetry using a database of public domain texts called Project Gutenberg and a machine learning model that pairs lines of poetry with similar phonetics. In using computational technology to create verse, the idea, Parrish said, “is to create an unexpected juxtaposition. We are limited when we’re thinking about writing in a purely intentional way. We’re limited in the kinds of ideas that we produce. So instead, we roll the dice. We create a system of rules. We follow that system of rules in order to create these unexpected juxtapositions of words, phrases, lines of poetry that do something that we would be incapable of doing on our own…. There’s untapped potential for things that might bring us joy in what we can do with our linguistic capacity.”
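The general shape of Parrish’s technique can be sketched in a few lines of code. Her actual system uses a machine learning model trained on Project Gutenberg; the toy Python sketch below stands in for that model with a crude, hand-rolled “phonetic key” (the corpus lines and the key function are invented for illustration), pairing lines whose endings sound alike:

```python
import random

def phonetic_key(line):
    """Reduce a line to a rough sound signature: its last three letters,
    with vowels collapsed so 'light'/'night' and 'green'/'seen' match."""
    letters = [c.lower() for c in line if c.isalpha()]
    tail = letters[-3:]
    return "".join("*" if c in "aeiou" else c for c in tail)

def pair_lines(lines, rng=random):
    """Group lines by phonetic key, then emit shuffled pairs of
    sound-alike lines -- the 'roll of the dice' Parrish describes."""
    buckets = {}
    for line in lines:
        buckets.setdefault(phonetic_key(line), []).append(line)
    pairs = []
    for group in buckets.values():
        rng.shuffle(group)
        while len(group) >= 2:
            pairs.append((group.pop(), group.pop()))
    return pairs

corpus = [
    "The shadows of the evening light",
    "A lantern burning through the night",
    "The meadow where the grass is green",
    "A river only stars have seen",
]
for a, b in pair_lines(corpus, random.Random(0)):
    print(a, "/", b)
```

The rule system is arbitrary by design: the point is not the particular key but that following any fixed procedure surfaces juxtapositions the writer would not have chosen deliberately.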
The way AI writes, by finding patterns and connections between texts, is not that different from how we do it, but computers are quicker and can draw from a vast universe of digitized information.
“A single human can’t read the whole web, but a computer can,” said John DeNero, a Berkeley Ph.D. and assistant teaching professor at the Berkeley Artificial Intelligence Research Lab. We mere mortals must rely on the comparably small set of data points that we read or experience over the course of our brief lives. “Effectively, [GPT-3] is set up to memorize all the text on the web,” DeNero said. In this way, GPT-3 could be considered deeply human. It is drawing from a data set of countless human voices.
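The pattern-finding DeNero describes can be illustrated at toy scale. GPT-3 itself is a transformer with billions of learned parameters, but a minimal bigram model captures the same statistical intuition: count which words have followed which in the training text, then sample the next word from those counts. (The tiny corpus here is invented.)

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record, for each word, every word seen immediately after it."""
    model = defaultdict(list)
    words = text.lower().split()
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length, rng=random):
    """Walk the model, sampling each next word from the observed followers."""
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the muse speaks and the writer listens and the machine speaks"
model = train_bigrams(corpus)
print(generate(model, "the", 6, random.Random(1)))
```

Everything such a model produces is recombined from what it has read, which is the sense in which GPT-3, drawing on countless human voices, “could be considered deeply human.”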
Algorithms (really just sets of rules) have a long history in creative writing. Parrish cites as an example the I Ching, an ancient Chinese divination text whose readings are generated by tossing coins and interpreting the patterns that result. Or take Tristan Tzara’s instructions for making a Dada poem, in which he advised cutting an article into individual words, throwing the words into a bag, and drawing them out at random. Even in more formal, less strictly experimental writing, writers have often relied on arbitrary constraints as a means to create, from the sonnet’s iambic pentameter to the haiku’s 5-7-5 syllable structure.
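Tzara’s recipe is precise enough to run as code. A minimal Python sketch, with the “bag” played by a shuffle (the sample article text is invented):

```python
import random
import re

def dada_poem(article, rng=random):
    """Tzara's recipe: cut the article into words, put them in a bag,
    shake, and copy them down in the order drawn."""
    words = re.findall(r"[A-Za-z']+", article)  # cut out each of the words
    rng.shuffle(words)                          # shake the bag
    return " ".join(words)                      # draw them out at random

article = "The poem will resemble you, and you will find yourself a writer"
print(dada_poem(article, random.Random(7)))
```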
As any writer who has found herself frozen before a blank page knows, the creative process contains a contradiction: While total freedom can be paralyzing, structure can be freeing, and rigid rules can result in groundbreaking work. I left our podcast conversation with Parrish feeling stodgy and unimaginative, but also eager to try my hand at something more experimental.
Writing, arguably, hasn’t experienced any major evolutionary steps since word processing sped up the transfer of thoughts from brain to page, or since the internet widened our access to information. What if, armed with beautiful machines, writers could push their art form beyond its current boundaries, transcend the idea of authorship, even unravel the mysteries of the creative process? That could be revolutionary. And yet, we’ve long accepted the idea that stories come from a “force” outside of us. By John Milton’s own account, he wasn’t the author of Paradise Lost. He claimed it was dictated to him by his “celestial patroness” while he slept. He would emerge from his slumbers with the fully formed epic poem ready to be recited to the closest person with a pen. When he tried to write while awake, without his muse, nothing came. The feeling of words and ideas flowing through you is one of the most gratifying experiences a writer can have. Who’s to say a muse couldn’t be mechanical?