It seems the future has arrived, folks. Not only are there computer programs that can write, you have most likely been reading their writing for some time now. A program named “Hal” penned a romance novel way back in 1993 entitled Just This Once, though you may have missed it: only 35,000 copies were printed. The book was written in the style of Jacqueline Susann, famous for the uber-popular Valley of the Dolls. Hal was programmed to study Susann’s prose and mimic it, producing a new novel that reflected something she might have written. The results were mixed, with many critics declaring it about as good as most entries in the genre of “trashy romance.” And, understandably, the whole shebang got tied up in a copyright tussle, since Susann’s estate had never given permission for her to serve as “Hal’s” muse.
All of this Hal hubbub may seem outdated, but the fact that computer programs actually write an astonishing amount of what we read every day is quite contemporary. In fact, as of 2014, 8.5 percent of the articles on Wikipedia had been written by a single computer program created by a Swedish science teacher named Sverker Johansson. Known as Lsjbot, the program culls information from a variety of sources and pieces it together into short articles about a range of subjects. And it doesn’t stop there: half of all the edits on Wikipedia are also performed by these “bots.”
Surprisingly, Wikipedia isn’t the only information source to use bots as contributors. For the last few years the Associated Press, Forbes, and the Los Angeles Times have been using algorithms to write short articles, business reports, and up-to-the-minute news updates for publication. These stories are most often straightforward, data-driven pieces that convert overwhelming amounts of facts and figures into readable content. Proponents of bot authorship contend that letting these algorithms produce the less substantial content frees up human journalists for more in-depth reporting. Opponents worry about the larger consequences if this practice becomes the new normal. How will it affect the quality of our authors and the standards of our readers? And what about job security for human writers? There are already a lot of players out there; do we have room for ones who can produce 10,000 articles a day and don’t need food, sleep, or a paycheck?
The big question is whether these programs will ever be able to write anything of real merit, such as the kind of good literature that requires an autonomous mind, unbounded imagination, and a mastery of language. The full potential of artificial intelligence has yet to be measured, as the field is still in its infancy, but there have been some recent, moderately successful attempts by these programs and their developers to write more creatively. In fact, there are annual writing contests for computer-generated literature, and they draw thousands of entries.
One of the most popular contests is a version of the well-known “NaNoWriMo” challenge, short for National Novel Writing Month, in which participants are called to complete an entire novel during the month of November (50,000 words in 30 days, to be exact). Started in 2013, NaNoGenMo is a copycat contest in which the entries come from algorithms and their creators. The only requirements are that the finished novel be 50,000 words long and that it be submitted along with its source code.
Many of the submissions source their content from the human world, drawing data from journals, blogs, and social media posts, for example. According to NaNoGenMo creator Darius Kazemi, the results can be pretty comical (one novel boasted 50,000 words of roommates arguing about cleaning their apartment), and Kazemi’s take on the whole thing is a compelling one. He suggests that the goal isn’t to imitate human literature; in fact, the more alien the results, the better. The hope is to produce, in his words, “alien novels that astound us with their sheer alienness.” Computers writing novels for computers, essentially, communicating in a language in which humans will play a diminishing role.
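To see how low the bar to entry really is, here is a minimal sketch of the kind of generator a NaNoGenMo entrant might start from: a toy word-level Markov chain that learns which words follow which in a seed text and then rambles until it hits the required 50,000 words. The seed text and all names here are invented for illustration; real entries are far more elaborate.

```python
import random

def build_chain(text):
    """Map each word to the list of words that follow it in the seed text."""
    words = text.split()
    chain = {}
    for current, nxt in zip(words, words[1:]):
        chain.setdefault(current, []).append(nxt)
    return chain

def generate(chain, start, n_words, seed=0):
    """Walk the chain for n_words words, restarting at `start` at dead ends."""
    rng = random.Random(seed)
    word = start
    out = [word]
    for _ in range(n_words - 1):
        choices = chain.get(word)
        if not choices:      # dead end: no word ever followed this one
            word = start
        else:
            word = rng.choice(choices)
        out.append(word)
    return " ".join(out)

# Hypothetical seed text, in the spirit of the roommate-argument novel.
seed_text = (
    "the roommates argued about the dishes and the roommates argued "
    "about the rent and the dishes sat in the sink"
)

chain = build_chain(seed_text)
novel = generate(chain, "the", 50000)
print(len(novel.split()))  # 50000, meeting the contest's word-count rule
```

The output is grammatical-ish nonsense that loops through the seed text's phrases forever, which is exactly the sort of “sheer alienness” Kazemi describes: technically a novel, recognizably not a human one.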
The issues at hand feel like they were pulled straight out of science fiction, and because the field is developing as we speak it is difficult to grasp the implications. Glaringly apparent, though, is the fact that the production and use of algorithm-written material is on a steep rise, with some in the business suggesting that as much as 90% of the news could be computer generated by 2030. Bots have already produced over 200,000 print-on-demand academic books available on Amazon, though the subject matter can get quite obscure. A 144-page book on the economic outlook for washable bathmats in India, with an asking price of $495, anyone? Anyone?
If you would like to test your ability to tell the difference between human- and bot-written material, give this your best shot.