The Formula: How Algorithms Solve All Our Problems... and Create More

by Luke Dormehl


  It was cinema that was seized upon as the ideal medium for conveying popular, formulaic storytelling, representing the first example of what computational scholar Lev Manovich refers to as New Media. “Irregularity, nonuniformity, the accident and other traces of the human body, which previously, inevitably accompanied moving image exhibitions, were replaced by the uniformity of machine vision,” Manovich writes.12 Cinema’s pioneers imagined a medium that could apply the engineering formula to the field of entertainment. Writing excitedly about the bold new form, Soviet filmmaker and propagandist Sergei Eisenstein opined, “What we need is science, not art. The word creation is useless. It should be replaced by labor. One does not create a work, one constructs it with finished parts, like a machine.”

  Eisenstein was far from alone in expressing the idea that art could be made more scientific. A great many artists of the time were similarly inspired by the notion that stripping art down to its granular components could provide their work with a social function on a previously unimaginable scale, thereby achieving the task of making art “useful.” A large number turned to mechanical forms of creativity such as textile, industrial and graphic design, along with typography, photography and photomontage. In a state of euphoria, the Soviet artists of the Institute for Artistic Culture declared that “the last picture has been painted” and “the ‘sanctity’ of a work of art as a single entity . . . destroyed.” Art scholar Nikolai Punin went one step further still, both calling for and helpfully creating a mathematical formula he claimed to be capable of explaining the creative process in full.13

  Unsurprisingly, this mode of techno-mania did not go unchallenged. Reacting to the disruptive arrival of the new technologies, several traditionally minded scholars turned their attentions to critiquing what they saw as a seismic shift in the world of culture. For instance, in his essay “The Work of Art in the Age of Mechanical Reproduction,” German philosopher and literary critic Walter Benjamin observed:

  With the advent of the first truly revolutionary means of reproduction, photography, . . . art sensed the approaching crisis . . . Art reacted with the doctrine of l’art pour l’art, that is, with a theology of art. This gave rise to . . . “pure” art, which not only denied any social function of art but also any categorizing by subject matter.14

  Less than a decade later in 1944, two German theorists named Theodor Adorno and Max Horkheimer elaborated on Benjamin’s argument in their Dialectic of Enlightenment, in which they attacked what they bitingly termed the newly created “culture industry.” Adorno and Horkheimer’s accusation was simple: that like every other aspect of life, creativity had been taken over by industrialists obsessed with measurement and quantification. In order to work, artists had to conform, kowtowing to a system that “crushes insubordination and makes them subserve the formula.”

  Had they been alive today, Adorno and Horkheimer wouldn’t for a moment have doubted that a company like Epagogix could successfully predict the box office of Hollywood movies ahead of production. Forget about specialist reviewers; films planned from a statistical perspective call for nothing more or less than statistical analysis.

  Universal Media Machines

  In 2012, another major shift was taking place in the culture industry. Although it was barely remarked upon at the time, this was the first year in which U.S. viewers watched more films legally delivered via the Internet than they did using physical formats such as Blu-ray discs and DVDs. Amazon, meanwhile, announced that, less than two years after it first introduced the Kindle, customers were now buying more e-books than they were hardcovers and paperbacks combined.

  At first glance, this doesn’t sound like such a drastic alteration. After all, it’s not as if customers stopped watching films or reading books altogether, just that they changed something about the way that they purchased and consumed them. An analogy might be people continuing to shop at Gap, but switching from buying “boot fit” to “skinny” jeans.

  However, while this analogy works on the surface, it fails to appreciate the extent of the transition that had taken place. It is not enough to simply say that a Kindle represents a book read on screen as opposed to on paper. Each is its own entity, with its own techniques and materials. In order to appear on our computer screens, tablets and smartphones, films, music, books and paintings must first be rendered in the form of digital code. This can be carried out regardless of whether a particular work was originally created using a computer or not. For the first time in history, any artwork can be described in mathematical terms (literally a formula), thereby making it programmable and subject to manipulation by algorithms.

  In the same way that energy can be converted from movement into heat, so too can information now shift easily between mediums. For instance, an algorithm could be used to identify the presence of shadows in a two-dimensional photograph and then translate these shadows to pixel depth by measuring their position on the grayscale—ultimately outputting a three-dimensional object using a 3-D printer.15 Even more impressively, in recent years Disney’s R&D division has been hard at work on a research project designed to simulate the feeling of touching a real, tactile object when, in reality, users are only touching a flat touch screen. This effect is achieved using a haptic feedback algorithm that “tricks” the brain into thinking it is feeling ridges, bumps or potentially even textures, by re-creating the sensation of friction between a surface and a fingertip.16 “If we can artificially stretch skin on a finger as it slides on the touch screen, the brain will be fooled into thinking an actual physical bump is on a touch screen even though the touch surface is completely smooth,” says Ivan Poupyrev, the director of Disney Research, who describes the technology as a means by which interactions with virtual objects can be made more realistic.17
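  In rough Python, the shadow-to-depth idea might be sketched as follows. This is a minimal illustration rather than the pipeline from the cited research; the file name and the depth scale are invented for the example.

    # Map a photograph's grayscale values to physical heights: darker
    # pixels (shadows) become raised areas of a printable relief.
    import numpy as np
    from PIL import Image

    def photo_to_heightmap(path, max_height_mm=5.0):
        gray = np.asarray(Image.open(path).convert("L"), dtype=float) / 255.0
        # Invert so that shadows (low grayscale values) gain the most depth.
        return (1.0 - gray) * max_height_mm

    heights = photo_to_heightmap("photo.jpg")  # hypothetical input file
    print(heights.shape, heights.max())        # a 2-D grid of heights in mm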

  There is also the possibility of combining different mediums in entirely new ways, something increasingly common in a world used to web pages, PowerPoint presentations, and mobile multimedia messages. It is no coincidence that the advent of the programmable computer in the 20th century saw the art world take its first tentative steps away from the concept of media specificity. As the computer became a multipurpose canvas for everything from illustration to composition, so too did modern artists over the past 50 years seek to establish formulas capable of bringing together previously separate entities, such as musical and visual composition.

  Scientists and artists alike have long been fascinated by the neurological condition of synesthesia (Greek for “joined perception”), in which affected individuals see words as colors, hear sounds as textures, or register smells as shapes. A similar response is now reproducible on computer, as can be seen in the increasing popularity of “info-aesthetics,”18 which has mirrored the rise of data analytics. More than just a method of computing, info-aesthetics takes numbers, text, networks, sounds and video as its source materials and re-creates them as images to reveal hidden patterns and relationships in the data.
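  A toy example of the info-aesthetic move, sketched in Python, might take a non-visual signal and re-render it as pure image. Everything here (the synthesized two-tone “sound,” the file name) is an invented placeholder, not drawn from any artist’s actual practice.

    # Re-render one second of synthesized audio as an image, so that
    # structure in the signal becomes something to look at.
    import numpy as np
    import matplotlib.pyplot as plt

    t = np.linspace(0, 1, 44100)
    wave = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 660 * t)

    plt.figure(figsize=(10, 2))
    plt.plot(t[:2000], wave[:2000], linewidth=0.7)  # zoom in on the waveform
    plt.axis("off")                                 # keep only the pattern
    plt.savefig("waveform_as_image.png", dpi=150)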

  Past data visualizations by artists include the musical compositions of Bach presented as wave formations, the thought processes of a computer as it plays a game of chess, and the fluctuations of the stock market. In 2013, Bill Gates and Microsoft chief technology officer Nathan Myhrvold filed a patent for a system capable of taking selected blocks of text and using this information to generate still images and even full-motion video. As they point out, such technology could be of use in a classroom setting—especially for students suffering from dyslexia, attention deficit disorder, or any one of a number of other conditions that might make it difficult to read long passages of text.19

  To Thine Own Self Be True/False

  Several years ago, as an English graduate student, Stephen Ramsay became interested in what is known as graph theory. Graph theory uses the mathematical relationship between objects to model their connections—with individual objects represented by “nodes” and the lines that connect them referred to as “edges.” Looking around for something in literature that was mathematical in structure, Ramsay settled upon the plays of William Shakespeare. “A Shakespearean play will start in one place, then move to a second place, then go back to the first place, then on to the third and fourth place, then back to the second, and so on,” he says. Intrigued, Ramsay set about writing a computer program capable of transforming any Shakespearean play into a graph. He then used data-mining algorithms to analyze the graphs to see whether he could predict (based wholly on their mathematical structure) whether what he was looking at was a comedy, tragedy, history or romance. “And here’s the thing,” he says. “I could. The computer knew that The Winter’s Tale was a romance, it knew that Hamlet was a tragedy, it knew that A Midsummer Night’s Dream was a comedy.” There were just two cases in which the algorithm, in Ramsay’s words, “screwed up.” Both Othello and Romeo and Juliet came back classified as comedies. “But this was the part that was flat-out amazing,” he says. “For a number of years now, literary critics have been starting to notice that both plays have the structure of comedies. When I saw the conclusion the computer had reached, I almost fell off my chair in amazement.”
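  This is not Ramsay’s actual program, but the underlying idea can be sketched in a few lines of Python. Here the play, its locations and the extracted features are all invented stand-ins: nodes are places, and an edge joins two places visited in consecutive scenes.

    import networkx as nx

    # An imaginary play's scene-by-scene sequence of locations.
    scenes = ["court", "forest", "court", "tavern", "forest", "court"]

    G = nx.Graph()
    for a, b in zip(scenes, scenes[1:]):
        if a != b:
            G.add_edge(a, b)  # consecutive locations become an "edge"

    # Simple structural features a data-mining algorithm could learn from.
    features = {
        "nodes": G.number_of_nodes(),
        "edges": G.number_of_edges(),
        "density": nx.density(G),
        "clustering": nx.average_clustering(G),
    }
    print(features)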

  The idea that we might practically use algorithms to find the “truths” obscured within particular artistic works is not a new one. In the late 1940s, an Italian Jesuit priest named Roberto Busa used a computer to “codify” the works of influential theologian Thomas Aquinas. “The reader should not simply attach to the words he reads the significance they have in his mind,” Busa explained, “but should try to find out what significance they had in the author’s mind.”20

  Despite this early isolated example, however, the scientific community of the first half of the 20th century for the most part doubted that computers had anything useful to say about something as unquantifiable as art. An algorithm could never, for example, determine authorship in the case of two painters with similar styles—particularly not in situations in which genuine experts had experienced difficulty doing so. In his classic book Faster Than Thought: A Symposium on Digital Computing Machines, the late English scientist B. V. Bowden offers the view that:

  It seems most improbable that a machine will ever be able to give an answer to a general question of the type: “Is this picture likely to have been painted by Vermeer, or could van Meegeren have done it?” It will be recalled that this question was answered confidently (though incorrectly) by the art critics over a period of several years.21

  To Bowden, the evidence is clear, straightforward and damning. If Alan Turing suggested that the benchmark of an intelligent computer would be one capable of replicating the intelligent actions of a man, what hope would a machine have of resolving a problem that even man was unable to make an intelligent judgment on? A cooling fan’s chance in hell, surely.

  In recent years, however, this view has been challenged. Lior Shamir is a computer scientist who started his career working for the National Institutes of Health, where he used robotic microscopes to analyze the structure of hundreds of thousands of cells at a time. After that he moved on to astronomy, where he created algorithms designed for scouring images of billions of galaxies. Next he began working on his biggest challenge to date: creating the world’s first fully automated, algorithmic art critic, with a rapidly expanding knowledge base and a range of extremely well-researched opinions about what does and does not constitute art. Analyzing each painting it is shown using 4,024 different numerical image content descriptors, Shamir’s algorithm studies everything that a human art critic would examine (an artist’s use of color, or their distribution of geometric shapes), as well as everything that they probably wouldn’t (such as a painting’s description in terms of its Zernike polynomials, Haralick textures and Chebyshev statistics). “The algorithm finds patterns in the numbers that are typical to a certain artist,” Shamir explains.22 Already it has proven adept at spotting forgeries, able to distinguish between genuine and fake Jackson Pollock drip paintings with an astonishing 93 percent accuracy.
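  Shamir’s real feature set runs to thousands of values, including the Zernike, Haralick and Chebyshev descriptors mentioned above; the Python sketch below is a drastically simplified stand-in that computes just six numbers per painting, with invented names throughout.

    import numpy as np
    from PIL import Image

    def describe(path):
        """Return a small vector of image-content descriptors."""
        img = np.asarray(Image.open(path).convert("RGB"), dtype=float)
        gray = img.mean(axis=2)
        gy, gx = np.gradient(gray)            # brightness gradients
        edges = np.hypot(gx, gy)              # edge strength per pixel
        return np.array([
            img[..., 0].mean(),               # use of red
            img[..., 1].mean(),               # use of green
            img[..., 2].mean(),               # use of blue
            gray.std(),                       # tonal contrast
            edges.mean(),                     # density of lines and edges
            (edges > edges.mean()).mean(),    # share of strongly edged pixels
        ])

  Vectors like these, computed across many canvases, are what allow “patterns in the numbers that are typical to a certain artist” to emerge.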

  Much like Stephen Ramsay’s Shakespearean data-mining algorithm, Shamir’s automated art critic has also made some fresh insights into the connections that exist between the work of certain artists. “Once you can represent an artist’s work in terms of numbers, you can also visualize the distance between their work and that of other artists,” he says. When analyzing the work of Pollock and Vincent Van Gogh—two artists who worked within completely different art movements—Shamir discovered that 19 of the algorithm’s 20 most informative descriptors showed significant similarities, including a shared preference for low-level textures and shapes, along with a similar deployment of lines and edges.23 Again, this might appear to be a meaningless insight were it not for the fact that several influential art critics have recently begun to theorize similar ideas.24
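  The “distance between artists” Shamir describes can be pictured with a few lines of arithmetic: average each artist’s descriptor vectors into a centroid, then measure how far apart the centroids sit. The numbers below are invented placeholders, not Shamir’s data.

    import numpy as np

    # Per-painting descriptor vectors (placeholder values).
    pollock = np.array([[0.82, 0.61, 0.34], [0.79, 0.58, 0.31]])
    van_gogh = np.array([[0.80, 0.63, 0.36], [0.77, 0.60, 0.33]])

    # Euclidean distance between the artists' average "styles".
    d = np.linalg.norm(pollock.mean(axis=0) - van_gogh.mean(axis=0))
    print(f"distance between artists: {d:.3f}")  # small = stylistically close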

  Bring on the Reading Machines

  This newfound ability to subject media to algorithmic manipulation has led a number of scholars to call for a so-called algorithmic criticism. It is no secret that the field of literary studies is in trouble. After decades of downward trends in enrollments, the subject has become a less and less significant part of higher education. So how could this trend be reversed? According to some, the answer is a straightforward one: by turning it into the “digital humanities,” of course. In a 2008 editorial for the Boston Globe entitled “Measure for Measure,” literary critic Jonathan Gottschall dismissed the current state of his field as “moribund, aimless, and increasingly irrelevant to the concerns . . . of the ‘outside world.’” Forget about vague terms like the “beauty myth” or Roland Barthes’s concept of the death of the author, Gottschall says. What is needed instead is a productivist approach to media built around correlations, pattern-seeking and objectivity.

  As such, Gottschall lays out his Roberto Busa–like beliefs that genuine, verifiable truths both exist in literature and are desirable. In keeping with the discoverable laws of the natural sciences, in Gottschall’s mind there are clear right and wrong answers to a question such as, “Can I interpret [this painting/this book/this film] in such-and-such a way?”

  While these comments are likely to shock many of those working within the humanities, Gottschall is not altogether wrong in suggesting that there are elements of computer science that can be usefully integrated into arts criticism. In the world of The Formula, what it is possible to know changes dramatically. For example, algorithms can be used to determine “vocabulary richness” in literature by measuring the number of different words that appear in a 50,000-word block of text. This can bring about a number of surprises. Few critics would ever have suspected that a “popular” author like Sinclair Lewis—sometimes derided for his supposed lack of style—regularly demonstrates twice the vocabulary of Nobel laureate William Faulkner, whose work is considered notoriously difficult.
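  The measure itself is simple enough to sketch in Python. The sampling window follows the 50,000-word block mentioned above; the function name and the commented file are inventions for illustration.

    import re

    def vocabulary_richness(text, block_size=50_000):
        words = re.findall(r"[a-z']+", text.lower())
        block = words[:block_size]   # same-sized sample for every author
        return len(set(block))       # number of different words used

    # vocabulary_richness(open("main_street.txt").read())  # hypothetical file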

  One of the highest-profile uses of algorithms to analyze text took place in 2013 when a new crime fiction novel, The Cuckoo’s Calling, appeared on bookshelves around the world, written by a first-time author called Robert Galbraith. While the book attracted little attention early on, selling just 1,500 printed copies, it became the center of controversy after a British newspaper broke the story that the author might be none other than Harry Potter author J. K. Rowling, writing under a pseudonym. To prove this one way or the other, computer scientists were brought in to verify authorship. Data-mining techniques were used to analyze the text on four different variables (average word length, usage of common words, recurrent word pairings, and distribution of “character 4-grams”); the algorithms concluded that Rowling was most likely the author of the novel, something she later admitted to.25
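  The four variables reported in the Galbraith case can be approximated in a short Python sketch. This is only an illustration of the signals involved, not the analysis actually performed; the function and variable names are invented.

    import re
    from collections import Counter

    def style_features(text):
        words = re.findall(r"[a-z']+", text.lower())
        avg_word_len = sum(map(len, words)) / len(words)   # average word length
        common = Counter(words).most_common(50)            # usage of common words
        pairs = Counter(zip(words, words[1:]))             # recurrent word pairings
        stream = " ".join(words)
        grams = Counter(stream[i:i + 4]                    # character 4-grams
                        for i in range(len(stream) - 3))
        return avg_word_len, common, pairs, grams

  Comparing these distributions for the disputed text against samples from each candidate author, and picking the closest match, is the essence of the attribution.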

  As Stephen Ramsay observes, “The rigid calculus of computation, which knows nothing about the nature of what it’s examining, can shock us out of our preconceived notions on a particular subject. When we read, we do so with all kinds of biases. Algorithms have none of those. Because of that they can take us off our rails and make us say, ‘Aha! I’d never noticed that before.’”


  Data-tainment

  A quick scan of the best-seller list will be enough to convince us that, for better or worse, book publishers are not the same as literary professors. This doesn’t mean that they are exempt from the allure of using algorithms for analysis, however. Publishers, of course, are less interested in understanding a particular text than they are in understanding their customers. In previous years, the moment that a customer left a bookshop and took a book home with them, there was no quantifiable way for a publisher to know whether they read it straight through or put it on a reading pile and promptly forgot about it. Much the same was true of VHS tapes and DVDs. It didn’t matter how many times an owner of Star Wars rewound their copy of the tape to watch a Stormtrooper bump his head, or paused Basic Instinct during the infamous leg-crossing scene: no studio executive was ever going to know about it. All of that is now changing, however, thanks to the amount of data that can be gathered and fed back to content publishers. For example, Amazon is able to tell how quickly its customers read e-books, whether they scrutinize every word of an introduction or skip over it altogether, and even which sections they choose to highlight. It knows that science fiction, romance and crime novels tend to be read faster than literary fiction, while nonfiction books are less likely to be finished than fiction ones.

  These insights can then be used to make creative decisions. In February 2013, Netflix premiered House of Cards, its political drama series starring Kevin Spacey. On the surface, the most notable aspect of House of Cards appeared to be that Netflix—an on-demand streaming-media company—was changing its business model from distribution to production, in an effort to compete with premium television brands like Showtime and HBO. Generating original video content for Internet users is still something of a novel concept, particularly when it is done on a high budget and, at $100 million, House of Cards was absolutely that. What surprised many people, however, was how bold Netflix was in its decisions. Executives at the Los Gatos–based company commissioned a full two seasons, comprising 26 episodes in total, without ever viewing a single scene. Why? The reason was that Netflix had used its algorithms to comb through the data gathered from its 25 million users to discover the trends and correlations in what people watched. What it discovered was that a large number of subscribers enjoyed the BBC’s House of Cards series, evidenced by the fact that they watched episodes multiple times and in rapid succession. Those same users tended to also like films that starred Kevin Spacey, as well as those that were directed by The Social Network’s David Fincher. Netflix rightly figured that a series with all three would therefore have a high probability of succeeding.26
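  The co-occurrence logic at work here can be illustrated with a toy Python example (emphatically not Netflix’s actual system, whose data and algorithms are proprietary; the viewers and titles below are invented).

    from collections import Counter

    viewing = {
        "alice": {"House of Cards (BBC)", "Se7en", "The Social Network"},
        "bob":   {"House of Cards (BBC)", "American Beauty"},
        "carol": {"House of Cards (BBC)", "The Social Network", "Se7en"},
    }

    # Count what fans of the seed title also watched.
    seed = "House of Cards (BBC)"
    co_watched = Counter()
    for titles in viewing.values():
        if seed in titles:
            co_watched.update(titles - {seed})

    print(co_watched.most_common(3))  # strong overlaps suggest what to combine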

 
