Super Crunchers


by Ian Ayres


  Credit Indemnity sent out more than 50,000 direct-mail solicitations to former customers. Like CapOne’s mailings, these solicitations offered random interest rates that varied from 3.25 percent to 11.75 percent. As an economist, I found it comforting to learn from Credit Indemnity’s experiment that, yes, demand was larger for lower-priced loans.

  Still, price wasn’t everything. What was really interesting about the test was that Credit Indemnity simultaneously randomized other aspects of the solicitations. The bank learned that simply adding a photo of a smiling woman in the corner of the solicitation letter raised the response rate of male customers by as much as dropping the interest rate 4.5 percentage points. It found an even bigger effect when it had a marketing research firm call the client a week before the solicitation and simply ask questions: “Would you mind telling us if you anticipate making large purchases in the next few months, things like home repairs, school fees, appliances, ceremonies (weddings, etc.), or even paying off expensive debt?”

  Talk about your power of suggestion. Priming people with a pleasant picture or bringing to mind their possible need for a loan in a non-marketing context dramatically increased their likelihood of responding to the solicitation.

  How do we know that the picture or the phone call really caused the higher response rate? Again, the answer is coin flipping. Randomizing over 50,000 people makes sure that, on average, those shown pictures and those not shown pictures were going to be pretty much the same on every other dimension. So any differences in the average response rate between the two groups must be caused by the difference in their treatment.

  Of course, randomization doesn’t mean that those who were sent photos are each exactly the same as those who were not sent photos. If we looked at the heights of people who received photo solicitations, we would see a bell curve of heights. The point is that we would see the same bell curve of heights for those who received solicitations without photos. Since the distributions of the two groups become increasingly identical as the sample size increases, we can attribute any differences in the average group response to the difference in treatment.

  In lab experiments, researchers create data by carefully controlling for everything to create matched pairs that are identical except for the thing being tested. Outside of the lab, it’s sometimes simply impossible to create pairs that are the same on all the peripheral dimensions. Randomization is how businesses can create data without creating perfectly matched individual pairs. The process of randomization instead creates matched distributions. Randomization thus allows Super Crunchers to run the equivalent of a controlled test without actually having to laboriously match up and control for dozens or hundreds of potentially confounding variables.
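  The “matched distributions” point is easy to see in a few lines of simulation. This is just a sketch with invented numbers (heights in centimeters, a 50/50 coin flip), not data from the Credit Indemnity test:

```python
import random
import statistics

random.seed(0)

# Hypothetical population: height stands in for any confounder
# we never bother to measure or control for.
people = [random.gauss(170, 10) for _ in range(50_000)]

# Randomize: a coin flip decides who gets the photo solicitation.
photo, no_photo = [], []
for height in people:
    (photo if random.random() < 0.5 else no_photo).append(height)

# With 50,000 flips, the two groups' height distributions nearly coincide,
# so any gap in response rates can be credited to the photo itself.
print(round(statistics.mean(photo), 1), round(statistics.mean(no_photo), 1))
```

The same logic holds for every other unmeasured trait at once, which is exactly why randomization substitutes for controlling each variable by hand.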

  The implications of the randomized marketing trials for lending profitability are pretty obvious. Instead of dropping the interest rate five percentage points, why not simply include a picture? When Credit Indemnity learned the results of the study, they were about to start doing just that. But shortly after the tests were analyzed, the bank was taken over. The new bank not only shut down future testing, it also laid off tons of the Credit Indemnity employees—including those who had been the strongest proponents of testing. Ironically, some of these former employees have taken the lessons of the testing to heart and are now implementing the results in their new jobs working for Credit Indemnity’s competitors.

  You May Be Watching a Random Web Page

  Testing with chance isn’t limited to banks and credit companies; indeed, Offermatica.com has turned Internet randomization into a true art form. Two brothers, Matt and James Roche, started Offermatica in 2003 to capitalize on the ease of randomizing on the Internet. Matt is its CEO and James works as the company’s president. As its name suggests, Offermatica has automated the testing of offers. Want to know whether one web page design works better than another? Offermatica will set up software so that when people click on your site, either one page or the other will be sent at random. The software then can tell you, in real time, which page gets more “click throughs” and which generates more purchases.

  What’s more, they can let you conduct multiple tests at once. Just as Credit Indemnity randomly selected the interest rate and independently decided whether or not to include a photo, Offermatica can randomize over multiple dimensions of a web page’s design.

  For example, Monster.com wanted to test seven different elements of the employers’ home pages. They wanted to know things like whether a link should say “Search and Buy Resumes” or just “Search Resumes” or whether there should be a “Learn More” link or not. All in all, Monster had 128 different page permutations that it wanted to test. But by using the “Taguchi Method,” Offermatica was able to test just eight “recipe” pages and still make accurate predictions about how the other 120 untested web pages would fare.*1
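  The trick that gets from 128 pages down to eight is fractional factorial design. One standard construction for eight recipes covering seven on/off factors is an L8 orthogonal array; the sketch below is my own illustration of that construction (not Offermatica’s actual implementation), built by XOR-ing three base bits:

```python
from itertools import product

# An L8 orthogonal array covers 7 two-level factors (2**7 = 128 possible
# pages) with only 8 test "recipes." Each of the 7 columns is the parity
# (XOR) of a distinct nonzero subset of 3 base bits, which keeps every
# factor level balanced across the 8 rows.
recipes = []
for a, b, c in product([0, 1], repeat=3):
    recipes.append((a, b, c, a ^ b, a ^ c, b ^ c, a ^ b ^ c))

for row in recipes:
    print(row)

# Balance check: each factor is "on" in exactly half (4) of the recipes,
# so each factor's main effect can be estimated from the 8 tested pages
# and used to predict the 120 untested combinations.
assert all(sum(r[i] for r in recipes) == 4 for i in range(7))
```

Under the assumption that the factors contribute roughly additively, the eight measured response rates pin down each element’s effect, which is what lets the untested permutations be predicted.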

  Offermatica software not only automates the randomization, it also automatically analyzes the Internet response. In real time, as the test was being conducted, Monster could see a continuously updating graph of not only how the eight recipe pages were faring, but also how untested pages would likely fare in generating actual sales. The lines of each alternative page stretched across the graph like a horse race with clear winners and clear losers. Imagine it—instantaneous information on 128 different treatments with tens of thousands of observations. This is randomization on steroids. Offermatica shows the way that Super Crunching often exploits technology to shorten the time between data collection, analysis, and implementation. With Offermatica, the time between the test and the marketing change can be just a matter of hours.

  By the way, if you think you have a good graphic eye, try to see which of these two boxes you think tested better:

  SOURCE: Monster.com Scores Millions, http://www.offermatica.com/stories-1.7.htm.

  I personally find the curved icons of the lower box to be more appealing. That’s what Monster thought too. The lower box is the box that Monster actually started with before the test. Yet it turns out that employers spent 8.31 percent more per visit when they were shown the top box. This translates into tens of millions of dollars per year for Monster’s ecommerce channel. Instead of trusting its initial instinct, Monster went out, perturbed the status quo, and watched what happened. It created a new type of data and put its initial instinct to the test.

  Jo-Ann Fabrics got an even bigger surprise. Part of the power of testing multiple combinations is that it lets companies be bolder, to take bigger risks with test marketing. You might not think that JoAnn.com would draw enough web traffic to make Internet testing feasible, but they pull over a million unique visitors a month. They have enough traffic to do all kinds of testing.

  So when JoAnn.com was optimizing its website, it decided to take a gamble and include in its testing an unlikely promotion for sewing machines: “Buy two machines and save 10 percent.” They didn’t expect this test to pan out. After all, how many people need to buy two sewing machines? Much to their amazement, the promotion generated by far the highest returns. “People were pulling their friends together,” says Linsly Donnelly, JoAnn.com’s chief operating officer. The discount was turning their customers into sales agents. Overall, randomized testing increased JoAnn.com’s revenue per visitor by a whopping 209 percent.

  In the brick-and-mortar world, the cost of running randomized experiments with large enough samples to give you a statistically significant result sometimes severely limits the number of experiments that can be done. But the Internet changes all this. “As the cost of showing a group of people a given experience gets close to zero,” Matt Roche, Offermatica’s CEO, says, “the number of experiences you can give is close to infinity.”

  Seeing how consumers respond to a whole bunch of different online experiences is what Offermatica is all about. It’s a wildly different model for deciding who decides. Matt comes alive when he talks about seeing the battle for corporate control of your eyeballs: “I go to meetings where you have all these people sitting around a table claiming authority. You have the analytic guy who has the amulet of historical data. You’ve got the branding guy who has this mystical certainty about what makes brands stronger. And of course you got the authority of title itself, the boss who’s used to thinking that he knows best. But what’s missing is the consumer’s voice. All these forms of authority are substituting for the customer’s voice. What we’re really doing at Offermatica is listening to what consumers want.”

  Offermatica not only has to do battle with the in-house analytic guy who crunches numbers on historical data, it also has to take on “usability experts” who run hyper-controlled experiments in university laboratories. The usability experts are sure of certain axioms that have been established in the lab—things like “people look at the upper left-hand corner first” or “people look at red more than blue.” Roche responds, “In the real world, an ad is competing against so many other inputs. There’s no such thing as a controlled experiment. They cling to a sandcastle of truth in a tsunami of other information.” It’s so cheap to test and retest alternatives that there’s no good reason to blindly accept the wisdom of academic axioms.

  It shouldn’t surprise you that the smarts at Google are also riding the randomization express. Like Offermatica, they make it easy to give consumers different ad experiences and then see which ads they like best. Want to know whether your AdWords ad for beer should say “Tastes Great” or “Less Filling”? Well, Google will put both ads in rotation and then tell you which ad people are more likely to click through. Since the order in which people run Google searches is fairly random, alternating the order of two ads will have the same effect as randomizing. Indeed, Google will even start by rotating your ads and then automatically shift toward the ad that has the higher click-through rate.
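  The “rotate, then shift toward the winner” behavior described above is the classic multi-armed bandit problem. Here is a toy epsilon-greedy sketch of the idea; the ad names come from the text, but the click rates, thresholds, and strategy are invented for illustration and are certainly not Google’s actual algorithm:

```python
import random

random.seed(1)

# Hypothetical true click-through rates, unknown to the system.
true_ctr = {"Tastes Great": 0.030, "Less Filling": 0.020}
shown = {ad: 0 for ad in true_ctr}
clicked = {ad: 0 for ad in true_ctr}

def pick_ad(t, explore=0.1):
    """Start by rotating evenly; afterwards mostly show the current
    best performer, while still exploring occasionally."""
    if t < 1000 or random.random() < explore:
        return random.choice(list(true_ctr))
    return max(true_ctr, key=lambda ad: clicked[ad] / max(shown[ad], 1))

for t in range(100_000):
    ad = pick_ad(t)
    shown[ad] += 1
    clicked[ad] += random.random() < true_ctr[ad]

# The ad with the higher true click-through rate typically ends up
# shown far more often than its rival.
print(shown)
```

The design choice here is the trade-off the paragraph hints at: pure rotation wastes impressions on the weaker ad, while shifting too early risks locking onto a lucky loser, so a little continued exploration is kept.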

  I just ran this test to help decide what to name this book. The End of Intuition was the original working title for this book, but I wondered whether Super Crunchers might instead better convey the book’s positive message. So I set up a Google AdWords campaign. Anyone searching for words like “data mining” or “number crunching” would be shown either

  Super Crunchers

  Why Thinking-by-Numbers

  Is the New Way to Be Smart

  www.bantamdell.com

  OR

  The End of Intuition

  Why Thinking-by-Numbers

  Is the New Way to Be Smart

  www.bantamdell.com

  I found that random viewers were 63 percent more likely to click through on the Super Crunchers ad. (They also substantially preferred the current subtitle to “Why Data-Driven Decision Making Is the New Way to Be Smart.”) In just a few days, we had real-world reactions from more than a quarter of a million page views. That was good enough for me. I’m proud to be able to say that Super Crunchers is itself a product of Super Crunching.
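  The comparison behind a result like this is a simple two-proportion test. The counts below are invented to reproduce roughly a 63 percent lift over about a quarter of a million views; the text reports only the relative rate, not raw click counts:

```python
import math

# Illustrative numbers only: suppose each title was shown ~125,000 times.
views_a, clicks_a = 125_000, 245   # "Super Crunchers"
views_b, clicks_b = 125_000, 150   # "The End of Intuition"

p_a, p_b = clicks_a / views_a, clicks_b / views_b

# Pooled two-proportion z-test: is the gap bigger than chance would give?
pooled = (clicks_a + clicks_b) / (views_a + views_b)
se = math.sqrt(pooled * (1 - pooled) * (1 / views_a + 1 / views_b))
z = (p_a - p_b) / se

print(f"lift: {p_a / p_b - 1:.0%}, z = {z:.1f}")
```

With counts like these the z-statistic is well above the usual significance thresholds, which is why a few days of traffic can settle a question like a book title.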

  Who Is Usefully Creative?

  A common feature of all the foregoing random trials is that someone still has to come up with the alternatives to be tested. Someone has to have the idea to try to sell two sewing machines or the idea to have a research firm call a week in advance. The random trial method is not the end of intuition. Instead it puts intuition to the test.

  In the old days, firms would have to bet the ranch on a national television campaign. On the web, you can roll the dice on a number of different campaigns and quickly shift to the campaign that produces the best results. The creative process is still important, but creativity is literally an input to the testing process.

  In fact, the AdWords randomization feature could provide a great test of who can write the most effective ad. Ad agencies might think of testing applicants by seeing who is able to improve on a client’s Google ad. Imagine an episode of The Apprentice where the contestants were ranked on their objective ability to optimize the mass market sales of some popular web page.

  The potential for randomized web testing is almost limitless. Randomized trials of alternatives have increased not just click-through rates and sales but also the rate at which web forms are completed. Randomization can be used to enhance the performance of any web page.

  That includes the layout of online newspapers. The graphic designers at Slate, MSNBC, even the New York Times could learn a thing or two from randomized testing. In fact James Roche, the president of Offermatica, says that they’ve started to do some work for web publications. They are typically brought in by the subscription department. However, once “the editors see the increase in online subscriptions,” James explains, they “warm to the idea of using Offermatica to optimize their primary business drivers: page views and ad clicks.”

  Charities and political campaigns could also test which web designs optimize their contributions. In fact, charities have already started using off-line randomized trials to explore the wellsprings of giving. Experimental economists Dean Karlan and John List helped a non-profit advocacy group test the effectiveness of direct mailings much the same way as Credit Indemnity did. They sent out over 50,000 letters to past contributors asking for gifts. The letters differed in whether and how they offered a matching gift. Some of the letters offered no matching gift, some offered dollar-for-dollar matching, and some letters offered two- and even three-for-one matching. Dollar-for-dollar matching offers did increase giving by about 19 percent. The surprising result, however, was that the two-for-one and three-for-one matches didn’t generate any higher giving than the one-for-one match. This simple study gives charities a powerful new tool. A donor who wants the biggest bang for the buck would do a lot better to offer a matching gift as part of a one-for-one program.

  One thing we’ve seen over and over is that decision makers overestimate the power of their own intuitions. The intuitions make sense to us and we become wedded to them. Randomized testing is an objective way to see whether we were right. And testing is a road that never ends. Tastes change. What worked yesterday may not work tomorrow. A system of periodic retesting with randomized trials is a way to ensure that your marketing efforts remain optimized. Super Crunching is centrally about data-driven decisions. And ongoing randomized trials make sure that there is a continual supply of new data to drive decisions.

  Randomization—It’s Not Just for Breakfast

  You might think at this point that the power of randomization is just about marketing—optimizing direct-mail solicitations or web advertisements. But you’d be wrong. Randomized trials are also being used to help manage both employee and customer relationships.

  Kelly Cook, director of customer relationship management at Continental Airlines, used the coin-flipping approach to figure out how to build stronger customer loyalty. She wanted to see how best to respond when a passenger experienced what Continental euphemistically called a “transportation event.” This is the kind of event you don’t want to experience, such as having your flight severely delayed or canceled.

  Cook randomly assigned Continental customers who had endured transportation events to one of three groups. For the next eight months, one group received a form letter apologizing for the event. The second group received the letter of apology and compensation in the form of a trial membership in Continental’s President’s Club. And the third group, which served as a control, received nothing.

  When the groups were asked about their experience with Continental, the control group that didn’t receive anything was still pretty angry. “But the other groups’ reaction was amazement that a company would have written them unsolicited to say they were sorry,” Cook recalls. The two groups that received a letter spent 8 percent more on Continental tickets in the ensuing year. For just the 4,000 customers receiving letters, that translated to extra revenue of $6 million. Since expanding this program to the top 10 percent of Continental’s customers, the airline has seen $150 million in additional revenues from customers who otherwise would have had a good reason to look elsewhere.

  Just sending a letter without compensation was enough to change consumer perceptions and behavior. And the compensation of trial membership itself turned into a new source of profit. Thirty percent of customers who received a trial membership in Continental’s President’s Club decided to renew their membership after the trial period expired.

  Yet retailers beware. Customers can become angry if they learn that the same product is being offered at different prices. In September 2000, the press started running with the story of a guy who said that when he deleted the cookies on his computer (which identified him as a regular Amazon customer), Amazon’s quoted price for DVDs fell from $26.24 to $22.74. A lot of customers were suddenly worried that Amazon was rigging the Internet. The company quickly apologized, saying that the difference was the result of a random price test. In no uncertain terms CEO Jeff Bezos declared, “We’ve never tested and we never will test prices based on customer demographics.”

  He also announced a new policy with regard to random testing that might serve as a model for other companies that test. “If we ever do such a test again, we’ll automatically give customers who purchase a test item the lowest test price for that item at the conclusion of the test period—thereby ensuring that all customers pay the lowest price available.” Just because you test higher prices doesn’t mean that you have to actually charge them once a customer places an order.

  These randomized tests about prices are a lot more problematic than other kinds of randomization. When Offermatica randomizes over different visual elements of a web page, it’s usually about trying to improve the customer experience by removing unnecessary obstacles. The results of this kind of experiment are truly a win–win for both the seller and the buyers. But as we saw with CapOne, randomization can also be used to test how much money can be extracted from the consumer. Offermatica or AdWords could be used to run randomized tests about what kind of price the market will bear.

 
