Shortest Way Home


by Pete Buttigieg


  It turns out that most of us, for and against the war, were wrong about the weapons. He didn’t have any—and so they were not there to be used against American troops. Iraq fell quickly, and for a moment it seemed that the invasion was a vindication of American intervention abroad. Protesters like me looked foolish. Sure, the pretext for war was actually false, but who would quibble over that, as a brutal dictatorship was being turned into a model democracy at relatively little cost to America?

  Then the suicide bombings began. We were not, as the administration had promised, “greeted as liberators.” A well-functioning democracy did not emerge. And the ensuing chaos made it clear that the administration had not planned for the aftermath of the invasion, as Iraqi cities became a kill zone for our troops. We who were against the invasion had been wrong about the weapons, but right about the war. The administration had been wrong about both.

  AS ALL THIS UNRAVELED, I was spending my senior year mostly absorbed in a different war—Vietnam, the focus of my thesis on the influence of Puritan thought. In an unused seminar room that a few of us had taken over, I sat surrounded by Dunkin’ Donuts coffee cups and books, immersed in the recent past while its parallels with the present became impossible to ignore. I studied the way America’s government in the 1970s told its people an increasingly improbable story of mission and moral clarity, trying to defy a reality on the ground that could no longer be denied amid a rising American body count.

  I remember very little about Senior Week or graduation, other than that it rained, and that we had a sense of graduating into a darkening world where we would need to make ourselves useful. We had arrived on campus during the final months of the Clinton presidency, on the heels of the longest peacetime economic expansion in U.S. history. Now we were wartime graduates. It felt like a kind of memorial service for our vanishing college life as we filed in our black gowns into the expanse of the open quadrangle known as Tercentenary Theater and sat facing Memorial Church while the massive structure of Widener Library towered behind us under the warm rain clouds. I listened vaguely as the U.N. Secretary General, Kofi Annan, gave a speech about three simultaneous global crises of security, solidarity, and cultural division. Meanwhile, at William & Mary’s commencement address, Jon Stewart treated the graduates to a cheeky but rueful commentary on the “real world,” on behalf of the generation in charge: “I don’t really know how to put this, so I’ll be blunt. We broke it. . . . But here’s the good news. You fix this thing, you’re the next Greatest Generation, people.”

  3

  Analytics

  The education that began at Harvard continued in one form or another for the next ten years. First came Campaigns 101, for which my classroom was a cubicle in a windowless office in downtown Phoenix, where I did research and press work for the Kerry-Edwards presidential run of 2004. After that defeat I followed my boss, Doug Wilson, back to Washington, where he worked for the former defense secretary, William Cohen. The winter and spring amounted to Washington 101, an education in the mechanics of our capital, which I navigated as a sort of gofer for Doug, helping him to organize a conference of American and Muslim leaders. (This included an early lesson in the whims of political fortune after we invited the three Republicans then deemed most likely to become president: George Allen, Lindsey Graham, and Mark Sanford.) The summer of 2005 saw me back in an actual classroom, this time in Tunis, where a highly affordable language program gave me the chance to improve my Arabic.

  Formal education continued that fall: a Rhodes Scholarship took me to Oxford for two years of learning philosophy, politics, and economics in large halls and in professors’ small rooms in the ancient colleges. Back in the U.S. in 2007, I landed a job in Chicago at McKinsey & Company, and my classroom was everywhere—a conference room, a serene corporate office, the break room of a retail store, a safe house in Iraq, or an airplane seat—any place that could accommodate me and my laptop. The capstone on my decade of education came in 2010, when I left McKinsey for a tough but priceless year-long political crash course in which I challenged the state treasurer of Indiana, and was overwhelmingly defeated in my first experience on the ballot.

  Geographically, the arc of these years was a sort of looped boomerang, a first departure from home that took me east, then farther west, then back east; then across the Atlantic; and then at last closer to home, to Chicago, less than a hundred miles from South Bend, and finally all the way back to the neighborhood where I grew up. In retrospect it was a homeward spiral all along: the more my worldly education grew with lessons from abroad, the clearer it became that this long and winding road was leading me back home, to find belonging by making myself useful there. Now it’s obvious, but in the midst of that education it felt like just one step at a time. A Tunisian souk, an Oxford exam room, and a Great Lakes office park all had something to teach me, and each place nudged me closer to home.

  EDUCATION CAN COME BY DRUDGERY or by adventure, and I had my share of both. Sometimes they’re interwoven. In Washington, days spent mostly arranging other people’s airline tickets were occasionally punctuated by the chance to tag along and meet some foreign ambassador as part of the conference preparations (adventure enough, for a twenty-two-year-old interested in policy). In Tunis, where air-conditioning was as rare as a summer day below a hundred degrees, mornings in the sweaty classroom gave way to afternoons walking through hookah smoke and perfume in the markets of the old city and trying, in vain, to get Tunisian acquaintances to reveal over coffee how they felt about living under a dictatorship. (A few years later, the Arab Spring would tell us things that the friendly yet circumspect young Tunisians would not discuss openly with foreign students, no matter how many prying and dangerous questions we naïvely asked.)

  At Oxford, my chosen program was unforgiving. The watchword of the famous PPE (Philosophy, Politics, and Economics) program was rigor: any sloppy argument or imprecise claim would get picked apart politely by a skeptical professor or fellow student. It was the ideal prescription for someone like me, reared on a humanities curriculum in which a stirring phrase carried as much weight as a precise argument. The missing piece in my formal education had been the analytical side of things, which to my Oxford faculty was the element that mattered most.

  I was forced to learn the British tradition of analytical philosophy, which breaks down the meaning of the words we throw around casually yet with conviction in debates about ethics and politics, sometimes without knowing what we are talking about. Under the gaze of the Oxford dons, every question was handled with the utmost precision, to get to the bottom of what was really intended by a term or an idea in the course of an argument.

  For example, in a philosophical debate over the nature of free will, we were required to confront just how difficult it is to define what “free will” even means. We considered one definition: that a freely chosen act is one taken by someone who could have done otherwise. It felt intuitively like a good way of describing free will: if I did something, and could have done something else, then clearly I made this choice freely. But then, another philosopher pointed out, what if you chose to stay in a room all day because you wanted to, but without your knowing it the door was locked from the outside? It turns out you could not have done otherwise, yet we would still consider your choice a free one, so clearly a more precise definition had to be found . . . and the refinements and arguments would go on from there.

  I’m not sure I ever got to the bottom of free will, but these kinds of analyses and arguments cultivated a healthy sense of clarity. After years of painting, with broad verbal brushes, the kind of beautiful images that earn good grades in certain American college literature courses, I now had to make sure that every sentence and idea was precise, clearly defined, and airtight, in order to survive the skepticism of a British critic.

  In the process, I learned more rigorous ways to explain the moral intuitions I already had about politics and society. Sitting in one of Oxford’s ancient libraries, I learned the theories of John Rawls (who, ironically, taught at Harvard). Rawls became famous for creating a new definition of justice, which boils down to this: a society is fair if it looks like something we would design before knowing how we would come into the world. He imagined a fictional “original position,” the position we would be in if we were told we were about to be born, but were not told about the circumstances we would be born into—how tall or short we would be, or of what race or nationality, or what resources or personal qualities we would have. This vision of justice is often compared to being asked how you would want a cake to be divided if you did not know which piece would be yours: equally, of course.

  Like most good philosophy in the analytical tradition, it gave precise expression to something we already understand intuitively. For example, it gives us one way to explain why we know that racial equality is a feature of a just society: even a prejudiced group of people would probably all insist on a racially equal society if they were asked to design in advance a world into which they would soon be born, without knowing which race life’s lottery would assign them at birth. It was a compelling way of thinking about fairness, and not hard to connect to debates about racial and income inequality in our politics back home. And, because nothing there was endorsed as “correct” but only as worth studying and picking apart, I was then led to Robert Nozick’s impressive conservative critique of Rawls—followed by Gerald Cohen’s equally impressive socialist critique of Nozick.

  In ethics, I studied the debates between Kantians, who believe that your motivation is the most important thing in deciding whether you are doing good, and utilitarians, who look only to the outcome of your deeds, not your intent. Years later, in office, I would think of these debates often, knowing that government often requires you to think as a utilitarian—to try to bring about the “greatest good for the greatest number”—even if your personal philosophy is more Kantian, Christian, or otherwise grounded in something besides the cold math of utilitarian ethics. Meanwhile, studying international relations as part of the “Politics” leg of the PPE program, I learned to trace in detail what happened when a few colonial powers promised, to more than one group of people, the same small patches of Middle Eastern land. And I learned to debate the remarkable finding by political scientists that truly democratic countries almost never go to war with one another.

  Most new and useful of all, perhaps, was a rigorous training in economics. I had taken an economics course in college, but had known nothing like the intensity of the tutorials at Pembroke College. These one-on-one or two-on-one sessions with faculty were the backbone of instruction at Oxford. In the case of my economics course, they felt less like the friendly and personalized instruction conjured up by the word “tutorial,” and more like a weekly oral exam on whatever I had managed to teach myself in the preceding six days. But the system worked.

  Racing to catch up to my second-year peers, I mastered the basics of supply and demand, utility, preferences, auction theory, and market equilibrium. I learned to admire the theoretical elegance of the free market under perfect conditions. Then I began to learn about all of the situations in which those perfect conditions break down, and all of the ways markets get skewed in the real world. One calculus equation at a time, I came to understand in thorough mathematical detail why supply and demand cannot be expected to deliver fair prices or efficient outcomes in many situations. Indeed, even the most orthodox economic theories showed that market failures were all but guaranteed to occur in situations, like health care and education delivery, where a seller has power over a buyer, or a buyer is seeking a service that can’t easily be assigned a dollar value, or the seller and the buyer have different levels of information about the product.
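
  To give a concrete sense of the kind of calculus involved (a standard textbook derivation of monopoly pricing, offered here as my own illustration rather than anything taken from the Oxford syllabus or the book): a seller with market power faces a downward-sloping inverse demand curve p(q) and a cost function c(q), chooses the quantity q that maximizes profit, and the first-order condition already shows price being pushed above marginal cost:

    \max_q \; \pi(q) = p(q)\,q - c(q)
    \quad\Rightarrow\quad
    p(q) + p'(q)\,q = c'(q)
    \quad\Rightarrow\quad
    \frac{p - c'(q)}{p} = \frac{1}{|\varepsilon|},
    \qquad \varepsilon = \frac{p}{q}\,\frac{dq}{dp}

  Since p'(q) < 0, the markup on the left is strictly positive: the less elastic the demand (think health care), the further price rises above marginal cost, which is one of the cleanest ways the orthodox theory itself predicts unfair prices.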

  The two years passed quickly, rushing toward the handful of days in June of 2007 when, in keeping with the Oxford tradition, I donned a white bow tie, suit, and gown to walk to the giant hall they call the Examination Schools. There, hundreds of students at a time would sit for each of the eight three-hour exams that would account for the entirety of our grades. Exiting the last exam, I received a pie in the face from a group of jubilant friends (also per tradition), and spent the next few days waiting for the results.

  When they were finally posted, on a big sheet of paper with everyone’s names on it outside the Exam Schools, I checked several times to make sure I wasn’t misreading. I had finished with a “First,” the highest grade in their remarkably simple (and very British) system of First, Upper Second, Lower Second, and Third Class degrees. To an English undergraduate, this single grade becomes a mark you carry for the rest of your life, shaping career opportunities for decades. Knowing I would head back to America meant that there was less at stake for me in the grade, but I took pride in it even while sensing that the time had come to learn what wasn’t on the page and get an education in the real world, if there was such a thing. Which is why I went to McKinsey.

  ANALYZING THE FINER POINTS of a profound question on the nature of freedom is one thing; analyzing a client’s financial future and advising them on what to do about it is another. This is the specialty of McKinsey & Company, the dominant name in the field of management consulting. Since its product is, above all, the intellect of its employees, the firm (better known in consulting circles as the Firm) prides itself on hiring not just top business school graduates, but anyone it considers very bright and teachable, such as Rhodes Scholars.

  To some, McKinsey is the pinnacle of smart and useful analysis in the business world. To others, it is the primary symbol of a trend that sees more and more graduates from prestigious programs go into the private sector when they could be committing themselves to public service, research, or some other worthy pursuit. When I decided to attend an informational session about McKinsey for graduate students, I felt ambivalent but more sympathetic to the latter camp. My education to date, and my hopes of making an impact in the world, pointed to public service, inquiry, and the arts, not business. But I also knew that I would have to understand business if I wanted to make myself useful in practice. Despite all my education, I felt ignorant about how the private sector really worked. I would leave Oxford with a degree in economics, but knew little firsthand about the functions—from logistics to finance—that made the private sector operate. And the firm known best for its expertise on how the private sector works was actually willing to give me an interview for a post-MBA job, taking a chance on the idea that if I was prepared to learn, they could teach me all the things about business I didn’t know.

  Also, crucially, they had a Chicago office. It was not the most glamorous office in the Firm—that title probably belonged to London, Dubai, New York, or Silicon Valley—but it was known for the diversity of industries it served, which would make it a good training ground. More importantly, it was a way for me to come back to the Midwest, a region whose role in shaping me had become more obvious the farther away I’d moved. When I finally saw the Chicago office for myself on the day of my final-round interview, I noticed not just the modern wood paneling, large abstract paintings, and big windows that signified an elegant corporate office space. I also saw, out the windows on the high floor of the Chase Tower where my interviewer received me, a view of Lake Michigan’s shoreline that you could trace, past Hyde Park and the South Side and the Skyway Bridge, all the way to the smokestacks marking the state line and the beginning of northern Indiana.

  LET ME ASK YOU, for a moment, to imagine a list of the most interesting subjects in the world, ranked from one to infinity. The list is different for each of us. But some topics are fairly high on the list for almost everyone: topics such as television, religion, warfare, food, sports, space travel, the presidency, and sex. Now ask yourself where, on that list, you would put the subject on which I became an expert during the winter of 2010: North American grocery pricing.

  Not in your top thousand? Me neither, at the time I was invited to join a team working on a client study on the subject. (For an associate, life at McKinsey mostly consists of months-long stints on a “team” of three or four people working on an engagement or “study” solving a problem for a client.) I was there because I had admired the partner in charge of the team since meeting him in the recruiting process. Jeff Helbling was low-key and clean-cut, with a sort of smart and unflappable discipline that melded the styles of his alma maters, West Point and Harvard Business School. A wise McKinsey alum had once told me that there were four things to think about when chasing assignments at the Firm: geography, industry, function, and people. Of these, she counseled, the most important is people. So when the chance came to work with Jeff, I jumped, even though I was uncertain how much professional fulfillment would come from the prospect of commuting to Toronto every week to help a client in the grocery business figure out how to update its prices.

  Soon I was spending my weekdays in a small, glass-walled conference room with three colleagues in a suburban office park, building models to compute how much it would cost to cut prices on various combinations of tens of thousands of items across hundreds of stores in every part of the country. The more I worked on the problem, the more complicated it seemed to become. Eventually the volume of data went beyond the capabilities of Microsoft Excel, and I began using a program called Microsoft Access. Access is designed to hold databases, but I was using it to do math, stretching its functionality to make it work partly like a computational spreadsheet on top of the data management program it was intended to be. As the data set grew to millions of lines, it started freezing my laptop computer. To make me more productive, the firm mailed a more powerful desktop computer to our team room. I hooked it up and spent the better part of many fourteen-hour days calculating at the machine, which my colleagues nicknamed Bertha.
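
  As a rough sketch of what such a model computes (a hypothetical toy in Python with pandas, standing in for the Access databases described above; the column names, numbers, and the naive fixed-volume assumption are mine, not the client’s):

    import pandas as pd

    # Toy illustration only: one row per (store, item) with current price,
    # proposed price, unit cost, and weekly unit sales.
    prices = pd.DataFrame({
        "store":         ["A", "A", "B", "B"],
        "item":          ["milk", "eggs", "milk", "eggs"],
        "current_price": [3.49, 2.99, 3.59, 3.09],
        "new_price":     [3.29, 2.99, 3.29, 2.89],
        "unit_cost":     [2.50, 2.10, 2.50, 2.10],
        "weekly_units":  [1200, 800, 950, 700],
    })

    # Weekly margin today versus under the proposed cuts, naively holding
    # sales volume fixed; the real modeling work lies in how volume shifts.
    current = (prices["current_price"] - prices["unit_cost"]) * prices["weekly_units"]
    proposed = (prices["new_price"] - prices["unit_cost"]) * prices["weekly_units"]
    print("weekly margin given up:", round((current - proposed).sum(), 2))

  Scale that four-row table up to tens of thousands of items across hundreds of stores and the spreadsheet-versus-database problem described above becomes obvious.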

  Against all my expectations, it was fascinating. I wasn’t just learning about the retail business or about computer programs—I was also learning about the nature of data. By manipulating millions of data points, I could weave stories about possible futures, and gather insights on which ideas were good or bad. I could simulate millions of shoppers going up and down the aisles of thousands of stores, and in my mind I pictured their habits shifting as a well-placed price cut subtly changed their perceptions of our client as a better place to shop.

 
