Rationality: From AI to Zombies


by Eliezer Yudkowsky


  That’s what I think would be a fitting direction for the energies of communities, and a common purpose that would bind them together. Tasks like that need communities anyway, and this Earth has plenty of work that needs doing, so there’s no point in waste. We have so much that needs doing—let the energy that was once wasted into the void of religious institutions find an outlet there. And let purposes admirable without need for delusion fill any void in the community structure left by deleting religion and its illusionary higher purposes.

  Strong communities built around worthwhile purposes: That would be the shape I would like to see for the post-religious age, or whatever fraction of humanity has then gotten so far in their lives.

  Although . . . as long as you’ve got a building with a nice large high-resolution screen anyway, I wouldn’t mind challenging the idea that all post-adulthood learning has to take place in distant expensive university campuses with teachers who would rather be doing something else. And it’s empirically the case that colleges seem to support communities quite well. So in all fairness, there are other possibilities for things you could build a post-theistic community around.

  Is all of this just a dream? Maybe. Probably. It’s not completely devoid of incremental implementability, if you’ve got enough rationalists in a sufficiently large city who have heard of the idea. But on the off chance that rationality should catch on so widely, or the Earth should last so long, and that my voice should be heard, then that is the direction I would like to see things moving in—as the churches fade, we don’t need artificial churches, but we do need new idioms of community.

  *

  322

  Rationality: Common Interest of Many Causes

  It is a not-so-hidden agenda of Less Wrong that there are many causes that benefit from the spread of rationality—because it takes a little more rationality than usual to see their case as a supporter, or even just as a supportive bystander. Not just the obvious causes like atheism, but things like marijuana legalization—where you could wish that people were a bit more self-aware about their motives and the nature of signaling, and a bit more moved by inconvenient cold facts. The Machine Intelligence Research Institute was merely an unusually extreme case of this: it got to the point that, after years of bogging down, I threw up my hands and explicitly recursed on the job of creating rationalists.

  But of course, not all the rationalists I create will be interested in my own project—and that’s fine. You can’t capture all the value you create, and trying can have poor side effects.

  If the supporters of other causes are enlightened enough to think similarly . . .

  Then all the causes that benefit from spreading rationality can, perhaps, have something in the way of standardized material to which to point their supporters—a common task, centralized to save effort—and think of themselves as spreading a little rationality on the side. They won’t capture all the value they create. And that’s fine. They’ll capture some of the value others create. Atheism has very little to do directly with marijuana legalization, but if both atheists and anti-Prohibitionists are willing to step back a bit and say a bit about the general, abstract principle of confronting a discomforting truth that interferes with a fine righteous tirade, then both atheism and marijuana legalization pick up some of the benefit from both efforts.

  But this requires—I know I’m repeating myself here, but it’s important—that you be willing not to capture all the value you create. It requires that, in the course of talking about rationality, you maintain an ability to temporarily shut up about your own cause even though it is the best cause ever. It requires that you don’t regard those other causes, and that they don’t regard you, as competing for a limited supply of rationalists with a limited capacity for support—but rather as creating more rationalists and increasing their capacity for support. You only reap some of your own efforts, but you reap some of others’ efforts as well.

  If you and they don’t agree on everything—especially priorities—you have to be willing to agree to shut up about the disagreement. (Except possibly in specialized venues, out of the way of the mainstream discourse, where such disagreements are explicitly prosecuted.)

  A certain person who was taking over as the president of a certain organization once pointed out that the organization had not enjoyed much luck with its message of “This is the best thing you can do,” as compared to e.g. the X-Prize Foundation’s tremendous success conveying to rich individuals “Here is a cool thing you can do.”

  This is one of those insights where you blink incredulously and then grasp how much sense it makes. The human brain can’t grasp large stakes, and people are not anything remotely like expected utility maximizers, and we are generally altruistic akrasics. Saying, “This is the best thing” doesn’t add much motivation beyond “This is a cool thing.” It just establishes a much higher burden of proof. And invites invidious motivation-sapping comparison to all other good things you know (perhaps threatening to diminish moral satisfaction already purchased).

  If we’re operating under the assumption that everyone by default is an altruistic akrasic (someone who wishes they could choose to do more)—or at least, that most potential supporters of interest fit this description—then fighting it out over which cause is the best to support may have the effect of decreasing the overall supply of altruism.

  “But,” you say, “dollars are fungible; a dollar you use for one thing indeed cannot be used for anything else!” To which I reply: But human beings really aren’t expected utility maximizers, as cognitive systems. Dollars come out of different mental accounts, cost different amounts of willpower (the true limiting resource) under different circumstances. People want to spread their donations around as an act of mental accounting to minimize the regret if a single cause fails, and telling someone about an additional cause may increase the total amount they’re willing to help.

  There are, of course, limits to this principle of benign tolerance. If someone’s pet project is to teach salsa dance, it would be quite a stretch to say they’re working on a worthy sub-task of the great common Neo-Enlightenment project of human progress.

  But to the extent that something really is a task you would wish to see done on behalf of humanity . . . then invidious comparisons of that project to Your-Favorite-Project may not help your own project as much as you might think. We may need to learn to say, by habit and in nearly all forums, “Here is a cool rationalist project,” not, “Mine alone is the highest-return in expected utilons per marginal dollar project.” If someone cold-blooded enough to maximize expected utility of fungible money without regard to emotional side effects explicitly asks, we could perhaps steer them to a specialized subforum where anyone willing to make the claim of top priority fights it out. Though if all goes well, those projects that have a strong claim to this kind of underserved-ness will get more investment and their marginal returns will go down, and the winner of the competing claims will no longer be clear.

  If there are many rationalist projects that benefit from raising the sanity waterline, then their mutual tolerance and common investment in spreading rationality could conceivably exhibit a commons problem. But this doesn’t seem too hard to deal with: if there’s a group that’s not willing to share the rationalists they create or mention to them that other Neo-Enlightenment projects might exist, then any common, centralized rationalist resources could remove the mention of their project as a cool thing to do.

  Though all this is an idealistic and future-facing thought, the benefits—for all of us—could be finding some important things we’re missing right now. So many rationalist projects have supporters who are few and far-flung; if we could all identify as elements of the Common Project of human progress, the Neo-Enlightenment, there would be a substantially higher probability of finding ten of us in any given city. Right now, a lot of these projects are just a little lonely for their supporters. Rationality may not be the most important thing in the world—that, of course, is the thing that we protect—but it is a cool thing that more of us have in common. We might gain much from identifying ourselves also as rationalists.

  *

  323

  Helpless Individuals

  When you consider that our grouping instincts are optimized for 50-person hunter-gatherer bands where everyone knows everyone else, it begins to seem miraculous that modern-day large institutions survive at all.

  Well—there are governments with specialized militaries and police, which can extract taxes. That’s a non-ancestral idiom which dates back to the invention of sedentary agriculture and extractible surpluses; humanity is still struggling to deal with it.

  There are corporations in which the flow of money is controlled by centralized management, a non-ancestral idiom dating back to the invention of large-scale trade and professional specialization.

  And in a world with large populations and close contact, memes evolve far more virulent than the average case of the ancestral environment; memes that wield threats of damnation, promises of heaven, and professional priest classes to transmit them.

  But by and large, the answer to the question “How do large institutions survive?” is “They don’t!” The vast majority of large modern-day institutions—some of them extremely vital to the functioning of our complex civilization—simply fail to exist in the first place.

  I first realized this as a result of grasping how Science gets funded: namely, not by individual donations.

  Science traditionally gets funded by governments, corporations, and large foundations. I’ve had the opportunity to discover firsthand that it’s amazingly difficult to raise money for Science from individuals. Not unless it’s science about a disease with gruesome victims, and maybe not even then.

  Why? People are, in fact, prosocial; they give money to, say, puppy pounds. Science is one of the great social interests, and people are even widely aware of this—why not Science, then?

  Any particular science project—say, studying the genetics of trypanotolerance in cattle—is not a good emotional fit for individual charity. Science has a long time horizon that requires continual support. The interim or even final press releases may not sound all that emotionally arousing. You can’t volunteer; it’s a job for specialists. Being shown a picture of the scientist you’re supporting at or somewhat below the market price for their salary lacks the impact of being shown the wide-eyed puppy that you helped usher to a new home. You don’t get the immediate feedback and the sense of immediate accomplishment that’s required to keep an individual spending their own money.

  Ironically, I finally realized this, not from my own work, but from thinking “Why don’t Seth Roberts’s readers come together to support experimental tests of Roberts’s hypothesis about obesity? Why aren’t individual philanthropists paying to test Bussard’s polywell fusor?” These are examples of obviously ridiculously underfunded science, with applications (if true) that would be relevant to many, many individuals. That was when it occurred to me that, in full generality, Science is not a good emotional fit for people spending their own money.

  In fact very few things are, with the individuals we have now. It seems to me that this is key to understanding how the world works the way it does—why so many individual interests are poorly protected—why 200 million adult Americans have such tremendous trouble supervising the 535 members of Congress, for example.

  So how does Science actually get funded? By governments that think they ought to spend some amount of money on Science, with legislatures or executives deciding to do so—it’s not quite their own money they’re spending. Sufficiently large corporations decide to throw some amount of money at blue-sky R&D. Large grassroots organizations built around affective death spirals may look at science that suits their ideals. Large private foundations, based on money block-allocated by wealthy individuals to their reputations, spend money on Science that promises to sound very charitable, sort of like allocating money to orchestras or modern art. And then the individual scientists (or individual scientific task-forces) fight it out for control of that pre-allocated money supply, given into the hands of grant committee members who seem like the sort of people who ought to be judging scientists.

  You rarely see a scientific project making a direct bid for some portion of society’s resource flow; rather, it first gets allocated to Science, and then scientists fight over who actually gets it. Even the exceptions to this rule are more likely to be driven by politicians (the moonshot) or military purposes (the Manhattan Project) than by the appeal of scientists to the public.

  Now I’m sure that if the general public were in the habit of funding particular science by individual donations, a whole lotta money would be wasted on e.g. quantum gibberish—assuming that the general public somehow acquired the habit of funding science without changing any other facts about the people or the society.

  But it’s still an interesting point that Science manages to survive not because it is in our collective individual interest to see Science get done, but rather, because Science has fastened itself as a parasite onto the few forms of large organization that can exist in our world. There are plenty of other projects that simply fail to exist in the first place.

  It seems to me that modern humanity manages to put forth very little in the way of coordinated effort to serve collective individual interests. It’s just too non-ancestral a problem when you scale to more than 50 people. There are only big taxers, big traders, supermemes, occasional individuals of great power; and a few other organizations, like Science, that can fasten parasitically onto them.

  *

  324

  Money: The Unit of Caring

  Steve Omohundro has suggested a folk theorem to the effect that, within the interior of any approximately rational self-modifying agent, the marginal benefit of investing additional resources in anything ought to be about equal. Or, to put it a bit more exactly, shifting a unit of resource between any two tasks should produce no increase in expected utility, relative to the agent’s utility function and its probabilistic expectations about its own algorithms.

  This resource balance principle implies that—over a very wide range of approximately rational systems, including even the interior of a self-modifying mind—there will exist some common currency of expected utilons, by which everything worth doing can be measured.
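  As a hedged formal sketch of the principle just stated (the notation here is mine, not Omohundro’s): write the agent’s expected utility as a function of how much resource goes to each task. The balance condition is then the familiar first-order condition for an interior optimum under a fixed budget—the marginal expected utility of a resource unit is the same in every task, and that shared marginal value is the “common currency.”

```latex
% Sketch of the resource-balance condition (notation is illustrative).
% E[U] = the agent's expected utility; r_i = resource allocated to task i,
% subject to a fixed total budget r_1 + r_2 + ... + r_n = R.
\[
  \frac{\partial\, \mathbb{E}[U]}{\partial r_i}
  \;=\;
  \frac{\partial\, \mathbb{E}[U]}{\partial r_j}
  \qquad \text{for all tasks } i, j .
\]
% Equivalently: shifting a marginal unit of resource from task j to task i
% produces no first-order change in expected utility,
\[
  \frac{\partial\, \mathbb{E}[U]}{\partial r_i}
  \;-\;
  \frac{\partial\, \mathbb{E}[U]}{\partial r_j}
  \;=\; 0 .
\]
```

The common value of these marginal utilities plays the role of the shared “currency of expected utilons” the text goes on to describe.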

  In our society, this common currency of expected utilons is called “money.” It is the measure of how much society cares about something.

  This is a brutal yet obvious point, which many are motivated to deny.

  With this audience, I hope, I can simply state it and move on. It’s not as if you thought “society” was intelligent, benevolent, and sane up until this point, right?

  I say this to make a certain point held in common across many good causes. Any charitable institution you’ve ever had a kind word for, certainly wishes you would appreciate this point, whether or not they’ve ever said anything out loud. For I have listened to others in the nonprofit world, and I know that I am not speaking only for myself here . . .

  Many people, when they see something that they think is worth doing, would like to volunteer a few hours of spare time, or maybe mail in a five-year-old laptop and some canned goods, or walk in a march somewhere, but at any rate, not spend money.

  Believe me, I understand the feeling. Every time I spend money I feel like I’m losing hit points. That’s the problem with having a unified quantity describing your net worth: Seeing that number go down is not a pleasant feeling, even though it has to fluctuate in the ordinary course of your existence. There ought to be a fun-theoretic principle against it.

  But, well . . .

  There is this very, very old puzzle/observation in economics about the lawyer who spends an hour volunteering at the soup kitchen, instead of working an extra hour and donating the money to hire someone to work for five hours at the soup kitchen.
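  The implied arithmetic, with illustrative wage figures that I am supplying (they are not in the text): an hour volunteered buys one hour of kitchen labor, while an hour worked-and-donated buys as many hours as the wage ratio allows.

```latex
% Illustrative numbers (mine, for concreteness): the lawyer bills w_L per hour,
% and kitchen labor can be hired at w_K per hour.
\[
  \text{hours of kitchen labor bought}
  \;=\; \frac{w_L}{w_K}
  \;=\; \frac{\$250/\text{hr}}{\$50/\text{hr}}
  \;=\; 5
  \;>\; 1 \;=\; \text{hours volunteered.}
\]
```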

  There’s this thing called “Ricardo’s Law of Comparative Advantage.” There’s this idea called “professional specialization.” There’s this notion of “economies of scale.” There’s this concept of “gains from trade.” The whole reason why we have money is to realize the tremendous gains possible from each of us doing what we do best.

  This is what grownups do. This is what you do when you want something to actually get done. You use money to employ full-time specialists.

  Yes, people are sometimes limited in their ability to trade time for money (underemployed), so that it is better for them if they can directly donate that which they would usually trade for money. If the soup kitchen needed a lawyer, and the lawyer donated a large contiguous high-priority block of lawyering, then that sort of volunteering makes sense—that’s the same specialized capability the lawyer ordinarily trades for money. But “volunteering” just one hour of legal work, constantly delayed, spread across three weeks in casual minutes between other jobs? This is not the way something gets done when anyone actually cares about it, or to state it near-equivalently, when money is involved.

  To the extent that individuals fail to grasp this principle on a gut level, they may think that the use of money is somehow optional in the pursuit of things that merely seem morally desirable—as opposed to tasks like feeding ourselves, whose desirability seems to be treated oddly differently. This factor may be sufficient by itself to prevent us from pursuing our collective common interest in groups larger than 40 people.

  Economies of trade and professional specialization are not just vaguely good yet unnatural-sounding ideas, they are the only way that anything ever gets done in this world. Money is not pieces of paper, it is the common currency of caring.

 
