But the interesting thing is that real arguments do nothing to status. Status is a matter of social perception. Winning a chess game might raise your status among geeks and lower it among jocks. So when you use an argument to score a status win, there’s a very strong chance that it isn’t a real argument at all, but a rationalization.
With complicated arguments, it can be hard to figure out the details of exactly where reasoning stops and rationalization begins. But you don’t need to figure it out. All you need to do is note whether status changes as the result of a given argument. If it does, you can file it away, because used in reverse, it will cause the reverse status change.
In the example above, Bob used the “open and transparent” argument (inferring the consequences of assumed shared values in disingenuous ways to score a cheap point and occupy the moral high ground). Alice filed away the pattern and reversed it when it suited her. The tactic works not because the reasoning pattern is strong but because it is weak. You could challenge such a pattern the moment you spot it, and enter into a long argument. Or you could just file it away to use as a petard later.
Many sitcom plots are entirely driven by status see-saws caused by hoist-by-own-petard dynamics.
The nice thing is that even though the argument is weak, your opponent will not want to attack it, since doing so would undermine their own previous use of the tactic. This helps create cultures of consensus around bullshit arguments that nobody wants to call out.

But you can raise your own game by only using such weak logic in petard-hoist defenses, and by developing your own logical patterns much more carefully, avoiding any arguments that could be used against you later. This requires honesty and self-awareness. Not everybody can answer the key question: am I making this argument to achieve a favorable status outcome for myself, or to arrive at a real truth? Note that I have nothing against status-movement goals. They are in fact the bread and butter of Slightly Evil thinking. It’s just that reason is a very dangerous, double-edged tool to use when playing status games. Use other tools.
As a rule of thumb, useless arguments that move status without discovering truths are most often found in the application of unexamined “values.” Such values are rarely about profound moral positions. They are more often a crutch for lazy thinkers.
Crisis Non-Response
The British Yes, Minister and Yes, Prime Minister television shows are gold mines for Slightly Evil. I recently watched the entire series again, and re-read the books, after nearly 20 years, and was astonished by how modern the humor still seems. There is even a superb episode about a bank bailout.

For those of you who are unfamiliar with the shows, they track the rising fortunes of Jim Hacker, who goes from being “Minister for Administrative Affairs” in the first show, to the Prime Minister’s office. The premise of the show is the constant battle between Hacker and the bureaucratic civil service. For those of you familiar with my Gervais Principle series, the picture painted is of an inverted MacLeod hierarchy: Members of Parliament are the losers, ministers of the Crown are the clueless, and the civil service contains the sociopaths. But unlike The Office, the main characters are not pure examples of the archetypes. Hacker occasionally wins via a streak of sociopathy, while the civil service occasionally loses through cluelessness.

In the episode “A Victory for Democracy”, we find a description of how the British Foreign Office deployed a well-thought-out “creative inertia” strategy to not respond to a crisis. The strategy involves four stages. You can try using this the next time you want to avoid action. From the book:
“The standard Foreign Office response to any crisis is:
Stage One: We say nothing is going to happen
Stage Two: We say that something may be going to happen, but we should do nothing about it
Stage Three: We say that maybe we should do something about it, but there’s nothing we can do
Stage Four: We say that maybe there was something we could have done but it’s too late now.”
Sound familiar?
In the episode, this strategy is adopted by the civil service to resist Hacker’s attempts to get Britain to intervene in an impending communist coup on a Commonwealth island. (Hacker wants to intervene in order to score points with the Americans, while the civil service wants to avoid acting because of some Middle Eastern implications.)
Hacker, in this case, outmaneuvers the civil service via a fait accompli, sending an airborne battalion to the island on a goodwill mission via the Defense Ministry, without letting the Foreign Office catch on. That’s generally the best way to respond to such stalling: a fait accompli. If you try to argue at any stage, you’ve already lost because the whole point of the strategy is to waste time until it is too late. If you want to actually use this stalling strategy, keep an eye on your flanks.
I’ll share more juicy bits from this rich source on this list occasionally, but I strongly recommend you watch the shows and read the book versions, which are not straight transcripts; the stories are presented slightly differently via texts of memos and internal documents.
The Hierarchy of Deceptions
To function as a human being, you are forced to accept a minimum level of deception in your life. The more complex and challenging your life, the higher this minimum. If you live a quiet and conventional family life, you may never be challenged beyond the problem of whether to tell your kid the truth about Santa Claus. If you are President of the United States, your moral intelligence is going to be tested a lot more severely. The journey on the path of Slightly Evil deception begins with your attitude towards lying, but ends up at Russian roulette.
The Principle of Conservation of Deception
Can the minimum level of deception in your life go to zero? Every time I encounter somebody who swears by a philosophy of absolute honesty, I invariably discover later that they’ve been deceiving themselves, and often others as well, subconsciously. This observation inspired me to make up a conjecture, the principle of conservation of deception: at any given level of moral and intellectual development, there is an associated minimum level of deception in your life. If you aren’t deceiving others, you are likely deceiving yourself. Or you’re in denial.
You can only lower the level of deception in your life through further intellectual and moral development. In other words, you have to earn higher levels of truth in your life. This actually takes intelligence, not just pious intentions.
Perhaps there is an ideal of moral and cognitive genius where you can function without deception at all. I haven’t met anyone at this level, but I often come across people who recognize the principle of conservation of deception, and seem to be consciously working to lower the amount of deception in their lives.
The Anatomy of Deception
The question of deception arises when you are in a situation where you have a skill or information advantage over another decision-maker. How you use that advantage depends on four things: alignment of intentions, your relationship to the other party, the relative value of a win to the two parties, and the degree of moral certainty you have about your intentions. Let’s get the first three out of the way, since they are easier to understand as simple calculations.
At one extreme, where there is a strong alignment of intentions, a good relationship, and a desire to see the other party win, you have a parent lying to make a problem simpler for a child and secretly helping him/her succeed. At the other extreme, where both intentions and relationship are completely adversarial and you stand to benefit much more than the other party, you pull out all the stops to win. Whatever the situation, you can use both your information advantage and your skill advantage towards the appropriate outcome.
Deception is very nearly an amoral behavior. There are “good” lies: little white lies, nurturing lies, and complicity in the larger polite fictions of society. And there are “bad” lies that help you inflict as much destruction as possible. But on the whole, deception is fundamentally friendlier to evil. So any use of deception is at least slightly evil.
The Hierarchy of Deceptions
I find it very useful to think in terms of a hierarchy of deception skills, from the least sophisticated to the most sophisticated. The less sophisticated ones are harder to justify than the more sophisticated ones, which is reason enough to increase your sophistication level.
The least sophisticated form of deception is outright lying and fabrication of evidence.
A slightly more sophisticated form of deception is misdirection. You don’t lie, but you foreground a pattern of true information that is likely to lead to false conclusions.
Next you get withholding of information. You don’t lie or misdirect, but you don’t share any information that you don’t have to.
Next, you get equivocation, or sharing of information in ambiguous ways. This allows you to maintain plausible deniability against charges of lying, misdirection or withholding information, and relies on the predisposition of the other party to draw certain conclusions over others.
At the final level of sophistication, you get not-correcting-others. You don’t lie, misdirect, withhold or equivocate. But when others are drawing false conclusions that you could correct if you chose to (or missing inferences that are obvious to you due to your greater skill), you selectively choose not to help them out. If they make no mistakes and miss nothing, you’ve given your entire advantage away, though.
There is progressive minimalism and (social and moral) defensibility of means in this hierarchy. I am not a lawyer, so I’d be curious to hear from the lawyers among you about the legal defensibility of different levels of deception. Outright lying under oath is obviously perjury, while not-correcting-others would appear to be entirely defensible.
The more sophisticated techniques are harder to use. They take less energy (both to use initially and to cover up if discovered later), but more skill. How you use all these techniques to compete in adversarial situations should be obvious. It is harder to see how you would use any of the skills to help another party. When you get good at not-correcting-others in helpful ways, you become a good teacher.
Moral Certainty
Any sort of deception, to be justifiable within your personal morality, needs to be driven by a certain amount of moral certainty regarding your own intentions.
Fortunately, life is deliciously interesting: the question of deception often comes up precisely when you are not entirely sure that you are morally in the right. To make the analysis simple, let’s assume a deterministic situation at the extremes: you have a certain decisive advantage. If you used it, you’d win for sure. If you gave it away entirely, you’d lose for sure. In between, your likelihood of winning depends on how much of the advantage you cede. If you cede half the advantage, you are leaving the outcome to fate.
How much of your advantage should you give away when you are morally unsure about your position? I assert that you should give away the advantage in direct proportion to the amount of doubt you feel. Not only does this seem fair in some cosmic sense, but it is also a great way to prevent moral doubt from paralyzing you to the point that you don’t act at all.
I think this is the reason behind the appeal of the familiar trope of a movie villain who sets up games of chance when he doesn’t have to. You have a revolver, a decisive advantage, but you choose to play Russian roulette with your opponent, rather than shooting him outright. Often, such villains, whose actions reflect a fundamental grappling with moral doubt, are more interesting than the heroes who live in a world of moral righteousness.
Personally, I almost never lie outright these days. To the extent possible, I try to turn moral ambiguity into Russian roulette.
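To make the proportionality rule concrete, here is a minimal toy sketch in Python. It assumes the deterministic setup described above (keeping a full decisive advantage means a sure win, ceding it all means a sure loss, with linear odds in between); the function names and sample numbers are illustrative inventions, not anything from the original argument.

```python
# Toy model of the "cede advantage in proportion to doubt" rule.
# Assumes the deterministic setup from the text: keep all of a
# decisive advantage and you win for sure, cede all of it and you
# lose for sure, with odds scaling linearly in between.
# Names and numbers are illustrative only.

def advantage_to_keep(moral_doubt: float) -> float:
    """Cede advantage in direct proportion to doubt.

    moral_doubt: 0.0 = certain you are in the right,
                 1.0 = certain you are in the wrong.
    """
    return 1.0 - moral_doubt

def win_probability(advantage_kept: float) -> float:
    """Linear model: likelihood of winning equals the fraction
    of the decisive advantage you keep."""
    return advantage_kept

for doubt in (0.0, 0.25, 0.5, 1.0):
    kept = advantage_to_keep(doubt)
    print(f"doubt={doubt:.2f}: keep {kept:.2f} of advantage, "
          f"win probability {win_probability(kept):.2f}")
# At doubt = 0.5 you keep half the advantage and the outcome is an
# even-odds game of chance -- the "leaving the outcome to fate" case.
```

The linearity here is only one possible reading; the rule itself requires just that more doubt means more advantage ceded, so any monotonic mapping would honor it.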
The Art of Damage Control
Being Slightly Evil means you are looking to live the gambler’s life: placing bets and taking risks. If you win, you get a lot more for your efforts than drones who merely work hard. The downside is that you can and will fail on occasion. And the possibility of failure leads to one of the worst pieces of advice that gets passed around (out of either cluelessness or malice aforethought): “You must be willing to look foolish.”
Why is this bad advice? It is bad advice because it turns the manageable problem of damage control into some kind of holy cross that you must necessarily, and passively, bear. People who hand out this piece of advice typically do so while striking a pious, quasi-religious pose. They make it sound as though looking foolish were necessary atonement for your sins of risk-taking.
Certainly, looking foolish is a potential consequence of failure (besides, of course, other consequences such as material losses, loss of trust, credibility, friendships, and so forth). “You must be willing to look foolish” is part of a more general piece of advice: “you must accept the consequences.”
No, you shouldn’t.
“You must accept the consequences” is the start of a dangerous line of advice that also leads to “you should take one for the team,” hara-kiri, captains “going down with the ship,” and other (usually unnecessary) acts of martyrdom. There are times and places when honor and such noble acts of self-sacrifice might be appropriate (usually actual battlefields are involved), but they are truly rare. Most of the time, nobody needs to die.
You should certainly accept that failure is imminent the moment the signs are clear. When the writing’s on the wall, it is time to quit fighting for success. But that doesn’t mean it is time to switch into noble passivity, waiting for the blow to fall. You are not a defendant awaiting trial (unless you’ve actually broken the law). It is time to rapidly shift into damage control mode. There are no holy judges out there watching to see if karmic justice is done, and waiting to applaud your noble actions. Only gleeful onlookers enjoying their moment of schadenfreude, other evil and slightly evil people furiously looking for an opportunity in your fall, and well-intentioned compassionate souls eager to commiserate and tempt you into passivity when you need to be active. If you are involved in a big enough failure, there will also be an angry mob baying for your blood very soon. And yes, there are potential innocent and hapless fallout victims who will soon pose a moral quandary for you.
As a principal in a risky endeavor, unless you are prone to denial, you’ll realize that failure is unavoidable long before others do. This means you have the most time and control over consequences, which includes a degree of control over how, where and when others find out what’s happening, and how they react.
Damage control means predicting the unmanaged course of events, designing interventions to minimize fallout, and optimally distributing the residual impact among all exposed parties. This means trading off impact on trust, credibility, and future opportunities. It means salvaging material assets. And yes: it means deciding how foolish you can afford to look.

Looking foolish is serious business. Reputations take a long time to establish and minutes to lose. Of all potential consequences, “looking foolish” is the most damaging. You can rebuild assets, re-establish trust and credibility, and find lifelines and future opportunities in even the worst chaos. But once people start thinking of you as “foolish,” you’ve put yourself in a pigeonhole that is very hard to climb out of. Depending on the situation, you may be able to buy back 10 units of lost trust with 1.5 units of looking foolish. These calculations must be made. The do-nothing defaults will be unfavorable, especially because others will shift into active damage-control mode the moment they find out, even if you don’t.
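The exchange rate quoted above suggests one way to frame the calculation. The sketch below is a hypothetical toy: the 10-to-1.5 trust-for-foolishness rate comes from the example in the text, but the function and its affordability cap are inventions for illustration, not a method the text prescribes.

```python
# A toy version of the damage-control calculation hinted at above:
# spending units of "looking foolish" to buy back units of lost trust.
# The 10 : 1.5 exchange rate is the text's illustrative number; the
# cap and the function itself are hypothetical.

TRUST_PER_UNIT_FOOLISHNESS = 10 / 1.5  # ~6.67 units of trust per unit

def foolishness_to_spend(lost_trust: float, affordable_foolishness: float) -> float:
    """How much 'looking foolish' to accept in order to buy back
    lost trust, capped at what your reputation can absorb."""
    needed = lost_trust / TRUST_PER_UNIT_FOOLISHNESS
    return min(needed, affordable_foolishness)

# Buying back 10 units of lost trust costs 1.5 units of foolishness,
# provided you can afford to look that foolish:
print(foolishness_to_spend(lost_trust=10, affordable_foolishness=2.0))  # 1.5
```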
Now, if you are really smart, the optimal hit will not be negative at all. You’ll find a way to play a failure so you not only escape all adverse consequences, but perhaps actually come out looking good.
Why is this Slightly Evil? Because there’s a slippery slope to True Evil. A basic truth about risk management is that old saw, “success has many parents, while failure is an orphan.” If there’s a win, you fight for as much of a share as you can (for yourself, or for a broader group whose interests you represent). If there’s a failure, you rush to dissipate consequences as widely and as far away from yourself as possible. The path to true evil lies in your power to make innocents – scapegoats and fall guys – suffer the worst of the consequences. For many, damage control is pretty much the same as figuring out who can most safely be blamed. And the people who are easiest to hurt (and also the last to find out) are typically the most innocent (there is rarely anyone who is truly innocent; every stakeholder is complicit in a failure to some extent).
You cannot be all noble and rise above the fray of the blame game. When a true failure looms, you must play or be played. As always, you decide where to draw your lines in the sand. And sometimes, yes, you may well decide to shoulder more than your share of the burden out of altruism. But it had better be calculated rather than clueless altruism.
Disrupting an Adversary
On Annoying Others
Wanting to be liked is a significant need that must be overcome on the slightly evil path. It is not enough to learn to take criticism with a stiff-faced smile. You must become comfortable with actively provoking and then dealing with dislike. The first step is to learn how to be annoying without any purpose in mind. That’s the learning, practice and play stage.

Then there’s a danger zone: you can get addicted to button-pushing schadenfreude for the hell of it. Feeding your self-esteem by baiting others is as limiting as feeding it by fishing for compliments. There’s a reason both those idioms arise from the same fishing metaphor. They are about behaviors at the same level. Those who get addicted are the lifelong contrarians and trolls-without-a-cause. At some point smart people simply start ignoring the bait, and the contrarians are reduced to baiting idiots, which is neither entertaining nor valuable.

But once you’ve learned how to be annoying for no reason, and avoided the temptations of the danger zone, you can graduate to being selectively annoying in calibrated ways when you have a good reason. Why would you ever want to do that? Quite simply, because people who are in an annoyed state behave more predictably than those who are in a non-annoyed state, where they are actually thinking. If you ever need to stop somebody from thinking too much about something, and more benign methods like flattery, distraction or avoidance fail, you escalate by being annoying.