
Be Slightly Evil: A Playbook for Sociopaths (Ribbonfarm Roughs 1)


by Venkatesh Rao


  It is far easier, and far more valuable, to annoy people using their strengths than by using their weaknesses. In fact, you cannot really annoy people by attacking their weaknesses. You can only insult them and buy anger and resentment that might come back to bite you in the form of vindictiveness. When you annoy people using their strengths, on the other hand, they tend to get frustrated with themselves rather than angry at you. And while annoyance usually fades (unless you reinforce it into a permanent state – the danger zone above), insults get carved in stone. And best of all, causing annoyance is a tactic that can neutralize people’s most effective behaviors when they are not in your best interests.

  This phenomenon is part of a broader phenomenon I’ve talked about before: all arrested development is caused by strengths, not weaknesses. If you get too good at something, you get addicted to those rewards, and your behavior around that strength gets predictable, even if highly effective.

  To be truly effective, you must select strength vectors where you personally are much weaker than your target (or can appear much weaker because you’ve managed to make them underestimate you). In fact, that’s the source of this whole bag of tricks: all annoyance tactics are derived from the natural behaviors of stupid, illogical, uncreative and unintuitive people, and rely on the mechanics of the Dunning-Kruger effect. The only difference is that in their natural form, these are typically poorly-timed lash-out/bite-back behaviors that arise from threats to self-esteem. In their deliberate form, the tactics are used when you want to achieve specific effects.

  Derailing the Data-Driven

  In the Yes, Minister and Yes, Prime Minister TV shows of the 80s, the Whitehall bureaucrats Humphrey Appleby and Bernard Woolley kept the hapless minister (and later, Prime Minister) Jim Hacker trapped between a rock and a hard place: they would either flood him with so much information that he couldn’t find what he needed to know, or withhold so much that it wasn’t there for him to find. By effectively combining filtration tactics with distraction tactics based on irrelevant information, the bureaucrats managed to keep the reins in their own hands. In information wars, filters are power and useless data are weapons. This general approach to manipulation relies on the fundamental relationship between data and decisions. New tactics are becoming available in our digital age.

  When two parties have divergent agendas, the party that controls data flows is usually the one that wins. To control how a decision is framed and made, you have to control the data flows that feed into that decision. This requires two levels of work. First, you have to frame the decision. This step determines which data are deemed important and relevant. Second, you have to hide some data and exaggerate the importance of other data. Framing is a more powerful lever, since by perversely misframing a decision, you can send someone down a completely irrelevant bunny trail and give them the illusion of choice. For example, if you know that for a given question, the opinions of parents matter more than those of teachers, encouraging your opponent to design a survey targeting teachers (preferably a detailed survey to be administered at a teacher conference in six months, rather than a quick one conducted online next week) will distract him/her and win you an advantage.

  But if you cannot completely distract somebody from the important data flows, you need to learn the basics of data judo, so they end up doing the wrong things with the right data.

  In Hacker’s day (the show is set in the 80s), this general approach to manipulation relied on the paper version of information overload/scarcity. Overload meant Hacker would have to process multiple boxes of papers every night. The bureaucrats would hide the important papers deep inside the last box (a tactic also favored by defendants in class-action lawsuits, where the discovery phase results in huge piles of mostly useless data). He’d be so tired by the time he got to them late at night, he’d sign without looking (later in the show, Hacker wises up to this tactic).

  Hacker’s mistake lay in delegating the determination of “important” to his nominal underlings. If he felt overwhelmed with data, he’d complain of information overload and tell his staff to only give him the most important papers to read. If he felt important things were being kept from him, he’d demand that he be kept in the loop about everything.

  In data flows, there is no real protection against manipulation by people with more privileged and direct access to data (it’s like being the lower riparian state along the course of a river), but you can do better than Hacker. You can, for example, set explicit relevance criteria: “send me everything about issue X, but leave me out of the loop on issue Y.” That at least escalates the game, since hiding information from a relevance filter is tougher than hiding it from a generic “importance” filter.

  But enough about 80s style information-based manipulation. We are in the 2010s now, with several generations of information technology between us and the Jim Hacker age.

  The equivalent of Jim Hacker in the 2010s is the self-styled “data-driven” decision-maker. I first started encountering the term around 2006, when “analytics” was starting to catch on as a buzzword. The typical clueless data-driven decision-maker (call him/her a CDDD) has the following characteristics:

  Often has a background in a process discipline such as Six Sigma.

  Loves anything with a statistical cachet to it, like “A/B testing” or “ARIMA model.”

  Often has some very rudimentary training in statistics and probability theory.

  Conflates more sophisticated analysis with more useful results.

  Confuses precision with accuracy (this usually shows up as worry about data quality while forgetting about data relevance).

  Is often a formula geek rather than somebody who actually looks at specific numbers.

  Is vastly more confident and secure in his/her clueless state than his/her paper-driven predecessors were.

  The last two elements are particularly important. By “formula geek,” I mean someone who has only a very hazy conceptual understanding of mathematical ideas like regression and technological tools like SQL, but is able to actually use the tools very well, and relies on them to provide answers and insight. They can run regressions, fit curves, talk about R-squared and even run simpler SQL queries and routine database reports. This leads to the greater sense of confidence and competence: CDDDs mistake basic understanding of more powerful tools for greater personal competence (like somebody with a car feeling more confident about their sense of direction than somebody on foot).

  What they completely lack is any sense of taste about when and how to use the tools, any sense of what data is missing, and how to improvise. They believe “drilling down” means generating more detailed reports or knowing the tricks involved in slicing data in increasingly refined ways.

  The truth of course, is that to “drill down” is to act like a detective following an instinctive trail of questioning in a mystery novel, based on clues that seem significant. Intuition in data-driven thinking doesn’t vanish, it merely moves from the answers to the questions. There are extremely sophisticated thinkers who simply “get” data-driven decision-making without knowing any statistics or technical details: they understand that being intelligently data-driven is simply about asking the right questions at the right time, which is something that takes hard thinking and a sense of timing rather than technical skills.

  It can be scary to watch these smart people in action. I once watched a CDDD team do a 30-minute presentation to a smart executive, presenting tons of data and answering tons of obvious and irrelevant questions. All eye-glazing stuff. In the Q&A session, most of the tired audience simply asked dull follow-up questions that the CDDD team could easily answer. The smart executive? He cut right to the chase and asked the ONE important question that reframed the issue and made it obvious that all the data and analysis were irrelevant.

  CDDDs fail in the following predictable ways:

  Failing to understand the relationship between time and data: more data is only useful if it is being generated and intelligently analyzed faster than options are expiring due to a ticking clock.

  Falling prey to the drunkard-and-keys effect: looking in the glare of the streetlight, wherever data happens to be available, rather than where data is actually needed to lower risk.

  Going beyond “roughly right”: If you can eyeball a graph and notice that it is basically trending up in a rough straight line, and that’s good enough for the decision you need to make, there is no real point in doing complicated math to figure out exactly how straight it is (the key phrase is “good enough”). This is an extremely common failure mode. A simple way to test for this is to ask “can ANY possible conclusion from this mathematical refinement exercise actually swing the decision the other way?” If the answer is no, the work is not worth doing.

  Failing to understand sampling: CDDDs understand the technical ideas in sampling (randomness, i.i.d, true randomness versus chain samples versus convenience samples, methodological problems with different data collection methods). But they don’t understand the far simpler framing issues in sampling. A technically perfect A/B test is completely useless if you are asking the wrong question to begin with.
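The “roughly right” test in the third failure mode lends itself to a concrete sketch. This is a minimal illustration with made-up numbers, not a prescription:

```python
import numpy as np

# Hypothetical monthly metric; the decision depends only on direction ("up?").
months = np.arange(12)
metric = np.array([10, 12, 11, 14, 15, 17, 16, 19, 21, 22, 24, 26])

# The "roughly right" check: a plain least-squares line.
slope, intercept = np.polyfit(months, metric, 1)

# The decision test: could ANY refinement (curve fitting, confidence
# intervals, seasonality models) flip the direction of this trend? The
# series rises in nearly every step, so no refinement can make it "down",
# and further mathematical polish is not worth doing.
print("trend direction:", "up" if slope > 0 else "down")
```

The point is not the fit itself but the question asked of it: once no possible conclusion from further refinement can swing the decision, the analysis stops.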

  All these failure modes arise from the same place: failing to actually think about the problem at a pre-technical level: asking the right questions and pondering the underlying assumptions and hypotheses. All these activities are outside of the technical work of data-driven decision making. There are no formulas or processes at this framing stage.

  Which brings us to the slightly evil part. If you are dealing with a CDDD who is getting in your way, what can you do? You could, of course, do the digital equivalent of the too-much/too-little data dumping used on Jim Hacker (too much is usually far easier these days: give them a massive Excel sheet, or access to a database query interface that can do far too much, and is designed to drill down and generate reports in the wrong directions).

  But the digital world offers more room for creative misdirection, overload and information hiding. The key is to recognize that CDDDs do everything they do out of risk aversion, but are hazy about which data reduce which risks and uncertainties. Their risk aversion also tends to be absolute rather than relative. CDDDs usually want the same levels of certainty around every decision, whether or not there is enough information to lower the risk to their comfort levels. This means they are in a hurry to get to the technical parts because it feels like they are accomplishing something.

  So you need to encourage them in their quest for a false sense of security, and hurry them along to the technical exercises. Here are four techniques, one for each of the four predictable failure modes above.

  If you know that a decision, if left unmade, will tend towards a default option you like, you can suggest delaying or deferring it until more data is in. If the analysis supports what you wanted anyway, you look smart. If not, you can always say, “it’s too late now, we’re already committed. Second-guessing now will be very costly.”

  If you know that the actual data required to move a decision out of your gut is simply unavailable or too expensive, look for the most convenient red-herring data source. Suggest that the CDDD study that data source. If possible, suggest that they chair a committee to study that data source.

  Using this failure mode requires some technical knowledge. If you don’t care whether an upward-trending graph is a straight line, exponential or an S-curve (all you care about is “up”), then suggest the most complex technique you can. Everybody can do linear regressions. Suggest something like “we really need to do a logistic regression here; there may be some implications if it turns out we are on an S-curve.” If your CDDD doesn’t know how to do logistic regressions, he/she will waste time studying up on the subject (and enjoying it) or hunting for an expert who knows how to use the technique. More generally, sending people off on useless learning missions and digital wild-goose chases is one of the best ways to distract them from substantive issues.

  A real thinker will not move on to technical questions about sampling (“is this i.i.d?”) before thinking through the qualitative and narrative questions (“are women really the target market here?”). If you want to distract a CDDD from the important questions about a sample, scare them with methodological questions: “Are we really sure this time series is i.i.d? We don’t want a Black-Scholes-Merton type Black Swan meltdown here.”
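The logistic-regression detour in the third technique works because the extra sophistication cannot change the decision-relevant answer. Here is a rough sketch with invented adoption numbers, using a crude grid search in place of a real logistic fit so the example stays self-contained:

```python
import numpy as np

# Invented adoption figures that could be a straight line or an S-curve.
x = np.arange(12, dtype=float)
y = np.array([3, 5, 8, 14, 23, 35, 50, 65, 77, 86, 92, 95], dtype=float)

# The cheap check: a linear fit answers the only decision-relevant
# question -- is the trend up?
lin_slope = np.polyfit(x, y, 1)[0]

# The expensive detour: fitting top / (1 + exp(-rate * (x - mid))) to
# decide *which kind* of "up" this is; a coarse grid search stands in
# for a proper logistic fit.
def logistic(x, top, rate, mid):
    return top / (1 + np.exp(-rate * (x - mid)))

best = min(
    ((top, rate, mid)
     for top in np.arange(80, 121, 5)
     for rate in np.arange(0.2, 1.01, 0.1)
     for mid in np.arange(3, 9.1, 0.5)),
    key=lambda p: np.sum((logistic(x, *p) - y) ** 2),
)

# Both analyses agree on the direction; the S-curve fit only adds the
# decision-irrelevant detail that growth saturates near `top`.
print("trend is up:", lin_slope > 0)
print("estimated saturation level:", best[0])
```

The one-line check and the elaborate fit agree on everything the decision turns on, which is exactly why prescribing the elaborate fit is such an effective distraction.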

  Perhaps for the digital age, we need a phrase to replace “wild goose chase.” How about “black swan chase”?

  Keep in mind that in some ways, being forced to use these techniques wastes useful talent. If at all possible, try and point a CDDD in a direction where they can do good. Unfortunately, this is often impossible, because of their false sense of confidence. Because they are often more competent around data tools than their peers, they mistakenly believe they are also more insightful around data in general.

  Rebooting Conversations

  Sometimes conversations just start off wrong. So wrong that you need to hit the reboot button. I saw a virtuoso display of conversation rebooting once. A customer at a store had run into a major mess while trying to get a return processed, and the floor staff could not help her. The manager had not yet returned from lunch. She stood there getting angrier by the minute. When the manager finally walked through the door (nursing a chilly frappuccino; very apt given what he did next), she could hold herself back no longer. She strode up to him immediately and launched into an angry outburst: “This is just not acceptable; I’ve been waiting here fifteen minutes! I was promised...”

  The manager waited for a pause in the outburst before firmly taking charge: “No, no, NO. That is not the way. Let’s start again. Hi! My name is ___, and you are?”

  That took the wind out of the woman’s sails. She was forced to restart with introductions, properly embarrassed that she’d railed at a stranger without figuring out if he deserved the anger.

  This was a particularly extreme example, the equivalent of sidestepping and calming down a raging bull. I don’t think my nerves would have held that steady.

  But less extreme “soft reboot” situations are both more common and easier to handle. The trigger is always someone (call him/her the “bull”) coming up to you unexpectedly, in an emotionally charged state (anger, fear and sullenness are the common ones). I am only talking about casual, work and professional situations of course, not spouses, kids or parents: cases where you have no particular obligation to be nurturing and caring. (Can “nurturing and caring” be an effective management style? That’s a topic for another day. Short answer: “Not if it is a simple port of parental instincts.”)

  Common reactions like “Whoa, whoa, calm down” or “time out, time out!” can be dangerous because you implicitly accept responsibility for being the calm and adult one, and give the bull permission to continue the tantrum. That’s a common (and often deliberate) exploit employed by the emotionally violent against those whose desire for peace and harmony is a known weakness. What you need is a reaction that gives you reboot control, but doesn’t leave you responsible for maintaining overall calm. Just your own calm. You leave the bull responsible for his/her own emotions, and you don’t take responsibility for the situation until YOU decide you want to. This also means being willing to let the situation spiral out of control with “nobody in charge” for a while, if the bull doesn’t restrain himself/herself.

  The basic trick is simple: you repeat all or part of their opening line, but with zero emotional content. Deadpan.

  This works about 80% of the time. Sometimes you may have to change an assertion into a question, either just with interrogative modulation, or by a minimalist word substitution. Here are a few examples:

  Bull: HAVE YOU HEARD? THIS IS BULLCRAP!

  You: This is bullcrap?

  Bull: I AM NOT PUTTING UP WITH THIS! THEY’VE MOVED MY DESK TO THE BASEMENT!

  You: They’ve moved your desk to the basement?

  Bull (crying or close to it): Whha–wwhat am I going to do now? I am screwed.

  You: You’re screwed?

  This works because of the basic dynamics of emotions. When faced with an emotionally charged stimulus, your own emotional reaction will race ahead and censor the options generated by your cognitive reaction. Emotional reactions to such charged stimuli can be empathetic (you are an ally, and you mirror the emotion), sympathetic (you react with a calming/moderating emotion before determining if you want responsibility for the blow-up) or complementary (for example, defensive cringing in the face of rage).

  Reacting with a repetition (or a slightly modified repetition) is a cognitively lightweight operation, so it is quick enough to prevent an emotional hijacking. Depending on the situation, it can take some nerves to do the repetition drained of emotional content, but it is still a straightforward behavior to practice and learn.

  What happens next? Usually, the bull will see your response as a request for elaboration. Elaboration takes coherent thinking, so he/she will be forced to slow down before saying anything more. At the same time, you’ve substituted your emotionally neutral repetition for the charged opening, as the stimulus to respond to. The train of thought that starts in your head will be less constrained. Most importantly, you don’t end up with responsibility for the developing situation before you decide if you want it.

 
