Human Error


by James Reason


  A maritime example of inadvisable rule usage was provided by a recent study of avoidance behaviour in qualified watchkeeping officers (Habberley, Shaddick & Taylor, 1986). These observations revealed the very subtle distinctions that can exist between elegant and inadvisable problem solutions. The investigators examined the way experienced ships’ officers handled potential collision situations in a nocturnal ship’s bridge simulator.

  The watch-keeping task was categorised using Rasmussen’s performance levels:

  In bridge watchkeeping, the detection and routine plotting of other ships is an example of skill-based performance, not requiring much conscious effort once well learned, and forming a continual part of the task. The watchkeeper uses rule-based behaviour to manage the large majority of encounters with other ships (not only with reference to the formal Rules, but in accordance with what is customary practice on his ship). It is only in rather exceptional circumstances, such as the very close-quarters situation, that he needs to switch to knowledge-based behaviour, in order to find a safe solution to the problem which has developed. (Habberley et al., 1986, p. 30)

  These performance levels were defined operationally: the transition between SB and RB occurring at the 6-to-8-mile range, and that between RB and KB at the 2-to-3-mile range.
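
  This operational definition lends itself to a compact illustration. The sketch below is not taken from the study: the range bands come from the text, while the labelling of the transition bands themselves is an assumption made purely for illustration.

    # Hypothetical sketch of the operational definition above: map the range
    # (in nautical miles) of an approaching ship to the Rasmussen performance
    # level nominally in play. Band boundaries are from the text; the handling
    # of the transition zones is an illustrative assumption.
    def performance_level(range_nm: float) -> str:
        if range_nm > 8:
            return "skill-based: routine detection and plotting"
        if range_nm > 6:
            return "transition zone: skill-based to rule-based"
        if range_nm > 3:
            return "rule-based: customary collision-avoidance practice"
        if range_nm > 2:
            return "transition zone: rule-based to knowledge-based"
        return "knowledge-based: close-quarters problem solving"

    for r in (10, 7, 4.5, 2.5, 1.5):
        print(f"{r:>4} nm  ->  {performance_level(r)}")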

  The most surprising finding was the way in which subjects consistently allowed oncoming ships to approach within close range before taking avoiding action. This had little or nothing to do with any inability to detect the approaching vessel, though subjects did tend to wait until the other ship’s lights were clearly visible before doing anything. But avoiding action was frequently left until the range was much smaller, “for no apparent reason, and without any sense on the subject’s part that this was an error” (Habberley et al., 1986, p. 47).

  All watchkeepers are taught to use the available sea-room to stay several miles away from other ships, only coming closer when traffic density makes it unavoidable. In contrast, most of these officers adopted the strategy of coming equally close to other ships regardless of traffic density. Both the advisable and the inadvisable rules have their own logic. The former is founded on the fact that closeness is a necessary precursor to collision; while the latter asserts that since closeness does not of itself cause collisions in dense traffic, neither will it do so in less crowded conditions.

  Despite their employment of the ‘close encounter’ strategy, these officers showed a high degree of competence in the way they manoeuvred their ships. In only 5 of the 141 test runs was it necessary for the simulator operator to intervene in order to prevent a ‘collision’ caused by a subject’s actions; a serious error rate of just 3.5 per cent. Assuming that ‘other ships’ had the same error rate, a collision could occur on 0.1 per cent of such close encounters.
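
  The arithmetic behind these figures can be reconstructed as follows; the sketch assumes, as the text implies, that a collision requires independent serious errors by both ships.

    # Reconstruction of the quoted figures (an illustrative sketch, not code
    # from the study): 5 serious errors in 141 runs, with a collision assumed
    # to require independent serious errors by both ships involved.
    serious_errors = 5
    test_runs = 141

    error_rate = serious_errors / test_runs   # ~0.035, i.e. ~3.5 per cent
    collision_rate = error_rate ** 2          # both ships err independently

    print(f"serious error rate: {error_rate:.1%}")                         # 3.5%
    print(f"collision chance per close encounter: {collision_rate:.2%}")   # ~0.13%, i.e. roughly 0.1 per cent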

  In accident avoidance, experience is a mixed blessing. Operators learn their avoidance skills not so much from real accidents as from near-misses. However, Habberley and coauthors (1986, p. 50) note: “if near-accidents usually involve an initial error followed by an error recovery (as marine near-misses seem to do), more may be learned about the technique of successful error recovery than about how the original error might have been avoided. Watchkeepers who become successful shiphandlers may see no reason to avoid close-quarters situations, from which on the basis of their past experience they know that they can extricate themselves.”

  The confidence these and other experienced operators have in their ability to get themselves out of trouble can maintain inadvisable rule behaviour. This is particularly so when a high value is attached to recovery skills and where the deliberate courting of a moderate degree of risk is seen as a necessary way of keeping these skills sharp.

  6. Failure modes at the knowledge-based level

  The failures that arise when the problem solver has to resort to computationally-powerful yet slow, serial and effortful ‘on-line’ reasoning originate from two basic sources: ‘bounded rationality’ and an incomplete or inaccurate mental model of the problem space. Evidence relating to the former has already been presented in the early sections of Chapter 3. The problems of incomplete knowledge are discussed at length in Chapters 5 and 7. For the moment, we will confine ourselves to listing some of the more obvious ‘pathologies’ of knowledge-based processing. But first it is necessary to outline the nature of KB processing and to distinguish three different kinds of problem configuration.

  A useful image to conjure up when considering the problems of knowledge-based processing is that of a beam of light (the workspace) being directed onto a large screen (the mental representation of the problem space). Aside from the obvious fact that the knowledge represented on the screen may be incomplete and/or inaccurate, the principal difficulties are that the illuminated portion of the screen is very small compared to its total extent, that the information potentially available on the screen is inadequately and inefficiently sampled by the tracking of the light beam, and that, in any case, the beam changes direction in a manner that is only partially under the control of its operator. It is repeatedly drawn to certain parts of the screen, while other parts remain in darkness. Nor is it obvious that these favoured portions are necessarily the ones most helpful in finding a problem solution. The beam will be drawn to salient but irrelevant data and to the outputs of activated schemata that may or may not bear upon the problem.

  It is also helpful to distinguish three main types of problem configuration (see Figure 3.2). A ‘problem configuration’ is defined as the set of cues, indicators, signs, symptoms and calling conditions that are immediately available to the problem solver and upon which he or she works to find a solution.

  Static configurations: These are problems in which the physical characteristics of the problem space remain fixed regardless of the activities of the problem solver. Examples of this problem type are syllogisms, the Wason card test and cannibals-and-missionaries problems. These static configurations may also vary along an abstract-concrete dimension, that is, in the extent to which they are represented to the problem solver in recognisable real-world terms.

  Reactive-dynamic configurations: Here, the problem configuration changes as a direct consequence of the problem solver’s actions. Examples are jigsaw puzzles, simple assembly tasks, and the Tower of Hanoi. Such problems can also vary along a direct-indirect dimension. At the direct end, the effects of the problem solver’s actions are immediately apparent to the problem solver’s unaided senses. Indirect problems require additional sensors and displays so that the relevant feedback might reach the problem solver.

  Multiple-dynamic configurations: In these problems, the configuration can change both as the result of the problem solver’s activities and, spontaneously, due to independent situational or system factors. An important distinction here is between bounded and complex multiple-dynamic problems. In the former, the additional variability arises from limited and known sources (e.g., the other player’s moves in a game of chess). In the latter, however, this additional variability can stem from many different sources, some of which may be little understood or anticipated (e.g., coping with nuclear power plant emergencies or managing a national economy).
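
  The taxonomy can be summarised compactly. The encoding below is only one hedged way of doing so; the class and field names are illustrative, not drawn from the text.

    # Illustrative encoding of the three problem-configuration types and the
    # additional dimension along which each is said to vary. Names are my own.
    from dataclasses import dataclass
    from enum import Enum, auto

    class ConfigurationType(Enum):
        STATIC = auto()            # fixed regardless of the solver's actions
        REACTIVE_DYNAMIC = auto()  # changes only through the solver's actions
        MULTIPLE_DYNAMIC = auto()  # changes through the solver's actions and spontaneously

    @dataclass
    class ProblemConfiguration:
        kind: ConfigurationType
        examples: tuple
        varies_along: str

    CONFIGURATIONS = (
        ProblemConfiguration(ConfigurationType.STATIC,
                             ("syllogisms", "Wason card test", "cannibals-and-missionaries"),
                             "abstract-concrete"),
        ProblemConfiguration(ConfigurationType.REACTIVE_DYNAMIC,
                             ("jigsaw puzzles", "simple assembly tasks", "Tower of Hanoi"),
                             "direct-indirect (feedback via unaided senses or via sensors and displays)"),
        ProblemConfiguration(ConfigurationType.MULTIPLE_DYNAMIC,
                             ("chess (bounded)", "nuclear power plant emergencies (complex)"),
                             "bounded-complex (extra variability from known or poorly understood sources)"),
    )

    for cfg in CONFIGURATIONS:
        print(f"{cfg.kind.name}: e.g. {', '.join(cfg.examples)}; varies along {cfg.varies_along}")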

  It is important to recognise that different configurations require different strategies and, as a consequence, elicit different forms of problem-solving pathology. When confronted with a complex multiple-dynamic configuration, it makes some adaptive sense to rely primarily upon the strategy (often seen in NPP emergencies) of ‘putting one’s head into the data stream and waiting for a recognisable pattern to come along’ (see Reason, 1988).

  6.1. Selectivity

  There is now a wealth of evidence (see Evans, 1983) to show that an important source of reasoning errors lies in the selective processing of task information. Mistakes will occur if attention is given to the wrong features or not given to the right features. Accuracy of reasoning performance is critically dependent upon whether the problem solver’s attention is directed to the logically important rather than to the psychologically salient aspects of the problem configuration.

  Figure 3.2. Outlining the distinguishing characteristics of the three types of problem configuration. The dashed-line arrow indicates that the problem solver can scan selected aspects of the problem configuration without producing any physical change in it.

  6.2. Workspace limitations

  Reasoners at the KB level interpret the features of the problem configuration by fitting them into an integrated mental model (Johnson-Laird, 1983). In order to check whether a given inference is valid, it is necessary to search for different models of the situation that will explain the available data. This activity of integrating several possible models places a heavy burden upon the finite resources of the conscious workspace. The evidence from a variety of laboratory-based reasoning studies indicates that the workspace operates on a ‘first in, first out’ principle. Thus, it is easier to recall the premises of a syllogism in the order in which they were first presented than in the reverse order. Similarly, it is easier to formulate a conclusion in which the terms occur in the order in which they entered working memory. As a result, the load or ‘cognitive strain’ imposed upon the workspace varies critically with the form of problem presentation.

  6.3. Out of sight out of mind

  The availability heuristic (see Kahneman et al., 1982) has two faces. One gives undue weight to facts that come readily to mind. The other ignores that which is not immediately present. For example, Fischhoff and coauthors (1978) presented subjects with various versions of a diagram describing ways in which a car might fail to start. These versions differed in how much of the full diagram had been pruned. When asked to estimate the degree of completeness of these diagrams, the subjects were very insensitive to the missing parts. Even the omission of major, commonly known components (e.g., the ignition and fuel systems) was barely detected.

  6.4. Confirmation bias

  Confirmation bias acts upon the criteria by which a current hypothesis would be relinquished in the face of contradictory evidence. It has its roots in what Bartlett (1932) termed ‘effort after meaning’. Faced with ambiguity, the problem solver rapidly favours one available interpretation and is then loath to part with it.

  Several studies have shown that preliminary hypotheses formed on the basis of early, relatively impoverished data, interfere with the later interpretation of better, more abundant data (see Greenwald, Pratkanis, Leippe & Baumgardner, 1986). The possible mechanics of this process have been discussed at length elsewhere (see Nisbett & Ross, 1980) and will be considered further in Chapter 5.

  6.5. Overconfidence

  Problem solvers and planners are likely to be overconfident in evaluating the correctness of their knowledge (Koriat, Lichtenstein & Fischhoff, 1980). They will tend to justify their chosen course of action by focusing on evidence that favours it and by disregarding contradictory signs. This tendency is further compounded by the confirmation bias exerted by a completed plan of action. A plan is not only a set of directions for later action, it is also a theory concerning the future state of the world. It confers order and reduces anxiety. As such, it strongly resists change, even in the face of fresh information that clearly indicates that the planned actions are unlikely to achieve their objective or that the objective itself is unrealistic.

  This resistance of the completed plan to modification or abandonment is likely to be greatest under the following conditions:

  (a) When the plan is very elaborate, involving the detailed intermeshing of several different action sequences.

  (b) When the plan was the product of considerable labour and emotional investment and when its completion was associated with a marked reduction in tension or anxiety (see Festinger, 1954).

  (c) When the plan was the product of several people, especially when they comprise small, elite groups (see Janis, 1972).

  (d) When the plan has hidden objectives, that is, when it is conceived, either consciously or unconsciously, to satisfy a number of different needs or motives.

  6.6. Biased reviewing: the ‘check-off’ illusion

  Even the most complacent problem solver is likely to review his or her planned courses of action at some time prior to their execution. But here again, distortions creep in. One question problem solvers are likely to ask themselves is: ‘Have I taken account of all possible factors bearing upon my choice of action?’ They will then review their recollections of the problem-solving process to check upon the factors considered. This search will probably reveal what appears to be a satisfactory number; but as Shepard (1964, p. 266) pointed out: “although we remember that at some time or another we have attended to each of the different factors, we fail to notice that it is seldom more than one or two that we consider at any one time.” In retrospect, we fail to observe that the conscious workspace was, at any one moment, severely limited in its capacity and that its contents were rapidly changing fragments rather than systematic reviews of the relevant material. We can term this the ‘check-off’ illusion.

  6.7. Illusory correlation

  Problem solvers are poor at detecting many types of covariation. Partly, they have little understanding of the logic of covariation, and partly they are disposed to detect covariation only when their theories of the world are likely to predict it (Chapman & Chapman, 1967).

  6.8. Halo effects

  Problem solvers are subject to the ‘halo effect’. That is, they will show a predilection for single orderings (De Soto, 1961) and an aversion to discrepant orderings. They have difficulty in processing independently two separate orderings of the same people or objects. Hence, they reduce these discrepant orderings to a single ordering by merit.

  6.9. Problems with causality

  Problem solvers tend to oversimplify causality. Because they are guided primarily by the stored recurrences of the past, they will be inclined to underestimate the irregularities of the future. As a consequence, they will plan for fewer contingencies than will actually occur. In addition, causal analysis is markedly influenced by both the representativeness and the availability heuristics (Tversky & Kahneman, 1974). The former indicates that they are likely to judge causality on the basis of perceived similarity between cause and effect. The latter means that causal explanations of events are at the mercy of arbitrary shifts in the salience of possible explanatory factors. This is also compounded by the belief that a given event can only have one sufficient cause (see Nisbett & Ross, 1980). As indicated in Chapter 3, problem solvers are also likely to suffer from what Fischhoff has called ‘creeping determinism’ or hindsight bias. Knowledge of the outcome of a previous event increases the perceived likelihood of that outcome. This can also lead people to overestimate their ability to influence future events, what Langer (1975) has termed ‘the illusion of control’.

  6.10. Problems with complexity

  6.10.1. The Uppsala DESSY studies

  For several years now, Brehmer and his research group (Brehmer, Allard & Lind, 1983; Brehmer, 1987) at the University of Uppsala have used the dynamic environmental simulation system (or DESSY for short) to investigate problem solving in realistically complex situations. Much of their work has focused upon a fire-fighting task in which subjects act as a fire chief who obtains information about forest fires from a spotter plane and then deploys his various fire-fighting units to contain them. The complexity arises from the fact that while fires spread exponentially, the means to combat them can only travel in a linear fashion.
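
  A toy calculation (not the DESSY model itself; the growth factor and unit speed are arbitrary assumptions) makes this asymmetry plain: while a unit closes the distance to a fire at a constant rate, the fire it will face on arrival has been multiplying.

    # Toy illustration (not the DESSY model): exponential fire spread versus
    # linear travel of a fire-fighting unit. All numbers are arbitrary.
    fire_area = 1.0        # initial fire size (arbitrary units)
    growth_per_step = 1.5  # assumed multiplicative spread per time step
    distance = 10.0        # assumed distance of the unit from the fire
    unit_speed = 1.0       # distance the unit covers per time step (linear)

    for step in range(1, 11):
        fire_area *= growth_per_step
        distance = max(0.0, distance - unit_speed)
        where = "on scene" if distance == 0.0 else f"{distance:.0f} units away"
        print(f"step {step:2d}: fire area {fire_area:6.1f}, unit {where}")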

  To date, they have concentrated primarily upon the effects of two task variables: complexity (the number of fire-fighting units the ‘fire chief’ has at his disposal, and their relative efficiency) and feedback delay. The results are reasonably clear-cut. So long as the relative efficiency of the units is kept constant, the number deployed at any one time has little effect upon the performance of the ‘fire chief’. However, subjects fail to differentiate between the more efficient and less efficient fire-fighting units, even when the former put out fires four times as fast as the latter. They do hold strong beliefs about the efficiency of these units, but these beliefs bear virtually no relationship to the units’ actual performance.

  The other major finding is that feedback delay has a truly calamitous effect upon the ‘fire chief’s’ performance. Even when the delay is minimal, virtually no improvement occurs with practice. These results indicate that the subjects fail to form any truly predictive model of the situation. Instead (like Karmiloff-Smith’s Phase 1 children), they are primarily data-driven. This works well enough when the feedback is immediate, but it is disastrous with any kind of delay, because the subjects lose synchrony with the current situation and are then always lagging behind actual events.
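
  The lagging-behind effect can be caricatured with a small simulation. The sketch below is not Brehmer’s model: it assumes a single exponentially growing fire, a fixed suppression capacity, and a ‘fire chief’ who commits his units only once the fire he currently sees crosses a threshold; all parameter values are arbitrary. Its only purpose is to show how a short observation delay lets an exponentially growing problem escape an otherwise adequate response.

    # Caricature of a data-driven controller under delayed feedback (not
    # Brehmer's DESSY model; all parameters are arbitrary assumptions). The
    # chief commits suppression capacity once the fire size he *observes*
    # crosses a threshold, but his observations arrive `delay` steps late.
    def run(delay: int, steps: int = 25) -> str:
        growth, threshold, capacity = 1.5, 4.0, 6.0
        fire, history, dispatched = 1.0, [1.0], False
        for t in range(steps):
            observed = history[max(0, len(history) - 1 - delay)]  # stale reading
            if observed >= threshold:
                dispatched = True
            fire = max(0.0, fire * growth - (capacity if dispatched else 0.0))
            history.append(fire)
            if fire == 0.0:
                return f"contained after {t + 1} steps"
        return f"not contained (fire size {fire:.0f} after {steps} steps)"

    for d in (0, 1, 2, 3):
        print(f"observation delay {d}: {run(d)}")

  With these illustrative numbers, delays of 0, 1 and 2 steps still allow containment (in 6, 8 and 14 steps respectively), while a delay of 3 lets the fire outgrow the available capacity altogether, echoing the calamitous effect of delay described above.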

  An adaptive response to delayed feedback would be to give more freedom of action to local unit commanders, but the ‘fire chiefs’ rarely do this. Such failures to ‘distribute’ the decision-making process become even more evident towards the end of the session, when most of the forest has burned down and the fire chief’s own base is about to be engulfed. This suggests that the tendency to overcontrol increases as a function of stress, a finding consistent with the work of Doerner and his associates that is discussed below.

  6.10.2. The Bamberg Lohhausen studies

  Like Berndt Brehmer, Dietrich Doerner and his associates (Doerner, 1978; Doerner & Staudel, 1979; Doerner, 1987) at the University of Bamberg have used computer simulations to map out the strengths and weaknesses of human cognition when confronted with complex problem-solving environments. In one series of studies, subjects were given the task of running a small mid-European town (Lohhausen) as mayor. Lohhausen had approximately 3,500 inhabitants, most of whom worked in a municipal factory producing watches. Subjects were able to manipulate several variables: the production and sales policy of the town factory, rates of taxation, jobs for teachers, the number of doctors’ practices, housing construction, and so on. A major concern was to document the ‘pathologies’ exhibited by all subjects initially and by a few persistently. Doerner divided these mistakes into two groups: primary mistakes, made by almost all subjects, and the additional mistakes made by poorly performing subjects. Primary mistakes included:

 
