
Human Error


by James Reason


  The Jars Test and comparable techniques revealed a strong and remarkably stubborn tendency towards applying the familiar but cumbersome solution when simpler, more elegant solutions were readily available. This mechanisation of thinking is quick to develop and hard to dislodge. If a rule has been employed successfully in the past, then there is an almost overwhelming tendency to apply it again, even though the circumstances no longer warrant its use. To a person with just a hammer, every problem looks like a nail.
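
  To see how such a mechanised rule can crowd out a better one, consider a minimal sketch in Python (my illustration, not part of the original studies). The jar capacities follow the pattern commonly associated with Luchins’s water-jar experiments: once the cumbersome rule b - a - 2c has paid off on the training problems, it is tried first and wins even when a one-step solution is available.

    # Illustrative sketch only: the 'set' effect in a water-jar task.
    # Goal: measure out a target volume using jars of capacities a, b, c.

    def familiar_rule(a, b, c):
        """The cumbersome rule drilled on the training problems."""
        return b - a - 2 * c

    def simple_rules(a, b, c):
        """Simpler, more elegant candidate solutions."""
        return {"a - c": a - c, "a + c": a + c}

    def solve(a, b, c, target):
        # Mechanised thinking: the practised rule is always tried first.
        if familiar_rule(a, b, c) == target:
            return "b - a - 2c"
        for name, value in simple_rules(a, b, c).items():
            if value == target:
                return name
        return None

    print(solve(21, 127, 3, 100))  # training problem: only b - a - 2c works
    print(solve(23, 49, 3, 20))    # a - c = 20 would do, but the familiar
                                   # rule also 'works' and is applied first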

  5.1.8. General versus specific rules

  On the face of it, these arguments run counter to the claim of Holland and his coauthors (1986, p. 205) that: “People have a preference for using rules at the lowest, most specific hierarchical level; they customarily use rules at higher, more general levels only when no more specific rule provides an answer at a satisfactory level of confidence.” But these positions are not so far apart as they might initially appear.

  The first thing to appreciate is the difference of emphasis. Holland and coauthors wished to highlight the ease with which people modify their rule structure to cope with novel situations. Our concern is with explaining recurrent error forms.

  Another point relates to the kinds of evidence adduced to support these apparently contradictory positions. Holland and his coworkers’ case rests primarily upon laboratory studies that reveal that “individuating information, whether diagnostic or nondiagnostic, has substantial power to override default assumptions based on category membership” (p. 219). Our error-related assertions derive, for the most part, from naturalistic observations of problem-solving errors in complex, real-life environments. The difference is a crucial one; in the laboratory, the specific or individuating signs are presented in a way that largely guarantees their reception by the subjects. The same is not necessarily true of the real world.

  A possible compromise position is as follows. Let us concede that where the individuating information is detected and where the ‘action’ consequences do not conflict with much stronger rules at a higher level, then people will operate at the more specific level. However, the conditions prevailing in complex, dynamic problem solving, such as handling an emergency in a nuclear power plant, rarely satisfy these criteria. Countersigns can either be submerged in a torrent of data or else explained away. In addition, there are likely to be substantial differences between the strengths of rules at different levels in the hierarchy. Such marked variations in rule or habit strength are rarely reproducible in the laboratory.
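
  The following schematic sketch (an invented illustration, not a model proposed here or by Holland and his coauthors) shows how a large strength difference can let a well-worn general rule defeat a fully matched but rarely exercised specific rule. All rule names and signs are made up for the example.

    # Partial matching plus unequal rule strengths can yield a
    # 'strong-but-wrong' selection even when individuating signs arrive.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Rule:
        signs: frozenset   # conditional component
        action: str        # action component
        strength: float    # how often the rule has succeeded in the past

    def select(rules, observed):
        """Choose the rule with the highest (match fraction * strength)."""
        def score(rule):
            matched = len(rule.signs & observed) / len(rule.signs)
            return matched * rule.strength
        return max(rules, key=score)

    general = Rule(frozenset({"low pressure"}), "start feed pumps", 0.9)
    specific = Rule(frozenset({"low pressure", "relief valve stuck open"}),
                    "isolate the relief line first", 0.1)

    observed = {"low pressure", "relief valve stuck open"}
    print(select([general, specific], observed).action)
    # -> 'start feed pumps': the countersign is received, yet the
    #    high-strength general rule still dominates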

  To summarise: several features of human information processing (bounded rationality, ‘conservatism’, partial matching, the identification of key signs, the explaining away of countersigns, and strength differences favouring more commonly encountered, unexceptional problems) conspire to yield strong-but-wrong rule selections in real-life situations. It is accepted, however, that more specific rules may be preferred in the relatively uncomplicated world of the psychological laboratory.

  In Chapter 4, the cognitive processes implicated in the underspecification of rules and other knowledge structures were considered in some detail. Also discussed were the mechanisms by which the knowledge base resolves conflicts between partially matched ‘candidates’ in favour of contextually appropriate, high-frequency responses.

  5.2. The application of bad rules

  It is convenient to divide ‘bad rules’ into two broad classes: encoding deficiencies, in which features of a particular situation are either not encoded at all or are misrepresented in the conditional component of the rule; and action deficiencies, in which the action component yields unsuitable, inelegant or inadvisable responses. In each case, we are interested in both the origins of such suboptimal rules and in the means by which they are preserved. Before examining these failure modes, however, it would be instructive to take a developmental perspective on the issue of rule construction.
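
  Stating the distinction in code may help to fix it. The toy structures below (purely illustrative) treat a rule as a condition/action pair, ‘bad’ in the two ways just distinguished.

    # A rule as a condition/action pair, and its two failure classes.

    good_rule = {
        "if":   {"left weight greater", "distances equal"},
        "then": "predict: left side goes down",
    }

    # Encoding deficiency: a relevant feature of the situation is missing
    # from (or misrepresented in) the conditional component.
    encoding_deficient = {
        "if":   {"left weight greater"},
        "then": "predict: left side goes down",
    }

    # Action deficiency: the conditions are adequate, but the action
    # component yields an unsuitable, inelegant or inadvisable response.
    action_deficient = {
        "if":   {"left weight greater", "distances equal"},
        "then": "predict: right side goes down",   # plain wrong
    }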

  5.2.1. A developmental perspective

  Studies of children’s problem solving at various developmental stages provide important insights into the ways in which rule structures develop. Particularly interesting is the somewhat puzzling observation (see Karmiloff-Smith, 1984) that older children are more likely, at least for a period, to make certain rule-based grammatical errors than younger children. A few months after they begin to employ the regular English past tense form, -ed, children start making errors with irregular past tense forms that they had previously used correctly. Thus, they say ‘goed’ and ‘breaked’, where they had previously said ‘went’ and ‘broke’.
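
  A toy sketch (my example, not Karmiloff-Smith’s materials) captures this U-shaped pattern: early rote knowledge gives the correct irregular form, while the newly acquired general ‘-ed’ rule, applied across the board, produces the error.

    # Overgeneralisation of the regular past-tense rule.

    IRREGULAR_PAST = {"go": "went", "break": "broke"}

    def past_by_rote(verb):
        """Item-by-item recall of stored forms, before the rule takes over."""
        return IRREGULAR_PAST.get(verb)

    def past_by_rule(verb):
        """The overgeneralised regular rule: add '-ed' to everything."""
        return verb + "ed"

    for verb in ("go", "break", "walk"):
        print(verb, past_by_rote(verb), past_by_rule(verb))
    # go    went  goed     <- previously correct form displaced by the rule
    # break broke breaked
    # walk  None  walked   <- where the general rule pays off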

  Karmiloff-Smith viewed these and other language errors as indicative of the way children grapple with problems in general. Drawing upon observations of a variety of problem-solving behaviours, she formulated a three-stage, process-orientated framework to describe how children acquire adequate problem-solving routines. A key feature of this theory is the existence of an intermediate phase giving rise to highly predictable error forms.

  Phase 1. Procedural phase: At an early developmental stage, the behavioural output of the child is primarily data-driven. Actions are shaped mainly by local environmental factors. In Rasmussen’s terms, control resides for the most part at the knowledge-based level. The procedures children generate are feedback driven and success-orientated. In effect, the child fashions, ‘on-line’, a specific rule for each new problem. The result is a largely unorganised mass of problem-solving routines. According to Karmiloff-Smith (1984, p. 6): “The adult observer may interpret the child’s behaviour as if it were generated from a single representation, but for the child the behavioural units consist of a sequence of isolated, yet well-functioning procedures which are recomputed afresh for each part of the problem.”

  Phase 2. Metaprocedural phase: So called because at this stage “children work on their earlier procedural representations as problem spaces in their own right.” In contrast to Phase 1, environmental features may be disregarded altogether. Behaviour is guided predominantly (though not exclusively) by rather rigid ‘top-down’ knowledge structures. The child is engaged in organising the one-off procedural rules (acquired in Phase 1) into meaningful categories. Karmiloff-Smith (1984, p. 7) continues: “There is thus a loss of the richness of the phase 1 adaptation to negative feedback but a gain in that the simplified single approach to all problem parts affords a unifying of the isolated procedures of phase 1.” One consequence of this inner-directed sorting of specific rules into general categories is that these more global rules are applied overenthusiastically and rigidly, with too little regard for local cues signalling possible exceptions.

  Phase 3. Conceptual phase: Here, performance is guided by subtle control mechanisms that modulate the interaction between data-driven and top-down processing. A balance is struck between environmental feedback and rule-structures; neither predominates. Like Phase 1 (but unlike Phase 2), performance is relatively error-free. But this success is mediated by quite different knowledge structures. Instead of the mass of piecemeal procedures characteristic of Phase 1, at Phase 3, the child can benefit from the extensive reorganisation that occurred in Phase 2. These new structures can accommodate environmental feedback without jeopardising the overall organisation of the rule-based system.

  5.2.2. Encoding deficiencies in rules

  (a) Certain properties of the problem space are not encoded at all. Siegler (1983) found that 5-year-old children consistently failed at balance-beam problems, even though they understood the importance of the relative magnitudes of the weights on either side of the fulcrum. They appeared to be unaware of the significance of the distance of a weight from the fulcrum. This continued even after they had received training designed to focus their attention upon the distance factor. These difficulties were not apparent in a group of 8-year-olds. Since the younger children failed to attend to distance, it could not be encoded and was thus absent from their rules for dealing with the balance-beam problem.
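
  The deficiency is easy to exhibit in code (my reconstruction, not Siegler’s materials): the correct decision compares torques, weight multiplied by distance, whereas the 5-year-olds’ rule encodes weight alone, so distance cannot influence the prediction.

    # Balance-beam predictions with and without distance encoded.

    def weight_only_rule(w_left, d_left, w_right, d_right):
        # Encoding deficiency: d_left and d_right never enter the decision.
        if w_left > w_right:
            return "left down"
        if w_left < w_right:
            return "right down"
        return "balance"

    def torque_rule(w_left, d_left, w_right, d_right):
        t_left, t_right = w_left * d_left, w_right * d_right
        if t_left > t_right:
            return "left down"
        if t_left < t_right:
            return "right down"
        return "balance"

    # Equal weights at unequal distances from the fulcrum:
    print(weight_only_rule(3, 1, 3, 4))  # 'balance'    (the children's error)
    print(torque_rule(3, 1, 3, 4))       # 'right down' (correct)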

  The difficulty in this instance is that 5-year-olds cannot cope with manipulating two relationships at the same time. Adults, of course, do not generally retain these developmental limitations. Nevertheless, there are phases during the acquisition of complex skills when the cognitive demands of some component of the total activity screen out rule sets associated with other, equally important aspects. At these intermediate levels, the task of learning to drive a car or fly an aeroplane is still managed at the rule-based (RB) and knowledge-based (KB) levels of performance. It constitutes a set of problems for which the relevant rule structures are either missing or fragmented.

  Ellingstadt, Hagen and Kimball (1970) compared the control performance of experienced drivers with two groups of novices, those with fewer than 10 hours’ experience and those with more than 10 hours. Experienced drivers showed, as might be expected, very little variability in either of the two main aspects of driving: speed control and steering.

  The novices with fewer than 10 hours of driving experience tended to simplify the task of managing the vehicle by virtually ignoring one aspect of it, namely speed control. By ‘load-shedding’ in this way, they succeeded in keeping their vehicles in the correct lane for about 70 per cent of the test drive. But their speed never rose above a steady crawl.

  However, the more experienced novices showed a pattern of performance somewhere midway between the experienced and the very inexperienced drivers. Although they steered the car in much the same way as the inexperienced novices, their speed control was wildly erratic. Sometimes they moved at a snail’s pace, while at other times they careered around the track at breakneck speed. At this intermediate level of driving skill, vehicle control would appear to be governed by two competing sets of rules: one for managing speed and the other for direction. Only with continued practice do they become integrated into a single coherent set of control structures. When this integration occurs, vehicle management is focused at the skill-based level. Rule-based activity is primarily concerned with coping with the problems posed by the existence of other road users and with maintaining the desired route.

  (b) Certain properties of the problem space may be encoded inaccurately. In this case, the feedback necessary to disconfirm bad rules may be misconstrued or absent altogether. Many examples have been provided by recent research on ‘intuitive’ or ‘naive’ physics: the erroneous beliefs people hold about the properties of the physical world (Champagne, Klopfer & Anderson, 1980; McCloskey & Kaiser, 1984; Kaiser, McCloskey & Proffitt, 1986; reviewed by Holland et al., 1986).

  Intuitive physics pays little heed to Newton’s laws of motion: “It is better characterized as Aristotelian, or perhaps as medieval. The central concept of intuitive physics is that of impetus” (Holland et al., 1986, p. 209). For example, McCloskey (1983) asked college students to judge the trajectory followed by a ball emerging from a coiled tube after it had been injected there with some force. Two alternatives were offered: one showing the ball following a straight path, the other a curved trajectory. Forty per cent of the students chose the curved path. This wrong choice is entirely in accord with fourteenth-century thinking: “A mover in moving a body impresses on it a certain impetus, a certain power capable of moving this body in the direction in which the mover set it going, whether upwards, downwards, sideways or in a circle” (Buridan, cited in Kaiser et al., 1986).
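
  The two predictions can be contrasted schematically (my illustration; the curvature value is an arbitrary assumption chosen only to make the bend visible). After the ball leaves the tube no force acts upon it, so Newtonian mechanics has it travel in a straight line along the exit tangent; impetus theory has the velocity go on rotating for a while.

    # Newtonian versus 'impetus' paths after leaving the coiled tube.

    import math

    def newtonian_path(pos, vel, steps=4, dt=0.25):
        """No force after exit: a straight line along the exit tangent."""
        x, y = pos
        vx, vy = vel
        return [(round(x + vx * dt * i, 2), round(y + vy * dt * i, 2))
                for i in range(steps)]

    def impetus_path(pos, vel, curvature=1.0, steps=4, dt=0.25):
        """The medieval prediction: residual impetus keeps the path turning."""
        x, y = pos
        vx, vy = vel
        points = []
        for _ in range(steps):
            points.append((round(x, 2), round(y, 2)))
            x, y = x + vx * dt, y + vy * dt
            turn = curvature * dt
            vx, vy = (vx * math.cos(turn) - vy * math.sin(turn),
                      vx * math.sin(turn) + vy * math.cos(turn))
        return points

    print(newtonian_path((0, 0), (1, 0)))  # stays on the x-axis: straight
    print(impetus_path((0, 0), (1, 0)))    # drifts off the axis: curved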

  As Holland and his coauthors point out, one reason why these erroneous rules arise in the first place is that the human visual system is extremely poor at detecting the acceleration of objects: the key to Newtonian physics. On the other hand, people are good at judging velocity: the basis of intuitive physics. Such flawed rules persist because they go largely unpunished. Furthermore, impetus theory provides a reasonably good basis for predicting the motion of objects in conditions of constant friction, which are often encountered in everyday life.

  (c) An erroneous general rule may be protected by the existence of domain-specific exception rules. This is likely when the problem solver encounters relatively few exceptions to the general rule, as in the case of the impetus-based assumptions of naive physics. In this case, the exception proves the rule. In a social context, similar mechanisms can operate to preserve stereotypes: “the very existence of multitudinous specific-level hypotheses, many of which operate as exceptions to higher-level rules, will serve to protect erroneous stereotypes from disconfirmation. Some of my best friends are ...” (Holland et al., 1986, p. 222).

  5.2.3. Action deficiencies in rules

  The action component of a problem-solving rule can be ‘bad’ in varying degrees. At one extreme, it could be plain wrong. At an intermediate level, it could be clumsy, inelegant or inefficient, but still achieve its aims. Or it could simply be inadvisable; that is, it could lead to the solution of a particular problem in a reasonably efficient or economic fashion, but its repeated use may expose its user to avoidable risks in a potentially hazardous task or environment. Examples of each of these failure modes are considered below.

  (a) Wrong rules. Some of the best-documented examples of errors arising from the use of wrong rules have been obtained from studies of mathematical procedures (Brown & Burton, 1978; Brown & VanLehn, 1980; Young & O’Shea, 1981). One such study will illustrate their employment.

  Young and O’Shea have shown convincingly that most children’s errors in subtraction sums arise not from the incorrect recall of number facts, but from applying incorrect strategies. They analysed a corpus of over 1,500 subtractions done by 10-year-olds (Bennett, 1976). Errors were classified into three groups: algorithm errors (36 per cent), pattern errors (16 per cent) and number-fact errors (37 per cent).

  The most popular type of algorithmic (or wrong-rule) error revealed a systematic misunderstanding of when ‘borrowing’ was needed. Some children had reversed the rule entirely, borrowing when the subtrahend digit was less than the minuend digit and not borrowing when the subtrahend was greater than the minuend. Some children never borrowed; others borrowed when it was not necessary. Many errors involved the zero digit. The most common mistake was of the ‘0 − N = 0’ class (e.g., 70 − 47 = 30): in the ‘two-up-two-down’ written configuration common to subtraction sums set for English children, the seven subtracted from the zero was given as a zero rather than a three.
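
  The bug is faithfully mechanical, as a sketch makes clear (my reconstruction in the spirit of Brown and Burton’s ‘buggy’ algorithms, not Young and O’Shea’s own production system):

    # Column subtraction with the '0 - N = 0' bug (and, for digits it
    # cannot handle, the common 'smaller-from-larger' bug).

    def buggy_subtract(minuend, subtrahend):
        m = [int(d) for d in str(minuend)][::-1]     # digits, units first
        s = [int(d) for d in str(subtrahend)][::-1]
        s += [0] * (len(m) - len(s))
        out = []
        for top, bottom in zip(m, s):
            if top == 0 and bottom > 0:
                out.append(0)              # the bug: 0 - N is answered as 0
            elif top >= bottom:
                out.append(top - bottom)   # correct column fact
            else:
                out.append(bottom - top)   # smaller-from-larger, no borrow
        return int("".join(str(d) for d in reversed(out)))

    print(buggy_subtract(70, 47))  # -> 30, not 23: the faulty algorithm
                                   #    runs without a hitch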

  The conclusion drawn from this analysis was that “it is more fruitful to regard the child as faithfully executing a faulty algorithm than as wrongly following a correct one.” In short, most of these wrong solutions arise from ‘bugs’ in the subtraction program.

  At the other end of the spectrum were the mistakes made by the Chernobyl operators (Collier & Davies, 1986). One of these in particular suggests the application of a wrong rule. In order to carry out their assigned task of testing a turbine-driven voltage generator that would supply electricity to the emergency core cooling system’s (ECCS) pumps for a brief period after an off-site power failure, the operators switched off the ECCS. Later, they increased the water flow through the core threefold. The operators appeared to be working in accordance with the following inferential rule: if (there is more water flowing through the core) then (the reactor will have a greater safety margin, and hence there will be less risk of requiring ECCS cooling, which would be unavailable). In the dangerously low power regime in which they were then operating, the reverse was actually the case; more water equalled less safety. We will look at this incident in greater detail in Chapter 7.

  (b) Inelegant or clumsy rules: Many problems afford the possibility of multiple routes to a solution. Some of these are efficient, elegant and direct; others are clumsy, circuitous and occasionally bizarre. In a forgiving environment or in the absence of expert instruction, some of these inelegant solutions become established as part of the rule-based repertoire.

  Sometimes these procedures become enshrined at the skill-based level. I have noticed, for instance, that certain elderly British drivers operate their vehicles predominantly in a ‘fuel-saving’ mode. Raised in a time of economic austerity or fuel rationing, they remain as long as possible in fourth gear, even when their car is labouring painfully. Approaching a traffic light, they will slip into neutral and ‘coast’ to a halt, unconcerned by the loss of control this entails. To a younger generation, it seems as though they would rather murder the engine than increase its rate of fuel consumption, even when the economies of the past are no longer strictly necessary. Indeed, ‘false’ economies are prime exemplars of procedures that satisfy certain goals, yet bring even more acute problems in their wake.

  (c) Inadvisable rules: Here, the rule-based solution may be perfectly adequate to achieve its immediate goal most of the time, but its regular employment can lead, on occasions, to avoidable accidents. The behaviour is not wrong, in the sense that it generally achieves its objective (though it may violate established codes or operating procedures); it need not be clumsy or inelegant; nor does it fall into the ‘plain crazy’ category. It is, in the long run, simply inadvisable.

  Quite often, these behaviours arise when an individual or an organisation is required to satisfy discrepant goals, among which the maintenance of safety is often a very feeble contender. Accidents are rare events. For most people, their possibility is fairly remote. And in any case, the needs of safety are often apparently satisfied by the routine observance of certain procedures like wearing a safety belt, carrying out vehicle maintenance, holding regular fire drills and the like. For a driver in a hurry, the dangers of following too close to the vehicle in front are far harder to imagine and much less compelling than the consequences of a missed appointment. To the land-locked directors of Townsend Thoresen, the need to keep their shareholders happy was a more immediate and understandable objective than the safe running of their ferries, whose day-to-day operation they were not qualified to understand. We will explore this notion of conflicting goals further in Chapter 7, which considers what lessons can be learnt from the Chernobyl and Zeebrugge disasters.

 
