Human Error


by James Reason


  (c) “While making tea, I noticed that the tea caddy was empty. I got a fresh packet of tea from the shelf and refilled the caddy. But then I omitted to put the tea in the pot, and poured boiling water into an empty pot.”

  Lapses (b) and (c) suggest that secondary corrective routines (rule-based solutions to regularly encountered ‘hiccups’ in a routine) can get ‘counted in’ as part of the planned sequence of actions, so that when the rule-based activity is over, the original sequence is picked up at a point one or two steps further along. These have been termed program counter failures (Reason & Mycielska, 1982).
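  One way to picture a program counter failure is as a single stored index into the planned sequence that the corrective routine also advances. The following minimal Python sketch is my own illustration, not a formalism from the book; the plan, step names and function are all hypothetical.

      # Illustrative sketch of a 'program counter failure': the corrective
      # routine's activity is 'counted in', advancing the same counter that
      # indexes the original plan, so a planned step is silently skipped.

      PLAN = ["fill kettle", "boil water", "put tea in pot",
              "pour water into pot", "pour tea"]

      def run_plan(hiccup_before=2, counted_in=1):
          pc = 0                                   # the shared 'program counter'
          while pc < len(PLAN):
              if pc == hiccup_before and counted_in:
                  print("  [corrective routine: refill the empty caddy]")
                  pc += counted_in                 # resumes one step too far along
                  counted_in = 0
                  continue
              print("  do:", PLAN[pc])
              pc += 1

      run_plan()   # 'put tea in pot' is never executed, as in lapse (c)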

  4.1.3. Reduced intentionality

  It frequently happens that some delay intervenes between the formulation of an intention to do something and the time for this activity to be executed. Unless it is periodically refreshed by attentional checks in the interim, this intention probably will become overlaid by other demands upon the conscious workspace. These failures of prospective memory lead to a common class of slips and lapses that take a wide variety of forms. These include detached intentions (“I intended to close the window as it was cold. I closed the cupboard door instead.”), environmental capture (“I went into my bedroom intending to fetch a book. I took off my rings, looked in the mirror and came out again— without the book.”) and multiple sidesteps (“I intended to go to the cupboard under the stairs to turn off the immersion heater. I dried my hands to turn off the switch, but went to the larder instead. After that, I wandered into the living room, looked at the table, went back to the kitchen, and then I remembered my original intention.”).

  Sometimes these errors take the form of states rather than actions (i.e., lapses rather than slips): the what-am-I-doing-here experience (“I opened the fridge and stood there looking at its contents, unable to remember what it was I wanted.”) and the even more frustrating I-should-be-doing-something-but-I-can’t-remember-what experience.

  4.1.4. Perceptual confusions

  The characteristics of these fairly common errors suggest that they occur because the recognition schemata accept as a match for the proper object something that looks like it, is in the expected location or does a similar job. These slips could arise because, in a highly routinised set of actions, it is unnecessary to invest the same amount of attention in the matching process. With relatively unusual or unexpected stimuli, attentional processing brings noncurrent knowledge to bear upon their interpretation. But with oft-repeated tasks, it is likely that the recognition schemata, as well as the action schemata, become automatised to the extent that they accept rough rather than precise approximations to the expected inputs. This degradation of the acceptance criteria is in keeping with ‘cognitive economy’ and its attendant liberation of attentional capacity.

  Thus, perceptual slips commonly take the form of accepting look-alikes for the intended object (“I intended to pick up the milk bottle, but actually reached out for the squash bottle.”). A closely related variety involves pouring or placing something into a similar but unintended receptacle (“I put a piece of dried toast on the cat’s dish instead of in the bin.” “I began to pour tea into the sugar bowl.”).
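  A hedged way to make the ‘degraded acceptance criteria’ idea concrete is to treat a recognition schema as a similarity test whose threshold relaxes with routinisation. The feature sets, threshold values and scoring rule below are my assumptions, offered only as a sketch.

      # Illustrative sketch: a recognition schema accepts the first candidate
      # whose feature overlap with the expected object clears a threshold;
      # routinisation lowers that threshold, letting look-alikes through.

      MILK_BOTTLE   = {"bottle-shaped", "in fridge door", "pourable", "white cap"}
      SQUASH_BOTTLE = {"bottle-shaped", "in fridge door", "pourable", "red cap"}

      def similarity(expected, candidate):
          return len(expected & candidate) / len(expected)

      def recognise(expected, candidates, threshold):
          for name, features in candidates:
              if similarity(expected, features) >= threshold:
                  return name                      # first adequate match wins
          return None

      candidates = [("squash bottle", SQUASH_BOTTLE), ("milk bottle", MILK_BOTTLE)]
      print(recognise(MILK_BOTTLE, candidates, threshold=1.0))    # milk bottle
      print(recognise(MILK_BOTTLE, candidates, threshold=0.75))   # squash bottle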

  4.1.5. Interference errors: blends and spoonerisms

  Two currently active plans or, within a single plan, two action elements, can become entangled in the struggle to gain control of the effectors. This results in incongruous blends of speech and action (“I had just finished talking on the phone when my secretary ushered in some visitors. I got up from behind the desk and walked to greet them with my hand outstretched saying ‘Smith speaking’.” “I was just beginning to make tea, when I heard the cat clamouring at the kitchen door to be fed. I opened a tin of cat food and started to spoon the contents into the teapot instead of his bowl.”) or in the transposition of actions within the same sequence, producing a behavioural spoonerism (“In a hurried effort to finish the housework and have a bath, I put the plants meant for the lounge in the bedroom and my underwear in the lounge window.”).

  4.2. Overattention: Mistimed checks

  When an attentional check is omitted, the reins of action or perception are likely to be snatched by some contextually appropriate strong habit (action schema) or expected pattern (recognition schema). What is less intuitively obvious, however, is that slips can also arise from exactly the opposite process, that is, when focal attention interrogates the progress of an action sequence at a time when control is best left to the automatic ‘pilot’. Any moderately skilled person who has tried to type or play the piano while concentrating on the movements of a single finger will know how disruptive this can be.

  Making tea is a good example of the kind of activity that is especially susceptible to place-losing errors arising from superfluous checks. This is a test-wait-test-exit type of task (see Harris & Wilkins, 1982), in which a series of largely automatic actions need to be carried out in the right order and where there are periods of waiting for something to happen: the kettle to boil, the tea to brew in the pot. It is also an activity in which a quick visual check on progress does not always provide the right answer.

  Consider the situation in which one interrupts some reverie to enquire where one is in the tea-making sequence. Mistimed checks such as these can produce at least two kinds of wrong assessment. Either one concludes that the process is further along than it actually is, and, as a consequence, omits some necessary step like putting the tea in the pot or switching on the kettle (omission). Or, one decides that it has not yet reached the point where it actually is and then repeats an action already done, such as setting the kettle to boil for a second time or trying to pour a second kettle of water into an already full teapot (repetition). The intriguing thing is that if these checks had not been made, the automatic tea-making schemata would probably have performed their tasks without a hitch.
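  Read in program-counter terms, the two wrong assessments differ only in the direction of the mis-estimate. The sketch below is mine, not the book’s; the task list and counter values are illustrative.

      # Illustrative sketch: a mistimed check overwrites the true position in
      # the sequence with its own assessment. Overestimating the position
      # yields an omission; underestimating it yields a repetition.

      TEA = ["fill kettle", "switch on kettle", "put tea in pot",
             "pour water into pot", "let tea brew", "pour tea"]

      def resume_after_check(true_pc, assessed_pc):
          if assessed_pc > true_pc:
              print("omitted: ", TEA[true_pc:assessed_pc])    # steps never done
          elif assessed_pc < true_pc:
              print("repeated:", TEA[assessed_pc:true_pc])    # steps done again
          return TEA[assessed_pc:]                            # what runs next

      resume_after_check(true_pc=2, assessed_pc=3)   # omission: no tea in the pot
      resume_after_check(true_pc=4, assessed_pc=1)   # repetition: kettle boiled twice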

  A rare but revealing kind of slip can appear in bi-directional sequences. An inappropriately timed check can cause an action sequence to double back on itself (reversal), as in the following cases.

  (a) “I intended to take off my shoes and put on my slippers. I took my shoes off and then noticed that a coat had fallen off a hanger. I hung the coat up and then instead of putting on my slippers, I put my shoes back on again.”

  (b) “I got the correct fare out of my purse to give to the bus conductor. A few moments later I put the coins back into the purse before the conductor had come to collect them.”

  Like omitted checks, inappropriate monitoring is associated with attentional capture. Mistimed monitoring is most likely to occur immediately following a period of ‘absence’ from the task in hand. Suspecting that one has not performed necessary checks in the immediate past can prompt an inopportune interrogation of progress that falls, not at a node, but in the middle of a preprogrammed sequence.

  5. Failure modes at the rule-based level

  A useful conceptual framework within which to identify the possible modes of failure at the RB level has been provided by Holland, Holyoak, Nisbett and Thagard (1986):

  In assembling a [mental] model of the current situation (often, in fact, a range of models, which are allowed to compete for the right to represent the environment), the [cognitive] system combines existing rules—which are themselves composed of categories and the relations that weld the categories into a structure providing associations and predictions. The assembly of a model, then, is just the simultaneous activation of a relevant set of rules. The categories are specified by the condition parts of the rules; the (synchronic) associations and predictive (diachronic) relations are specified by the action parts of the rules. (Holland et al., 1986, p. 29)

  In any given situation, a number of rules may compete for the right to represent the current state of the world. The system is extremely ‘parallel’ in that many rules may be active simultaneously. Success in this race for instantiation depends upon several factors (a code sketch follows the list):

  (a) A prerequisite for entering the race at all is that the condition part (the if part) of the rule should be matched either to salient features of the environment or to the contents of some internally generated message.

  (b) Matching alone does not guarantee instantiation; a rule’s competitiveness depends critically upon its strength, that is, the number of times it has performed successfully in the past.

  (c) The more specifically a rule describes the current situation, the more likely it is to win.

  (d) Success depends upon the degree of support a competing rule receives from other rules (i.e., the degree of compatibility it has with currently active information).
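  To show how these four factors might interact, here is a minimal scoring sketch. The multiplicative scheme, the numbers and the rule names are my own assumptions; Holland et al. do not specify this particular arithmetic.

      # Illustrative sketch of the 'race for instantiation': each rule is
      # scored on match (a), strength (b), specificity (c) and support (d),
      # and the highest-scoring rule wins the right to fire.

      from dataclasses import dataclass

      @dataclass
      class Rule:
          name: str
          conditions: frozenset   # (a) features the 'if' part looks for
          strength: float         # (b) record of past success
          support: float          # (d) compatibility with active information

      def score(rule, situation):
          matched = rule.conditions & situation
          if not matched:
              return 0.0                            # (a) no match: out of the race
          match_fraction = len(matched) / len(rule.conditions)
          specificity = len(rule.conditions)        # (c) more specific rules win ties
          return match_fraction * rule.strength * specificity * rule.support

      def instantiate(rules, situation):
          return max(rules, key=lambda r: score(r, situation))

      situation = {"routine cue", "familiar layout"}
      rules = [
          Rule("general habit", frozenset({"routine cue"}), 0.9, 1.0),
          Rule("exception rule", frozenset({"routine cue", "rare countersign"}), 0.2, 0.5),
      ]
      print(instantiate(rules, situation).name)     # general habit wins (0.9 vs 0.1)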

  A central feature of this model concerns the manner in which rules are organised. Models of complex environments comprise a layered set of transition functions that Holland and his co-workers term quasi-homomorphisms, or q-morphisms for short. In effect, rules are organised into default hierarchies, with the most general or prototypical representations of objects and events given at the top level. These allow approximate descriptions and predictions of the basic recurrences of everyday life, but with many exceptions. Whenever exceptions are encountered, increasingly more specific rules are created at lower levels of the hierarchy. As Holland and his coauthors (1986, p. 36) explain: “Each additional layer in the hierarchy will accommodate additional exceptions while preserving the more global regularities as default expectations.” The necessary condition for creating a more specific rule is a failed expectation based upon the instantiation of an overly-general (higher-level) rule. The addition of these more specific rules at lower and lower levels of the hierarchy increases both the complexity and the adaptability of the overall model.
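  The layered structure can be sketched as an ordered rule list in which more specific ‘child’ rules are consulted before their ‘parent’ defaults. The representation and the bird example below are my own illustration of the q-morphism idea, not code from Holland et al.

      # Illustrative sketch of a default hierarchy: specific exception rules,
      # created when a general rule's prediction fails, sit below the default
      # and pre-empt it whenever their extra conditions are satisfied.

      DEFAULT_HIERARCHY = [
          (frozenset({"bird", "penguin"}), "cannot fly"),   # lower-level exception
          (frozenset({"bird"}),            "can fly"),      # top-level default
      ]

      def predict(features):
          for conditions, prediction in DEFAULT_HIERARCHY:  # most specific first
              if conditions <= features:                    # all conditions present
                  return prediction
          return "no applicable rule"

      print(predict({"bird"}))              # can fly     (global regularity preserved)
      print(predict({"bird", "penguin"}))   # cannot fly  (exception accommodated)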

  The main reason for selecting this rather than another detailed rule-based framework (e.g., Anderson, 1983) as a basis for the present discussion is because of the close attention that Holland and his colleagues have given to possible failure modes. We will return to their treatment of error mechanisms at various points in this section.

  As a first approximation, it is convenient to divide the possible varieties of rule-based errors into two general categories: RB mistakes that arise from the misapplication of good rules, and those due to the application of bad rules. It should be noted that while much of what follows is consistent with the Holland group’s framework, there are some significant departures. These arise in part from differences of emphasis; Holland and his colleagues were primarily concerned with inductive learning, the process by which knowledge is expanded to accommodate changes in the world. Our interest is in the ways in which rule-based operations can go wrong.

  5.1. The misapplication of good rules

  As used here, a ‘good rule’ is one with proven utility in a particular situation. However, both the error data and the internal logic of default hierarchies indicate that such rules, though perfectly adequate in certain circumstances, may be misapplied in environmental conditions that share some common features with these appropriate states, but also possess elements demanding a different set of actions.

  If one accepts that rules are organised in default hierarchies (with rules for dealing with more prototypical situations towards the top and with successively lower levels comprising rules for coping with increasingly more specific or exceptional circumstances), then there are several factors that conspire to produce the misapplication of higher-level rules, or strong-but-wrong rules.

  5.1.1. The first exceptions

  It is highly likely that on the first occasion an individual encounters a significant exception to a general rule, particularly if that rule has repeatedly shown itself to be reliable in the past, the strong-but-now-wrong rule will be applied. It is only through the occurrence of such errors that these ‘parent’ rules will develop the more specific ‘child’ rules necessary to cope with the range of situational variations.

  A good example of this was the error made by the Oyster Creek operators when they took the water level in the annulus as an indication of the level in the shroud. Nothing in their previous experience had given them any reason to doubt the invariance of this relationship, and they had no knowledge of the prior slip that had caused the dangerous discrepancy on this particular occasion. Nor, for that matter, had the system designers anticipated such a possibility, since they omitted to provide a direct indication of the water level in the shroud. The only thing revealing that the two levels were no longer the same was the insistent ringing of an alarm, indicating that the shroud water level had dropped below a fixed point dangerously close to the top of the core. However, since the operators let this ring for a full half hour before taking appropriate corrective action, they probably interpreted it as a false alarm.

  Another more homely example was recently recounted to me by a friend (Beveridge, 1987). He was about to pull out into the traffic flow after having been parked at the side of the road. He checked his wing mirror and saw a small red car approaching. He then made a cursory check on his rear-view mirror (which generally gives a more realistic impression of distance) and noted a small red car still some distance away. He then pulled out from the kerb and was nearly hit by a small red car. There were two of them, one behind the other. He had assumed they were one and the same car. The first car had been positioned so that it was only visible in the wing mirror.

  5.1.2. Signs, countersigns and nonsigns

  As the Oyster Creek example demonstrates, situations that should invoke exceptions to a more general rule do not necessarily declare themselves in an unambiguous fashion, particularly in complex, dynamic, problem-solving tasks. In these circumstances, there are likely to be at least three kinds of information present:

  (a) Signs, inputs that satisfy some or all of the conditional aspects of an appropriate rule (using Rasmussen’s terminology).

  (b) Countersigns, inputs that indicate that the more general rule is inapplicable.

  (c) Nonsigns, inputs which do not relate to any existing rule, but which constitute noise within the pattern recognition system.

  The important point to stress is that all three types of input may be present simultaneously within a given informational array. And where countersigns do manage to claim attention, they can be ‘argued away’, as at Oyster Creek, if they do not accord with the currently instantiated view of the world.
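  As a rough illustration of the taxonomy, one informational array can be partitioned relative to a single candidate rule. The labels come from the text above; the set representation and example inputs are my assumptions.

      # Illustrative sketch: splitting one input array into signs,
      # countersigns and nonsigns with respect to a single rule.

      RULE_CONDITIONS = {"level gauge reads normal"}   # what the rule's 'if' part wants
      RULE_EXCEPTIONS = {"low level alarm"}            # marks the rule as inapplicable

      def classify(inputs):
          signs        = inputs & RULE_CONDITIONS
          countersigns = inputs & RULE_EXCEPTIONS
          nonsigns     = inputs - RULE_CONDITIONS - RULE_EXCEPTIONS
          return signs, countersigns, nonsigns

      print(classify({"level gauge reads normal", "low level alarm", "fan hum"}))
      # -> signs and countersigns present together, plus 'fan hum' as noise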

  5.1.3. Informational overload

  The difficulty of detecting the countersigns is further compounded by the abundance of information confronting the problem solver in most real-life situations (see Billman, 1983). The local state indications almost invariably exceed the cognitive system’s ability to apprehend them. Only a limited number will receive adequate processing. And these are likely to match the conditional components of several rules.

  5.1.4. Rule strength

  The chances of a particular rule gaining victory in the ‘race’ to provide a description or a prediction for a given problem situation depends critically upon its previous ‘form’, or the number of times it has achieved a successful outcome in the past. The more victories it has to its credit, the stronger will be the rule. And the stronger it is, the more likely it is to win in future races. Some theorists—notably Anderson (1983), though not Holland and coauthors (1986)—allow the possibility of partial matching; a rule may enter the race if some but not all of its conditions are satisfied. This idea of partial matching is the one preferred here, since it allows for a trade-off between the degree of matching and the strength of the rule. The stronger a rule, the less it will require in the way of situational correspondence in order to ‘fire’. In other words, the cognitive system is biased to favour strong rather than weak rules whenever the matching conditions are less than perfect.
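  The trade-off can be shown with two lines of invented arithmetic. Treating activation as the product of match fraction and strength is my assumption, not a formula from either source.

      # Illustrative arithmetic for the strength/match trade-off: a strong
      # rule matching 6 of 10 conditions beats a weak rule matching all 10.
      strong_partial = 0.6 * 0.9    # match fraction * strength = 0.54
      weak_complete  = 1.0 * 0.3    #                           = 0.30
      print(strong_partial > weak_complete)   # True: the strong rule fires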

  5.1.5. General rules are likely to be stronger

  Implicit in the idea of a default hierarchy is that higher-level rules will be stronger than those lower down, by virtue of the greater frequency with which their matching situations are encountered in the world. Exceptions are, by definition, exceptional. Although it is possible to imagine situations in which lower-level rules acquire greater strength than higher-level ones, it is more likely that there will be a positive relationship between level and rule strength.

  5.1.6. Redundancy

  Related to the notion of partial matching is the fact that certain features of the environment will, with experience, become increasingly significant, while others will dwindle in their importance. By the same token, particular elements within the conditional part of a rule will acquire greater strength relative to other elements (i.e., they will carry more weight in the matching process). It has long been known that the acquisition of human skills depends critically upon the gradual appreciation of the redundancy present in the informational input. Repeated encounters with a given problem configuration allow the experienced troubleshooter to identify certain sequences or groupings of signs that tend to co-occur. Having ‘chunked’ the problem space, the problem solver learns that truly diagnostic information is contained in certain key signs, the remainder being redundant. An inevitable consequence of this learning process is that some cues will receive far more attention than others, and this deployment bias will favour previously informative signs rather than the rarer countersigns.
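  A small sketch can capture this deployment bias. The weights, threshold and sign names are hypothetical; the point is only that a heavily weighted pair of key signs can outvote a lightly weighted countersign.

      # Illustrative sketch: experience gives unequal weights to the elements
      # of a rule's condition, so the rule fires on its key signs even when a
      # (negatively weighted) countersign is present in the array.

      WEIGHTS = {"key sign A": 0.60, "key sign B": 0.35, "countersign": -0.25}
      THRESHOLD = 0.50

      def rule_fires(observed):
          evidence = sum(w for sign, w in WEIGHTS.items() if sign in observed)
          return evidence >= THRESHOLD

      print(rule_fires({"key sign A", "key sign B"}))                 # True (0.95)
      print(rule_fires({"key sign A", "key sign B", "countersign"}))  # True (0.70)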

  5.1.7. Rigidity

  Rule usage is subject to intrinsic ‘cognitive conservatism’. The most convincing demonstration of the rigidity of rule-bound problem solving was provided by Luchins and Luchins (1950) in their famous Jars Test. There can be little doubt of the robustness of these findings since the data were obtained from over 9,000 adults and children. Luchins was concerned with the blinding effects of past experience and with what happens when a habit “ceases to be a tool discriminately applied but becomes a procrustean bed to which the situation must conform; when, in a word, instead of the individual mastering the habit, the habit masters the individual” (Luchins & Luchins, 1950).

 
