The Design of Future Things
We are in the midst of a major change in how we relate to technology. Until recently, people have been in control. We turned the technology on and off, told it which operation to perform, and guided it through its operations. As technology became more powerful and complex, we became less able to understand how it worked, less able to predict its actions. Once computers and microprocessors entered the scene, we often found ourselves lost and confused, annoyed and angered. But still, we considered ourselves to be in control. No longer. Now, our machines are taking over. They act as if they have intelligence and volition, even though they don’t.
Machines monitor us with the best of intentions, of course, in the interest of safety, convenience, or accuracy. When everything works, these smart machines can indeed be helpful, increasing safety, reducing the boredom of tedious tasks, making our lives more convenient, and performing tasks more accurately than we could. It is indeed convenient that the automobile automatically slows when a car darts too closely in front of us, that it shifts gears quietly and smoothly, or, in the home, that our microwave oven knows just when the potatoes are cooked. But what about when the technology fails? What about when it does the wrong thing or fights with us for control? What about when Jim’s auto notices that there are no cars in front of it, so it accelerates to highway speed, even though it is no longer on a highway? The same mechanisms that are so helpful when things are normal can decrease safety, decrease comfort, and decrease accuracy when unexpected situations arise. For us, the people involved, it leads to danger and discomfort, frustration and anger.
Today, machines signal their states primarily through alerts and alarms, which is to say, only when they get into trouble. When a machine fails, a person is required to take over, often with no advance warning and often with insufficient time to react properly. Jim was able to correct his car’s behavior in time, but what if he couldn’t have? He would have been blamed for causing an accident. Ironically, if the actions of a so-called intelligent device lead to an accident, the accident will probably be blamed on human error!
The proper way to provide for smooth interaction between people and intelligent devices is to enhance the coordination and cooperation of both parties, people and machines. But those who design these systems often don’t understand this. How is a machine to judge what is or is not important, especially when what is important in one situation may not be in another?
I have told the story of Jim and his enthusiastic car to engineers from several automobile companies. Their responses always have two components. First, they blame the driver. Why didn’t he turn off the cruise control before exiting? I explain that he had forgotten about it. Then he was a poor driver, is their response. This kind of “blame-and-train” philosophy always makes the blamer, the insurance company, the legislative body, or society feel good: if people make errors, punish them. But it doesn’t solve the underlying problem. Poor design, and often poor procedures, poor infrastructure, and poor operating practices, are the true culprits: people are simply the last step in this complex process.
Although the car companies are technically correct that the driver should remember the mode of the car’s automation, that is no excuse for poor design. We must design our technologies for the way people actually behave, not the way we would like them to behave. Moreover, the automobile does not help the driver remember. In fact, it seems more designed to help the driver forget! There is hardly any clue as to the state of the cruise control system: the car could do a far better job of reminding the driver of what control it has assumed.
When I say this to engineers, they promptly introduce the second component of their response: “Yes, this is a problem, but don’t worry. We will fix it. You’re right; the car’s navigation system should realize that the car is now on the exit road, so it should automatically either disconnect the cruise control or, at least, change its setting to a safe speed.”
This illustrates the fundamental problem. The machine is not intelligent: the intelligence is in the mind of the designer. Designers sit in their offices, attempting to imagine all that might happen to the car and driver, and then devise solutions. But how can the designers determine the appropriate response to something unexpected? When this happens to a person, we can expect creative, imaginative problem solving. But because the “intelligence” in our machines is not in the device but in the heads of the designers, when the unexpected happens, the designer isn’t there to help out, so the machine usually fails.
We know two things about unexpected events: first, they always occur, and second, when they do occur, they are always unexpected.
I once got a third response from an automobile company engineer about Jim’s experience. He sheepishly admitted that the exit lane problem had happened to him, but that there was yet another problem: lane changing. On a busy highway, if a driver decides to change lanes, he or she waits until there is a sufficiently large gap in the traffic in the new lane, then quickly darts over. That usually means that the car is close to those in front and behind. The adaptive cruise control is likely to decide the car is too close to the car in front and therefore brake.
“What’s the problem with that?” I asked. “Yes, it’s annoying, but it sounds safe to me.”
“No,” said the engineer. “It’s dangerous because the driver in back of you didn’t expect you to dart in and then suddenly put on the brakes. If they aren’t paying close attention, they could run into you from behind. But even if they don’t hit you, the driver behind is annoyed with your driving behavior.”
“Maybe,” said the engineer, laughing, “the car should have a special brake light that comes on when the brakes are applied by the automobile itself rather than by the driver, telling the car behind, ‘Hey, don’t blame me. The car did it.’”
The engineer was joking, but his comments reveal the tensions between the behavior of people and machines. People take actions for all sorts of reasons, some good, some bad, some considerate, some reckless. Machines are more consistent, evaluating the situation according to the logic and rules programmed into them. But machines have fundamental limitations: they do not sense the world in the same way as people, they lack higher order goals, and they have no way of understanding the goals and motives of the people with whom they must interact. Machines, in other words, are fundamentally different: superior in some ways, especially in speed, power, and consistency, inferior in others, especially in social skills, creativity, and imagination. Machines lack the empathy required to consider how their actions impact those around them. These differences, especially in what we would call social skills and empathy, are the cause of the problems. Moreover, these differences—and therefore these conflicts—are fundamental, not ones that can be quickly fixed by changing the logic here or adding a new sensor there.
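To see how spare that programmed logic can be, consider a minimal sketch, in Python, of the kind of rule an adaptive cruise control might follow. The function, thresholds, and sensor values here are invented for illustration; no manufacturer’s actual logic is published in this book, and none is likely to be quite this simple.

# A hypothetical sketch of a gap-keeping rule for adaptive cruise control.
# The names and thresholds are invented for illustration only.

def cruise_control_action(set_speed_kmh, current_speed_kmh, gap_m, closing_speed_ms):
    """Decide whether to brake, hold, or accelerate toward the driver's set speed."""
    MIN_SAFE_GAP_M = 30.0               # assumed minimum following distance
    MAX_CLOSING_SPEED_MS = 2.0          # assumed tolerable closing speed
    if gap_m < MIN_SAFE_GAP_M or closing_speed_ms > MAX_CLOSING_SPEED_MS:
        return "brake"                  # too close, or approaching too fast
    if current_speed_kmh < set_speed_kmh:
        return "accelerate"             # the road ahead looks clear: resume set speed
    return "hold"

# The rule cannot tell a clear highway from a clear exit ramp, so
# "accelerate" can be precisely the wrong answer, as it was for Jim.
print(cruise_control_action(set_speed_kmh=110, current_speed_kmh=60,
                            gap_m=200.0, closing_speed_ms=0.0))   # -> accelerate

Darting into a small gap in the next lane fails the same test from the other direction: the gap shrinks, the rule says brake, and the driver behind is left to wonder why.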
As a result, the actions of machines often conflict with what people would do. In many cases, this is perfectly fine: if my washing machine cleans clothes very differently than I would, I don’t care as long as the end result is clean clothes. Machine automation works here because the washing machine, once loaded and started, is a closed environment: the machine takes over, and as long as I refrain from interfering, everything works smoothly.
But what about environments where both people and machines work together? Or what happens with my washing machine if I change my mind after it has started? How do I tell it to use different settings, and once the washing cycle has started, when will the changes take effect—right away or with the next filling of the machine? Here, the differences between the way machines and people react really matter. Sometimes, it appears that the machine is acting completely arbitrarily, although if the machine could think and talk, I suspect it would explain that from its point of view, the person is the one being arbitrary. To the person, this can be frustrating, a continual battle of wills. To the observer, it can be confusing, for it is never clear who is in charge or why a particular action has been taken. It doesn’t really matter whether the machine or the person is correct: it is the mismatch that matters, for this is what gives rise to aggravation, frustration, and, in some cases, damage or injury.
The conflict between human and machine actions is fundamental because machines, whatever their capabilities, simply do not know enough about the environment, the goals and motives of the people, and the special circumstances that invariably surround any set of activities. Machines work very well when they work in controlled environments, where no pesky humans get in the way, where there are no unexpected events, and where everything can be predicted with great accuracy. That’s where automation shines.
But even though the machines work well when they have complete control of the environment, even here they don’t quite do things the way we would. Consider the “smart” microwave. It knows just how much power to apply and how long to cook. When it works, it is very nice: you simply have to put in fresh salmon and tell the machine you are cooking fish. Out it comes, cooked to perfection, somewhere between a poached fish and a steamed one, but perfect in its own way. “The Sensor features detect the increasing humidity released during cooking,” says the manual, “[and] the oven automatically adjusts the cooking time to various types and amounts of food.” But notice that the microwave does not judge doneness the way a person would. A person would test the firmness, look at the color, or perhaps measure the internal temperature. The microwave oven can’t do any of this, so it measures what it can: the humidity. It uses the humidity to infer the cooking level. For fish and vegetables, this seems to work fine, but not for everything. Moreover, the sensing technology is not perfect. If the food comes out undercooked, the manual warns against using the sensor a second time: “Do not use the Sensor features twice in succession on the same food portion—it may result in severely overcooked or burnt food.” So much for the intelligent microwave.
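To make that inference concrete, here is a toy sketch of how a humidity-based “sensor cook” feature might decide when to stop. The readings, the threshold, and the timing are hypothetical; the oven’s actual algorithm is not published, and this is not it.

# A toy sketch of humidity-based "sensor cooking." The readings, threshold,
# and timing are hypothetical; the point is only that the oven measures a
# proxy (steam) rather than firmness, color, or internal temperature.

def sensor_cook_stop_minute(humidity_readings, rise_threshold=0.15):
    """Return the minute at which cooking stops, judged from humidity alone."""
    baseline = humidity_readings[0]
    for minute, humidity in enumerate(humidity_readings):
        if humidity - baseline >= rise_threshold:
            return minute               # enough steam released: declare the food done
    return None                         # the expected rise never came: keep cooking

# Fresh fish releases steam predictably, so the proxy works; food that is
# already partly cooked and dry may not, which is why a second "sensor" run
# can badly overcook it.
print(sensor_cook_stop_minute([0.02, 0.05, 0.09, 0.14, 0.21]))   # -> 4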
Do these machines aid the home dweller? Yes and no. If machines can be said to have a “voice,” theirs is certainly condescending, offering no hint as to how or why they do what they do, no hint as to what they are doing, no hint as to the amount of doneness, cleanliness, or drying the machine is inferring from its sensing, and no idea of what to do when things don’t work properly. Many people, quite appropriately in my opinion, shun these devices. “Why is it doing this?” interested parties want to know. There is no word from the machines and hardly a word from the manuals.
In research laboratories around the world, scientists are working on even more ways of introducing machine intelligence into our lives. There are experimental homes that sense all the actions of their inhabitants, turning the lights on and off, adjusting the room temperature, even selecting the music. The list of projects in the works is impressive: refrigerators that refuse to let you eat inappropriate foods, tattletale toilets that secretly tell your physician about the state of your body fluids. Refrigerators and toilets may seem an unlikely pairing, but they team up to monitor eating behavior, the one attempting to control what goes into the body, the other measuring and assessing what comes out. We have scolding scales watching over weight. Exercise machines demanding to be used. Even teapots shrilly whistling at us, demanding immediate attention.
As we add more and more smart devices to daily life, our lives are transformed both for good and for bad. This is good when the devices work as promised—and bad when they fail or when they transform productive, creative people into servants continually looking after their machines, getting them out of trouble, repairing them, and maintaining them. This is not the way it was supposed to be, but it certainly is the way it is. Is it too late? Can we do something about it?
The Rise of the Smart Machine
Toward a Natural, Symbiotic Relationship
The hope is that in not too many years, human brains and computing machines will be coupled together very tightly, and that the resulting partnership will think as no human brain has ever thought.
—J. C. R. Licklider, “Man-Computer Symbiosis,” 1960.
In the 1950s, the psychologist J. C. R. Licklider attempted to determine how people and machines could interact gracefully and harmoniously, in what he called a “symbiotic relationship,” so that the resulting partnership would enhance our lives. What would it mean to have a graceful symbiosis of people and technology? We need a more natural form of interaction, an interaction that can take place subconsciously, without effort, whereby the communication in both directions is done so naturally, so effortlessly, that the result is a smooth merger of person and machine, jointly performing a task.
There are numerous instances of “natural interaction.” Let me discuss four that demonstrate different kinds of relations: between people and traditional tools, between horse and rider, between driver and automobile, and one involving machine automation, “recommendation” systems that suggest books to read, music to listen to, and films to watch.
Skilled artisans work their materials through their tools, just as musicians relate to their instruments. Whether the user is a painter or sculptor, woodworker or musician, the tools and instruments feel like a part of the body. So, craftspeople do not act as if they are using tools but as if they are directly manipulating the items of interest: paint on canvas, sculptured material, wood, or musical sounds. The feel of the materials provides feedback to the person: smooth and resonant here, bumpy or rough there. The interaction is complex but pleasurable. This symbiotic relationship only occurs when the person is well skilled and the tools are well designed. When it happens, this interaction is positive, pleasurable, and effective.
Think of skilled horseback riders. The rider “reads” the horse, just as the horse can read its rider. Each conveys information to the other about what is ahead. Horses communicate with their riders through body language, gait, readiness to proceed, and their general behavior: wary, skittish, and edgy or eager, lively, and playful. In turn, riders communicate with horses through their body language, the way they sit, the pressures exerted by their knees, feet, and heels, and the signals they communicate with their hands and reins. Riders also communicate ease and mastery or discomfort and unease. This interaction is my second positive example. It is of special interest because it is an example of two sentient systems, horse and rider, both intelligent, both interpreting the world and communicating their interpretations to each other.
Example three is similar to the horse and rider, except that now we have a sentient being interacting with a sophisticated, but nonsentient, machine. At its best, this is a graceful interaction among the feel of the automobile, the track, and the actions of the driver.
I think of this when I sit beside my son while he drives my highly tuned German sports car at high speed on the racetrack that we have rented for the afternoon. We approach a sharp curve, and I watch as he gently brakes, shifting the car’s weight forward, then turns the steering wheel so that as the front end of the car turns, the rear end, now with reduced weight bearing down, breaks loose into a deliberate, controlled skid, known as an “oversteer” condition. As the rear end swings around, my son straightens the steering wheel and accelerates, shifting the car’s weight back to the rear wheels so that we are once again accelerating smoothly down a straightaway with the pleasure of feeling in complete control. All three of us have enjoyed the experience: me, my son, and the car.
Example four, the recommendation system, is very different from the other three, for it is slower, less graceful, and more intellectual. Nonetheless, it is an excellent example of a positive interaction between people and complex systems, primarily because it suggests without controlling, without annoyance: we are free to accept or ignore its recommendations. These systems work in a variety of ways, but all suggest items or activities that you might like by analyzing your past selections or activities, searching for similarities to other items in their databases, and examining the likes and dislikes of other people whose interests appear similar to yours. As long as the recommendations are presented in a noninvasive fashion, eliciting your voluntary examination and participation, they can be helpful. Consider searching for a book on an internet bookseller’s site. Being able to read an excerpt and examine the table of contents, index, and reviews helps us decide whether to make a purchase.
Some sites even explain why they have made their recommendations, offering to let people tune their preference settings. I have seen recommendation systems in research laboratories that watch over your activities, so if you are reading or writing, they suggest articles to read by finding items that are similar in content to what is on your display. These systems work well for several reasons. First, they do offer value, for the suggestions are often relevant and useful. Second, they are presented in a nonintrusive manner, off to the side, without distracting you from the primary task but readily available when you are ready. Not all recommendation systems are so effective, for some are intrusive—some seem to violate one’s privacy. When done well, they demonstrate that intelligent systems can add pleasure and value to our interactions with machines.
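For readers curious about the machinery, here is a minimal sketch of the “people with similar tastes also liked” computation described above. The readers, books, ratings, and the similarity measure are all invented for illustration; commercial systems blend far more signals than this.

# A minimal sketch of a "people with similar tastes also liked" recommender.
# The readers, books, ratings, and similarity measure are invented for
# illustration; real systems combine many more signals.

ratings = {
    "you":   {"book_a": 5, "book_b": 4},
    "alice": {"book_a": 5, "book_b": 5, "book_c": 4},
    "bob":   {"book_a": 1, "book_d": 5},
}

def similarity(mine, theirs):
    """Closer ratings on the books we have both read mean higher similarity."""
    shared = set(mine) & set(theirs)
    if not shared:
        return 0.0
    mean_gap = sum(abs(mine[b] - theirs[b]) for b in shared) / len(shared)
    return 1.0 / (1.0 + mean_gap)

def recommend(target, all_ratings):
    """Score unread books by how much similar readers liked them."""
    scores = {}
    for person, theirs in all_ratings.items():
        if person == target:
            continue
        sim = similarity(all_ratings[target], theirs)
        for book, rating in theirs.items():
            if book not in all_ratings[target]:
                scores[book] = scores.get(book, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("you", ratings))   # -> ['book_c', 'book_d']

The crucial design point is not the arithmetic but the delivery: the list sits quietly off to the side, and nothing happens unless the reader chooses to act on it.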
A Caveat
When I ride a horse, it isn’t any fun for me or the horse. Smooth, graceful interaction between horse and rider requires considerable skill, which I lack. I don’t know what I am doing, and both I and the horse know this. Similarly, I watch drivers who are neither skilled nor confident struggle with their automobiles, and I, as a passenger, do not feel safe. Symbiosis is a wonderful concept, a cooperative, beneficial relationship. But in some cases, as in my first three examples, it requires considerable effort, training, and skill. In other cases, such as in my fourth example, although no high-level skill or training is required, the designers of these systems must pay careful attention to appropriate modes of social interaction.