Our Own Devices: How Technology Remakes Humanity


by Edward Tenner


  Jockeys were the next civilian athletes to use hard head protection after the war. By 1924, lightweight but strong fiber helmets, fitting under caps like the old steel skullcaps, had been introduced in Australia, and the next year they were made mandatory in U.S. steeplechase events. They spread to Thoroughbred racing. In 1927 a similar helmet saved the life of a jockey who was rolled over and kicked in the head by his runaway colt. Head protection is now required in public riding schools and events in many U.S. states.48

  The most influential and controversial of the rigid sports helmets was developed by the late 1930s for American football. Originally bareheaded, players of the 1890s had begun to use homemade mohair-cushioned leather helmets as play got rougher, until manufacturers started to offer what was probably the first ready-made athletic head protection, around 1900. For better impact absorption, the head was isolated from the shell by a web of fabric straps in 1917, a system marginally improved by innovations of the 1920s and 1930s. The breakthrough was a plastic shell with a new web suspension, adjustable for head size. Unlike leather, the lighter plastic allowed the riveting of the suspension, and did not mildew. But even before U.S. entry into the war and the suspension’s adoption by the army as the foundation of the M-1 helmet system, the plastic helmet was kept off the market by materials shortages.49

  After some initial failures following the war, the National Football League admitted the helmet in 1949. By the early 1950s it had virtually replaced the leather helmet. In Why Things Bite Back I told the story of its paradoxically catastrophic effect on injuries. It reduced some head damage but was held responsible for a tripling of neck injuries and a doubling of deaths from cervical spine injuries. Such casualties have resulted in lawsuits that have continued to plague the U.S. sporting goods industry. In 1954, Sports Illustrated called the new helmets “Martian-looking headpieces” and “the most lethal weapon” for wearer and opposition alike. But the real problem lay not in the technology but in the technique. There is no evidence that the Riddell company, any more than so many other innovators, foresaw the change in user behavior that its products helped bring about. Coaches had once instructed players to tackle ball carriers by wrapping their arms around them. The new technique instead used the helmet as a battering ram, not only to stop the carrier but to dislodge the ball—ultimately by aiming below the victim’s chin, hoping to knock him out. While helmet technology has continued to evolve since the early 1990s, and safer designs have been promised, the underlying problem remains from football’s early years as a spectator sport: fans enjoy and encourage violent plays. For the determined developer of new aggressive techniques, the challenge is finding the loopholes in the new rules.50

  After the war, pulp and rubber were succeeded by generations of new plastics. Aerodynamic contours replaced the old pudding-bowl design. Top automotive racing models incorporated ventilators and antilift design, dramatizing rather than concealing risks. In fact, racing helmets can be so heavy and forces so great that drivers may need a supplementary restraint such as Robert Hubbard’s HANS (Head and Neck Support) to keep the helmet from whipping the neck fatally in a crash. While the death of a prominent amateur auto racer, William “Pete” Snell, in 1956 converted most of his fellow competitors to the idea of protection and led to the establishment of a leading safety equipment research foundation in his memory, most organized U.S. motorcycle enthusiasts question mandatory helmet legislation, observing that it has reduced fatalities mainly by discouraging motorcycle riding. In accidents at road speeds above fifteen miles per hour, some U.S. opponents of helmets add, the additional mass of the helmet only helps substitute neck for head injuries. Yet emergency-room surgeons treating motorcycle accident victims remain among the strongest helmet advocates, and many motorcyclists never ride without head protection.51

  In hockey and most other sports, the helmet is a tool, not a totem. In baseball and cricket, it is worn only at bat. In the early twenty-first century all hockey players wear helmets; the handful of holdouts, grandfathered when protection became mandatory in 1979, are now retired. Even the mocking adoption of flimsy helmets designed for less hazardous ice sports appears to have vanished. The hockey player’s helmet is but another element of specialty clothing, a routine necessity like automotive seat belts. Controversy in most sports is not about whether helmets should be worn but about the best balance between comfort and protection. The American Society for Testing and Materials (ASTM), the Snell Memorial Foundation, and other national and private laboratories constantly develop new tests for impact attenuation, just as medieval helmets and armor were proved.52

  Thanks to a constantly expanding armory of energy-absorbent materials, construction features, and simulation equipment, the helmet now represents both the triumphs and the paradoxes of the technological reduction of risk. There is no universal sports helmet, and the U.S. Consumer Product Safety Commission (CPSC) advises purchasing a separate, purpose-designed model for each sport. By now the list includes not only football and hockey but lacrosse, skateboarding, snowboarding, in-line skating, BMX cycling, equestrian sports, extreme sports, and boxing. After a number of celebrity fatalities, skiing helmets have gained support. Many medical researchers now favor helmets even for youth soccer, where concussions and brain damage have turned out to be more common than most parents and coaches had realized. And the middle-aged are even more likely to suffer soccer and bicycling injuries than their children, partly because they are less flexible about adopting protection. (Paradoxically, the rate of injury from in-line skating is lower than that from bicycling and less than half the injury rate from softball, because in-line skating looks so dangerous that protective equipment has been the norm.)53

  Among these varieties, the most ubiquitous symbol of the rebirth of head protection remains the bicycle helmet: the commonest, lightest, most colorful, and most controversial of all. Yet it is also relatively recent. Its rise began with the surge of mountain bicycling in the mid-1980s and was led by Bell Sports, the largest maker of auto racing helmets. Through the 1980s and 1990s, the size of the potential market encouraged Bell and others to increase the appeal of protection by using new and lighter materials and ventilated aerodynamic shapes to reduce weight without degrading performance, although tooling for some of the most striking models requires, like so many other high technologies, time-consuming and high-priced hand craftsmanship.54

  Thanks to stylish design and growing markets, a once stodgy idea became sexy and youthful. In 2001, the CPSC reported that 69 percent of child cyclists and 43 percent of adults in the United States wore helmets. Yet this apparent success has turned up a paradox. In the decade from 1991 to 2001 the surge in helmets was accompanied by a decline in ridership and an increase in cyclist accidents, resulting in 51 percent more head injuries per bicyclist. To some opponents of mandatory helmet laws, these statistics are the result of what is called risk compensation, bolder behavior arising from the feeling of enhanced security. And there are indeed people who will push the envelope of any safety device. But most bicycle safety advocates reject an increase of risk taking as an explanation. They believe the CPSC statistics of helmet use may be exaggerated, and they point to adverse traffic conditions, more aggressive behavior by motorists (attributed in turn by risk compensation theorists to the reassurance of seat belts and air bags), and faster bicycles (which the same theorists would probably explain by the perceived safety of helmets). One major point on which both sides appear to agree is that helmets offer limited protection in automobile-bicycle collisions.55
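
  The arithmetic behind such a per-rider figure is worth making explicit; the numbers that follow are hypothetical, chosen only to show how a 51 percent rise could emerge from two more modest trends rather than to reproduce the CPSC’s own data. If, say, total head injuries rose 10 percent over the decade while ridership fell 27 percent, then

    injuries per bicyclist, 2001 relative to 1991 = (1 + 0.10) / (1 - 0.27) ≈ 1.51

an increase of about 51 percent, even though neither underlying trend alone looks so dramatic.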

  The most powerful case against mandatory bicycle helmet laws may be that they ignore the primary role of motorists in cycling injuries and transfer too much of the burden of safety to the cyclist. There is evidence that they also discourage cycling, especially because effective helmets are not cheap relative to the price of bicycles and because they must be replaced after absorbing shocks and as children’s heads grow. Thus the British Medical Association has endorsed voluntary bicycle helmet use but opposed a legal requirement, on the grounds that the health benefits of bicycle exercise far outweigh risks of injury.56

  Today’s sports helmet does the same job as the ones introduced in the ancient Near East. It spares the wearer a skull fracture or concussion by spreading the energy of an impact, no longer with a mace tip but with an asphalt surface. And, like the ancient helmet, it can build group spirit. Ancient, medieval, and twentieth-century military helmets were designed in part to foster morale; in the view of critics, the sporting helmet may create an excess of confidence. And many of today’s wearers, like Greek hoplites and medieval soldiers before them, complain of stuffiness, heat, and sensory barriers. The similarities between warlike and recreational helmets go even further, in the view of at least one commentator. In 1984, a twenty-nine-year-old writer and amateur hockey player, Gregory Bayan, defended his bareheaded ways in Newsweek. Lamenting the loss of daring individuality from Olympic and professional hockey, and regretting the appearance of televised players as “identical automatons,” he compared the helmet to “a dorsal fin that breaks water and heralds the presence of a leviathan … something deeply wrong in America.” To Bayan, the helmet’s enthusiasts were “Hardheads,” determined to eradicate individuality and risk in the name of safety.57

  Yet the most curious result of the avoidance of risk was to appear only in the late 1980s, as the result of a successful medical campaign to reduce sudden infant death syndrome (SIDS). Parents were urged to put babies to sleep on their backs so as to lower the risk of their suffocating. The unintended consequence was that many infants spent so much time on their backs that their heads became flattened, a condition called plagiocephaly. While mild cases can self-correct if parents vary their babies’ positions while the babies are supervised or awake, thousands of infants have been wearing custom-made orthopedic helmets to correct the condition. There are already seventeen companies making them in the United States alone. This therapy is a small price to pay for occasional and reversible mishaps of a lifesaving technique, but it also shows that one is never too young to become a Hardhead. The helmet was born in ancient warfare, and in wearing it we have become men, women, and children from Mars.58

  Epilogue

  Thumbs Up

  EVERYDAY TECHNOLOGY sometimes reshapes the body; the feet of shod people are different from the feet of those who have always walked barefoot. More important, it helps shape how we use our bodies. Technology, and the techniques of using it, have coevolved over millennia.

  The most challenging open question is whether mind, body, and machine will fuse in some radical new way over the next generation. For over fifty years, waves of enthusiasts have proclaimed the dawning of a new age of augmented humanity. But they are far from agreed on the mechanism. For some, the intimacy is limited to even more portable and powerful versions of the devices we take with us now: computers that might be as easily carried as cell phones and personal digital assistants (PDAs) now are, and viewed through special eyeglass displays. Spectacles could also transmit the emotional states of users so that a speaker, for example, could detect a group’s interest or boredom. There are already sneakers that can transmit or record information on a runner’s performance, and even civilian motorcycle helmets with intercoms and navigation aids built in. Other enthusiasts scorn mere wearability; they are having sensors and transmitters surgically implanted in their bodies, as some deaf children and adults have been fitted with cochlear implants that restore hearing. The cyborg or human-machine is probably the most powerful and persistent idea, perhaps because it seems a logical next step from technological symbiosis. Politically the cyborg movement extends from Paul Verhoeven’s original RoboCop film to the work of Donna Haraway and Chris Hables Gray, who see the connection between human and machine as an emancipatory strategy against rigid economic and gender roles.1

  But is the body really becoming more mechanized? George Washington never wore wooden teeth, and his last set of dentures—made of gold plates inset with hippopotamus teeth, human teeth, and elephant and hippo ivory, hinged with a gold spring—was as good as the craftsmen of his time could produce. He still suffered great discomfort, eating and speaking with difficulty. Yet perhaps this reserve only enhanced his dignity. In any case, it should not be surprising that one in ten Americans had an implant of some kind, not including dentures, by 2002; the nation was founded by a cyborg. Nor was Washington an isolated case. Thomas Jefferson’s polygraph (not a lie detector but a linkage and second pen automatically tracing a duplicate of his writing) and semireclining work chair and Benjamin Franklin’s bifocals were also giant steps in human-mechanical hybridization. (John F. Kennedy continued the cyborg tradition as one of the first political adopters of the robotic signature machine, a giant and distinctively American step in the cloning of gesture.)2

  Without antiseptics or antibiotics, wounds in the U.S. Civil War demanded extensive amputation and created an innovative artificial-limb industry. Today, responsive advanced prosthetics, wheelchairs, vision implants, and other assistive devices exceed the nineteenth century’s wildest dreams; there is even litigation in the United States on whether a teenage swimmer with an artificial leg was unfairly barred from wearing a flipper on it. But the first choice of medicine and dentistry is still the conservation of natural materials and abilities. The trend has gone from spectacles to contact lenses to laser surgery; dentistry has moved steadily from dentures to prophylaxis and conservation of endangered natural teeth. And some dental researchers believe that adults may be able to grow replacement teeth naturally, just as children teethe. Other regeneration, including the recovery of function for paraplegics and quadriplegics, may follow. And no robot can match the performance of a trained assistance dog.3

  The body remains surprisingly and reassuringly conservative. The zori design is still used for some of the most stylish sandals for women and men. Even the majority of athletic shoes with the most technically advanced uppers and soles still use a system of lacing at least two hundred years old. The most advanced new office chairs still rely on the hundred-year-old principle of a spring-mounted lumbar support, for all their additional adjustments. Recliners still place the body in the same contours as the library chairs of the nineteenth century; interest in built-in data ports and other technological enhancements is fading, according to industry sources. Not only has the QWERTY arrangement resisted all reform movements, but even alternatives to the flat conventional keyboard are expensive niche products, partly because of innovators’ marketing difficulties, but also because few users are willing to relearn techniques in the absence of discomfort. A century after the piano started to lose its prestige and markets, it remains the master instrument with its familiar keyboard. Computers help produce advanced progressive eyeglasses without the apparent breaks of bifocals, but wearers still hold glasses on their heads with the folding temples introduced in the eighteenth century. The latest NATO helmet echoes the outlines of the medieval sallet. But then, our foot bones and vertebrae and fingers and eyes and ears and skulls have not changed much, either. Even automatic transmissions rely on a familiar tactile principle, a knob or handle and lever; push-button shifting is largely a memory of the Edsel. And the twenty-first century’s automobiles are still directed and controlled by wheels and pedals—familiar from early modern sailing ships and wagons—rather than the alternative interfaces that appeared in patents and experimental cars. Meanwhile, many technological professionals study body techniques that may need few or no external devices: yoga, martial arts, and the Alexander Technique.

  Even the Christopher Columbus of wearable computing has misgivings about integrating himself with today’s “smart” technology. Steve Mann holds an MIT doctorate in computer science and teaches in Canada; he was photographed wearing a helmet equipped with a video camera and a rabbit-ears antenna as early as 1980. In his book Cyborg, he acknowledges being “increasingly uncomfortable with the idea of a cyborg future,” where privacy is sacrificed for pleasure and convenience to a degree he compares to drug addiction.4
  Today’s real advanced cyborg technology is actually neither utopian nor apocalyptic. Virtual-reality helmets are still not playthings but professional tools demanding rigorous training in physical and mental techniques to prevent disorientation and lapses in judgment. At the other extreme of complexity, miniature keyboards on cellular telephones and other devices are surprisingly influential. They have been shifting the balance of power of the human hand from the index finger to the thumb. We have seen that C. P. E. Bach elevated the thumb’s role in playing the musical keyboard 250 years ago, and that the pioneers of touch typing rediscovered the fourth and fifth fingers but banished the thumb to tripping the space bar. Now the thumb is enjoying a second renaissance.

  The thumb is coming back to computing with pen- and pencil-like devices, such as the styli used with PDAs. Even a radical new mouse, developed by the Swedish physician and ergonomist Johan Ullman, is gripped and moved around the desk with a pen-shaped stick, using the precision muscles of the thumb and fingers rather than twisting the hand and tiring the forearm, in Dr. Ullman’s analysis. As for pencils themselves, they have had some of their strongest unit sales yet, increasing by over 50 percent in the United States in the 1990s.5

  The biggest surprise for the thumb is electronic. In Japan today, young people have taken so readily to new data entry devices that they are called oyayubi sedai, the Thumb Generation. In Asia and Europe, these users have turned voice recognition on its head, sending short text messages to friends, thumbs jumping around their cellular keyboards. By 2002, there were over 1.4 billion of these transmissions each month in the United Kingdom alone. One British researcher, Sadie Plant, has found that thumbs all around the world are becoming stronger and more skillful. Some young Japanese are now even pointing and ringing doorbells with them. As Plant told the Wall Street Journal, “The relationship between technology and the users of technology is mutual. We are changing each other.” The major laboratories did not predestine the thumb as the successor to the index finger, though they did help make the thumb’s usefulness possible. The full capacities of the digit were discovered through the joint experimentation of users, designers, and manufacturers. And its new role is an expression of the intimate relationship described by Frank Wilson, writing of the “twenty-four-karat thumb”: “The brain keeps giving the hand new things to do and new ways of doing what it already knows how to do. In turn, the hand affords the brain new ways of approaching old tasks and the possibility of understanding and mastering new tasks.”6

 
