Conditioning does not take a mighty brain. Fruit fly larvae (yes, tiny maggots) learned to form associations with the odors of ethyl acetate and isoamyl acetate. When they were offered a choice between an odor they had smelled while given rich and delicious food full of wholesome brewer’s yeast and an odor they had smelled while given nutrient-deficient food laced with quinine, 64 percent wriggled toward the side that reminded them of yeast. And when offered a choice between an odor they had smelled while being harassed with a fine brush (to simulate predation) and an odor they had smelled while being left alone, 73 percent headed for peace and quiet. These were, of course, larvae from a strain of fruit fly renowned for its good test scores, but still, we’re discussing the intellect of maggots.
Tiger, tiger, burning bright, on the roadways of the night
In the 1950s, Lieutenant Colonel Locke, of the Malayan Civil Service, in the state of Trengganu, had duties that included shooting problem tigers. Problem tigers, as Locke saw it, were tigers who ate people, tigers who ate cattle, tigers who ate dogs, and one tiger who had formed the habit of walking up to rubber tappers in the forest and growling. Although he only growled, this invariably caused the perturbed rubber tappers to take the rest of the day off, and the resulting financial losses to the local rubber industry spelled the tiger’s doom.
Locke’s shooting technique involved putting out bait, erecting a concealed platform in a nearby tree, and waiting there at night until he heard a tiger at the bait. Then he’d switch on a flashlight so he could aim, and shoot the tiger. One night Locke was after a cattle-killing tiger. This particular tiger was an elderly male who, Locke happened to know, had been in a car accident while crossing a road at night. The tiger had recovered from his injuries, but “retained an overwhelming dread of bright lights.” There weren’t many bright lights in Trengganu in those days.
On this evening, Locke finally heard the tiger come to the bait, a dead cow. Locke switched on his light, and the tiger immediately reared up and toppled over backward into some bushes, where he lay moaning dismally. The astonished Locke reports that the tiger was neither growling nor roaring, but moaning “as though the beast was in mental anguish. I was convinced that he thought another car was after him.” The tiger sobbed for a while and then fell silent. After twenty minutes he got up and approached the carcass. Locke switched on the light again, and the tiger instantly bolted. The tiger did not return to the cow that night. In fact, he never touched cattle again and thereby escaped being shot.
The tiger seems to have learned, in one traumatic accident, to fear sudden bright lights in the night. Then it seems to have learned to associate eating cattle with the horrifying lights. Whether it actually thought, “If I touch a cow, a car will appear and attack me” is more speculative.
Associating the lights in the night with being struck by a car is an example of Pavlovian conditioning—the innate fear of being hurt was associated with the learned stimulus of lights. Associating messing with cattle with the dreaded lights is an example of operant conditioning—the tiger now connected his action of attacking cattle with the negative stimulus of the lights.
Operant conditioning
Operant conditioning is also called Skinnerian conditioning, after its famous and persuasive advocate, B. F. Skinner. The most common scientific example of basic operant conditioning is the white rat in a cage equipped with a lever and a food hopper, in which the rat learns that if it pushes the lever, it will be rewarded with a piece of rat chow tumbling into the hopper. (Such cages are called Skinner boxes.) One can go on from here to condition far more complex behaviors in any species you care to name, including the human. Rewards condition behavior, and so do unpleasant, negative things, such as being given an electric shock, very popular in the lab. (Or being hit by a car, as in the tiger’s case.)
Sonja Yoerg describes one rat among a group of twenty that a colleague was training to press a lever for a food reward, using automated Skinner boxes. When the colleague checked on their progress, nineteen rats had become conditioned to press the lever with their paws in the standard way. But apparently the twentieth rat had, at the beginning of the process, accidentally hit the lever with its head and been rewarded. As a result its technique involved facing away from the lever, rising on its hind legs, toppling over backward, and hitting the lever with its head. Repeatedly. Was this rat any stupider than the others, or just unluckier?
A vast array of behavior can be explained as the result of conditioning by pleasant and unpleasant experiences. Sometimes conditioned behavior looks more intelligent than it is: the animal appears to understand what it is doing when in fact it has only learned, without knowing why, that if it does a certain thing, a certain good thing will result. (In essence, such actions are superstitions.) Not only is conditioning an extremely effective way of training animals, it’s also the way many things are learned in the real world.
Skinnerians fell so deeply in love with this powerful way of explaining behavior that for a while they rejected explanations for learning other than conditioning, whether operant or Pavlovian. Thus we find psychologist Irene Pepperberg grumbling that “according to Skinner, … one needn’t study a wide variety of animals, because none would react any differently from a pigeon or a rat: The rules of learning were universal.”
While the basic concept of operant conditioning is valid, many exceptions and variations that were once thought to be impossible have turned up.
A rat is a pig is a dog is a boy
Beginning in the 1940s, two of Skinner’s disciples, Keller and Marian Breland, used operant conditioning with great success to train performing animals. They published an eventually influential paper, “The Misbehavior of Organisms,” on their findings about learning in different animals. Despite utterly standardized procedures, they reported, each animal put its own species’ spin on what it was learning. They conditioned a chicken to stand on a platform, but the chicken couldn’t stand still and kept scratching around on the platform, so instead they trained the chicken to “dance”—in other words, to scratch in a context that makes it look like dancing. In the final performance the chicken pulls a loop that starts a model jukebox, which plays while the chicken dances. Jitterbug mama!
The Brelands conditioned a raccoon to put money in a piggy bank. He quickly learned to pick up a coin and take it to the bank, but it was hard for him to let go of the coin. He’d start to put it into the slot only to pull it out at the last second and clutch it to him. When he finally mastered this, they tried him with two coins, but the raccoon couldn’t bear to do it. “Not only could he not let go of the coins, but he spent seconds, even minutes, rubbing them together (in a most miserly fashion), and dipping them into the container. He carried on…to such an extent that the practical application we had in mind—a display featuring a raccoon putting money in a piggy bank—simply was not feasible.” The more they tried to get him to bank his funds, the more tenaciously he rubbed and gloated.
The Brelands called this “a clear and utter failure of conditioning theory…the animal simply does not do what he has been conditioned to do.” Chickens instinctively scratch for food, and raccoons instinctively handle or “wash” their food to do such things as peel a crayfish. Their behavior gradually drifted toward their natural inclinations, even when the result was less food for a hungry raccoon.
At an aquarium in Hawaii, trainers had a hard time conditioning river otters to do tricks. It wasn’t that the otters didn’t get it; it was that they got it right away, and then got over it. Trainer Karen Pryor began by training an otter to stand on a box. The moment she produced a box the otter rushed over, stood on the box, and was rewarded. Soon the otter understood that standing on the box earned a piece of fish. But instantly she began exploring the situation. What if she lay down on the box? Would she get fish for that? How about if she had three feet on the box—would that count? What if she hung upside down from the edge of the box or put her front paws on it and barked? When Pryor complained to some visiting behavioral psychologists, they said she must be mistaken. “If you reinforce a response, you strengthen the chance that the animal will repeat what it was doing when it was reinforced; you don’t precipitate some kind of guessing game.”
Pryor took the behaviorists to see the otters, and to back up her claim she tried to condition an otter to swim through a hoop. She put the hoop in the water, the otter swam through, and she gave it fish. The otter swam through again, and she rewarded it again. Very good. But, from the otter’s point of view, already old news. The otter swam through the hoop—and stopped halfway through. And looked up for a reward. No reward. The otter swam through the hoop—but as it was almost through, it grabbed the hoop with its hind foot and tore it off. And looked up for a reward. No reward. Okay. The otter lay in the hoop, bit the hoop, and backed through the hoop, each time checking to see if that rated a prize. “See?” said Pryor. “Otters are natural experimenters.” One bemused scientist replied that it took him four years to teach students to think like that.
Backward conditioning and latent learning
Another phenomenon once considered impossible is backward conditioning, in which, for example, an animal who has just had an unpleasant experience looks around for something or somebody to blame. Sure enough, animals as well as people do this.
Latent learning describes things an animal learns for no reward that may come in handy later. When a rat explores a maze even though it has never found food in a maze, that has been called latent learning. I suspect exploring is its own reward: rats like to poke around. E. C. Tolman, who discovered this phenomenon in the 1930s, was ridiculed, since this kind of learning was not predicted by either classical or operant conditioning theory.
Animal trainers sometimes speak of the moment when the light goes on, the moment when something the animal has learned by rote is suddenly understood. Aha! I get it! Karen Pryor describes what she calls “the prelearning dip.” Just as an animal is really starting to learn what’s wanted, it stalls. “This can be most discouraging for the trainer. Here you have cleverly taught a chicken to dance, and now you want it to dance only when you raise your right hand. The chicken looks at your hand, but it doesn’t dance. Or it may stand still when you give the signal and then dance furiously when the signal is not present,” writes Pryor. “After that, however, if you persist, illumination strikes: Suddenly, from total failure, the subject leaps to responding very well indeed—you raise your hand, the chicken dances.”*
Pryor argues that the chicken is unthinkingly responding to cues that mean it will get rewards. Gradually it gets better, and the trainer is pleased. Then suddenly the chicken “notices” the cue. It realizes that the cue has something to do with being rewarded, and starts paying attention to the cue instead of dancing. “When, by coincidence or the trainer’s perseverance, it does once again offer the behavior in the presence of the cue, and it does get reinforced, the subject ‘gets the picture.’ From then on, it ‘knows’ what the cue means and responds correctly and with confidence.”
James Gould describes something similar during concept learning in honeybees. An example of concept learning is when bees learn that a nectar reward will be marked by either a symmetric or an asymmetric marker. The marker changes, so the bee can’t just learn which marker is the correct one, but has to learn that whichever marker is asymmetric is the correct one. “The learning curve is different from that of more standard tests in which bees are taught that a particular odor, color, or shape is always rewarded. During concept learning there is no evident improvement over chance performance until about the fifth or sixth tests, whereas in normal learning there is incremental improvement beginning with the first test. This delay is characteristic of what has been called ‘learning how to learn,’ which is interpreted as a kind of ‘ah-ha’ point at which the animal figures out the task.” Bzzt!
Trial and error
Trial and error is experimenting to see what works. Strangely, this fine model of the scientific method is often spoken of scornfully by animal behaviorists. Perhaps they doubt animals’ ability to formulate a hypothesis and follow up with further testing.
Young herring gulls, like adults, fly up and drop shellfish to break them open. When they start, they’re not very good at it. They may not let the clam or mussel fall far enough, or they may drop it on a surface too soft to crack it. Joanna Burger describes young gulls on the New Jersey coast dropping mussels on sand. When the mussels don’t open, they try dropping them from a greater height. If that doesn’t work, they try dropping them on a dirt road. If that doesn’t work, they’ll try concrete—and that usually works. “Your enterprising gull will then figure out that he can break the shells open on a board. Should you, while beachcombing, come across a board surrounded by shells, you know you’ve seen the handiwork of an Einstein of a Herring Gull.”
Trial and error is fine when you have time for it, like the gulls. Some things are more urgent. Young vervet monkeys are born ready to react when they hear alarm calls from other vervets. Very young babies dash for their mothers. Older infants learn what to do when they hear an eagle alarm call (hide in a bush) as opposed to when they hear a leopard alarm call (climb a tree). They learn this by seeing what other vervets do. As Frans de Waal points out, “It would be incredibly costly for them to do so by trial and error.”
Getting to Carnegie Hall
One form of learning is practice. Practice is generally boring, but playing is fun, so it’s handy that play can serve as practice. Two-month-old Inca terns on the Peruvian coast, who have just learned to fly and can’t yet catch enough fish for themselves, have been seen “practicing hunting,” which looks like playing. The young birds, hanging out on some rocks, take off, circle over the water, and then plunge down on an unsuspecting piece of seaweed. Bearing the seaweed off in triumph, a young tern will then drop it into the water and attack it again. Other juveniles, seeing this, either attack their own piece of seaweed or try to nab another bird’s chosen victim. Other tern kids try the “contact dipping” approach of flying low over the water and dipping to snatch the coveted seaweed, or go into an aerobatic display of rapid twists and turns just above the surface. Grown-up terns don’t do this. They have fish to catch.
Maturation
When an animal gets better at doing something or recognizing something, it’s possible that it hasn’t learned a thing. It’s easy to mistake growth for learning. Newborn chicks peck zestfully at everything they see, but their aim is sloppy. If chicks born in a laboratory see a brass nailhead in a smooth field of clay, they peck at it. The pattern of peck marks they create in the clay around the nailhead is large and loose, and they often miss the nailhead by a lot. As they get older, their aim improves, and if they are tested four days later, the pecking pattern (since they still haven’t learned not to try to eat nails) becomes smaller and clusters tightly around the nailhead.
A possible explanation is that their aim has improved because their better-directed pecks were rewarded by food, and so they learned through conditioning—target practice—to aim better. To see if this was so, Eckhard Hess fitted new-hatched Leghorn chicks with tiny rubber hoods which held goggles over their eyes.* The goggles displaced what the chicks were seeing to one side. As soon as they had been fitted with the goggles, the chicks were tested with the nailhead in clay. The pattern of the pecks was large and loose and displaced to one side, away from the nailhead. Then the chicks, still fitted with goggles, spent several days either in an environment where grains of food were loosely scattered or in which their food was spread thickly in wide bowls so that they usually hit something to eat no matter how badly they missed.
Hess thought that the chicks who ate from wide bowls would not learn to correct for the goggles (because they still got food when they missed) and that the chicks whose food grains were scattered would learn to compensate for the goggles. He gave them the nailhead-in-clay test after four days, and both groups showed identical tightly clustered, precise pecking patterns—off to one side of the nailhead. They had all improved their pecking precision, not because they had learned but because they had gotten older.
Chicks do learn some things about pecking—don’t peck your toes and don’t peck chicken droppings. Chicks also seem to be born with the important knowledge that when you get a piece of food too big to eat in a few pecks, you should grab it and run like the wind. “I had always supposed, if I bothered to think about it at all, that when a hen picks up a particularly fat worm and immediately starts to run away the motive was an innate greed, an unwillingness to share with her fellows. Or else that it was a wisdom born of previous experience. The truth is otherwise,” writes zoologist Maurice Burton. “A young chick, first able to run, will, on picking up a morsel of food that cannot be instantly swallowed, turn round and run, as if pursued by an imaginary host intent on stealing. It will do this even when there are no other chicks present.”
Social learning
Animals influence each other’s behavior in ways that researchers have tried desperately to pin down. One aspect of social learning is its direction, metaphorically speaking. In vertical learning, animals pass information down the generations. If your mother teaches you what fruits are safe to eat, or how to tie your shoes, that’s vertical learning. It is conservative, in that information can be conserved and passed on indefinitely.