by T. R. Reid
Since the De Forest tube had three electrodes—the filament, the metal plate, and the grid—it was technically known as a triode. The triode amplifier made radio a practical everyday reality. With further advances—radio companies eventually developed tetrode and pentode tubes—performance was greatly enhanced. By 1930, radio had swept the world (De Forest’s absurd prediction of transatlantic broadcasting came true in 1915). By the mid-forties, engineers had learned how to use radio signals to draw a picture on a glass tube; the result was a device known at first as an iconoscope but today as television. Computer pioneers took advantage of the tube’s rapid switching in building the first generation of digital computers.
With the capacity to perform three essential functions—rectification, amplification, and rapid switching—the vacuum tube, at heart just a souped-up light bulb, was the hub of a new electronic world. If the development of electronics were viewed as a battle of competing technologies, vacuum tubes had overcome semiconductor devices, like the crystal set, and left them far behind. But this battle was not yet over.
The renaissance of the semiconductor began in the late 1930s, spurred by growing fear on both sides of the English Channel that war was imminent. Recognizing that the coming conflict would depend largely on airpower, scientists in England and Germany raced to develop an early warning system, using radio techniques, to spot approaching enemy planes. Fortunately for the Allies, the British perfected the concept first. Since their system could not only find planes but also gauge their distance, or range, from England, the British called the invention Radio Detection and Ranging. This name was quickly shortened to the acronym “radar.”
A radar station shot a radio beam into the air. As long as it didn’t run into anything, the beam kept moving in a straight line at the speed of light (some radar beams sent off during the Battle of Britain are presumably still moving out through space today, about 61 light-years from earth). But if the signal hit a piece of metal—say, a Luftwaffe bomber—in midair, the beam would bounce back to the radar station like a tennis ball bouncing off a wall. By marking where the returning beam came from, and measuring how long its round trip had taken, the British defenders could tell their fighters where to intercept the enemy.
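The ranging arithmetic itself is simple: the beam travels out and back at the speed of light, so the target's distance is half the speed of light multiplied by the round-trip time. Here is a minimal sketch of that calculation; the function name and the sample echo time are invented for illustration, not historical figures.

```python
# A minimal sketch of the ranging arithmetic: the beam goes out and back
# at the speed of light, so the target's range is half the round-trip
# distance. The sample echo time below is made up for the example.

SPEED_OF_LIGHT = 299_792_458  # metres per second

def range_from_echo(round_trip_seconds):
    """Distance to the target, given the echo's round-trip time."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# An echo returning after one thousandth of a second places the
# aircraft roughly 150 kilometres away.
print(round(range_from_echo(0.001) / 1000), "km")   # -> 150 km
```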
At first the British had hoped to use standard radio equipment to transmit and receive the radar beams. This didn’t work, because vacuum tube rectifiers could not handle the high-frequency signals required for radar. Desperate for some other rectifying apparatus, the engineers went backward in history and resurrected the crystal set. Crystal rectifiers had been around for decades, but they had never been significant because their performance was always iffy. By the late 1930s, however, much more was known about the crystals in these crystal sets—the elements known as semiconductors. The radar engineers, working from this new base of knowledge, were able to build reliable crystal receivers.
The deeper understanding of semiconductors had come about in a random, almost haphazard, manner. Unlike the development of vacuum tube technology, in which each new researcher and each new discovery seemed to lead neatly ahead to the next, the scientific world’s knowledge of semiconductors grew out of a disjointed series of experiments and hypotheses. A monograph published in Berlin, an interesting experiment in Cambridge, a suggestion from Paris, a countertheory from Copenhagen—all this work gradually came into focus in the 1930s. Eventually, two lines of scientific work, one experimental and one theoretical, merged into a single theory of semiconductor physics.
The experimental contribution sprang from commercial roots. As electrical power became a commercially important commodity, the firms that sold and used electricity had to know the most efficient way to transmit it. If a company had to build a power line, should the line be made of copper or cotton? To answer that, experiments were run to measure the conductivity of countless different materials. Conductivity—that is, how easy it is for an electric current to flow through a given material—is a physicist’s concept; it is the opposite of the electrical engineer’s concept of resistance. The electrician’s unit of resistance is called the ohm. Some physicist with a wry humor accordingly decided that the unit of conductivity should be called the “mho.”
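The conversion between the two units is nothing more than taking a reciprocal. A small sketch follows; the resistance figures are invented purely to show the arithmetic.

```python
# The mho is simply the reciprocal of the ohm. The resistance values
# below are illustrative, not measurements of any real material.

def conductance_in_mhos(resistance_in_ohms):
    return 1.0 / resistance_in_ohms

print(conductance_in_mhos(0.05))        # a low-resistance line: 20 mhos
print(conductance_in_mhos(1_000_000))   # a poor conductor: 1e-06 mhos
```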
In experiments to determine mho ratings for hundreds of materials, certain patterns emerged. In some substances, particularly metals like silver, gold, and copper, current could flow easily. These materials were labeled conductors. Other substances—quartz, glass, rubber, and wood are prominent examples—blocked current flow. They were called insulators. Between these two extremes was a class of materials that conduct better than insulators but not as well as conductors. These semi-good conductors—elements like selenium, germanium, and silicon—were given the generic label “semiconductors.” Still lacking, though, was an explanation of why some materials made better conductors than others. Why did electrons travel so readily through copper but so reluctantly through glass? And what was it about silicon that made it fall in between? The answer to that question was provided by the theorists of quantum mechanics, particularly by a quiet, deferential Dane, Niels Henrik David Bohr, who worked out the basic architecture of the atom.
Niels Bohr was one of the great people of the twentieth century—not only one of the most powerful intellects but also one of the most generous and humane of men: “the incarnation of altruism,” his friend C. P. Snow wrote. Born in Copenhagen in 1885, the son and grandson of distinguished academicians, he grew up in highly intellectual surroundings. The family read widely in four languages and was steeped in music and the arts. Niels was also a soccer ace, although he was not picked for the Danish Olympic team (his brother snared a spot in 1908). Denmark did come beckoning a few years later, however, when Bohr had emerged in the top ranks of physicists and was working in England. To lure Bohr home, the Danes established an Institute of Theoretical Physics in Copenhagen; as it almost always does in Denmark, the money came from the Carlsberg Brewery. Under Bohr’s direction the institute became, for a while, the world capital of quantum physics.
“He was not, as Einstein was, impersonally kind to the human race,” C. P. Snow wrote after Bohr’s death in 1962. “He was simply and genuinely kind. It sounds insipid, but in addition to wisdom he had much sweetness.” His selfless nature shone through radiantly during World War II. He donated a priceless possession—his gold Nobel Prize medal—to be melted down so that the proceeds could go to Finnish war relief. At considerable professional and personal risk, he spoke out forthrightly against the Nazis and worked secretly to help Jewish scientists escape the Third Reich. When he learned that his most brilliant former student, Werner Heisenberg, had been put in charge of Germany’s research program to develop an atomic bomb, Bohr summoned the younger physicist from Germany and told him that it would be wrong, morally wrong, for any scientist to give Hitler this weapon. (Whether or not that lecture from Bohr was the reason, Heisenberg made minimal progress at best on the Nazi nuclear program.) Finally forced to flee his homeland in a small fishing boat with his son Aage (a chip off the old block who was to win a Nobel Prize of his own), Bohr then set up an underground railway to spirit Jews out of Denmark. After the war, he began a tireless campaign against further deployment of nuclear weapons.
Midway through his scientific career, Bohr made a fascinating intellectual conversion. As a young man, he was convinced that the world around him had a logical order, that natural phenomena could be explained with rigorous logic and reason. He fell away from the church because of his feeling that its doctrines were logically untenable. Later, though, as he discovered parts of the world where logic evidently did not govern, he accepted a somewhat muddier picture. By the mid-1920s, he had developed a “Principle of Complementarity,” which held that physics was large enough to contain some seeming illogic and contradictions. Late in life he designed a personal coat of arms that carried the yin/yang symbol and the motto Contraria sunt complementa.
Bohr took his Ph.D. at Copenhagen in 1911, writing his dissertation on the still new concept of the electron, and then headed off to Cambridge to study under J. J. Thomson himself. There he became familiar with Thomson’s conception of the structure of the atom—a theory known as the “raisin cake atom” because it posited a sort of sponge cake with electrons scattered about here and there like raisins. Next Bohr went to Manchester to work with another great physicist, Ernest Rutherford; there he learned of Rutherford’s “nuclear atom,” which posited a small nucleus set inside an amorphous cloud of electrons. Then, in 1913, Bohr set down his own picture of the atom—a hypothesis that has prevailed, with regular refinements, ever since.
The “Bohr atom” is the atom that most adults today saw in their high school science books: the “solar system” model, with electrons swirling in concentric orbits around a central nucleus. “In this picture,” Bohr explained in his Nobel Prize address in 1922, “we see a striking resemblance to a planetary system, such as we have in our solar system.” The key point of the Bohr picture, though, was his insistence that electrons could not orbit in just any old spot. Using quantum mechanics and some mind-boggling mathematics, Bohr determined precisely how far from the nucleus each orbit should be, and how many electrons can reside in each orbit.
Under these rules, the orbit that is farthest from the nucleus can have from one to eight electrons. The electrical characteristics of each element are determined by this outermost orbit— specifically, by the number of electrons in the outer orbit. If an atom has only one electron in the farthest orbit, that electron will not be tightly bound to the nucleus; it could break away easily. But in an atom with a full house—eight electrons—in the outer orbit, the electrons will be held tightly in place.
Working from this theory of the atom, quantum physicists could predict which materials would be good conductors of electric current. A substance that easily released electrons would supply the free-flowing electrons that make up electric current; such a substance should be a good conductor. In a material that did not release free electrons, current would not flow; it would be an insulator. Theoretically, then, the conductivity of any material would be determined by the number of electrons in its outermost ring.
No one has ever seen an atom. Until we do, parts of the quantum picture of atomic structure will remain, in a strict sense, merely theory. The quantum view of conductivity, however, can be tested, because of the experiments that determined the conductivity of specific materials. When these experimental results are compared with the predictions of quantum theory, theory and experiment match perfectly.
The materials found to be the best conductors—silver, copper, gold—are indeed elements with a single electron in the outermost orbit. Materials that have proven the best insulators are indeed those with eight outer electrons. As a general matter, elements with three or fewer outer-ring electrons are conductors, and those with five or more are insulators. At the precise center of this continuum stand the semiconductors. Semiconductors, such as silicon and germanium, have four electrons in the outermost ring.
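That rule of thumb can be written down as a small decision procedure. The sketch below is only an illustration of the rule stated above; the function name and the handful of example counts are chosen for the example, not taken from the original experiments.

```python
# A toy classifier encoding the rule of thumb: the number of outer-ring
# electrons places a material on the continuum from conductor to
# insulator. The example elements and counts follow the text.

def classify(outer_ring_electrons):
    if outer_ring_electrons <= 3:
        return "conductor"
    if outer_ring_electrons == 4:
        return "semiconductor"
    return "insulator"   # five or more outer-ring electrons

for element, electrons in [("copper", 1), ("gold", 1),
                           ("silicon", 4), ("germanium", 4)]:
    print(element, "->", classify(electrons))

print("a material with eight outer electrons ->", classify(8))
```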
It is this special feature of semiconductor materials that makes them so spectacularly useful in electronics. Sitting on the fence, perched midway between the conductors and the insulators, semiconductors can perform valuable electronic service precisely because of their in-between structure.
Because semiconductors are right on the borderline between conductors and insulators, experimenters have found ways to alter their conductivity. This is done by a process called doping. Here’s how it works: In a solid block of pure silicon, the individual atoms tend to link up with their neighbors. Each atom has four outer-ring electrons; they form a tight four-corner connection with the four outermost electrons of the atom next door. When that happens, all eight electrons are held tightly in place; no electrons can break free, and no current flows. But humans have learned how to fool the silicon atoms by doping the silicon with impurities. They do it by introducing tiny quantities of a different element—arsenic, for example—that has five outer-ring electrons. The four outside electrons of a silicon atom will bind themselves to four of the arsenic electrons, leaving one extra arsenic electron unbound, free to move. Thanks to the arsenic doping, the block of silicon is now a conductor. If more arsenic atoms are introduced, more free electrons result and more current will flow. The current moving through the silicon—a flow of free electrons—is the same current that J. J. Thomson saw moving through the vacuum in his cathode ray tube.
But what if the silicon is doped with an element—boron, for example—that has only three electrons in its outermost ring? When silicon atoms try to link up with boron, the arrangement comes up one electron short, leaving a vacant spot—an empty hole—where the eighth electron should be. A nearby electron will be pulled over to fill the hole; this leaves another hole where the electron came from. Another electron moves in to fill this new hole, leaving another hole in its place. The result, effectively, is a movement of holes across the silicon block.
In his classic text on semiconductor physics, William B. Shockley explains this by comparing a block of silicon to a parking lot. In a pure, undoped crystal, every space on the lot is filled and no traffic can flow. If one car is removed, leaving a vacant slot, another car can move ahead, leaving its place open, in turn, for another car to move into. It is the cars that move, of course, but to an observer looking down every once in a while from a high building it appears that the empty space is migrating across the lot. Effectively, Shockley’s book says, “the vacant parking place . . . can move owing to the successive motion of vehicles into it.”
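Shockley’s parking-lot picture lends itself to a tiny simulation. The sketch below is illustrative only (the lot size, symbols, and function name are invented for the example); it shows how the vacancy appears to drift one way while no individual car moves more than one space the other way.

```python
# A tiny simulation of the parking-lot picture: 'C' marks a car (an
# electron), '.' marks the vacant space (the hole). Cars shuffle one
# space toward the vacancy, and the vacancy drifts across the lot.

lot = list("CCCC.CCC")

def step(lot):
    """Shift the car just to the right of the vacancy into the vacancy."""
    i = lot.index(".")
    if i + 1 < len(lot):
        lot[i], lot[i + 1] = lot[i + 1], lot[i]
    return lot

for _ in range(3):
    print("".join(lot))
    step(lot)
print("".join(lot))
# No single car has moved more than one space to the left, yet the
# vacancy has migrated across the lot to the right.
```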
The doping process results in two different types of silicon. Where the silicon has received extra electrons, it takes on a negative charge, because electrons are negative. Where the silicon is pocked with holes—representing missing electrons—it takes on a net positive charge.
If the phenomenon of holes had been discovered in Thomson’s day, when all men of science had a firm grounding in the classics, the vacant spot would most likely have been given a Greek or Latin name, a name like “vacutron” or “nihilon” or some such. The flow of holes was not firmly established, though, until the 1930s, when the classics were in decline and English had become the lingua franca of physics. Consequently, the formal scientific name for the positively charged hole is a simple English word: “hole.” (Shockley titled his definitive text Electrons and Holes in Semiconductors.) With equivalent simplicity, physicists decided to refer to semiconductor material that had been doped with excess electrons—and is thus negatively charged—as an N-type semiconductor. A semiconductor block doped with positive charge—holes—is called P-type.
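The naming rule is simple enough to capture in a few lines. Here is a minimal sketch under the counting picture described above; the dopant table and function name are illustrative, not drawn from Shockley’s text.

```python
# A minimal sketch of the N-type / P-type naming convention, using the
# simple counting picture from the text: a dopant with five outer-ring
# electrons donates a free electron; one with three leaves a hole.

DOPANT_OUTER_ELECTRONS = {"arsenic": 5, "boron": 3}   # illustrative table

def doped_silicon_type(dopant):
    outer = DOPANT_OUTER_ELECTRONS[dopant]
    if outer == 5:
        return "N-type: one free electron per dopant atom"
    if outer == 3:
        return "P-type: one hole per dopant atom"
    return "neither, in this simplified picture"

print("arsenic ->", doped_silicon_type("arsenic"))
print("boron   ->", doped_silicon_type("boron"))
```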
By the late 1930s, when all these intricacies had been worked out, physicists had a reasonably decent picture of what goes on inside a semiconductor. All that remained to put this knowledge to practical use—to launch the semiconductor revolution—was human ingenuity. This essential ingredient was to come from a team of three Americans headed by an intriguing figure who has been, in different seasons, one of the most respected and most reviled of all modern scientists, William B. Shockley.
Shockley was the only child of a technology-oriented couple; his father was a mining engineer, his mother a geologist. Born in London, where his father was stationed, in 1910, he grew up at the northern edge of what is now Silicon Valley. After graduating from Cal Tech he did graduate work at MIT, focusing on the movement of electrons in solid materials. Fresh out of school, the young Ph.D. went to work for Bell Labs in 1936. Management, ignoring his graduate work, shunted him off to the vacuum tube department; by sheer persistence, Shockley eventually worked his way into the semiconductor laboratory.
It was a rich, exciting time to be there: semiconductor physics was just falling into place, and there was a sense among the bright, ambitious young men in the field that they were witnessing the dawn of marvelous things. Buffeted by these currents of the new, Shockley one day had an amazing idea—a Nobel Prize–worthy idea, as it turned out. It was December 29, 1939, and he wrote it down right away in his lab notebook: “It has today occurred to me that an amplifier using semiconductors rather than vacuum is in principle possible.” Within a decade, he had turned that principle into practice in the form of the transistor, an invention that won Shockley and his colleagues, Walter Brattain and John Bardeen, the Nobel Prize in 1956. Thereafter, with the apotheosis of science that followed Sputnik, the mass media made him into a national hero.
In addition to his work in theoretical physics, Shockley was involved in engineering (he earned more than ninety patents), teaching, and strategic planning (he devised submarine attack methodologies during World War II). His experience in these varied intellectual disciplines prompted him to do a great deal of thinking about thinking—about the motivations and the thought processes that lead to good ideas. His basic rule for solving any problem was to go back to fundamentals: “Try simplest cases.”
Understanding is most likely to result, Shockley taught, from reducing the situation to its simplest elements and proceeding from there. His famous book on semiconductors follows the pattern: In Chapter 1 he sets forth the comparison of atomic structures to parking lots, complete with little sketches of cars in a lot to demonstrate traffic flow. By Chapter 15 he is explaining that “the wave function A(φ) for the hole-wave packet is not an eigenfunction for the Hamiltonian for the 2N-1 electrons in the valence band.”
Motivation is at least as important as method for the serious thinker, Shockley believed, and his scientific papers are replete with asides about the role of this comment or that experiment in spurring him on to new discoveries. And he always maintained that the crucial motivational issue—in fact, the essential element for successful work in any field—was “the will to think.” This was a phrase he learned from the nuclear physicist Enrico Fermi in 1940, and never forgot. “In these four words,” Shockley wrote later, “[Fermi] distilled the essence of a very significant insight: A competent thinker will be reluctant to commit himself to the effort that tedious and precise thinking demands—he will lack ‘the will to think’—unless he has the conviction that something worthwhile will be done with the results of his efforts.” The discipline of competent thinking is important throughout life, Shockley says, whether on a prize-winning experiment or a pop quiz in freshman physics. For many years at Stanford he taught a freshman seminar called “Mental Tools for Scientific Thinking.” The basic text was the professor’s own essay, “THINKING about THINKING improves THINKING.”