Asimov's New Guide to Science

by Isaac Asimov


  The early radio enthusiasts had to sit over their sets wearing earphones. Some means of strengthening, or amplifying, the signal was needed, and the answer was found in a discovery that Edison had made—his only discovery in “pure” science.

  In one of his experiments, looking toward improving the electric lamp, Edison, in 1883, sealed a metal wire into a light bulb near the hot filament. To his surprise, electricity flowed from the hot filament to the metal wire across the air gap between them. Because this phenomenon had no utility for his purposes, Edison, a practical man, merely wrote it up in his notebooks and forgot it. But the Edison effect became very important indeed when the electron was discovered and it became clear that current across a gap meant a flow of electrons. The British physicist Owen Willans Richardson showed, in experiments conducted between 1900 and 1903, that electrons “boil” out of metal filaments heated in vacuum. For this work, he eventually received the Nobel Prize for physics in 1928.

  In 1904, the English electrical engineer John Ambrose Fleming put the Edison effect to brilliant use. He surrounded the filament in a bulb with a cylindrical piece of metal (called a plate). Now this plate could act in either of two ways. If it was positively charged, it would attract the electrons boiling off the heated filament and so would create a circuit that carried electric current. But if the plate was negatively charged, it would repel the electrons and thus prevent the flow of current. Suppose, then, that the plate was hooked up to a source of alternating current. When the current flowed in one direction, the plate would get a positive charge and pass current in the tube; when the alternating current changed direction, the plate would acquire a negative charge and no current would flow in the tube. Thus, the plate would pass current in only one direction; in effect, it would convert alternating to direct current. Because such a tube acts as a valve for the flow of current, the British logically call it a valve. In the United States, it is vaguely called a tube. Scientists took to calling it a diode, because it has two electrodes—the filament and the plate (figure 9.12).

  Figure 9.12. Principle of the vacuum-tube diode.
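
  The valve’s one-way action is easy to mimic numerically. The following minimal Python sketch, an illustration of the principle rather than anything from the original apparatus, treats the diode as ideal: current passes while the plate is positive and is blocked while it is negative.

```python
import numpy as np

# Ideal-diode model of Fleming's valve: the tube conducts only while
# the plate is positive with respect to the filament.
t = np.linspace(0, 2, 1000)                    # time, arbitrary units
alternating = np.sin(2 * np.pi * 3 * t)        # alternating voltage on the plate
rectified = np.where(alternating > 0, alternating, 0.0)  # negative half blocked

print(f"input swings from {alternating.min():.2f} to {alternating.max():.2f}")
print(f"output swings from {rectified.min():.2f} to {rectified.max():.2f}")
```

  Alternating current goes in; direct, if pulsating, current comes out.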

  The tube—or radio tube, since that is where it was initially used—controls a stream of electrons through a vacuum rather than an electric current through a wire. The electrons can be much more delicately controlled than the current can, so that tubes (and all the devices descended from them) made possible a whole new range of electronic devices that could do things no mere electrical device could. The study and use of tubes and their descendants is referred to as electronics.

  The tube, in its simplest form, serves as a rectifier and replaced the crystals used up to that time, since the tubes were much more reliable. In 1907, the American inventor Lee De Forest went a step farther. He inserted a third electrode in the tube, making a triode out of it (figure 9.13). The third electrode is a perforated plate (grid) between the filament and the plate. The grid attracts electrons and speeds up the flow from the filament to the plate (through the holes in the grid). A small increase in the positive charge on the grid will result in a large increase in the flow of electrons from the filament to the plate. Consequently, even the small charge added by weak radio signals will increase the current flow greatly, and this current will mirror all the variations imposed by the radio waves. In other words, the triode acts as an amplifier. Triodes and even more complicated modifications of the tube became essential equipment, not only for radio sets but for all sorts of electronic equipment.

  Figure 9.13. Principle of the triode.
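
  The amplifying action can be caricatured with a simple linear model. In the Python sketch below, the transconductance and load-resistor figures are arbitrary assumptions, chosen only to show how a small grid swing becomes a large output swing.

```python
import numpy as np

# Idealized triode amplifier: the plate current follows the grid voltage,
# scaled by a transconductance g_m, and develops a voltage across a load.
# All component values are illustrative assumptions, not real tube data.
g_m = 5e-3                                    # transconductance, A/V (assumed)
R_load = 50e3                                 # plate load resistor, ohms (assumed)

t = np.linspace(0, 1e-3, 1000)
grid_signal = 0.01 * np.sin(2 * np.pi * 5e3 * t)   # weak 10 mV radio signal
plate_current = g_m * grid_signal                   # grid controls plate current
output = plate_current * R_load                     # mirrors the signal, much enlarged

print(f"voltage gain ~ {output.max() / grid_signal.max():.0f}x")
```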

  One more step was needed to make radio sets completely popular. During the First World War, the American electrical engineer Edwin Howard Armstrong developed a device for lowering the frequency of a radio wave. This was intended, at the time, for detecting aircraft but, after the war, was put to use in radio receivers. Armstrong’s superheterodyne receiver made it possible to tune in clearly on a chosen frequency by the turn of one dial, where previously it had been a complicated task to adjust reception over a wide range of possible frequencies. In 1920, regular radio programs were begun by a station in Pittsburgh. Other stations were set up in rapid succession; and with the control of sound level and station tuning reduced to the turn of a dial, radio sets became hugely popular. By 1927, telephone conversations could be carried on across oceans with the help of radio; and wireless telephony was a fact.

  There remained the problem of static. The systems of tuning introduced by Marconi and his successors minimized “noise” from thunderstorms and other electrical sources, but did not eliminate it. Again it was Armstrong who found an answer. In place of amplitude modulation, which was subject to interference from the random amplitude modulations of the noise sources, he substituted frequency modulation in 1935: that is, he kept the amplitude of the radio carrier wave constant and superimposed a variation in frequency on it. Where the sound wave was large in amplitude, the carrier wave was made low in frequency, and vice versa. Frequency modulation (FM) virtually eliminated static, and FM radio came into popularity after the Second World War for programs of serious music.
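
  The difference between the two schemes is plain in a few lines of Python; the tone, carrier, and deviation figures below are arbitrary illustrative choices.

```python
import numpy as np

# Sketch of the two modulation schemes. The audio tone, carrier
# frequency, and deviation are arbitrary illustrative values.
fs = 100_000                                # samples per second
t = np.arange(0, 0.01, 1 / fs)
audio = np.sin(2 * np.pi * 440 * t)         # the sound wave to be transmitted
f_carrier = 10_000                          # carrier frequency, hertz

# AM: the audio varies the carrier's amplitude; static also varies
# amplitude, so noise rides along with the program.
am = (1 + 0.5 * audio) * np.sin(2 * np.pi * f_carrier * t)

# FM: the amplitude stays constant; the audio varies the instantaneous
# frequency instead (integrated here via a cumulative sum).
deviation = 2_000                           # maximum frequency swing, hertz
phase = 2 * np.pi * (f_carrier * t + deviation * np.cumsum(audio) / fs)
fm = np.sin(phase)

print(f"AM amplitude varies: {am.min():.2f} to {am.max():.2f}")
print(f"FM amplitude constant: {fm.min():.2f} to {fm.max():.2f}")
```

  Static adds random amplitude. An FM receiver can clip the amplitude flat without losing the program, which is why the scheme is so quiet.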

  TELEVISION

  Television was an inevitable sequel to radio, just as talking movies were to the silents. The technical forerunner of television was the transmission of pictures by wire, which entailed translating a picture into an electric current.

  A narrow beam of light passed through the picture on a photographic film to a phototube behind it. Where the film was comparatively opaque, a weak current was generated in the phototube; where it was clearer, a stronger current was formed. The beam of light swiftly scanned the picture from left to right, line by line, and produced a varying current representing the entire picture. The current was sent over wires and, at the destination, reproduced the picture on film by a reverse process. Such wirephotos were transmitted between London and Paris as early as 1907.
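
  The scanning scheme amounts to flattening a two-dimensional brightness pattern into a one-dimensional signal, and reversing the process at the far end; this toy Python sketch, with a made-up three-by-three picture, shows the idea.

```python
import numpy as np

# Scanning: a 2-D brightness pattern is read off left to right, line by
# line, into a single varying "current", then rebuilt at the destination.
picture = np.array([
    [0.0, 0.2, 0.9],
    [0.1, 1.0, 0.3],
    [0.8, 0.4, 0.0],
])                                  # toy 3x3 image; 0 = opaque, 1 = clear

signal = picture.flatten()          # scan: rows become one 1-D stream of values
received = signal.reshape(3, 3)     # at the far end, undo the scan

assert np.array_equal(picture, received)
print(signal)                       # the varying current sent over the wire
```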

  Television is the transmission of a “movie” instead of still photographs—either “live” or from a film. The transmission must be extremely fast, which means that the action must be scanned very rapidly. The light-dark pattern of the image is converted into a pattern of electrical impulses by means of a camera using, in place of film, a coating of metal that emits electrons when light strikes it.

  A form of television was first demonstrated in 1926 by the Scottish inventor John Logie Baird. However, the first practical television camera was the iconoscope, patented in 1938 by the Russian-born American inventor Vladimir Kosma Zworykin. In the iconoscope, the rear of the camera is coated with a large number of tiny cesium-silver droplets. Each emits electrons as the light beam scans across it, in proportion to the brightness of the light. The iconoscope was later replaced by the image orthicon—a refinement in which the cesium-silver screen is thin enough so that the emitted electrons can be sent forward to strike a thin glass plate that emits more electrons. This amplification increases the sensitivity of the camera to light, so that strong lighting is not necessary.

  The television receiver is a variety of cathode-ray tube. A stream of electrons shot from a filament (electron-gun) strikes a screen coated with a fluorescent substance, which glows in proportion to the intensity of the electron stream. Pairs of electrodes controlling the direction of the stream cause it to sweep across the screen from left to right in a series of hundreds of horizontal lines, each slightly below the one before, and the entire “painting” of a picture on the screen in this fashion is completed in 1/30 second. The beam goes on painting successive pictures at the rate of thirty per second. At no instant of time is there more than one dot on the screen (bright or dark, as the case may be); yet, thanks to the persistence of vision, we see not only complete pictures but an uninterrupted sequence of movement and action.
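
  A little arithmetic gives a sense of the speeds involved. The 525-line figure below is the American broadcast standard, assumed here for concreteness; the text itself says only “hundreds of horizontal lines.”

```python
# Raster arithmetic, assuming the American 525-line standard (an
# assumption; the text says only "hundreds of horizontal lines").
frames_per_second = 30
lines_per_frame = 525

lines_per_second = frames_per_second * lines_per_frame
print(lines_per_second)                              # 15750 lines every second
print(f"{1e6 / lines_per_second:.0f} microseconds per line")   # about 63
```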

  Experimental television was broadcast in the 1920s, but television did not become practical in the commercial sense until 1947. Since then, it has virtually taken over the field of entertainment.

  In the mid-1950s, two refinements were added. By the use of three types of fluorescent material on the television screen, designed to react to the beam in red, blue, and green colors, color television was introduced. And video tape, a type of recording with certain similarities to the sound track on a movie film, made it possible to reproduce recorded programs or events with better quality than could be obtained from motion-picture film.

  THE TRANSISTOR

  In the 1980s, in fact, the world was in the cassette age. Just as small cassettes can unwind and rewind their tapes to play music with high fidelity (on batteries, if necessary), so that people can walk around or do their housework with earphones pinned to their heads, hearing sounds no one else can hear—so there are video cassettes that can play films of any type through one’s television set, or record programs as they are broadcast for replay afterward.

  The vacuum tube, the heart of all these electronic devices, eventually became a limiting factor. Usually the components of a device are steadily improved in efficiency as time goes on: that is, they are stepped up in power and flexibility and reduced in size and mass (a process sometimes called miniaturization). But the vacuum tube became a bottleneck on the road to miniaturization, for it had to remain large enough to contain a sizable volume of vacuum, or the various components within would leak electricity across a too-small gap.

  It had other shortcomings, too. The tube could break or leak and, in either case, would become unusable. (Tubes were always being replaced in early radio and television sets; and, particularly in the latter case, a live-in repairman seemed all but necessary.) Then, too, the tubes would not work until the filaments were sufficiently heated; hence, considerable current was necessary, and there had to be time for the set to “warm up.” And then, quite by accident, an unexpected solution turned up. In the 1940s, several scientists at the Bell Telephone Laboratories grew interested in the substances known as semiconductors. These substances, such as silicon and germanium, conduct electricity only moderately well, and the problem was to find out why. The Bell Lab investigators discovered that such conductivity as these substances possess was enhanced by traces of impurities mixed with the element in question.

  Let us consider a crystal of pure germanium. Each atom has four electrons in its outermost shell; and in the regular array of atoms in the crystal, each of the four electrons pairs up with an electron of a neighboring germanium atom, so that all the electrons are paired in stable bonds. Because this arrangement is similar to that in diamond, germanium, silicon, and other such substances are called adamantine, from an old word for “diamond.”

  If a little bit of arsenic is introduced into this contented adamantine arrangement, the picture grows more complicated. Arsenic has five electrons in its outermost shell. An arsenic atom taking the place of a germanium atom in the crystal will be able to pair four of its five electrons with the neighboring atoms, but the fifth can find no electron to pair with: it is loose. Now if an electric voltage is applied to this crystal, the loose electron will wander in the direction of the positive electrode. It will not move as freely as would electrons in a conducting metal, but the crystal will conduct electricity better than a nonconductor, such as sulfur or glass.

  This is not very startling, but now we come to a case that is somewhat more odd. Let us add a bit of boron, instead of arsenic, to the germanium. The boron atom has only three electrons in its outermost shell. These three can pair up with the electrons of three neighboring germanium atoms. But what happens to the electron of the boron atom’s fourth germanium neighbor? That electron is paired with a hole! The word hole is used advisedly, because this site, where the electron would find a partner in a pure germanium crystal, does in fact behave like a vacancy. If a voltage is applied to the boron-contaminated crystal, the next neighboring electron, attracted toward the positive electrode, will move into the hole. In doing so, it leaves a hole where it was, and the electron next farther away from the positive electrode moves into that hole. And so the hole, in effect, travels steadily toward the negative electrode, moving exactly like an electron, but in the opposite direction. In short, it has become a conveyor of electric current.
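
  This bucket-brigade motion is easy to act out in a few lines of Python. In the toy model below, the positive electrode lies to the left: each electron hops one site leftward into the vacancy, and the vacancy itself marches steadily to the right.

```python
# Toy model of hole conduction in a 1-D row of bond sites.
# 'e' marks a bonding electron, '_' marks the hole left by a boron atom.
# With the positive electrode on the left, the electron just right of
# the hole hops left into it, so the hole drifts right, toward the
# negative electrode -- the opposite direction from the electrons.
sites = list("eee_eeee")           # hole starts at index 3

for step in range(4):
    h = sites.index("_")
    if h + 1 < len(sites):
        # the neighboring electron moves into the hole...
        sites[h], sites[h + 1] = sites[h + 1], sites[h]
    print("".join(sites))          # ...and the hole has moved one site over
```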

  To work well, the crystal must be almost perfectly pure with just the right amount of the specified impurity (that is, arsenic or boron). The germanium-arsenic semiconductor, with a wandering electron, is said to be n-type (n for “negative”). The germanium-boron semiconductor, with a wandering hole that acts as if it were positively charged, is p-type (p for “positive”).

  Unlike that of ordinary conductors, the electrical resistance of semiconductors drops as the temperature rises, because higher temperatures weaken the hold of atoms on electrons and allow them to drift more freely. (In metallic conductors, the electrons are already free enough at ordinary temperatures. Raising the temperature introduces more random movement and impedes their flow in response to the electric field.) By determining the resistance of a semiconductor, one can measure temperatures too high to be conveniently measured in other fashions. Such temperature-measuring semiconductors are called thermistors.
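
  The text gives no formula, but a standard textbook approximation, the so-called B-parameter equation, captures the falling resistance. The component values in this Python sketch are typical assumed figures, not anything from the book.

```python
import math

# B-parameter model of a thermistor (a standard textbook approximation):
# R(T) = R0 * exp(B * (1/T - 1/T0)). Resistance falls as temperature
# rises, the reverse of a metal. All values below are assumed.
R0 = 10_000.0        # resistance in ohms at the reference temperature
T0 = 298.15          # reference temperature, kelvin (25 C)
B = 3950.0           # material constant, kelvin (a typical assumed value)

def resistance(T_kelvin):
    return R0 * math.exp(B * (1 / T_kelvin - 1 / T0))

def temperature(R_ohms):
    # invert the model: measure the resistance, recover the temperature
    return 1 / (math.log(R_ohms / R0) / B + 1 / T0)

print(f"{resistance(373.15):.0f} ohms at 100 C")     # far below 10000
print(f"{temperature(1000):.0f} K when R = 1000 ohms")
```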

  But semiconductors in combination can do much more. Suppose we make a germanium crystal with one half p-type and the other half n-type. If we connect the n-type side to a negative electrode and the p-type side to a positive electrode, the electrons on the n-type side will move across the crystal toward the positive electrode, while the holes on the p-type side will travel in the opposite direction, toward the negative electrode. Thus, a current flows through the crystal. Now let us reverse the situation: that is, connect the n-type side to the positive electrode and the p-type side to the negative electrode. This time the electrons of the n-side travel toward the positive electrode—which is to say, away from the p-side—and the holes of the p-side similarly move away from the n-side. As a result, the border regions at the junction between the n- and p-sides lose their free electrons and holes, in effect producing a break in the circuit, and no current flows.

  In short, we now have a setup that can act as a rectifier. If we hook up alternating current to this dual crystal, the crystal will pass the current in one direction, but not in the other. Therefore alternating current will be converted to direct current. The crystal serves as a diode, just as a vacuum tube (or valve) does.

  In a way, electronics had come full circle. The tube had replaced the crystal, and now the crystal had replaced the tube—but it was a new kind of crystal, far more delicate and reliable than those that Braun had introduced nearly half a century before.

  The new crystal had impressive advantages over the tube. It required no vacuum, so it could be small. It would not break or leak. Since it worked at room temperature, it required very little current and no warm-up time. It was all advantages and no disadvantages, provided only that it could be made cheaply enough and accurately enough.

  Since the new crystals were solid all the way through, they opened the way to what came to be called solid-state electronics. The new device was named transistor (the suggestion of John Robinson Pierce of the Bell Lab), because it transfers a signal across a resistor (figure 9.14).

  Figure 9.14. Principle of the junction transistor.

  In 1948, William Bradford Shockley, Walter Houser Brattain, and John Bardeen at the Bell Lab went on to produce a transistor that could act as an amplifier. This was a germanium crystal with a thin p-type section sandwiched between two n-type ends. It was in effect a triode with the equivalent of a grid between the filament and the plate. With control of the positive charge in the p-type center, holes could be sent across the junctions in such a manner as to control the electron flow. Furthermore, a small variation in the current of the p-type center would cause a large variation in the current across the semiconductor system. The semiconductor triode could thus serve as an amplifier, just as a vacuum tube triode did. Shockley and his co-workers Brattain and Bardeen received the Nobel Prize in physics in 1956.
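
  To first order, and as a textbook simplification rather than anything stated in the text, the junction transistor behaves as a current amplifier: the current through the sandwich is some factor beta times the small current fed to the middle layer.

```python
import numpy as np

# First-order model of the junction transistor as an amplifier: the
# collector current is roughly beta times the base (middle-layer)
# current. Beta and the signal values are illustrative assumptions.
beta = 100                                          # typical current gain (assumed)

t = np.linspace(0, 1e-3, 500)
base_current = 10e-6 * (1 + np.sin(2 * np.pi * 5e3 * t))   # ~10 uA input swing
collector_current = beta * base_current                     # ~1 mA mirrored output

print(f"base swing: {np.ptp(base_current) * 1e6:.0f} microamperes")
print(f"collector swing: {np.ptp(collector_current) * 1e3:.1f} milliamperes")
```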

  However well transistors might work in theory, their use in practice required certain concomitant advances in technology—as is invariably true in applied science. Efficiency in transistors depended very strongly on the use of materials of extremely high purity, so that the nature and concentration of deliberately added impurities could be carefully controlled.

  Fortunately, William Gardner Pfann introduced the technique of zone refining in 1952. A rod of, let us say, germanium, is placed in the hollow of a circular heating element, which softens and begins to melt a section of the rod. The rod is drawn through the hollow so that the molten zone moves along it. The impurities in the rod tend to remain in the molten zone and are therefore literally washed to the ends of the rod. After a few passes of this sort, the main body of the germanium rod is unprecedentedly pure.
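
  A toy simulation shows why the impurities wash toward one end. The key assumption, standard in treatments of zone melting though not stated here, is a segregation coefficient k less than 1: the solid freezing out at the rear of the molten zone retains only a fraction k of the zone’s impurity concentration.

```python
# One pass of zone refining over a rod of 50 segments, each starting
# with impurity concentration 1.0. The segregation coefficient k and
# the zone length are assumed illustrative values.
k = 0.3                               # frozen solid keeps 30% of the melt's impurity
zone_len = 5
rod = [1.0] * 50

zone = sum(rod[:zone_len])            # melt the first few segments
refined = []
for i in range(zone_len, len(rod)):
    frozen = k * (zone / zone_len)    # segment freezing out at the rear
    refined.append(frozen)
    zone += rod[i] - frozen           # melt the next segment ahead
refined.extend([zone / zone_len] * zone_len)   # the last zoneful freezes as-is

print(f"front of rod: {refined[0]:.2f}")   # far purer than the original 1.0
print(f"rear of rod:  {refined[-1]:.2f}")  # the impurities end up here
```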

  By 1953, tiny transistors were being used in hearing aids, making them so small that they could be fitted inside the ear. In short order, the transistor steadily developed so that it could handle higher frequencies, withstand higher temperatures, and be made ever smaller. Eventually it grew so small that individual transistors were not used. Instead, small chips of silicon were etched microscopically to form integrated circuits that would do what large numbers of tubes would do. In the 1970s, these chips were small enough to be thought of as microchips.

  Such tiny solid-state devices, now universally used, represent perhaps the most astonishing of all the scientific revolutions that have taken place in human history. They have made small radios possible; they have made it possible to squeeze enormous abilities into satellites and probes; most of all, they have made possible the development of ever-smaller, ever-cheaper, and ever-more-versatile computers and, in the 1980s, robots as well. The last two items will be discussed in chapter 17.

 
