by John Browne
Photons not only carry information about the stars from which they originated, they also carry energy. Long before the invention of the telescope, mirrors were used to capture and concentrate the light energy emitted from our own star, the Sun.
SOLAR POWER
In the middle of the seventeenth century, Father Athanasius Kircher, a Jesuit scholar, positioned five mirrors so as to direct sunlight on to a target 30 metres away. The heat produced was so intense that his assistant could not comfortably stand at the target. ‘What terrible phenomena might be produced,’ Kircher wondered, ‘if a thousand mirrors were so employed!’50
Kircher would probably have been familiar with the legend of Archimedes’ burning mirrors. Towards the end of the third century BC, as the Roman ships of General Marcellus advanced towards Syracuse, Archimedes directed his soldiers to raise and tilt their reflective shields towards the armada. The result was dramatic: the concentration of heat was so intense that the ships were set alight. Burning mirrors were among a large armoury of imaginative inventions that Archimedes deployed to defend Syracuse against the Romans. With his knowledge of geometry, Archimedes could calculate how to focus light rays, and also how to aim projectiles to destroy the enemy’s ships before they could get close enough to land to do damage.51
In his sixteenth-century book Pirotechnia, Vannoccio Biringuccio recalls a conversation with a friend who had created a mirror almost 70 centimetres across. One day, while watching an army review in the German city of Ulm, the man entertained himself by using his mirror to direct sunlight on to the shoulder armour of one soldier, creating so much heat that ‘it became almost unbearable to the soldier … so that it kindled his jacket underneath and burned it for him, cooking his flesh to his very great torment’.52
In the sixteenth century, Leonardo da Vinci designed some novel peacetime applications for the Sun’s rays. Ambitious as always, Leonardo planned to build a six-kilometre-wide concave mirror that would focus sunlight on to a central pole to heat water or melt metals.53 As with so many of his inventions, this monstrous device never made it beyond his sketchbook. It was not until the Industrial Revolution in Great Britain that glass and mirror constructions became possible on a bigger, though not quite Leonardo, scale. In his later years, Henry Bessemer built a solar furnace for the smelting of metals. Inside a 10-metre-high tower, a reflector directed sunlight on to a four-square-metre concave mirror in the roof. This focused the light through a lens at the bottom of the tower and into a crucible. He managed to melt copper and vaporise zinc in this furnace, but it was not very efficient and cost a great deal to build. After some years, even Bessemer ‘became disheartened, and abandoned the solar furnace’.54
Across the Atlantic in Philadelphia, an American inventor, Frank Shuman, turned his attention to the problem of concentrating solar power. At the turn of the twentieth century, using the heat-trapping properties of glass, he raised the temperature of water in something he called his solar hot box to just below boiling point, even when there was snow on the ground.55 ‘I am sure it will be an entire success in all dry tropical countries,’ he wrote. ‘It would be a success here on any sunshiny day; but you know how the weather has been.’56 In Egypt, where the weather was rather more reliable, his solar hot boxes powered steam engines used to pump water for irrigation. Another inventor, Aubrey Eneas, built a series of giant cone reflectors, some over 65 square metres in area, to collect solar radiation in the intensely sunny states of California and Arizona. Eneas was also inspired by the parabolic trough reflectors invented by John Ericsson, the Swedish-American engineer who built the Monitor ironclad during the American Civil War, but who also devoted the last twenty years of his life to building solar machines. Both Bessemer and Ericsson, pioneers in the production and use of iron, were concerned that the coal supplies they used to smelt iron ore and power steam engines would run out, and so they sought alternative sources of energy. Eneas’s plan was to provide a cheap energy source for those living in the desert, far from traditional coal supplies. By increasing the scale of their systems, both inventors had hoped to produce cheaper solar power; but even at scale, these concentrated solar power systems could not provide electricity which was competitive with that generated from conventional sources. And that remains the case today. On the arid plains near Fuentes de Andalucía, Spain, there are more than 2,500 mirrors, each with an area of 120 square metres, directing sunlight towards a tower placed at their centre. 
In the tower, molten salt is heated to almost 600 degrees centigrade. The molten salt can be stored in tanks until it is needed, when it can be used to drive steam turbines and generate electricity. But without very large subsidies, even this modern plant is uncompetitive.
Solar power was then largely forgotten until shortly after the Second World War, when scientists at Bell Laboratories in New Jersey began to investigate some unusual electrical properties of silicon. The research of Gerald Pearson, Daryl Chapin and Calvin Fuller led, in 1954, to the creation of the first silicon photovoltaic cell.
Photovoltaics
Daryl Chapin had been tasked by Bell Laboratories with developing a new portable power source which would power their telephone systems in tropical climates, where traditional dry-cell batteries degraded quickly. He began to investigate wind machines, steam engines and solar energy as possibilities. Rather than trying to capture the Sun’s energy using mirrors and heat boxes, Chapin decided to investigate another medium for harnessing solar energy, known as the photovoltaic effect.
Alexandre-Edmond Becquerel, the father of Henri Becquerel (of radiation fame), discovered the photovoltaic effect in 1839.57 Becquerel placed two brass plates in a conductive liquid and shone a light on them. He noticed that the light caused an electric current to flow in the solution. If this current could be captured, the energy of the Sun could be harnessed.
Over a hundred years later, scientists had still only succeeded in harnessing one two-hundredth of incoming sunlight using photovoltaic cells. This did not provide enough power for Chapin’s needs, and so he began to search for alternatives. Word of Chapin’s work reached Gerald Pearson and Calvin Fuller, two other scientists working at Bell Labs, who were experimenting with the unusual electrical properties of silicon semiconductors. They thought the materials they had been developing could be used to create a photovoltaic cell. To their surprise, their idea not only worked but also produced a cell five times better than anything else available.58
In April 1954, they announced the invention of the Bell Solar Battery, demonstrating to journalists how it would be used to power a radio transmitter. It quickly began to prove its value in providing energy to Bell’s developing markets in the tropics. Solar cells got their first big break, though, when they were used in the American Vanguard space programme in 1958. While the vehicle’s chemical batteries were rapidly depleted, the solar unit continued to function years after the launch. In satellites, solar cells had found their first major market.59
Even today, solar cells are often the most cost-effective way of generating energy for remote regions, avoiding the costly infrastructure of power lines and fuel transport. Their versatility allows them to be installed as single units in energy-poor communities far from any grid. In 2001, I visited Indonesia to see BP’s solar rural electrification project, at the time the fifth largest of its kind in the world. Small-scale silicon solar cell arrays were used to generate electricity for almost 40,000 homes in village communities. Electric water pumps now irrigate crops, and electric lighting has been brought into homes, schools and medical centres. Solar cells have also indirectly improved education and learning. As I saw, children could study not only during the day but also at night.
Unlike fossil fuels, which occur in pockets dotted about the Earth, the Sun shines everywhere. In one year, more energy reaches the Earth’s surface from the Sun than will ever be extracted from all sources of coal, oil, natural gas and uranium. In one day, the Earth’s surface receives 130,000 times the total world demand for electricity. Despite this, solar energy still accounts for only a thousandth of total global electricity production. Part of the reason is that harnessing the energy of the Sun is notoriously inefficient. A small electric current is produced each time a photon of light is absorbed by a silicon solar cell: the photon’s energy is transferred to an electron and its positive counterpart, called a ‘hole’, in the cell.60 However, while all of a photon’s energy is transferred when it is absorbed, not many photons are absorbed in the first place. A photon must have just the right amount of energy for this to happen, and only a fraction do. As a result, even the very best laboratory-based solar cells convert only 40 per cent of the light that falls on them into electricity. The cells used in normal commercial applications convert between 10 and 20 per cent. That still makes them several times more efficient than the first solar cells created at Bell Labs in 1954. That improvement, in the space of only sixty years, is remarkable; after billions of years of evolution, plants, which convert light into stored energy through photosynthesis, have achieved an efficiency of only around 3 per cent.
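These efficiency figures translate directly into usable power. As a rough illustration only (the figure of 1,000 watts of peak sunlight per square metre is a standard test-condition assumption, not something stated here), a short calculation shows what each class of cell delivers:

```python
# Back-of-envelope illustration of the efficiency figures quoted above.
# Assumption: peak sunlight delivers roughly 1,000 watts per square metre
# (a standard test value, not taken from the text).
PEAK_IRRADIANCE_W_PER_M2 = 1000

efficiencies = {
    "best laboratory cell (~40%)": 0.40,
    "commercial cell, upper end (~20%)": 0.20,
    "commercial cell, lower end (~10%)": 0.10,
    "photosynthesis (~3%)": 0.03,
}

for label, eff in efficiencies.items():
    watts = PEAK_IRRADIANCE_W_PER_M2 * eff
    print(f"{label}: ~{watts:.0f} W per square metre")
```

On these assumptions, a square metre of typical commercial cells yields roughly 100 to 200 watts in full sun, while the same area of leaves stores only around 30 watts’ worth of energy.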
The greatest barrier to the success of the solar cell, however, has not been technical, but economic: solar cells have produced costly electricity because they have been expensive to manufacture. This is beginning to change as new technologies, such as using cheaper offcuts from the silicon used in chip manufacture, begin to mature. The cost of manufacturing, too, has fallen rapidly, in large part because of the economies of scale obtained by Chinese producers in China’s growing market. Nonetheless, electricity generated from solar cells has not yet reached grid parity, the point at which cells would be economically competitive with traditional non-renewable fuel sources. That point, though, is getting closer. As more solar cells are made they generally become cheaper; in 2011, manufacturing capacity increased by almost 75 per cent on top of an average annual growth rate of 45 per cent over the past decade. This continued growth will be vital in the transition to a low-carbon energy economy.61
When Bell Labs announced the invention of the silicon solar cell in 1954, the New York Times wrote that it marked ‘the beginning of a new era, leading eventually to the realization of one of mankind’s most cherished dreams, the harnessing of the almost limitless energy of the sun for the uses of civilization’.62 That dream may yet become a reality, and it will be one free of greenhouse gas emissions. There is still a long way to go before the scale of solar energy comes close to that of fossil fuels or nuclear energy but, of all renewable energy sources, solar has so far proved itself to be the most promising.
COMPUTERS
Anchorage, Alaska, 1970: red lights were flashing wildly on the control panel. The core memory of the computer had just crashed. Back then a computer crash was literally a crash, the spinning mechanical disks grinding together to a halt. The constant restarts made running even the simplest program extremely arduous. It would be a long night. I was working as a petroleum engineer in my first real job. Because of my experiences at Cambridge University I was, in those days, one of a handful of people who knew how to make the solution of engineering problems easier by using a computer. I was working to an extreme deadline. My boss was about to go to a meeting with some very powerful men from some even more powerful US oil companies. They were to discuss how much of the giant Prudhoe Bay field each of the participating companies owned. He wanted me to find an answer and to make sure that he, from a then small company, impressed the other bigger companies with his technical prowess.
It was not a straightforward problem. The companies owned leases of different pieces of land on top of the field and so what each one owned depended critically on the areal distribution of the oil, and that turned out to be very uneven. A long night turned into an early breakfast when, after a lot of stops and starts, a solution appeared and I went to the office. Overnight working turns out not to be a new phenomenon.
I was doing this in Anchorage’s only ‘computer bureau’, run by Millet Keller, a graduate of Stanford University. It contained only a single computer, an IBM 1130, the state of the art at the time. During the day, Millet would run commercial programs written in the computer language COBOL, creating financial accounts for local banks. At night, I was able to run my own programs written in FORTRAN, a popular language for scientific and engineering work. Using the advanced IBM technology, which performed ‘as many as 120,000 additions in a second’, I was able to model BP’s Alaskan oilfields to help in their development.63 BP has been an innovator in computer technology since its earliest days. In the early twentieth century, it developed methods to calculate the most efficient routes for oil tankers to travel. But the invention of the IBM 1130 provided a huge leap in the processing power available to the industry. As a geophysicist, Millet was interested in the work I was doing and would often stay up to work with me, watching over the temperamental machine or feeding in the next punch card.
The IBM 1130 was the first computer I had encountered outside Cambridge University. It was less powerful than Titan, the university’s monstrous mainframe computer, and far smaller, cheaper and more accessible. Titan had filled an entire room and required a whole laboratory team to work it. IBM sought to take computing technology out of that sort of setting to a wide variety of industries for which computing was becoming a necessity.
Today, exploring and drilling for oil without computers is unimaginable. By the time BP was producing oil from the Thunder Horse field, whose namesake production platform nearly sank as Hurricane Dennis passed close by in 2005, it had been able to use seismic and other data to construct a three-dimensional visualisation of the reservoir, several kilometres below the surface.64 That allowed teams of scientists and engineers to work together and actually see the implications of a decision, such as where to place a well, on the long-term health of the reservoir. Processing the huge quantities of data needed to do this has been made possible by the extraordinary growth of a computer’s capability over the last sixty years. And at the core of all this technology, back in Anchorage in 1971 as in the high-performance computer age of today, is a simple, tiny device made from silicon: the transistor.
Silicon transistor
In the late 1940s, William Shockley and his team in the solid state physics group at Bell Labs were exploring the unusual electrical properties of a group of elements called semiconductors. Bell’s telephone networks were still operated using mechanical switches, and signals were amplified using vacuum tubes.65 These were slow and unreliable, and so the director of research was tasked with finding an electronic alternative. Shockley thought the answer could be found in semiconductors, from which he hoped to create an amplifying and switching device.66 Although the theory behind his design seemed flawless, the device did not work. His colleague John Bardeen, a brilliant theoretical physicist, then set his mind to the problem. He realised that electrons were becoming trapped at the surface of the semiconductor, stopping current from flowing through the device.67 Working with Walter Brattain, whose skilled hands matched and complemented Bardeen’s brain, Bardeen was able to overcome the surface trapping and, in doing so, turn Shockley’s idea into a practical reality: the world’s first transistor.68
At the end of June 1948, Bell Labs announced the invention of the transistor by Shockley, Bardeen and Brattain; all three would later share the Nobel Prize in Physics for the breakthrough. At the press conference, they explained that the transistor had the potential to replace the vacuum tube, the device then used to make radios and rudimentary computers. Like the vacuum tube, the transistor could amplify electrical signals and act as an on-off switch, but it could do so much faster, in a much smaller volume and using much less power.69 At the time, the media thought all this unimportant and made little fuss. The New York Times ‘carried the big news on page 46, buried at the end of a column of radio chitchat’.70 The transistor’s potential to change the world had yet to be grasped by the wider public. After all, journalists must have wondered, what impact could these devices and their abstract functions have on our everyday lives? Even today, few people make a connection between these minute pieces of silicon and the complex functioning of computers, with which we create images, manage communications and generate sounds.
Any computational problem can be broken down into a set of simple logical steps, such as adding two numbers or choosing between one and the other. These steps are carried out by ‘logic gates’, the basic building blocks of digital circuits. Logic gates are made from transistors and other simple components, using the transistors as switches to send signals. Most logic gates have two on-off switches which together act as inputs. Each input can be either off or on, known as ‘0’ or ‘1’, and the gate’s output is determined by these two inputs together with the type of gate. For example, an ‘AND’ gate gives an output of 1 only if both the first ‘and’ second inputs are 1. All other input combinations (0 and 1; 1 and 0; 0 and 0) result in an output of 0. A computer, at a fundamental level, is simply a large number of these transistor-based logic gates linked together to produce a complex output. The capability and complexity of the computer rises as more and more gates are connected.
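The behaviour of these gates is easy to sketch in code. The snippet below is a toy model, not a description of how gates are physically built: each gate is simply a function on the binary inputs 0 and 1, reproducing the AND truth table described above and showing how linking gates together yields a more complex output.

```python
# Toy model of logic gates: each gate maps binary inputs (0 = off,
# 1 = on) to a single binary output.

def AND(a, b):
    # Output is 1 only when both inputs are 1.
    return 1 if (a, b) == (1, 1) else 0

def OR(a, b):
    # Output is 1 when at least one input is 1.
    return 1 if 1 in (a, b) else 0

def NOT(a):
    # Invert a single input.
    return 1 - a

def XOR(a, b):
    # A more complex output built by linking simpler gates together:
    # 1 when exactly one of the inputs is 1.
    return AND(OR(a, b), NOT(AND(a, b)))

# Truth table for the AND gate: only the combination (1, 1) gives 1.
for a in (0, 1):
    for b in (0, 1):
        print(f"{a} AND {b} -> {AND(a, b)}")
```

Real computers connect millions of such gates, but the principle is exactly this: simple switches, composed, produce complex behaviour.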
Transistors allow this to happen because they are very small, very cheap and use only a little power. Those features allow enormous numbers of them to be put together in one computer. It is, however, their speed that makes computers really useful. A transistor’s on-off switching is controlled by a small electric current. Its tiny size and the speed of the electrons within it enable it to be turned on and off well over 100 billion times each second. If you used your finger, it would take around 2,000 years to turn a light switch on and off that many times. Silicon’s semiconducting properties make it ideal for these switches; other semiconductors, such as germanium, were used in the earliest transistors, and today transistors can be made from a range of semiconducting materials. None of them, however, rivals silicon’s combination of high performance and low cost.71
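The finger comparison can be checked with simple arithmetic. Assuming a person could flick a switch about twice a second (an assumption of mine; the text gives no rate), matching 100 billion switchings takes:

```python
# Rough check of the light-switch comparison. Assumption: a person can
# flick a switch about twice per second (the text gives no rate).
switchings = 100e9            # "well over 100 billion" per second for a transistor
flicks_per_second = 2         # assumed human flicking rate
seconds_per_year = 365.25 * 24 * 3600

years = switchings / flicks_per_second / seconds_per_year
print(f"~{years:,.0f} years of non-stop flicking")
```

With these assumptions the answer comes out at around 1,600 years, the same order of magnitude as the ‘around 2,000 years’ quoted; a slightly slower flicking rate gives that figure exactly.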