I then define the “Energularity,” or Energy Singularity, as the time when humanity becomes a Type I civilization according to the Kardashev scale, that is, a civilization that has essentially achieved total mastery of the resources of its home planet. Based on our current power needs, humanity is at around 0.72 on the Kardashev scale, but it could reach Type I status in about a century, or earlier. The “Energularity” is somewhat similar to the concepts of the “Technological Singularity” (related to an intelligence explosion) and the “Methuselarity” (related to longevity extension). However, the “Energularity” emphasizes the exponential increase in energy consumption by our civilization on Earth, before we begin colonizing the Solar System and beyond.
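A rough way to see where a figure like 0.72 comes from is Carl Sagan’s interpolation formula for the Kardashev scale, K = (log10 P − 6) / 10, where P is the civilization’s power use in watts. The short sketch below is only an illustration of that formula; the ~18.5 TW value assumed for current world primary power is an approximate figure of mine, not one taken from this chapter.

```python
import math

def kardashev(power_watts: float) -> float:
    """Carl Sagan's interpolation of the Kardashev scale:
    K = (log10(P) - 6) / 10, with P in watts.
    Type I corresponds to roughly 10**16 W."""
    return (math.log10(power_watts) - 6) / 10

# Illustrative assumption: current world primary power use of ~18.5 TW.
world_power = 18.5e12
print(f"K today  : {kardashev(world_power):.2f}")   # ~0.73
print(f"K Type I : {kardashev(1e16):.2f}")          # 1.00
```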
The energy of the mind is the essence of life.
Aristotle, ca. 350 BC
Humans and Energy
E = mc². Energy equals mass times the speed of light squared.
Albert Einstein, 1905
Experts have advanced many theories about what makes humans different from other animal species. Humans have one of the largest brain-to-body mass ratios and the highest encephalization quotient among all mammals. However, what caused this? Some scientists have written about the development of bipedalism and others about the development of language, both of which probably date from over 2 million years ago. Other scientists have considered the use of tools and the creation of technology as characteristically human. However, certain animals, including most primates, also exhibit signs of bipedalism, language-like communication, and even tool making, at least at a very basic level. But no other animal seems to use fire as humans do. Since fire was the first form of external energy generation (extrasomatic energy) adopted by humans, I believe that the way we use energy has also shaped our own evolution. Ever since our early ancestors began to harness the power of fire, we have become increasingly different from all other species.
There is evidence that our prehuman ancestors were cooking food with fire almost 2 million years ago, although fire was probably not used in a controlled fashion until about 500,000 years ago by Homo erectus. The ability to control fire was a dramatic change in the habits of the early prehumans that eventually became Homo sapiens sapiens about 100,000 years ago. Making fire to generate heat and light allowed people to cook food, increasing the variety and availability of nutrients. Fire also helped people stay warm in cold weather, enabling them to live in cooler climates, and it helped to keep nocturnal predators at bay.
The development of extrasomatic energy sources like fire has been fundamental to the growth of human civilization, and energy use seems set to keep increasing almost exponentially into the future. Humans have used different forms of extrasomatic energy throughout the ages, starting with fire and including animal power, windmills, hydropower, and different types of biomass until the 18th century. The sources of such extrasomatic energy have changed according to time and place, and such changes have accelerated in the last two centuries.
The evolution of energy sources in the United States of America (USA) is a good example of the changes in extrasomatic energy generation. Until the end of the 18th century, most energy production in the USA came from burning wood and other biomass. This began changing slowly with the growth of the coal industry during the 19th century. Another transition corresponded to the development of the oil industry in the 20th century, and still another “wave” can be identified with the relative growth of the gas industry in the early 21st century. Each of these waves has been shorter than the previous one, since the successive energy transitions have been happening faster and faster, as shown in Figure 1.
Similar energy “waves” can also be identified in most other parts of the world. These transitions show a clear “decarbonization” trend, going from fuels with more carbon to fuels with more hydrogen: first wood, then coal, then oil, then gas, and perhaps eventually pure hydrogen and solar energy. In fact, solar energy itself is based precisely on the nuclear fusion of hydrogen into helium, and hydrogen is the lightest and most abundant chemical element in the universe, constituting almost 75% of the estimated elemental mass (and about 90% of all atoms by number) across the known universe. Thus, we can describe such energy waves not only as the “decarbonization” but also as the “hydrogenization” of our energy sources.
Figure 1: Energy “Waves” in the USA
Source: Based on Cordeiro (2011)
Solar energy has been growing exponentially during the last two decades, and most industry forecasts indicate that this trend will continue in the decades ahead. In fact, solar energy is already reaching “grid parity” in some markets, which means that it has become cheaper than fossil fuels, first in “sunny” markets and eventually even in places with lower insolation. This exponential trend will radically transform the energy matrix over the coming decades, when solar energy will become the largest single source of energy for our civilization. One of the leading experts on solar cells, Emanuel Sachs, has shown how the solar industry has been growing rapidly and is now reaching “grid parity” in many markets, as shown in Figure 2.
Figure 2: Growth of the Solar Industry
Such a transition from fossil and scarce fuels to more renewable and abundant energy sources might not be easy, but it has already started. Thus, the age of hydrocarbons seems to be approaching its end, as Saudi Arabian politician Sheikh Ahmed Zaki Yamani famously said: “The Stone Age did not end for lack of stone, and the Oil Age will end long before the world runs out of oil.”
The “Enernet”
Not only will atomic power be released, but someday we will harness the rise and fall of the tides and imprison the rays of the sun.
—Thomas Alva Edison, 1921
Futurist Richard Buckminster Fuller was one of the earliest proponents of renewable energy sources (mostly solar energy, including the wind and wave energy ultimately produced by the Sun), which he incorporated into his designs and work in the middle of the 20th century. He claimed that “there is no energy crisis, only a crisis of ignorance.” Decades ago, his research demonstrated that humanity could satisfy 100% of its energy needs while completely phasing out fossil fuels and nuclear fission energy, if required.
Fuller also developed the concept of “energy slaves” to show how the human condition has been rapidly improving, partly thanks to the vast amounts of cheap energy available to more and more people. Instead of human slaves, we now have “energy slaves”: a way of expressing how advancing technology produces more goods and services for everybody, without human slaves working for just a few kings and queens, as in the past. In his World Energy Map, based on his famous Dymaxion Map, Fuller estimated that every person had about 38 “energy slaves” in 1950. Thanks to continuous technological advances, Fuller extrapolated that the number of “energy slaves” would keep increasing, which was also very important to his idea of “accelerating acceleration.” Furthermore, Fuller believed that humanity urgently needed a global energy network, and he first suggested the concept of an interconnected global grid linked to distributed renewable resources in his World Game simulation in the 1970s. Fuller concluded that this strategy was the highest priority of the World Game simulation and could positively transform humanity by increasing the global standard of living and connecting everybody around the planet.
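The arithmetic behind the “energy slave” idea is simple: divide a person’s extrasomatic power use by the sustained power output of a human laborer. The sketch below is only an illustration; the 75 W figure for a working human and the ~2.4 kW figure for current per-capita primary power are rough assumptions of mine, and Fuller’s own 1950 accounting used different data and conventions.

```python
def energy_slaves(per_capita_power_w: float, human_output_w: float = 75.0) -> float:
    """Number of 'energy slaves' per person: extrasomatic power use divided by
    the sustained mechanical output of one human laborer.
    The 75 W default is an illustrative assumption, not Fuller's figure."""
    return per_capita_power_w / human_output_w

# Illustrative assumption: ~2.4 kW of primary power per person today
# (roughly 600 EJ per year shared among ~8 billion people).
print(f"Energy slaves per person today: {energy_slaves(2400):.0f}")  # ~32
```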
The creation of a global energy network has many advantages and has indeed been revisited in different ways by other experts, like electrical engineer Robert Metcalfe, inventor of Ethernet and founder of 3Com. Metcalfe coined the term “Enernet” to describe such an energy network based on its similarities with the Internet. He has said that the Enernet “needs to have an architecture, probably needs some layers, standards, and storage. The Internet has lots of storage here and there; the current grid doesn’t have much storage at all.”
Today, the storage problem is a major obstacle for the Enernet and future smart grids. For example, energy and space expert Gregg Maryniak explains that “our present fixation with energy generation ignores the ‘time value of energy’. Instead of concentrating all of our efforts on generation, we need to pay increased attention to energy storage.” Fortunately, new developments like liquid-metal batteries have the potential to scale up quickly and solve the storage problem during the next two decades. Localized renewable resources like solar and wind will create more decentralized systems, where local storage will also be a priority.
The Enernet, just like the Internet, will create major positive network effects. A larger and more efficient energy network with good storage will help balance energy requirements across different regions. The first smart grids have already improved the efficiency and resilience of energy transmission and distribution. Future technological developments are expected to continue improving energy systems all the way from generation to final use. China is currently developing some advanced smart grids, and India will probably follow shortly. Since China and India are two huge energy markets, their plans are just the beginning of more advanced energy systems in the following decades. Even pessimistic observers have been surprised by the remarkable changes in energy infrastructure in China, which will likely be followed by India (especially after the 2012 blackouts) and hopefully even Africa.
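One way to make “network effects” concrete is Metcalfe’s own heuristic that a network’s value grows roughly with the square of the number of connected nodes. The toy numbers below are purely illustrative assumptions applied to interconnected grids, not figures from Metcalfe or from this chapter.

```python
def metcalfe_value(nodes: int, value_per_link: float = 1.0) -> float:
    """Metcalfe's heuristic: value scales with the number of possible
    pairwise connections, n*(n-1)/2, i.e. roughly with n**2."""
    return value_per_link * nodes * (nodes - 1) / 2

# Purely illustrative: interconnecting 10 regional grids versus 100.
for n in (10, 100):
    print(f"{n:>3} interconnected grids -> relative value {metcalfe_value(n):,.0f}")
# Ten times more nodes yields roughly a hundred times more value.
```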
According to Metcalfe, the Enernet will bring fundamental changes in the way we produce and consume energy, from generation to transmission, storage, and final utilization. The Enernet should create a smart energy grid with distributed resources, efficient systems, high redundancy, and high storage capacity. The Enernet should also help the transition to clean energy and renewable sources, with new players and entrepreneurs taking the place of traditional “big oil” and utilities, and old monolithic producers ceding control to energy prosumers (producers and consumers). Finally, we will continue the transition from expensive energy to cheap energy in a world where energy will be recognized as an abundant resource. Table 1 summarizes these major changes made possible by the Enernet.
Table 1: Some Possibilities of the Enernet: From the Past to the Future
Source: Cordeiro based on Metcalfe (2007)

Past → Future
Dumb grid → Smart grid
Centralized sources → Distributed sources
Inefficient systems → Efficient systems
Low redundancy → High redundancy
Low storage capacity → High storage capacity
Dirty energy → Clean energy
Slow response → Fast response
Fossil fuels → Renewable sources
Traditional “big oil” and utilities → New players and entrepreneurs
Producers control → Prosumers control
Energy conservation → Energy abundance
Expensive energy → Cheap energy
Metcalfe has talked about the transition from conservation and expensive information and bandwidth to abundance and almost free Internet services: “When we set out to build the Internet, we began with conserving bandwidth, with compression, packet switching, multiplex terminals, and buffer terminals aimed at conserving bandwidth.” Additionally, Metcalfe has explained that energy and power under the Enernet might follow the same exponential growth that information and bandwidth have followed on the Internet since its beginning:
Now, decades later, are we using less bandwidth now than before? Of course not. We are using a million times more bandwidth. If the Internet is any guide, when we are done solving energy, we are not going to use less energy but much, much more—a squanderable abundance, just like we have in computation.
Having abundant and almost free energy might seem hard to believe today, but that has been the trend for many other commodities. As economist Julian Simon said:
During all of human existence, people have worried about running out of natural resources: flint, game animals, what have you. Amazingly, all the evidence shows that exactly the opposite has been true. Raw materials—all of them—are becoming more available rather than more scarce.
It is also worth considering an analogy between energy and telecommunications. The modern telecommunications industry began with very expensive telegraphs in the early 19th century, followed by costly fixed-line telephones in the late 19th century. The first transatlantic phone calls cost over $100 for a few minutes in the early 20th century. Today, most national and international calls cost nearly zero; in fact, Skype and similar services have revolutionized telecommunications by allowing virtually free calls as long as there is an Internet connection. Niklas Zennström, the Swedish entrepreneur who cofounded the KaZaA peer-to-peer file-sharing system and later also cofounded the Skype peer-to-peer Internet telephony network in Estonia, is famous for saying: “The telephone is a 100-year-old technology. It’s time for a change. Charging for phone calls is something you did last century.”
Telephone rates have decreased very rapidly, while the use of telecommunications has increased continuously. The rapid fall in telephone rates can also be compared with the long-term cost reductions in energy (together with the exponential growth of both information and energy usage). For example, economist William Nordhaus calculated the price of light, measured in work hours per 1,000 lumen-hours (the lumen is a measure of the flux of light), throughout human history. He compared estimates for the wood fires in the caves of Peking man, the animal- and vegetable-fat lamps of Neolithic people, and the sesame-oil lamps of the Babylonians. After reviewing the labor-time costs of candles, oil lamps, kerosene lamps, town gas, and electric lamps, Nordhaus concluded that there has been an exponential decrease in lighting costs, particularly during the last 100 years. However, some of these outstanding cost reductions, a ten-thousand-fold decline in the real price of illumination, have not been captured by the standard price indices, as economist James Bradford DeLong has emphasized. Figure 3 shows the staggering reduction of lighting costs through human history. The exponential decrease in energy cost has been even larger during the last century (while energy production has also increased exponentially).
Figure 3: Price of Light (Work Hours per 1000 Lumen Hours)
Source: Cordeiro based on Nordhaus (1997) and DeLong (2000)
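To make Nordhaus’s unit concrete, the labor price of light can be computed as the money cost of producing 1,000 lumen-hours divided by the prevailing hourly wage. The numbers in the sketch below (lamp output, fuel cost, wage) are purely hypothetical placeholders chosen for illustration, not Nordhaus’s estimates.

```python
def work_hours_per_1000_lumen_hours(lumens: float,
                                    fuel_cost_per_hour: float,
                                    hourly_wage: float) -> float:
    """Labor price of light: cost of running a lamp long enough to
    deliver 1,000 lumen-hours, expressed in hours of work."""
    hours_needed = 1000.0 / lumens            # burn time for 1,000 lumen-hours
    money_cost = hours_needed * fuel_cost_per_hour
    return money_cost / hourly_wage

# Hypothetical examples (all numbers are placeholders, not Nordhaus's data):
oil_lamp = work_hours_per_1000_lumen_hours(lumens=15, fuel_cost_per_hour=0.05,
                                            hourly_wage=0.10)
led_bulb = work_hours_per_1000_lumen_hours(lumens=800, fuel_cost_per_hour=0.001,
                                            hourly_wage=20.0)
print(f"Oil lamp : {oil_lamp:.1f} work hours per 1,000 lumen-hours")    # ~33
print(f"LED bulb : {led_bulb:.8f} work hours per 1,000 lumen-hours")    # ~6e-5
```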
Another example of such accelerating change can be seen in the semiconductor industry. The exponential increase of capabilities, and the corresponding reduction of costs, is commonly called Moore’s Law in semiconductor manufacturing. Caltech professor and VLSI (very-large-scale integration) pioneer Carver Mead named this eponymous law around 1970 after scientist and businessman Gordon Moore (cofounder of Intel with fellow inventor Robert Noyce). According to Moore’s original observation in 1965, the number of transistors per chip was doubling every year, a rate he later revised to a doubling roughly every two years; the period is often quoted as about 18 months when the accompanying gains in performance are also counted. Figure 4 shows Moore’s Law with a logarithmic scale on the vertical axis. A further increase in the rate of change can also be identified from the late 1990s.
Moore’s Law and similar conjectures (since they are not really physical laws) have been observed for many processes: the growing number of transistors per integrated circuit, the decreasing cost per transistor, the increasing density at minimum cost per transistor, the growing computing performance per unit cost, the falling power consumption of newer semiconductors, the exponential decline of hard disk storage cost per unit of information, the accelerating expansion of RAM capacity, the rapidly improving network capacity, and the exponential growth of pixels per dollar. In fact, in the specific case of flash memories, the Korean company Samsung has followed Hwang’s Law, named after a vice president of Samsung, which states that the amount of memory in such devices doubles every 12 months. Concerning his eponymous law, Gordon Moore himself said that it should remain valid for at least the next two decades or so, until transistors reach the size of single nanometers.
Figure 4: Moore’s Law
Source: Cordeiro adapted from Intel (2015)
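As a simple illustration of the compounding behind Moore’s Law, the sketch below projects transistor counts under a fixed doubling period. The starting point (the Intel 4004 with about 2,300 transistors in 1971) is a well-known historical figure, and the two-year doubling period follows the common statement of the law; the projection itself is an assumption-driven extrapolation, not data taken from Figure 4.

```python
def transistors(year: int, base_year: int = 1971, base_count: float = 2300,
                doubling_years: float = 2.0) -> float:
    """Project transistor counts per chip assuming a fixed doubling period,
    as in the common statement of Moore's Law: N(t) = N0 * 2**((t - t0)/T)."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

# Extrapolation from the Intel 4004 (about 2,300 transistors in 1971):
for year in (1971, 1991, 2011):
    print(f"{year}: ~{transistors(year):,.0f} transistors per chip")
```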
The 20th century saw a dramatic increase in energy production and consumption in the developed countries. During the 1950s, the peaceful development of nuclear fission contributed to the rapid growth of energy production and to the reduction of energy costs. Naval officer and businessman Lewis Strauss, during his tenure as Chairman of the US Atomic Energy Commission, said:
Our children will enjoy in their homes electrical energy too cheap to meter… It is not too much to expect that our children will know of great periodic regional famines in the world only as matters of history, will travel effortlessly over the seas and under them and through the air with a minimum of danger and at great speeds, and will experience a lifespan far longer than ours, as disease yields and man comes to understand what causes him to age.
Strauss was actually referring not to uranium fission reactors but to the hydrogen fusion reactors that were being considered at the time, even though they were never built. His prediction was ahead of its time, but it may finally turn into reality soon. Thus, energy and the Enernet will eventually become “too cheap to meter,” just as information and the Internet have basically become today.
The “Energularity”
It is important to realize that in physics today, we have no knowledge of what energy is.
—Richard Feynman, 1964