The Singularity Is Near: When Humans Transcend Biology
41. A number of models and simulations have been created based on analyses of individual neurons and interneuronal connections. Tomaso Poggio writes, “One view of the neuron is that it is more like a chip with thousands of logical-gates-equivalents rather than a single threshold element.” Tomaso Poggio, private communication to Ray Kurzweil, January 2005.
See also T. Poggio and C. Koch, “Synapses That Compute Motion,” Scientific American 256 (1987): 46–52.
C. Koch and T. Poggio, “Biophysics of Computational Systems: Neurons, Synapses, and Membranes,” in Synaptic Function, G. M. Edelman, W. E. Gall, and W. M. Cowan, eds. (New York: John Wiley and Sons, 1987), pp. 637–97.
Another set of detailed neuron-level models and simulations is being created at the University of Pennsylvania’s Neuroengineering Research Lab based on reverse engineering brain function at the neuron level. Dr. Leif Finkel, head of the laboratory, says, “Right now we’re building a cellular-level model of a small piece of visual cortex. It’s a very detailed computer simulation which reflects with some accuracy at least the basic operations of real neurons. [My colleague Kwabena Boahen] has a chip that accurately models the retina and produces output spikes that closely match real retinae.” See http://nanodot.org/article.pl?sid=01/12/18/1552221.
Reviews of these and other models and simulations at the neuron level indicate that an estimate of 10^3 calculations per neural transaction (a single transaction involving signal transmission and reset on a single dendrite) is a reasonable upper bound. Most simulations use considerably less than this.
42. Plans for Blue Gene/L, the second generation of Blue Gene computers, were announced in late 2001. The new supercomputer, planned to be fifteen times faster than today’s supercomputers and one twentieth the size, is being built jointly by the National Nuclear Security Administration’s Lawrence Livermore National Laboratory and IBM. In 2002, IBM announced that open-source Linux had been chosen as the operating system for the new supercomputers. By July 2003, the innovative processor chips for the supercomputer, which are complete systems on chips, were in production. “Blue Gene/L is a poster child for what is possible with the system-on-a-chip concept. More than 90 percent of this chip was built from standard blocks in our technology library,” according to Paul Coteus, one of the managers of the project (Timothy Morgan, “IBM’s Blue Gene/L Shows Off Minimalist Server Design,” The Four Hundred, http://www.midrangeserver.com/tfh/tfh120103-story05.html). By June 2004, the Blue Gene/L prototype systems appeared for the first time on the list of top ten supercomputers. IBM press release, “IBM Surges Past HP to Lead in Global Supercomputing,” http://www.research.ibm.com/bluegene.
43. This type of network is also called peer-to-peer, many-to-many, and “multihop.” In it, nodes in the network can be connected to all the other nodes or to a subset, and there are multiple paths through meshed nodes to each destination. These networks are highly adaptable and self-organizing. “The signature of a mesh network is that there is no central orchestrating device. Instead, each node is outfitted with radio communications gear and acts as a relay point for other nodes.” Sebastian Rupley, “Wireless: Mesh Networks,” PC Magazine, July 1, 2003, http://www.pcmag.com/article2/0,1759,1139094,00.asp; Robert Poor, “Wireless Mesh Networks,” Sensors Online, February 2003, http://www.sensorsmag.com/articles/0203/38/main.shtml; Tomas Krag and Sebastian Büttrich, “Wireless Mesh Networking,” O’Reilly Wireless DevCenter, January 22, 2004, http://www.oreillynet.com/pub/a/wireless/2004/01/22/wirelessmesh.html.
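As an aside, the relay behavior these sources describe is easy to illustrate. The sketch below (topology and node names are invented for illustration) uses breadth-first search to find one multihop path through a mesh that has no central orchestrating device:

```python
from collections import deque

# A toy mesh: each node relays for its neighbors; there is no central hub.
mesh = {
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B", "E"],
    "D": ["B", "E"],
    "E": ["C", "D"],
}

def route(mesh, src, dst):
    """Breadth-first search: find one multihop path through relay nodes."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in mesh[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no path: the mesh is partitioned

assert route(mesh, "A", "D") == ["A", "B", "D"]
```

If a relay drops out, the same search simply finds another path through the remaining nodes, which is the self-organizing property the note emphasizes.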
44. Carver Mead, founder of more than twenty-five companies and holder of more than fifty patents, is pioneering the new field of neuromorphic electronic systems, circuits modeled on the brain and nervous system. See Carver A. Mead, “Neuromorphic Electronic Systems,” IEEE Proceedings 78.10 (October 1990): 1629–36. His work led to the computer touch pad and the cochlear chip used in digital hearing aids. His 1999 start-up company Foveon makes analog image sensors that imitate the properties of film.
45. Edward Fredkin, “A Physicist’s Model of Computation,” Proceedings of the Twenty-sixth Rencontre de Moriond, Tests of Fundamental Symmetries (1991): 283–97, http://digitalphilosophy.org/physicists_model.htm.
46. Gene Frantz, “Digital Signal Processing Trends,” IEEE Micro 20.6 (November/December 2000): 52–59, http://csdl.computer.org/comp/mags/mi/2000/06/m6052abs.htm.
47. In 2004 Intel announced a “right hand turn” switch toward dual-core (more than one processor on a chip) architecture after reaching a “thermal wall” (or “power wall”) caused by too much heat from ever-faster single processors: http://www.intel.com/employee/retiree/circuit/righthandturn.htm.
48. R. Landauer, “Irreversibility and Heat Generation in the Computing Process,” IBM Journal of Research and Development 5 (1961): 183–91, http://www.research.ibm.com/journal/rd/053/ibmrd0503C.pdf.
49. Charles H. Bennett, “Logical Reversibility of Computation,” IBM Journal of Research and Development 17 (1973): 525–32, http://www.research.ibm.com/journal/rd/176/ibmrd1706G.pdf; Charles H. Bennett, “The Thermodynamics of Computation—a Review,” International Journal of Theoretical Physics 21 (1982): 905–40; Charles H. Bennett, “Demons, Engines, and the Second Law,” Scientific American 257 (November 1987): 108–16.
50. Edward Fredkin and Tommaso Toffoli, “Conservative Logic,” International Journal of Theoretical Physics 21 (1982): 219–53, http://digitalphilosophy.org/download_documents/ConservativeLogic.pdf. Edward Fredkin, “A Physicist’s Model of Computation,” Proceedings of the Twenty-sixth Rencontre de Moriond, Tests of Fundamental Symmetries (1991): 283–97, http://www.digitalphilosophy.org/physicists_model.htm.
51. Knight, “Digital Image Stored in Single Molecule,” referring to Khitrin et al., “Nuclear Magnetic Resonance Molecular Photography”; see note 30 above.
52. Ten billion (10^10) humans at 10^19 cps each is 10^29 cps for all human brains; 10^42 cps is greater than this by ten trillion (10^13).
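As a quick check, the note’s exponent arithmetic can be reproduced directly in Python:

```python
# Note 52's arithmetic, using exact integer powers of ten.
humans = 10**10            # ten billion humans
cps_per_brain = 10**19     # estimated cps of one human brain
all_brains = humans * cps_per_brain
assert all_brains == 10**29             # 10^29 cps for all human brains
assert 10**42 // all_brains == 10**13   # 10^42 cps exceeds this by ten trillion
```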
53. Fredkin, “Physicist’s Model of Computation”; see notes 45 and 50 above.
54. Two such gates are the Interaction Gate, a two-input, four-output universal reversible-logic gate, and the Feynman Gate, a two-input, three-output universal reversible-logic gate. Both images are from ibid., p. 7.
55. Ibid., p. 8.
56. C. L. Seitz et al., “Hot-Clock nMOS,” Proceedings of the 1985 Chapel Hill Conference on VLSI (Rockville, Md.: Computer Science Press, 1985), pp. 1–17, http://caltechcstr.library.caltech.edu/archive/00000365; Ralph C. Merkle, “Reversible Electronic Logic Using Switches,” Nanotechnology 4 (1993): 21–40; S. G. Younis and T. F. Knight, “Practical Implementation of Charge Recovering Asymptotic Zero Power CMOS,” Proceedings of the 1993 Symposium on Integrated Systems (Cambridge, Mass.: MIT Press, 1993), pp. 234–50.
57. Hiawatha Bray, “Your Next Battery,” Boston Globe, November 24, 2003, http://www.boston.com/business/technology/articles/2003/11/24/your_next_battery.
58. Seth Lloyd, “Ultimate Physical Limits to Computation,” Nature 406 (2000): 1047–54.
Early work on the limits of computation was done by Hans J. Bremermann in 1962: Hans J. Bremermann, “Optimization Through Evolution and Recombination,” in M. C. Yovits, C. T. Jacobi, C. D. Goldstein, eds., Self-Organizing Systems (Washington, D.C.: Spartan Books, 1962), pp. 93–106.
In 1984 Robert A. Freitas Jr. built on Bremermann’s work in Robert A. Freitas Jr., “Xenopsychology,” Analog 104 (April 1984): 41–53, http://www.rfreitas.com/Astro/Xenopsychology.htm#SentienceQuotient.
59. π × maximum energy (10^17 kg × meter^2/second^2) / (6.6 × 10^-34 joule-seconds) = ~5 × 10^50 operations/second.
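The figure follows from taking the rest energy of roughly one kilogram (E = mc^2 ≈ 10^17 joules) as the maximum energy; a quick reproduction:

```python
import math

E = 1.0e17    # joules: rest energy of ~1 kg (mc^2 ~ 9e16, rounded up)
h = 6.6e-34   # Planck's constant, joule-seconds
ops_per_second = math.pi * E / h
# ~4.8e50, i.e., on the order of 5 x 10^50 operations per second
assert 4.7e50 < ops_per_second < 4.8e50
```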
60. 5 × 10^50 cps is equivalent to 5 × 10^21 (5 billion trillion) human civilizations (each requiring 10^29 cps).
61. Ten billion (10^10) humans at 10^16 cps each is 10^26 cps for human civilization. So 5 × 10^50 cps is equivalent to 5 × 10^24 (5 trillion trillion) human civilizations.
62. This estimate makes the conservative assumption that we’ve had ten billion humans for the past ten thousand years, which is obviously not the case. The actual number of humans has been increasing gradually over that period to reach about 6.1 billion in 2000. There are 3 × 10^7 seconds in a year, and 3 × 10^11 seconds in ten thousand years. So, using the estimate of 10^26 cps for human civilization, human thought over ten thousand years is equivalent to certainly no more than 3 × 10^37 calculations. The ultimate laptop performs 5 × 10^50 calculations in one second. So simulating ten thousand years of ten billion humans’ thoughts would take it about 10^-13 seconds, which is one ten-thousandth of a nanosecond.
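The arithmetic in this note can be reproduced in a few lines:

```python
cps_civilization = 1e26    # note 61's estimate for all human thought
seconds = 3e7 * 1e4        # 3 x 10^11 seconds in ten thousand years
total_calculations = cps_civilization * seconds   # 3 x 10^37 calculations
laptop_cps = 5e50          # the ultimate laptop (note 59)
simulation_time = total_calculations / laptop_cps
# ~6e-14 seconds: on the order of 10^-13 s, one ten-thousandth of a nanosecond
assert 5e-14 < simulation_time < 7e-14
```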
63. Anders Sandberg, “The Physics of Information Processing Superobjects: Daily Life Among the Jupiter Brains,” Journal of Evolution & Technology 5 (December 22, 1999), http://www.transhumanist.com/volume5/Brains2.pdf.
64. See note 62 above; 10^42 cps is a factor of 10^8 less than 10^50 cps, so one ten-thousandth of a nanosecond becomes 10 microseconds.
65. See http://e-drexler.com/p/04/04/0330drexPubs.html for a list of Drexler’s publications and patents.
66. At the rate of $10^12 per year and 10^26 cps per thousand dollars ($10^3), we get 10^35 cps per year in the mid-2040s. The ratio of this to the 10^26 cps for all of the biological thinking in human civilization is 10^9 (one billion).
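A quick reproduction of this note’s arithmetic:

```python
spending_per_year = 10**12          # dollars per year (the note's assumed rate)
cps_per_thousand_dollars = 10**26   # price-performance in the mid-2040s
cps_per_year = (spending_per_year // 10**3) * cps_per_thousand_dollars
assert cps_per_year == 10**35
# a billion times the ~10^26 cps of all biological human thought
assert cps_per_year // 10**26 == 10**9
```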
67. In 1984 Robert A. Freitas proposed a logarithmic scale of “sentience quotient” (SQ) based on the computational capacity of a system. In a scale that ranges from −70 to 50, human brains come out at 13. The Cray 1 supercomputer comes out at 9. Freitas’s sentience quotient is based on the amount of computation per unit mass. A very fast computer with a simple algorithm would come out with a high SQ. The measure I describe for computation in this section builds on Freitas’s SQ and attempts to take into consideration the usefulness of the computation. So if a simpler computation is equivalent to the one actually being run, then we base the computational efficiency on the equivalent (simpler) computation. Also in my measure, the computation needs to be “useful.” Robert A. Freitas Jr., “Xenopsychology,” Analog 104 (April 1984): 41–53, http://www.rfreitas.com/Astro/Xenopsychology.htm#SentienceQuotient.
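Freitas’s SQ is the base-10 logarithm of a system’s information-processing rate (bits per second) divided by its mass (kilograms). A minimal sketch; the rate and mass figures below are illustrative assumptions chosen only to reproduce the note’s SQ values, not Freitas’s exact inputs:

```python
import math

def sentience_quotient(bits_per_second, mass_kg):
    """Freitas's SQ: log10 of processing rate per unit mass."""
    return math.log10(bits_per_second / mass_kg)

# Illustrative figures (assumptions, not Freitas's published inputs):
human_sq = sentience_quotient(1.4e13, 1.4)   # a ~1.4 kg brain
cray_sq = sentience_quotient(5e12, 5e3)      # a ~5-tonne Cray 1
assert round(human_sq) == 13
assert round(cray_sq) == 9
```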
68. As an interesting aside, engravings on the side of small rocks did in fact represent an early form of computer storage. One of the earliest forms of written language, cuneiform, which was developed in Mesopotamia circa 3000 B.C., used pictorial markings on stones to store information. Agricultural records were maintained as cuneiform markings on stones placed in trays, and organized in rows and columns. These marked stones were essentially the first spreadsheet. One such cuneiform stone record is a prized artifact in my collection of historical computers.
69. One thousand (10^3) bits is less than the theoretical capacity of the atoms in the stone to store information (estimated at 10^27 bits) by a factor of 10^24.
70. 1 cps (10^0 cps) is less than the theoretical computing capacity of the atoms in the stone (estimated at 10^42 cps) by a factor of 10^42.
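The factors in notes 69 and 70 check out directly:

```python
# Note 69: storage gap between the engraving and the stone's atoms.
assert 10**27 // 10**3 == 10**24
# Note 70: computation gap between ~1 cps and the atoms' theoretical capacity.
assert 10**42 // 10**0 == 10**42
```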
71. Edgar Buckingham, “Jet Propulsion for Airplanes,” NACA report no. 159, in Ninth Annual Report of NACA-1923 (Washington, D.C.: NACA, 1924), pp. 75–90. See http://naca.larc.nasa.gov/reports/1924/naca-report-159/.
72. Belle Dumé, “Microscopy Moves to the Picoscale,” PhysicsWeb, June 10, 2004, http://physicsweb.org/article/news/8/6/6, referring to Stefan Hembacher, Franz J. Giessibl, and Jochen Mannhart, “Force Microscopy with Light-Atom Probes,” Science 305.5682 (July 16, 2004): 380–83. This new “higher harmonic” force microscope, developed by University of Augsburg physicists, uses a single carbon atom as a probe and has a resolution that is at least three times better than that of traditional scanning tunneling microscopes. How it works: as the tungsten tip of the probe is made to oscillate at subnanometer amplitudes, the interaction between the tip atom and the carbon atom produces higher harmonic components in the underlying sinusoidal-wave pattern. The scientists measured these signals to obtain an ultrahigh-resolution image of the tip atom that showed features just 77 picometers (thousandths of a nanometer) across.
73. Henry Fountain, “New Detector May Test Heisenberg’s Uncertainty Principle,” New York Times, July 22, 2003.
74. Mitch Jacoby, “Electron Moves in Attoseconds,” Chemical and Engineering News 82.25 (June 21, 2004): 5, referring to Peter Abbamonte et al., “Imaging Density Disturbances in Water with a 41.3-Attosecond Time Resolution,” Physical Review Letters 92.23 (June 11, 2004): 237401.
75. S. K. Lamoreaux and J. R. Torgerson, “Neutron Moderation in the Oklo Natural Reactor and the Time Variation of Alpha,” Physical Review D 69 (2004): 121701–6, http://scitation.aip.org/getabs/servlet/GetabsServlet?prog=normal&id=PRVDAQ000069000012121701000001&idtype=cvips&gifs=yes; Eugenie S. Reich, “Speed of Light May Have Changed Recently,” New Scientist, June 30, 2004, http://www.newscientist.com/news/news.jsp?id=ns99996092.
76. Charles Choi, “Computer Program to Send Data Back in Time,” UPI, October 1, 2002, http://www.upi.com/view.cfm?StoryID=20021001-125805-3380r; Todd Brun, “Computers with Closed Timelike Curves Can Solve Hard Problems,” Foundations of Physics Letters 16 (2003): 245–53. Electronic edition, September 11, 2002, http://arxiv.org/PS_cache/gr-qc/pdf/0209/0209061.pdf.
Chapter Four: Achieving the Software of Human Intelligence: How to Reverse Engineer the Human Brain
1. Lloyd Watts, “Visualizing Complexity in the Brain,” in D. Fogel and C. Robinson, eds., Computational Intelligence: The Experts Speak (Piscataway, N.J.: IEEE Press/Wiley, 2003), http://www.lloydwatts.com/wcci.pdf.
2. J. G. Taylor, B. Horwitz, and K. J. Friston, “The Global Brain: Imaging and Modeling,” Neural Networks 13, special issue (2000): 827.
3. Neil A. Busis,“Neurosciences on the Internet,” http://www.neuroguide.com; “Neuroscientists Have Better Tools on the Brain,” Bio IT Bulletin, http://www.bio-itworld.com/news/041503_report2345.html; “Brain Projects to Reap Dividends for Neurotech Firms,” Neurotech Reports, http://www.neurotechreports.com/pages/brainprojects.html.
4. Robert A. Freitas Jr., Nanomedicine, vol. 1, Basic Capabilities, section 4.8.6, “Noninvasive Neuroelectric Monitoring” (Georgetown, Tex.: Landes Bioscience, 1999), pp. 115–16, http://www.nanomedicine.com/NMI/4.8.6.htm.
5. Chapter 3 analyzed this issue; see the section “The Computational Capacity of the Human Brain.”
6. Speech-recognition research and development at Kurzweil Applied Intelligence, which I founded in 1982 and which is now part of ScanSoft (formerly Kurzweil Computer Products).
7. Lloyd Watts, U.S. Patent Application, U.S. Patent and Trademark Office, 20030095667, May 22, 2003, “Computation of Multi-sensor Time Delays.” Abstract: “Determining a time delay between a first signal received at a first sensor and a second signal received at a second sensor is described. The first signal is analyzed to derive a plurality of first signal channels at different frequencies and the second signal is analyzed to derive a plurality of second signal channels at different frequencies. A first feature is detected that occurs at a first time in one of the first signal channels. A second feature is detected that occurs at a second time in one of the second signal channels. The first feature is matched with the second feature and the first time is compared to the second time to determine the time delay.” See also Nabil H. Farhat, U.S. Patent Application 20040073415, U.S. Patent and Trademark Office, April 15, 2004, “Dynamical Brain Model for Use in Data Processing Applications.”
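The patent matches features across frequency channels to find the delay; the core idea of estimating a delay between two sensors can be sketched with a plain brute-force cross-correlation (a simplified stand-in for Watts’s method, with invented sample signals):

```python
def estimate_delay(sig_a, sig_b, max_lag):
    """Return the lag (in samples) at which sig_b best matches sig_a,
    found by brute-force cross-correlation over [-max_lag, max_lag]."""
    best_lag, best_score = 0, float("-inf")
    for lag in range(-max_lag, max_lag + 1):
        if lag >= 0:
            pairs = zip(sig_a, sig_b[lag:])   # sig_b lags sig_a by `lag`
        else:
            pairs = zip(sig_a[-lag:], sig_b)  # sig_b leads sig_a
        score = sum(a * b for a, b in pairs)
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# A pulse arriving at the second sensor three samples later:
sensor_1 = [0, 0, 1, 2, 3, 2, 1, 0, 0, 0]
sensor_2 = [0, 0, 0, 0, 0, 1, 2, 3, 2, 1]
assert estimate_delay(sensor_1, sensor_2, 5) == 3
```

Watts’s approach instead detects and matches features channel by channel across frequencies, which is more robust for real auditory signals; the sketch above only captures the underlying time-comparison step.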
8. I estimate the compressed genome at about thirty to one hundred million bytes (see note 57 for chapter 2); this is smaller than the object code for Microsoft Word and much smaller than the source code. See Word 2003 system requirements, October 20, 2003, http://www.microsoft.com/office/word/prodinfo/sysreq.mspx.
9. Wikipedia, http://en.wikipedia.org/wiki/Epigenetics.
10. See note 57 in chapter 2 for an analysis of the information content in the genome, which I estimate to be 30 to 100 million bytes, therefore less than 10^9 bits. See the section “Human Memory Capacity” in chapter 3 (p. 126) for my analysis of the information in a human brain, estimated at 10^18 bits.
11. Marie Gustafsson and Christian Balkenius, “Using Semantic Web Techniques for Validation of Cognitive Models against Neuroscientific Data,” AILS 04 Workshop, SAIS/SSLS Workshop (Swedish Artificial Intelligence Society; Swedish Society for Learning Systems), April 15–16, 2004, Lund, Sweden, www.lucs.lu.se/People/Christian.Balkenius/PDF/Gustafsson.Balkenius.2004.pdf.
12. See discussion in chapter 3. In one useful reference, when modeling neuron by neuron, Tomaso Poggio and Christof Koch describe the neuron as similar to a chip with thousands of logical gates. See T. Poggio and C. Koch, “Synapses That Compute Motion,” Scientific American 256 (1987): 46–52. Also C. Koch and T. Poggio, “Biophysics of Computational Systems: Neurons, Synapses, and Membranes,” in Synaptic Function, G. M. Edelman, W. E. Gall, and W. M. Cowan, eds. (New York: John Wiley and Sons, 1987), pp. 637–97.
13. On Mead, see http://www.technology.gov/Medal/2002/bios/Carver_A._Mead.pdf. Carver Mead, Analog VLSI and Neural Systems (Reading, Mass.: Addison-Wesley, 1986).
14. See note 172 in chapter 5 for an algorithmic description of a self-organizing neural net and note 175 in chapter 5 for a description of a self-organizing genetic algorithm.
15. See Gary Dudley et al., “Autonomic Self-Healing Systems in a Cross-Product IT Environment,” proceedings of the IEEE International Conference on Autonomic Computing, New York City, May 17–19, 2004, http://csdl.computer.org/comp/proceedings/icac/2004/2114/