Know This


Edited by John Brockman


  The now-abandoned space shuttle was a reusable spacecraft but failed to reduce launch costs and violated one of the cardinal rules of transport: Separate the passengers from the freight. Someday we will look back and recognize one of the other roadblocks to an efficient launch system: the failure to separate the propellant from the fuel.

  There is no reason the source of reaction mass (propellant) has to be the same as the source of energy (fuel). Burning a near-explosive mix of chemicals makes the process inherently dangerous and places a hard limit on specific impulse (Isp), a measure of how much impulse a given amount of propellant can deliver. It is also the reason that the original objective of military rocketry—“to make the target more dangerous than the launch site”—took so long to achieve.
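
  To make that limit concrete: the velocity change a rocket can extract from its propellant is governed by the Tsiolkovsky rocket equation. The minimal Python sketch below uses purely illustrative numbers rather than figures for any particular vehicle, but it shows why Isp caps performance for chemical rockets.

    import math

    def delta_v(isp_seconds, wet_mass_kg, dry_mass_kg):
        """Tsiolkovsky rocket equation: velocity change available from a
        given specific impulse (Isp) and mass ratio."""
        g0 = 9.80665  # standard gravity, m/s^2
        return isp_seconds * g0 * math.log(wet_mass_kg / dry_mass_kg)

    # Illustrative only: even a very good chemical Isp (~450 s) and an
    # aggressive 10:1 mass ratio yield roughly 10 km/s -- barely the
    # delta-v needed for orbit, which is why chemical combustion is such
    # a hard ceiling.
    print(delta_v(450, 10_000, 1_000))  # ~10,160 m/s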

  The launch business has been crippled, so far, by a vicious circle that has limited the market to expensive payloads—astronauts, military satellites, communication satellites, deep space probes—consigned by customers who can afford to throw the launch vehicle away after a single use. Reusable rockets are the best hope of breaking this cycle and moving forward on a path leading to low-cost, high-duty-cycle launch systems where the vehicle carries inert propellant and the energy source remains on the ground.

  All the advances in autonomous control, combustion engineering, and computational fluid dynamics that allowed those two rockets (Blue Origin’s New Shepard and SpaceX’s Falcon 9) to make a controlled descent after only a handful of attempts are exactly what will be needed to develop a new generation of launch vehicles that leave chemical combustion behind to ascend on a pulsed energy beam.

  We took an important first step in this direction in 2015.

  The Space Age Takes Off . . . and Returns to Earth Again

  Peter Schwartz

  Futurist, business strategist; senior vice president for global government relations and strategic planning, Salesforce.com; author, Inevitable Surprises

  As an adolescent in the fifties, along with many others, I dreamt of the Space Age. We knew what the Space Age was supposed to look like: silver, bullet-shaped rockets rising into the sky on a column of flame and, on their return, descending on an identical column of flame to land gently at the spaceport. Those dreams led me to a degree in astronautical engineering at RPI.

  The reality of spaceflight turned out to be very different. We built multistage booster rockets that were thrown away after every launch. Bringing them back turned out to be too hard. Carrying enough fuel to power the landing, and managing the turbulent flow of the rocket exhaust as the vehicle slowly descends on that violent, roaring column of flaming gas, proved too great a challenge. Indeed, even the efforts to build vertical takeoff-and-landing jet fighters in the fifties failed, for similar reasons.

  The disposable launch vehicle made the Space Age too costly for most applications. Getting any mass into orbit costs many thousands of dollars per pound. Imagine what an airline ticket would cost if the airline threw away the aircraft after every flight. The booster vehicle generally costs a few hundred million dollars, about the cost of a modern jetliner, and we get only one use out of it. Neither other countries, such as Russia and China, nor companies such as Boeing and Lockheed could solve the technical problems of a reusable booster.

  The space shuttle was intended to meet this challenge by being reusable. Unfortunately, the cost of refurbishing it after each launch was so great that the shuttle launch was far more expensive than a disposable launcher flight. When I worked on mission planning for the space shuttle at the Stanford Research Institute in the early seventies, the assumption was that the cost of each launch would be $118 per pound ($657 in current dollars), justifying many applications, with each shuttle flying once a month. Instead, the shuttles could fly only a couple of times per year, at a cost of $27,000 per pound, meaning most applications were off the table. So space was inaccessible except for those whose needs justified the huge costs either of a shuttle or a single-use booster: the military, telecommunications companies, and some government-funded high-cost science.

  But in the last few weeks of 2015, all that changed, as the teams from two startup rocketry companies, Blue Origin and SpaceX, brought their launchers back to a vertical landing at the launch site. Both of their rockets were able to control that torrent of flaming gas to produce a gentle landing, ready to be prepared for another launch. Provided that we can do this on a regular basis, the economics of spaceflight have suddenly and fundamentally changed. It won’t be cheap yet, but many more applications will be possible. And the costs will continue to fall with experience.

  While both companies solved the hard problem of controlling the vehicle at slow speed on a column of turbulent gas, the SpaceX achievement will be more consequential in the near future. The Blue Origin rocket could fly to an altitude of only 60 miles before returning to Earth and is intended mainly for tourism. The SpaceX vehicle, Falcon 9, could (and did) launch a second stage that achieved Earth orbit. And SpaceX already ferries supplies, and may soon be carrying astronauts, to the International Space Station. The ability to reuse their most expensive component will reduce their launch costs by as much as 90 percent, and over time those costs will decline. Boeing and Lockheed should be worried.

  Of course, the Blue Origin rocket, New Shepard, will also continue to improve. Blue Origin’s real competition is Virgin Galactic, which has had some difficulties lately, including a crash that killed one of its pilots. Both companies are competing for the space-tourism market, and (for now) Blue Origin appears to be ahead.

  We have turned a corner in spaceflight. We can dream of a Space Age again. Life in orbit becomes imaginable. Capturing asteroids to mine and human interplanetary exploration have both become much more likely. The idea that many of us living today will be able to see Earth from space is no longer a distant dream.

  How Widely Should We Draw the Circle?

  Scott Aaronson

  David J. Bruton Centennial Professor of computer science, University of Texas at Austin; author, Quantum Computing Since Democritus

  For fifteen years, popular-science readers have gotten used to breathless claims about commercial quantum computers being just around the corner. As far as I can tell, 2015 marked a turning point. For the first time, the most hard-nosed experimentalists are talking about integrating forty or more high-quality quantum bits (qubits) into a small programmable quantum computer—not in the remote future but in the next few years. If built, such a device will probably still be too small to do anything useful, but I honestly don’t care.

  The point is, forty qubits are enough to do something that computer scientists are pretty sure would take trillions of steps to simulate using today’s computers. They’ll suffice to disprove the skeptics, to show that nature really does put this immense computing power at our disposal—just as the physics textbooks have implied since the late 1920s. (And if quantum computing turns out not to be possible, for some deep reason? To me that’s unlikely, but it would be even more exciting, since it would mean a revolution in physics.)
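
  A rough sense of scale, under the standard assumption that a classical simulation stores one complex amplitude per basis state (a back-of-the-envelope sketch, not a claim about any particular simulator):

    # Memory needed to hold the full state vector of an n-qubit system,
    # at 16 bytes per complex amplitude (e.g., NumPy's complex128).
    for n in (20, 30, 40):
        amplitudes = 2 ** n
        gib = amplitudes * 16 / 2 ** 30
        print(f"{n} qubits: {amplitudes:,} amplitudes ~ {gib:,.2f} GiB")

    # Roughly: 20 qubits need ~16 MiB, 30 qubits ~16 GiB, and 40 qubits
    # ~16 TiB -- over a trillion amplitudes, touched on every gate.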

  So, is imminent quantum supremacy the “most interesting recent [scientific] news”? I can’t say that with any confidence. The trouble is, which news we find interesting depends on how widely we draw the circle around our own hobbyhorses. And some days, quantum computing seems to me to fade into irrelevance, next to the precarious state of the Earth. Perhaps when people look back a century from now, they’ll say that the most important science news of 2015 was that the West Antarctic Ice Sheet was found to be closer to collapse than even the alarmists predicted. Or, just possibly, they’ll say the most important news was that in 2015 the AI-risk movement finally went mainstream.

  This movement posits that superhuman artificial intelligence is likely to be built within the next century, and that the biggest problem facing humanity today is to ensure that when the AI arrives, it will be “friendly” to human values (rather than, say, razing the solar system for more computing power to serve its inscrutable ends). I like to tease my AI-risk friends that I’ll be more worried about the impending AI singularity when my Wi-Fi stays working for more than a week. But who knows? At least this scenario, if it panned out, would render the melting glaciers pretty much irrelevant.

  Instead of expanding my “circle of interest” to encompass the future of civilization, I could also contract it, around my fellow theoretical computer scientists. In that case, 2015 was the year that László Babai of the University of Chicago announced the first “provably fast” algorithm for one of the central problems in computing: graph isomorphism. This problem is to determine whether two networks of nodes and links are “isomorphic” (that is, whether they become the same if you relabel the nodes). For networks with n nodes, the best previous algorithm—which Babai also helped to discover, thirty years ago—took a number of steps that grew exponentially with the square root of n.

  The new algorithm takes a number of steps that grows exponentially with a power of log(n) (a rate that’s called “quasi-polynomial”). Babai’s breakthrough probably has no applications, since the existing algorithms were already fast enough for any networks that would ever arise in practice. But for those who are motivated by an unquenchable thirst to know the ultimate limits of computation, this is arguably the biggest news so far of the 21st century.
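
  The gulf between those two growth rates is easy to tabulate. The sketch below uses exp(√n) and exp(log²n) as stand-ins for the old and new bounds (the exact constants in Babai’s result are not the point here) and reports the number of digits in each step count:

    import math

    # Number of digits (log10) in the step counts, so nothing overflows.
    for n in (10**4, 10**6, 10**8):
        old_digits = math.sqrt(n) / math.log(10)       # ~ exp(sqrt(n))
        new_digits = math.log(n) ** 2 / math.log(10)   # ~ exp((log n)^2)
        print(f"n={n:,}: exp(sqrt(n)) has ~{old_digits:,.0f} digits, "
              f"the quasi-polynomial bound ~{new_digits:,.0f}")

    # For n = 100,000,000 that is ~4,343 digits versus ~147 -- both huge
    # numbers of steps, but on utterly different scales.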

  Drawing the circle even more tightly, in “quantum query complexity”—a tiny subfield of quantum computing I cut my teeth on as a student—it was discovered this past year that there are Boolean functions that a quantum computer can evaluate in less than the square root of the number of input accesses that a classical computer needs, a gap that had stood as the record since 1996. Even if useful quantum computers are built, this result will have zero applications, since the functions that achieve this separation are artificial monstrosities, constructed only to prove the point. But it excited me: It told me that progress is possible, that the seemingly eternal puzzles that drew me into research as a teenager do occasionally get solved. So damned if I’m not going to tell you about it.

  At a time when the glaciers are melting, how can I justify getting excited about a new type of computer that will be faster for certain specific problems—let alone about an artificial function for which the new type of computer gives you a slightly bigger advantage? The “obvious” answer is that basic research could give us new tools with which to tackle the woes of civilization, as it’s done many times before. Indeed, we don’t need to go as far as an AI singularity to imagine how.

  By letting us simulate quantum physics and chemistry, quantum computers might spark a renaissance in materials science, and allow (for example) the design of higher-efficiency solar panels. For me, though, the point goes beyond that, and has to do with the dignity of the human race. If, in millions of years, aliens come across the ruins of our civilization and dig up our digital archives, I’d like them to know that before humans killed ourselves off, we at least managed to figure out that the graph-isomorphism problem is solvable in quasi-polynomial time and that there exist Boolean functions with superquadratic quantum speedups. So I’m glad to say that they will know these things, and that now you do, too.

  A New Algorithm Showing What Computers Can and Cannot Do

  John Naughton

  Columnist, the Observer; Emeritus Professor of the Public Understanding of Technology, Open University, U.K.; Emeritus Fellow, Wolfson College, Cambridge; author, From Gutenberg to Zuckerberg

  The most interesting news came late in 2015—on November 10th, to be precise, when László Babai of the University of Chicago announced that he had come up with a new algorithm for solving the graph-isomorphism problem. This algorithm appears to be much more efficient than the previous “best” algorithm, which has ruled for over thirty years. Since graph isomorphism is one of the great unsolved problems in computer science, if Babai’s claim stands up to the kind of intensive peer-review to which it is now being subjected, then the implications are fascinating—not least because we may need to rethink our assumptions about what computers can and cannot do.

  The graph-isomorphism problem seems deceptively simple: how to tell when two different graphs (the mathematician’s term for networks) are really the same, in the sense that there’s an “isomorphism”—a one-to-one correspondence between their nodes that preserves each node’s connections—between them. It is easy to state but difficult to solve, since even small graphs can be made to look different just by moving their nodes around. The standard way to check for isomorphism is to consider all possible ways of matching up the nodes in one graph with those in the other. That’s tedious but feasible for very small graphs; it rapidly gets out of hand as the number of nodes increases. To compare two graphs with just ten nodes, for example, you’d have to check more than 3.6 million (that is, 10 factorial) possible matchings. For graphs with 100 nodes, you’re looking at a number far bigger than the number of atoms in the observable universe. And in a Facebook age, networks with millions of nodes are commonplace.
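
  For the curious, here is what that brute-force matching looks like as a minimal Python sketch (the function and the toy graphs are invented for illustration; this is the factorial-time search just described, not Babai’s algorithm):

    from itertools import permutations

    def isomorphic(edges_a, edges_b, n):
        """Brute-force isomorphism test for two graphs on nodes 0..n-1:
        try every one-to-one relabelling (n! of them) and check whether
        it maps one edge set exactly onto the other."""
        a = {frozenset(e) for e in edges_a}
        b = {frozenset(e) for e in edges_b}
        for perm in permutations(range(n)):  # n! candidate matchings
            if {frozenset((perm[u], perm[v])) for u, v in a} == b:
                return True
        return False

    # Two drawings of the same 4-cycle with the nodes shuffled:
    print(isomorphic([(0, 1), (1, 2), (2, 3), (3, 0)],
                     [(0, 2), (2, 1), (1, 3), (3, 0)], 4))  # True
    # With 10 nodes the loop already faces 10! = 3,628,800 matchings.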

  From the point of view of practical computing, factorials are really bad news, because the running time of a factorial algorithm quickly escalates into billions of years. So the only practical algorithms are those whose running times grow polynomially (e.g., as n-squared or n-cubed, where n is the number of nodes), because polynomial running times increase much more slowly than factorial or exponential ones.
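
  A few rows of actual numbers make the contrast stark (purely illustrative; n here is just the node count):

    import math

    # Polynomial versus factorial step counts as the node count n grows.
    for n in (10, 20, 30):
        print(f"n={n}: n^2={n**2:,}  n^3={n**3:,}  n!={math.factorial(n):,}")

    # n=10: n! is already 3,628,800; n=20: ~2.4 x 10^18; n=30: ~2.65 x 10^32.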

  The tantalizing thing about Babai’s algorithm is that its running time is neither factorial nor polynomial but something in between, which he calls “quasi-polynomial.” It’s not yet clear what this will mean in practice, but the fuss in the mathematics and computer-science community suggests that while the new algorithm might not be the Holy Grail, it is nevertheless significantly more efficient than what’s gone before.

  If that turns out to be the case, what are the implications? Well, first, there may be some small but discrete benefits. Babai’s breakthrough could conceivably help with other kinds of computationally difficult problems. For example, genomics researchers have been trying for years to find an efficient algorithm for comparing the long strings of chemical letters within DNA molecules. This is a problem analogous to that of graph isomorphism, and any advance in that area may have benefits for genetic research.

  But the most important implication of Babai’s work may be inspirational—reawakening mathematicians’ interest in other kinds of hard problems that currently lie beyond the reach of even the most formidable computational resources. The classic example is the public-key encryption on which the security of all online transactions depends. It relies on an asymmetry: It is relatively easy to take two huge prime numbers and multiply them together to produce an even larger number. But—provided the original primes are large enough—it is computationally difficult (in the sense that it would take an impracticable length of time) to factorize the product, that is, to determine the two original numbers from which it was calculated. If an efficient factoring algorithm were ever found, our collective security would evaporate and we would need to go back to the drawing board.
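
  The asymmetry is easy to see in a toy sketch (the tiny primes below are chosen purely for illustration; real systems use primes hundreds of digits long, for which the trial-division loop would run essentially forever):

    def factor_by_trial_division(n):
        """Naive factoring: try candidate divisors up to sqrt(n). The
        number of steps grows with the smaller prime factor, which is
        what makes factoring impractical at real key sizes."""
        d = 2
        while d * d <= n:
            if n % d == 0:
                return d, n // d
            d += 1
        return n, 1

    p, q = 104723, 104729              # two small primes (toy values)
    product = p * q                    # the easy direction: instantaneous
    print(factor_by_trial_division(product))  # the hard direction: (104723, 104729)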

  Designer Humans

  Mark Pagel

  Professor of evolutionary biology, Reading University, U.K.; author, Wired for Culture

  The use of CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats) technologies for targeted gene-editing means that an organism’s genome can be cheaply cut and then edited at any location. The implications of such a technology are potentially so great that “crisper” has become a widely heard term outside science, being the darling of radio and television talk shows. And why not? All of a sudden, scientists and biotechnologists have a way of making designer organisms. The technology’s first real successes—in yeast, fish, flies, and even some monkeys—have already been trumpeted.

  But of course what is on everyone’s mind is its use in humans. By modifying genes in prospective parents’ egg or sperm cells, CRISPR could produce babies “designed” to have some desired trait (or to lack an undesirable one). By editing genes early enough in embryonic development—a time when only a few cells are the progenitors of all the cells in our bodies—the same design features can be obtained in the adult.

  Just imagine: no more Huntington’s chorea, no more sickle-cell anemia, no more cystic fibrosis, no more of a raft of other heritable disorders. But what about desirable traits—eye and hair color, personality, temperament, even intelligence? The first of these is already within CRISPR’s grasp. The others are probably only partly determined by genes and, even then, potentially by scores or even hundreds of genes. But who’s to say we won’t figure out even those cases someday? The startling progress that genomics and biotechnology researchers have made over the last twenty years is not slowing down, and there’s reason to believe that (if not in our own lifetimes, then surely in our children’s) knowledge of how genes influence many of the traits we’d like to design into or out of humans will be widely available.

  None of this is lost on the CRISPR community. Already there have been calls for a moratorium on the use of the technology in humans. But similar calls were made in the early days of in-vitro fertilization, and not all of them came from the scientific community. The point is that our norms of acceptance shift as technologies become more familiar.

  The current moratorium on the use of CRISPR technologies in humans probably won’t last long. The technology is remarkably accurate and reliable, and these are still early days. Refinements are inevitable, as are demonstrations of CRISPR’s worth in ameliorating, say, agricultural or environmental problems. All this will wear down our resistance to designing humans. Already, CRISPR has been applied successfully to cultured cell lines derived from humans. The first truly and thoroughly designed humans are more than just the subjects of science fiction: They are on our doorstep, waiting to be let in.

 
