
Genes, Giants, Monsters, and Men: The Surviving Elites of the Cosmic War and Their Hidden Agenda


by Joseph Farrell


  Enter Dr. Craig Venter and his private Celera Corporation, founded for the express purpose of mapping the entire human genome. In May of 1998, Venter announced that, with the financial backing of the Perkin-Elmer Corporation, he was founding Celera (from the Latin word for “speed”), a “private company to unravel the human genetic code.” Venter announced that he planned to complete the entire project in the unheard-of time of a mere three years!233 It was a bold, perhaps even brazen announcement, for “nothing like the particular scheme he was proposing had been attempted before. If it were broken down into its various technical components, most of them had never even been attempted before, either.”234 In essence, if one compares the initial strategies of the public Human Genome Project and Venter’s private Celera venture, the aim of the former was quality, whereas Venter’s aim was speed. Thus, the Human Genome Project’s early strategy was to map each individual gene first, and then assemble the pieces — like a gigantic jigsaw puzzle — into their proper sequence later.235 Venter’s goal was much more ambitious, for not only did he wish to map every single human trait,236 but he intended, by deploying massive numbers of DNA-sequencing machines in a Manhattan Project-sized assembly line, to blast the DNA into millions of tiny segments and then reassemble and sequence the entire “book” of human DNA, using supercomputers and very complex computer algorithms to fit the pieces of the jigsaw back into their proper order. It was this “shotgun” approach of Venter’s that called forth rounds of denunciation from scientists within the public project,237 and yet it galvanized the project’s leader, Dr. Francis Collins, to recentralize what had until then been a scattered variety of public laboratory and university efforts into a more coordinated campaign,238 and it also caused him to re-evaluate the basic strategy the public project was pursuing. After Venter’s announcement, the public Human Genome Project adopted an intermediate strategy between its initial “qualitative” approach and Venter’s shotgun approach, determining that it would go after a “rough draft” of the human genome sequence.239

  However, for the public project, there was a fly in the ointment, and that fly was the “Bermuda Accords,” to which all participants in the public project had subscribed. By mutual consent, all participants in the public Human Genome Project had agreed that, once individual pieces of data — the bits of the “jigsaw map” — had been completed by the project, these data would be made publicly accessible to everyone. This meant, of course, that Venter’s “Celera could grab their data off the web like everyone else.” The faster the public project went, “the faster their enemy could go.”240 This placed the Human Genome Project in a Catch-22.241

  By adopting a kind of “Manhattan Project” approach, using massive numbers of DNA sequencers and supercomputers with complex algorithms to assemble the pieces, Venter had, in fact, reversed the initial roles that the public and private projects had assumed. A few months into the race, Venter’s Celera was pursuing a detailed, quality map of the entire genome, while the public project was aiming for a “rough draft.”242

  In the end, the race was so close that the Clinton Administration stepped in and brokered what can only be described as a “truce” between Collins’ public Human Genome Project and Venter’s private Celera Corporation, in a declared “tie.”243

  B. TECHNOLOGIES AND LEGALITIES

  For our purposes, it is the technologies, techniques, and legal ramifications of the genome project and genetic engineering that must hold the center of our attention, for these three things provide the interpretive key by which to pry loose possible hidden meanings in some very ancient texts. The task of sequencing the enormous amount of information coiled up in the human double helix was the most daunting scientific problem mankind had ever faced, for even if every gene were “decoded,” the problem of fitting a billion pieces of data into a coherent map would require not only massive numbers of DNA sequencing machines, but massive amounts of computer power, plus a computer program able to assemble the data spit out by the sequencers into a coherent picture. The benefits, however, were well worth the effort, for with the ability to sequence human DNA, any other organism was a comparative “snap.” Moreover, with such maps in hand, one might be able to derive genetically-based definitions of life itself,244 or even to figure out the minimum number of genes required for there to be life,245 and finally, detailed genetic knowledge of organisms would conceivably resolve one of the thornier problems within biology itself: taxonomy, or how to classify various species, or even to determine whether something was a distinct species at all.246 Additionally, as we shall see, the techniques of genetic engineering required accurate maps of the genome. Once that map was in hand, the implications — both good and ill — multiplied like rabbits. So, a closer look at the technologies, techniques, and legal implications is in order.

  1. The Technology: Sequencers

  To appreciate the revolution that Venter’s private Celera corporation brought to the speed of unraveling the genome, one must understand the state of sequencing technology prior to it. The initial technique for reading a DNA sequence was developed by Fred Sanger of Cambridge University, and to appreciate its complexity and difficulty, it is best to cite Shreeve’s description of it at length. It was an entirely manual technique, and therein lay the problem:

  The difficulty of the problem was much greater than the metaphor of “reading letters” suggests. Unlike letters, the four nucleotides of DNA cannot be distinguished by their shape. Sanger’s method required a great deal of ingenuity. He began by preparing a solution containing millions of copies of a small DNA fragment, and divided the solution up into four equal parts. Then he heated the four test tubes, which separated the double-stranded DNA into single strands. To each tube, he added DNA polymerase, an enzyme that uses a single strand of DNA as a template to re-create its missing partner, a complementary string of base pairs. He also introduced a primer — a small synthesized fragment of DNA of a given sequence of nucleotides. The primer fragments would glom onto their complements on the template, which would tell the enzyme where to start the copying process. He also supplied each tube with plenty of free-floating nucleotides: the As, Ts, Gs, and Cs that the enzyme would need as raw material to construct the complementary strings.

  At this point in the experiment, the solution in each of the test tubes was exactly the same. Next came the ingenious part. Sanger spiked each tube with a smaller amount of a doctored form of one of the four nucleotides, which had been tinkered with to stop the copying process in its tracks. In the first tube, for instance, there might be a generous supply of ordinary Ts, As, Gs, and Cs but a few of the doctored dead-end Ts as well. Whenever the enzyme happened to grab one of these killjoys and attach it to the growing strand, the reaction on that particular strand would cease. Thus, after reheating the solution to separate the double-stranded DNA again, Sanger ended up with a collection of single strands of differing sizes in that particular tube, each one beginning at the start of the sequence, and each ending when a killer “T” was attached. The same process was going on simultaneously in the other three tubes with the three other DNA letters.

  To read the sequence of the entire original DNA fragment, Sanger thus had only to sort the fragments by their size, and read the last letter of each one.247
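
  To make the logic of Sanger’s chain-termination method concrete, here is a minimal Python sketch of the process Shreeve describes: four simulated reaction “tubes,” each producing complementary fragments that terminate at one of the four bases, which are then pooled, sorted by length (the “gel”), and read off by their last letters. All names are hypothetical, and the chemistry is reduced to string manipulation.

```python
def sanger_fragments(template, terminator_base):
    """Simulate one reaction tube: complementary copies of the template
    that stop wherever the doctored, chain-terminating base was
    incorporated. Modeled here as every prefix of the complementary
    strand that ends in that base."""
    complement = {"A": "T", "T": "A", "G": "C", "C": "G"}
    strand = "".join(complement[nt] for nt in template)
    return [strand[: i + 1] for i, nt in enumerate(strand) if nt == terminator_base]

def read_gel(template):
    """Pool the four tubes, sort the fragments by size (the 'gel'),
    and read the last letter of each fragment in order."""
    fragments = []
    for base in "ATGC":
        fragments.extend(sanger_fragments(template, base))
    fragments.sort(key=len)
    return "".join(frag[-1] for frag in fragments)

# Reading the gel recovers the complementary strand of the template.
print(read_gel("ATGGCATTCA"))  # TACCGTAAGT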

  But the method was obviously cumbersome, for with such a technique, mapping the human genome “would take 100,000 years, which alone might explain why a lot of very smart people initially thought the Human Genome Project a very stupid idea.”248 Clearly some sort of engineering revolution was needed if the approach was to be less cumbersome and capable of more rapid sequencing.

  Enter Tim Hunkapiller and his revolutionary idea.

  In the technique developed by Sanger, “the DNA letters were read by the researcher’s eyeball, scanning both across and up the gel at the same time, a process that was both error-prone and squintingly slow.”249 What was needed was a process that would mechanize the entire method:

  Tim’s idea was to color-code the last letters in each fragment by chemically attaching a different-colored dye to each of the four doctored, killjoy nucleotides: blue for C, yellow for G, red for T, and green for A. As each fragment in turn arrived at the bottom of the gel, the color of its last letter could be read by a detector, which would feed this information to a computer. If the process worked, several samples could be run at once.... Theoretically, it could sequence more DNA in a day than a single researcher could do in a year.250
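
  The automation step itself is simple to picture in code. The following sketch (hypothetical names; only the dye-to-base mapping is taken from the passage above) converts the stream of dye colors seen by the detector, smallest fragment first, into base calls:

```python
# Dye-to-base mapping as given in the text: blue=C, yellow=G, red=T, green=A.
DYE_TO_BASE = {"blue": "C", "yellow": "G", "red": "T", "green": "A"}

def read_detector(colors_in_arrival_order):
    """Turn the sequence of dye colors detected at the bottom of the gel
    (smallest fragment arrives first) into a string of base calls."""
    return "".join(DYE_TO_BASE[color] for color in colors_in_arrival_order)

print(read_detector(["red", "green", "blue", "blue", "yellow"]))  # TACCG
```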

  Yet this meant that the gels themselves still had to be manually prepared on slides. While the major revolution had indeed been achieved, something else was still needed if Venter’s goal of mapping the entire genome within three years was to be feasible. The problem with preparing such slab gels was simply that one always ran the chance of one sample bleeding into another, thus ruining the whole experiment:

  An alternative to slab gels had been in the air for years. Instead of running dozens of samples down a common gel, each sample might be enclosed in a thin, gel-filled capillary tube, where it couldn’t possibly wander into its neighbor’s lane. The capillary technology depended on the manipulation of incredibly tiny samples of DNA, however, and whenever things get very tiny, the margin of error shrinks in proportion.... Hunkapiller also had a team, sworn to secrecy, working on a multi-capillary machine, code-named the Manhattan Project.251

  Eventually this project was successful, and it was these machines that Venter employed en masse — and the public project as well — to bring about a much swifter conclusion to the project than anyone initially imagined.

  2. The Technology: Super-Computers

  Sequencing the entire human genome into an accurate map required not only the computing power of a supercomputer, but a whole new type of computer architecture:

  A typical supercomputer — say, a Cray — built its muscle by stringing together enormous numbers of processors. But the all-at-once assembly of the human genome could not be solved by lots of little processors; it required gross amounts of active memory that could handle a single process, very fast. Virtually all computers on the market at the time employed a 32-bit architecture, which was limited by physics to 4 gigabytes of active memory. The assembly algorithms alone were estimated to need 20 gigabytes of RAM, at the same time that the computer would be servicing all the rest of (Celera’s) needs. Compaq had won the Celera contract because its new supercomputer, the Alpha 8400, was built upon a 64-bit architecture that could handle 128 gigabytes of RAM — four thousand times the active memory of an average desktop machine. (Celera) had tested the Alpha 8400 and a competing machine from IBM by seeing how long it took them to assemble the DNA fragments from the H. flu genome. Four years earlier, TIGR’s 32-bit Sun mainframe had completed the assembly in seventeen days. IBM’s new supercomputer finished the compute in three days, fifteen hours. Compaq’s Alpha 8400 took only eleven hours. Celera ordered a dozen Alphas, to start.252
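
  The 4-gigabyte ceiling mentioned here is just pointer arithmetic: a 32-bit address can distinguish at most 2^32 bytes. A quick sanity check in plain Python, using no figures beyond those quoted above:

```python
# A 32-bit pointer can address at most 2**32 distinct bytes.
max_32bit_bytes = 2**32
print(max_32bit_bytes / 2**30)   # 4.0 -- i.e., 4 gigabytes of active memory

# The Alpha 8400's 128 GB of RAM needs at least 37 address bits, since
# 128 * 2**30 bytes equals 2**37 bytes: out of reach for a 32-bit
# machine, trivial for a 64-bit architecture.
print(128 * 2**30 == 2**37)      # True
```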

  All this added up to the fact that, at that time, ca. 1998–2000, Celera had the largest supercomputers in private hands, outdone only by the facilities of the U.S. Department of Defense at Los Alamos.253

  3. The Technique: Computer Algorithms

  In addition to the massive computational power required for the accurate mapping of the genome, there was also a massive software problem. The sheer size of the computer program required would be immense, not to mention the several discrete tasks that it would have to perform, and to perform flawlessly, for the fact was that “a computing problem the size of the entire human genome had never been attempted before.”254 Indeed, it could honestly be said that accurately sequencing the entire human genome was as much a triumph of computer programming as it was of biology or of revolutions in sequencing technology.

  The initial programming difficulty lay in these two precise things: (1) the sheer size of the genome itself and the amount of data that needed to be processed and assembled, and (2) the discrete tasks that such a complex program would have to perform. It could not, therefore, be reduced to a single principle or mathematical formula.255

  The sequenced fragments of DNA coming out of the machines are the puzzle pieces, and the genome is the image they form when all the pieces are locked into place. But there are crucial differences. First, jigsaw puzzle pieces lock together by the shape of their edges, while each piece of a genome puzzle must overlap its neighbor by a handful of base pairs — around fifty, to be statistically sure — in order for them to be candidates for a match. Second, a jigsaw puzzle’s pieces are all carved up (but) form a single image, so each one is unique. The fragments of DNA to be assembled aren’t unique at all. On the contrary, each one must contain bits of sequence that are also represented on other pieces, or else there would be no way to overlap them, and no way to put the picture together.

  The upshot of this is that one needs a whole lot of pieces in order to put the picture together.256
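
  The overlap test at the heart of this jigsaw logic is easy to sketch. The function below is a hypothetical toy, not Celera’s code: it checks whether the tail of one sequenced fragment matches the head of another by at least a minimum number of bases, the role the passage assigns to the roughly fifty-base-pair overlap.

```python
def overlap_length(left, right, min_overlap=50):
    """Length of the longest suffix of `left` that matches a prefix of
    `right`, provided the match spans at least `min_overlap` bases;
    otherwise 0. Real assemblers must also tolerate sequencing errors;
    this toy version demands an exact match."""
    for k in range(min(len(left), len(right)), min_overlap - 1, -1):
        if left[-k:] == right[:k]:
            return k
    return 0

# Demonstration with a lowered threshold so short strings qualify:
print(overlap_length("ACGTACGGTCA", "GGTCATTGAC", min_overlap=4))  # 5
```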

  Even assembling the puzzle of the lowly fruit fly, with a mere 120 million base pairs, required some six trillion calculations by a computer!257

  Celera’s programmers “visualized tiny islands of sequenced code like growing crystals — far-flung stars in a great dark sky gradually extending their perimeters toward their distant neighbors until, finally, they touched.”258 But this only highlighted the multitude of discrete tasks that the program would have to do, not to mention its sheer size:

  ...(The) algorithm would actually entail a series of discrete problems, each of them fanning out in an array of subprocesses, and those subprocesses would be further broken down into their own components, and so on, until one reached the level of the hundreds of thousands of individual lines of computer code that would have to be written to make the whole thing work. Ways of checking and refining each component would be incorporated into the program. But whether it worked or not would be revealed only in the last step, when the time came to upend the can and see what tumbled out.259

  In addition to all this, there was also the problem of “statistical noise,” i.e., the accuracy of the data coming from the biologists’ sequencing machines. Thus, the program also had to be designed in such a fashion that it could work with realistic data coming from the sequencers.260

  Yet another problem facing Celera’s programmers was that of contiguous DNA fragments and the whole phenomenon of having to fit the “DNA islands” together by looking at overlapping base pairs. The “Overlapper” component of the computer program, however, faced a fundamental difficulty:

  ...both the (fruit fly) and human genomes were riddled with identical repeating sections. As a consequence, all too often a fragment of DNA would overlap with two, three, or even a dozen other pieces, because the overlapping sequence was represented in more than one place in the genome. At all costs, the assembly program had to avoid making false connections, which could mislead researchers for decades to come. For the next step of the assembler, then, (Celera’s programmer) Myers had written a program that essentially broke apart most of what the Overlapper had put together, keeping only those joins where a piece went together with only one other piece. Myers called the result of these uniquely joined pieces a “unitig.” In effect, rather than attacking the problem of a complex genome’s thousands of repeats, the Unitigger stage dodged the issue for the moment, telling the computer, “Let’s just assemble the pieces we know go together correctly and throw the rest into a bin to deal with later.” The Unitigger was an act of genius, in one stroke reducing the complexity of the puzzle by a factor of 100.261
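
  The Unitigger’s filtering rule, keep a join only when each piece pairs with exactly one other piece, can be sketched in a few lines. This is a hypothetical illustration of the rule as Shreeve states it, not a reconstruction of Myers’s actual program:

```python
from collections import defaultdict

def unitig_joins(candidate_overlaps):
    """Keep only the unambiguous joins: pairs in which each fragment
    overlaps exactly one other fragment. Fragments involved in several
    candidate overlaps (typically because of genomic repeats) are set
    aside 'in a bin to deal with later,' as the passage puts it."""
    degree = defaultdict(int)
    for a, b in candidate_overlaps:
        degree[a] += 1
        degree[b] += 1
    return [(a, b) for a, b in candidate_overlaps
            if degree[a] == 1 and degree[b] == 1]

# frag3 overlaps two different pieces (a repeat), so both of its joins
# are shelved; the unique frag1-frag2 join survives.
candidates = [("frag1", "frag2"), ("frag3", "frag4"), ("frag3", "frag5")]
print(unitig_joins(candidates))  # [('frag1', 'frag2')]
```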

  In short, the “Celera Assembler,” which is what the program was eventually called, was confronted, via the overlaps, with “an infinite chain of forks in a path, and the program had to be able to choose the right turn every time and not overlap one piece of code with an extremely similar one that might actually be miles away in the genome.”262 After testing the program against several already known or well-mapped genomes, Celera’s program was finally successful, and its lawyers moved immediately to patent the program.263

  4. The Legal Implications

  The patenting of Celera’s Assembler is the gateway into the larger issue of patenting specific genes and engineered life forms. In a way, the rush to patent various genes or genetically engineered life forms might be considered a kind of genetic or “molecular land grab.”264 Early on, in fact, the U.S. Patent and Trademark Office ruled that an isolated human gene could be considered “valid intellectual property, if it fulfilled the requirements that any other invention had to meet in order to get a patent.”265 It is important to note what is being said here. The entire genome of an organism, since it already exists in nature and cannot be invented, cannot be patented. It would be like trying to patent an eagle or an oak tree or, for that matter, a mountain range. But one could patent certain individual genes or sections of the genome if they were separated for some function “by the hand of man.”266

  Under this understanding, then, there are four requirements in U.S. patent law that an invention must meet to be patentable intellectual property:

  The invention must first be original. It cannot have been published before, or be too much like some previous invention. Second, it must be “nonobvious.” You cannot get a patent by wrapping a rock in cloth and calling it a no-scuff doorstop. Third, the invention must have a demonstrable function. If you mix silicone with boric oxide and come up with an exceptionally bouncy rubber, you won’t necessarily get a patent. Demonstrate its value as a toy, however, and you can call it Silly Putty and make a fortune. The final guideline requires “enablement”: The invention must be described clearly enough in writing so that any skilled practitioner in the same trade can read it and fashion the invention himself. A patent does not confer ownership. It is simply a contract between the inventor and the government, whereby the former agrees to make public his invention in exchange for legal protection against others making or using it for commercial purposes for the next twenty years....

 
