The Field


by Lynne McTaggart


  As Jahn began considering what he might need to get a program of this size off the ground, he made contact with many of the other new explorers in frontier physics and consciousness studies. In the process, he met and hired Brenda Dunne, a developmental psychologist at the University of Chicago, who had conducted and validated a number of experiments in clairvoyance.

  In Dunne, Jahn had deliberately chosen a counterpoint to himself, which was obvious at first sight by their gaping physical differences. Jahn was spare and gaunt, often neatly turned out in a tidy checked shirt and casual trousers, the informal uniform of conservative academia, and in both his manner and his erudite speech gave off a sense of containment – never a superfluous word or unnecessary gesture. Dunne had the more effusive personal style. She was often draped in flowing clothes, her immense mane of salt-and-pepper hair hung loose or pony-tailed like a Native American. Although also a seasoned scientist, Dunne tended to lead from the instinctive. Her job was to provide the more metaphysical and subjective understanding of the material to bolster Jahn’s largely analytical approach. He would design the machines; she would design the look and feel of the experiments. He would represent PEAR’s face to the world; she would represent a less formidable face to its participants.

  The first task, in Jahn’s mind, was to improve upon the RNG technology. Jahn decided that his Random Event Generators, or REGs (hard ‘G’), as they came to be called, should be driven by an electronic noise source, rather than atomic decay. The random output of these machines was controlled by something akin to the white noise you hear when the dial of your radio is between stations – a tiny roaring surf of free electrons. This provided a mechanism to send out a randomly alternating string of positive and negative pulses. The results were displayed on a computer screen and then transmitted on-line to a data management system. A number of failsafe features, such as voltage and thermal monitors, guarded against tampering or breakdown, and they were checked religiously to ensure that when not involved in experiments of volition, they were producing each of their two possibilities, 1 or 0, more or less 50 per cent of the time.
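
  A rough sense of how such a device behaves can be had in software. The sketch below (in Python, purely as an illustration – the threshold-sampling scheme and the names are assumptions, not PEAR’s actual circuitry) thresholds simulated electronic noise into 1s and 0s and runs the sort of unattended 50/50 calibration check described above.

    import random

    def reg_bits(n_bits, seed=None):
        # Illustrative stand-in for a noise-driven REG: threshold Gaussian
        # 'electronic noise' samples into a random string of 1s and 0s.
        rng = random.Random(seed)
        return [1 if rng.gauss(0.0, 1.0) > 0.0 else 0 for _ in range(n_bits)]

    def calibration_check(n_bits=1_000_000):
        # Unattended calibration run: confirm the output stays close to 50/50.
        ones = sum(reg_bits(n_bits))
        print(f"{ones} ones out of {n_bits} bits ({ones / n_bits:.2%})")

    calibration_check()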

  All the hardware failsafe devices guaranteed that any deviation from the normal 50-50 chance of heads and tails would not be due to any electronic glitches, but purely the result of some information or influence acting upon the machine. Even the most minute effects could be quickly quantified by the computer. Jahn also souped up the hardware, getting it to work far faster. By the time he was finished, it occurred to him that in a single afternoon he could collect more data than Rhine had amassed in his entire lifetime.

  Dunne and Jahn also refined the scientific protocol. They decided that all their REG studies should follow the same design: each participant sitting in front of the machine would undergo three tests of equal length. In the first, they would will the machine to produce more 1s than 0s (or ‘HI’s, as PEAR researchers put it). In the second, they would mentally direct the machine to produce more 0s than 1s (more ‘LO’s). In the third, they would attempt not to influence the machine in any way. This three-stage process was to guard against any bias in the equipment. The machine would then record the operator’s decisions virtually simultaneously.

  When a participant pressed a button, he would set off a trial of 200 binary ‘hits’ of 1 or 0, lasting about one-fifth of a second, during which time he would hold his mental intention (to produce more than the 100 ‘1’s, say, expected by chance). Usually the PEAR team would ask each operator to carry out a run of 50 trials at one go, a process that might only take half an hour but which would produce 10,000 hits of 1 or 0. Dunne and Jahn typically examined each operator’s scores in blocks of 50 or 100 runs (2,500 to 5,000 trials, or 500,000 to one million binary ‘hits’) – the minimum chunk of data, they determined, for reliably pinpointing trends.17
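
  To make those numbers concrete, here is a minimal sketch of the trial structure as described – 200 binary hits per trial, 50 trials per run, and the three intentions (HI, LO and baseline). The function names and the use of an ordinary software random source are assumptions for illustration only.

    import random

    TRIAL_BITS = 200   # binary 'hits' per button press
    RUN_TRIALS = 50    # trials per run, as described above

    def run_trial(rng):
        # One trial: 200 random bits; the score is the count of 1s (chance mean = 100).
        return sum(rng.randrange(2) for _ in range(TRIAL_BITS))

    def run_session(intention, rng):
        # One run of 50 trials held under a single intention: 'HI', 'LO' or 'baseline'.
        scores = [run_trial(rng) for _ in range(RUN_TRIALS)]
        return {"intention": intention, "scores": scores, "mean": sum(scores) / RUN_TRIALS}

    rng = random.Random()
    for intention in ("HI", "LO", "baseline"):
        print(run_session(intention, rng)["mean"], intention)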

  From the outset it was clear that they needed a sophisticated method of analyzing their results. Schmidt had simply counted up the number of hits and compared them to chance. Jahn and Dunne decided to use a tried-and-tested method in statistics called cumulative deviation, which entailed continually adding up your deviation from the chance score – 100 – for each trial and averaging it, and then plotting it on a graph.

  The graph would show the mean, or average, and certain standard deviations – margins where results deviate from the mean but are still not considered significant. In trials of 200 binary hits occurring randomly, your machine should throw an average of 100 heads and 100 tails over time – so your bell curve will have 100 as its mean, represented by a vertical line drawn from its highest point. If you were to plot each result every time your machine conducted a trial, you would have individual points on your bell curve – 101, 103, 95, 104 – representing each score. Because any single effect is so tiny, it is difficult, doing it that way, to see any overall trend. But if you continue to add up and average your results and are having effects, no matter how slight, your scores should lead to a steadily increasing departure from expectation. Cumulative averaging shows off any deviation in bold relief.18
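
  The cumulative-deviation method itself is easy to reproduce: keep a running total of each trial’s departure from the chance mean of 100 and watch whether it drifts steadily away from zero. A minimal sketch (the sample scores are the illustrative ones from the paragraph above, not real data):

    def cumulative_deviation(scores, chance_mean=100):
        # Running sum of each trial's deviation from the chance mean; a persistent
        # drift away from zero is exactly what the method is meant to make visible.
        running, total = [], 0
        for score in scores:
            total += score - chance_mean
            running.append(total)
        return running

    print(cumulative_deviation([101, 103, 95, 104]))   # -> [1, 4, -1, 3]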

  It was also clear to Jahn and Dunne that they needed a vast amount of data. Statistical glitches can occur even with a pool of data as large as 25,000 trials. If you are looking at a binary chance event like coin tossing, in statistical terms you should be throwing heads or tails roughly half the time. Say you decided to toss a coin 200 times and came up with 102 heads. Given the small numbers involved, your slight favouring of heads would still be considered statistically well within the laws of chance.

  But if you tossed that same coin 2 million times, and you came up with 1,020,000 heads, this would suddenly represent a huge deviation from chance. With tiny effects like the REG tests, it is not individual or small clusters of studies but the combining of vast amounts of data which ‘compounds’ to statistical significance, by its increasing departure from expectation.19
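
  That point about scale can be checked with a standard binomial z-score: the same 51 per cent proportion of heads that is unremarkable in 200 tosses becomes an enormous departure in 2 million. A quick sketch using the usual normal approximation (illustrative only):

    import math

    def binomial_z(heads, tosses, p=0.5):
        # Normal-approximation z-score: how many standard deviations the
        # observed count of heads lies from the chance expectation.
        mean = tosses * p
        sd = math.sqrt(tosses * p * (1 - p))
        return (heads - mean) / sd

    print(binomial_z(102, 200))              # ~0.28 sigma: well within chance
    print(binomial_z(1_020_000, 2_000_000))  # ~28 sigma: an astronomical deviation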

  After their first 5,000 studies, Jahn and Dunne decided to pull off the data and compute what was happening thus far. It was a Sunday evening and they were at Bob Jahn’s house. They took their average results for each operator and began plotting them on a graph, using little red dots for any time their operators had attempted to influence the machine to have a HI (heads) and little green dots for the LO intentions (tails).

  When they finished, they examined what they had. If there had been no deviation from chance, the two bell curves would be sitting right on top of the bell curve of chance, with 100 as the mean.

  Their results were nothing like that. The two types of intention had each gone in a different direction. The red bell curve, representing the ‘HI’ intentions, had shifted to the right of the chance average, and the green bell curve had shifted to the left. This was as rigorous a scientific study as they come, and yet somehow their participants – all ordinary people, no psychic superstars among them – had been able to affect the random movement of machines simply by an act of will.

  Jahn looked up from the data, sat back in his chair and met Brenda’s eye. ‘That’s very nice,’ he said.

  Dunne stared at him in disbelief. With scientific rigor and technological precision they had just generated proof of ideas that were formerly the province of mystical experience or the most outlandish science fiction. They’d proved something revolutionary about human consciousness. Maybe one day this work would herald a refinement of quantum physics. Indeed, what they had in their hands was beyond current science – was perhaps the beginnings of a new science.

  ‘What do you mean, “that’s very nice”?’ she replied. ‘This is absolutely … incredible!’

  Even Bob Jahn, in his cautious and deliberate manner, his dislike of being immoderate or waving a fist in the air, had to admit, staring at the graphs sprawled across his dining-room table, that there were no words in his current scientific vocabulary to explain them.

  It was Brenda who first suggested that they make the machines more engaging and the environment more cosy in order to encourage the ‘resonance’ which appeared to be occurring between participants and their machines. Jahn began creating a host of ingenious random mechanical, optical and electronic devices – a swinging pendulum; a spouting water fountain; computer screens which switched attractive images at random; a moveable REG which skittered randomly back and forth across a table; and the jewel in the PEAR lab’s crown, a random mechanical cascade. At rest it appeared like a giant pinball machine attached to the wall, a 6-by-10-foot framed set of 330 pegs. When activated, nine thousand polystyrene balls tumbled over the pegs in the span of only 12 minutes and stacked in one of nineteen collection bins, eventually producing a configuration resembling a bell-shaped curve. Brenda put a toy frog on the moveable REGs and spent time selecting attractive computer images, so that participants would be ‘rewarded’ if they chose a certain image by seeing more of it. They put up wood paneling. They began a collection of teddy bears. They offered participants snacks and breaks.

  Year in and year out, Jahn and Dunne carried on the tedious process of collecting a mountain of data – which would eventually turn into the largest database ever assembled of studies into remote intention. At various points, they would stop to analyze all they had amassed thus far. In one 12-year period of nearly 2.5 million trials, it turned out that 52 per cent of all the trials were in the intended direction and nearly two-thirds of the ninety-one operators had overall success in influencing the machines the way they’d intended. This was true, no matter which type of machine was used.20 Nothing else – whether it was the way a participant looked at a machine, the strength of their concentration, the lighting, the background noise or even the presence of other people – seemed to make any difference to the results. So long as the participant willed the machine to register heads or tails, he or she had some influence on it a significant percentage of the time.

  The results with different individuals would vary (some would produce more heads than tails, even when they had concentrated on the exact opposite). Nevertheless, many operators had their own ‘signature’ outcome – Peter would tend to produce more heads than tails, and Paul vice versa.21 Results also tended to be unique to the individual operator, no matter what the machine. This indicated that the process was universal, not one occurring with only certain interactions or individuals.

  In 1987, Roger Nelson of the PEAR team and Dean Radin, both doctors of psychology, combined all the REG experiments – more than 800 – that had been conducted up to that time.22 A pooling together of the results of the individual studies of sixty-eight investigators, including Schmidt and the PEAR team, showed that participants could affect the machine so that it gave the desired result about 51 per cent of the time, against an expected result of 50 per cent. These results were similar to those of two earlier reviews and an overview of many of the experiments performed on dice.23 Schmidt’s results remained the most dramatic, with studies that had leapt to 54 per cent.24

  Although 51 or 54 per cent doesn’t sound like much of an effect, statistically speaking it’s a giant step. If you combine all the studies into what is called a ‘meta-analysis’, as Radin and Nelson did, the odds of this overall score occurring are a trillion to one.25 In their meta-analysis, Radin and Nelson even took account of the most frequent criticisms of the REG studies concerning procedures, data or equipment by setting up sixteen criteria by which to judge each experimenter’s overall data and then assigning each experiment a quality score.26 A more recent meta-analysis of the REG data from 1959 to 2000 showed a similar result.27 The US National Research Council also concluded that the REG trials could not be explained by chance.28
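
  One standard way of combining many studies of this kind – and the reason a persistent 51 per cent can translate into such extreme odds – is to pool their individual z-scores, for instance by Stouffer’s method. The sketch below illustrates the principle only; the figures are invented, not Radin and Nelson’s actual data.

    import math

    def stouffer_z(z_scores):
        # Pool independent study z-scores into a single combined z (Stouffer's method).
        return sum(z_scores) / math.sqrt(len(z_scores))

    # 800 hypothetical studies, each showing only a modest 1-sigma effect on its own:
    print(stouffer_z([1.0] * 800))   # ~28.3: small results compound into an extreme pooled score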

  An effect size is a figure which reflects the actual size of change or outcome in a study. It is arrived at by factoring in such variables as the number of participants and the length of the test. In some drug studies, it is arrived at by dividing the number of people who have had a positive effect from the drug by the total number of participants in the trial. The overall effect size of the PEAR database was 0.2 per hour.29 Usually an effect size between 0.0 and 0.3 is considered small, a 0.3 to 0.6 effect size is medium and anything above that is considered large. The PEAR effect sizes are considered small and the overall REG studies, small to medium. However, these effect sizes are far larger than those of many drugs deemed to be highly successful in medicine.

  Numerous studies have shown that propranolol and aspirin are highly successful in reducing heart attacks. Aspirin in particular has been hailed as a great white hope of heart disease prevention. Nevertheless, large studies have shown that the effect sizes of propranolol and aspirin are 0.04 and 0.03 respectively – or about ten times smaller than the effect sizes of the PEAR data. One method of determining the magnitude of effect sizes is to convert the figure to the number of persons surviving in a sample of 100 people. An effect size of 0.03 in a medical life-or-death situation would mean that three additional people out of one hundred survived, and an effect size of 0.3 would mean that an additional thirty of one hundred survived.30

  To give some hypothetical idea of the magnitude of the difference, say that with a certain type of heart operation, thirty patients out of a hundred usually survive. Now, say that patients undergoing this operation are given a new drug with an effect size of 0.3 – close to the size of the hourly PEAR effect. Offering the drug on top of the operation would virtually double the survival rate. An additional effect size of 0.3 would turn a medical treatment that had been life-saving less than half the time into one that worked in the majority of cases.31
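
  The conversion described above is simple enough to show directly. A small sketch under the same convention – reading an effect size as the additional fraction of 100 people affected – using only the figures already quoted in the text:

    def extra_people_per_100(effect_size):
        # Convert an effect size to additional people per 100, per the convention above.
        return round(effect_size * 100)

    for label, es in [("aspirin", 0.03), ("propranolol", 0.04), ("hourly PEAR effect", 0.2)]:
        print(f"{label}: about {extra_people_per_100(es)} additional people per 100")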

  Other investigators using REG machines discovered that it was not simply humans who had this influence over the physical world. Using a variation of Jahn’s REG machines, a French scientist named René Peoc’h also carried out an ingenious experiment with baby chicks. As soon as they were born, a moveable REG was ‘imprinted’ on them as their ‘mother’. The robot was then placed outside the chicks’ cage and allowed to move about freely, as Peoc’h tracked its path. After a time, the evidence was clear – the robot was moving toward the chicks more than it would do if it were wandering randomly. The desire of the chicks to be near their mother was an ‘inferred intention’ that appeared to be having an effect in drawing the machine nearer.32 Peoc’h carried out a similar study with baby rabbits. He placed a bright light on the moveable REG that the baby rabbits found abhorrent. When the data from the experiment were analyzed, it appeared that the rabbits were successfully willing the machine to stay away from them.

  Jahn and Dunne began to formulate a theory. If reality resulted from some elaborate interaction of consciousness with its environment, then consciousness, like subatomic particles of matter, might also be based on a system of probabilities. One of the central tenets of quantum physics, first proposed by Louis de Broglie, is that subatomic entities can behave either as particles (precise things with a set location in space) or waves (diffuse and unbounded regions of influence which can flow through and interfere with other waves). They began to chew over the idea that consciousness had a similar duality. Each individual consciousness had its own ‘particulate’ separateness, but was also capable of ‘wave-like’ behavior, in which it could flow through any barriers or distance, to exchange information and interact with the physical world. At certain times, subatomic consciousness would get in resonance with – beat at the same frequency as – certain subatomic matter. In the model they began to assemble, consciousness ‘atoms’ combined with ordinary atoms – those, say, of the REG machine – and created a ‘consciousness molecule’ in which the whole was different from its component parts. The original atoms would each surrender their individual entities to a single larger, more complex entity. On the most basic level, their theory was saying, you and your REG machine develop coherence.33

  Certainly some of their results seemed to favor this interpretation. Jahn and Dunne had wondered if the tiny effect they were observing with individuals would get any larger if two or more people tried to influence the machine in tandem. The PEAR lab ran a series of studies using pairs of people, in which each pair was to act in concert when attempting to influence the machines.

  Of 256,500 trials, produced by fifteen pairs in forty-two experimental series, many pairs also produced a ‘signature’ result, which didn’t necessarily resemble the effect of either individual alone.34 Being of the same sex tended to have a very slight negative effect. These types of couples had a worse outcome than they achieved individually; with eight pairs of operators the results were the very opposite of what was intended. Couples of the opposite sex, all of whom knew each other, had a powerful complementary effect, producing more than three and a half times the effect of individuals. However, ‘bonded’ pairs, those couples in a relationship, had the most profound effect, which was nearly six times as strong as that of single operators.35

  If these effects depended upon some sort of resonance between the two participating consciousnesses, it would make sense that stronger effects would occur among those people sharing identities, such as siblings, twins or couples in a relationship.36 Being close may create coherence. Just as two waves in phase amplify a signal, it may be that a bonded couple has an especially powerful resonance, which would enhance their joint effect on the machine.

  A few years later, Dunne analyzed the database to see if results differed according to gender. When she divided results between men and women, she found that men on the whole were better at getting the machine to do what they wanted it to do, although their overall effect was weaker than it was with women. Women, on the whole, had a stronger effect on the machine, but not necessarily in the direction they’d intended.37 After examining 270 databases produced by 135 operators in nine experiments between 1979 and 1993, Dunne found that men had equal success in making the machine do what they wanted, whether heads or tails (or HIs and LOs). Women, on the other hand, were successful in influencing the machine to record heads (HIs), but not tails (LOs). In fact, most of their attempts to get the machine to do tails failed. Although the machine would vary from chance, it would be in the very opposite direction of what they’d intended.38

 
