The Neuroscience of Intelligence
Remember, a single test taken at age 13 identified these individuals. Again, you can see that the predictive validity of this standardized test score is reasonably strong. Clearly, individuals in the top 1% of scores obtained in childhood have notable future achievements, but even within this rarefied group, the higher the score, the more likely such achievements become. The longitudinal study of the original talent search participants is continuing, with additional follow-ups conducted by Professors Camilla Benbow and David Lubinski at Vanderbilt University.
Study 3. The third longitudinal study is the Scottish Mental Survey. This was a truly massive project conducted by the Scottish government. All children born in Scotland in 1921 and in 1936 completed intelligence testing at age 11; decades later, samples of these participants were re-tested in old age. This study differed from the other two in that it included virtually all children in the country on a test of general intelligence rather than identifying samples of very high scorers (von Stumm & Deary, 2013). The total number of children in the study was about 160,000.
At the time this study began in the 1930s, there was considerable debate around the world about national intelligence and eugenics. This had profoundly evil consequences in Germany. It’s one of the reasons intelligence testing became a negative topic in academia following the Second World War. However, in some countries another motivation for using intelligence tests was to open better schooling to all social classes: test scores served as an objective evaluation that gave every student an opportunity to attend the best schools irrespective of background or wealth. This actually happened in the UK after the Second World War, and the same motivation was important in the development and use of the SAT in the USA.
But the Scottish survey was over after the second round of testing in 1947. It became a longitudinal follow-up study largely by accident, when the original records were rediscovered in an old storage room. Today, a team of researchers directed by Professor Ian Deary at the University of Edinburgh is using this database and follow-up evaluations to study the impact of intelligence on aging. Several years ago, Professor Deary received a new grant from the Scottish government, restored the physical handwritten records as far as possible, and then computerized all of them. He also identified 550 original participants who were still living and willing to be re-tested, so there is now follow-up data. Let’s look at two interesting results from the longitudinal analyses.
1. IQ scores were fairly stable over time: scores at age 11 correlated with scores at age 80 (r = .72) (Deary et al., 2004). The intelligence test used at the beginning of the survey and for follow-up is called the Moray House Test. It gives an IQ score essentially equivalent to the Stanford–Binet or the WAIS. Recall that fluid intelligence decreases with age while crystallized intelligence is more stable; the IQ score from the test used in this study combined both. Although not part of this study, it should be noted that different components of IQ may rise and fall at different times across the life span (Hartshorne & Germine, 2015).
2. Individuals with higher intelligence scores at age 11 lived longer than their classmates with lower scores, as shown in Figure 1.7 (Batty et al., 2007; Murray et al., 2012; Whalley & Deary, 2001).
Figure 1.7 Childhood IQ scores predict adult mortality. Note that many more people in the highest IQ group were still alive at the most recent follow-up compared to the lowest IQ group.
Reprinted with permission, Whalley & Deary (2001).
The top graph in Figure 1.7 shows the data for women, and the bottom graph shows men. Both show the same trends. On the x-axis, we see ages of participants by decade from age 10 to age 80, and on the y-axis, we see the percentage of the group originally tested who are still alive at each age. The data are shown separately for the lowest and the highest quartile based on IQ.
So, for example, in Figure 1.7, let’s look at the top graph of women, and focus on the data points at the far right side of the graph (about age 80). You can see that more women in the highest IQ quartile are still alive, about 70%, compared to the lowest quartile, where about 45% are still alive. This is quite a large difference, and it is apparent starting around age 20. The same pattern holds for men, although it emerges later, around age 40, and the trend is not quite as strong. Because the UK has universal healthcare, differential rates of insurance coverage do not influence these data. But why should IQ be related to longevity? Here is one possible explanation: before age 11, several factors, both genetic and environmental, may influence IQ; higher IQ then leads to healthier environments and behaviors, and possibly to a better understanding of physician instructions; these in turn influence age at death. However, there is compelling evidence for a better explanation: mortality and IQ have genetic influences in common. An estimated 84%–95% of the variance in the mortality/IQ correlation may be due to genes (Arden et al., 2015).
To recap the evidence from these three classic studies: Terman’s project helped popularize the importance of IQ scores and demolished the popular negative stereotype of the child genius; gifted student education essentially started with this study. Stanley’s project went further and incorporated ways to foster academic achievement in the most gifted and talented students. Deary’s analyses of the Scottish Mental Survey data provided new insights about the stability of IQ scores and the importance of general intelligence for a number of social and health outcomes.
These studies provide compelling data that one psychometric test score at an early age predicts many aspects of later life including professional success, income, healthy aging and even mortality. Bottom line: It’s better to be smart, even if defined by test scores that have meaning only relative to other people.
1.11 Why Do Myths About Intelligence Definitions and Measurement Persist?
Given all this strong empirical evidence that intelligence test scores are meaningful, why does the myth persist that scores have little if any validity? Here is an informative example. From time to time, a college admissions representative will assert that in their institution they find no relationship between grade point average (GPA) and SAT scores. Such observations are virtually always based on a lack of understanding of a basic statistical principle regarding the correlation between two variables. To calculate a correlation between any two variables, there must be a wide range of scores for each variable. At a place like MIT, for example, most students fall in a narrow range of high SAT scores. This is a classic problem of restriction of range. There is little variance among the students, so in this case, the relationship between GPA and SAT scores will not be very strong. Sampling from just the high end or just the low end or just the middle of a distribution restricts range and results in spuriously low or zero correlations. Restriction of range actually accounts for many findings about what intelligence test scores “fail” to predict.
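The effect is easy to demonstrate with a short simulation. Here is a minimal sketch in Python; the population correlation of .6 and the top-5% admission cut are illustrative assumptions, not values from any study cited here:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Simulate standardized test scores and GPAs that share a true
# correlation of about .6 (an illustrative value).
test = rng.normal(0, 1, n)
gpa = 0.6 * test + np.sqrt(1 - 0.6**2) * rng.normal(0, 1, n)

# Correlation across the full range of applicants.
full_r = np.corrcoef(test, gpa)[0, 1]
print(f"full range:       r = {full_r:.2f}")        # ~0.60

# Correlation among only the top 5% of scorers, as at a highly
# selective school: same underlying relationship, far less variance.
admitted = test > np.quantile(test, 0.95)
restricted_r = np.corrcoef(test[admitted], gpa[admitted])[0, 1]
print(f"restricted range: r = {restricted_r:.2f}")  # roughly 0.2-0.3
```

Nothing about the underlying relationship changes between the two calculations; selecting only high scorers removes most of the variance in test scores, and the observed correlation collapses.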
Here’s another classic example of an erroneous finding due to restriction of range. In the 1930s Louis Thurstone challenged Spearman’s finding of a g-factor (Thurstone, 1938) and proposed an alternative model of seven Primary Abilities that he claimed were independent of each other. That is, they were not correlated with each other and there was no common g-factor. The seven abilities were:
1. Spatial Ability, measured by tests that require mental rotation of pictures and objects.
2. Perceptual Speed, measured by tests of finding small differences in pictures as fast as possible.
3. Number Facility, measured by tests of computation.
4. Verbal Comprehension, measured by tests of vocabulary.
5. Word Fluency, measured by tests that require generating as many words as possible for a given category within a time limit.
6. Memory, measured by tests of recall for digits and objects.
7. Inductive Reasoning, measured by tests of analogies and logic.
However, Thurstone’s model was not supported by subsequent research. It turns out that the original research was flawed because the samples he used did not include individuals across the full range of possible scores. That is, the range was restricted, so there was too little variance to predict any test from any other. When additional research corrected this problem, the Thurstone “primary” abilities were, in fact, correlated with each other and there was a g-factor. Thurstone retracted his original conclusion (Thurstone & Thurstone, 1941). So why include this example from the 1930s in a modern book? As we will see in later chapters, a surprising number of studies still report erroneous findings because of restricted range.
Differences in factor structure among many models based on factor analysis have given some critics the idea that g is merely a statistical artifact of factor analysis methodology. We now have hundreds of factor analysis studies of intelligence on hundreds of mental tests completed by tens of thousands of people and using many varieties of factor analysis method. The bottom line is that there always is a g-factor. Here’s a key point: g-factors derived from different test batteries correlate nearly perfectly with each other as long as each battery has a sufficient number of tests that sample a broad range of mental abilities, and the tests are given to people sampled from a wide range of ability (Johnson et al., 2004, 2008b). A recent study of 180 college students reported that a g-factor derived from their performance on a battery of video games correlated highly (0.93) with a g-factor extracted from their performance on a battery of cognitive tests (Ángeles Quiroga et al., 2015). Such studies provide strong evidence that g is not a statistical artifact, even though its meaning is limited as an interval scale. And, logically, if it were merely an artifact, g scores would not correlate with other measures of the complexity of everyday life, as we noted, nor with genetic and brain parameters, as we detail in subsequent chapters.
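This near-identity of g-factors across batteries can also be illustrated with a toy simulation. The following sketch is hypothetical, not the analysis in Johnson et al. or Quiroga et al.: it assumes each simulated test draws partly on one latent ability (the loading of .7 and the battery sizes are arbitrary choices), extracts each battery’s first principal component as a stand-in for g, and correlates the two component scores:

```python
import numpy as np

rng = np.random.default_rng(1)
n_people = 5_000

# One latent general ability score per simulated person.
g = rng.normal(0, 1, n_people)

def simulate_battery(n_tests, loading=0.7):
    """Each test draws partly on g (loading) and partly on unique noise."""
    noise = rng.normal(0, 1, (n_people, n_tests))
    return loading * g[:, None] + np.sqrt(1 - loading**2) * noise

def g_scores(scores):
    """Score each person on the battery's first principal component."""
    centered = scores - scores.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[0]

battery_a = simulate_battery(12)  # e.g., twelve conventional cognitive tests
battery_b = simulate_battery(12)  # e.g., twelve video-game-like tasks

# The two g estimates correlate very highly (~.9 here) even though the
# batteries share no tests; abs() handles the sign ambiguity of PCA.
r = abs(np.corrcoef(g_scores(battery_a), g_scores(battery_b))[0, 1])
print(f"r(g_A, g_B) = {r:.2f}")
```

Even though the two batteries share no items, both first components recover the same latent factor, which is the sense in which g is a property of the person rather than of any particular test battery.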
Finally, perhaps the major motivation for diminishing the validity of intelligence tests, and of other tests of mental abilities including the SAT, is the desire, shared by many, to explain away group differences in average scores as mere artifacts of the tests. In my view, this motivation is misplaced. The causes of average test score differences among groups are not yet clear, but the differences are a major concern in education and other areas. They deserve full attention with the most sophisticated research possible so that causes can be identified and potential remediation developed based on empirical studies. Imaging studies of brain development and intelligence are beginning to address some issues, as detailed in Chapters 3 and 4, and the goal of enhancing intelligence, discussed in Chapters 5 and 6, is something to consider.
Before we get into the brain itself, in the next chapter we will summarize the overwhelming evidence that intelligence has a major genetic component and explain how “intelligence genes” may affect the brain. We will also introduce the concept of epigenetic influences of environmental factors on gene expression, all of which work through biological processes to affect the brain. Altogether, this evidence supports our primary assumption that intelligence is 100% biological.
Chapter 1 Summary
Intelligence can be defined and assessed for scientific research.
The g-factor is a key concept for estimating a person’s intelligence compared to other people.
It is surprising that intelligence is rarely considered explicitly in vigorous debates about why pre-college education appears to be failing many students. The best teachers cannot be expected to attain educational objectives beyond the capabilities of students.
At least four kinds of studies demonstrate the predictive validity of intelligence test scores and the importance of intelligence for academic and life success.
Intelligence tests are the basis for many important empirical research findings, but going forward, the key problem for assessment is that there is no ratio scale for intelligence, so test scores are meaningful only relative to other people.
Despite widespread but erroneous beliefs about definition and assessment, neuroscience studies seek to understand the brain processes that underlie intelligence and how they develop.
Review Questions
1. Is a precise definition of intelligence required for scientific research?
2. What is the difference between specific mental abilities that define savants and the g-factor?
3. Why is an intelligence test score not like a measure of length, liquid, or weight?
4. What is restricted range and why is it an important concept for intelligence research?
5. What are two myths about intelligence and why do they persist?
6. Why do you suppose this chapter begins with a quote from 1980?
Further Reading
Human Intelligence (Hunt, 2011). This is a thorough textbook that covers all aspects of intelligence written by a pioneer of intelligence research. It is clearly written, lively, and balanced.
Straight Talk about Mental Tests (Jensen, 1981). This is a clear examination of all issues surrounding mental testing. Written without jargon by a real expert for students and the general public. Still a classic, but you may find it only in libraries or from online sellers.
The g-Factor (Jensen, 1998). This is a more technical and thorough text on all aspects of the g-factor. It is considered the classic in the field.
“The neuroscience of human intelligence differences” (Deary et al., 2010). This is a concise review article written by longtime intelligence researchers.
IQ in the Meritocracy (Herrnstein, 1973). This controversial book put forth an early argument about how the genetic basis of IQ was stratifying society. The Preface is a hair-raising account of the acrimonious climate of the times for unorthodox ideas. This book is hard to find, but try online sellers.
The Bell Curve (Herrnstein & Murray, 1994). This is possibly the most controversial book about intelligence ever written. It expands arguments first articulated in IQ in the Meritocracy. There are considerable data and well-reasoned positions about what intelligence means for public policy.
Chapter Two
Nature More than Nurture: The Impact of Genetics on Intelligence
Give me a dozen healthy infants, well-formed, and my own specified world to bring them up in and I’ll guarantee to take any one at random and train him to become any type of specialist I might select – doctor, lawyer, artist, merchant-chief, and, yes, even beggar man and thief, regardless of his talents, penchants, tendencies, abilities, vocations, and race of his ancestors. I am going beyond my facts and I admit it, but so have the advocates of the contrary and they have been doing it for many thousands of years.
(Watson, 1930, p. 104)
… the Blank Slate is an empirical hypothesis about the functioning of the brain and must be evaluated in terms of whether or not it is true. The modern sciences of mind, brain, genes, and evolution are increasingly showing that it is not true.
(Pinker, 2002, p. 421)
The most far-reaching implications for science, and perhaps for society, will come from identifying genes responsible for the heritability of g … Despite the formidable challenges of trying to find genes of small effect, I predict that most of the heritability of g will be accounted for eventually by specific genes, even if hundreds of genes are needed to do it.
(Plomin, 1999, pp. 27, 28)
Finding genes brings us closer to an understanding of the neurophysiological basis of human cognition. Furthermore, when genes are no longer latent factors in our models but can actually be measured, it becomes feasible to identify those environmental factors that interact and correlate with genetic makeup. This will supplant the long nature/nurture debate with actual understanding.
(Posthuma and de Geus, 2006, p. 151)
It might be argued that it is no longer surprising to demonstrate genetic influence on a behavioral trait, and that it would be more interesting to find a trait that shows no genetic influence.
(Plomin and Deary, 2015, p. 98)
Learning Objectives
Is the nature–nurture debate about intelligence essentially settled?
What is the most compelling evidence that genes influence intelligence?
What is the effect of age on environmental influences on intelligence?
What are key research strategies used in quantitative and molecular genetics?
Why has it been so difficult to identify specific genes related to intelligence?
Introduction
Our brain evolved along with the rest of our body. It would be surprising if genetics influenced all manner of human physiological differences but had no impact on the brain or the brain mechanisms that underlie intelligence. Nonetheless, genetic explanations of human attributes (even partial explanations) often arouse suspicion and unease. In part this comes from an assumption that anything genetic is essentially immutable, deterministic, and limiting. As we will see, this is not always a correct assumption, and the exact opposite may be true now that we have several powerful techniques for manipulating genes (see Sections 5.6 and 6.3). Some genes are deterministic – you have the gene, you get a specific characteristic – but for complex traits and behaviors like intelligence, the genetic influences are best described as probabilistic rather than deterministic. That is, a gene may increase the chances of having a characteristic, but whether you get it depends on multiple factors. For example, you may be at genetic risk for heart disease, but you can lower your risk with diet and exercise.