The Neuroscience of Intelligence

by Richard J. Haier


  Each deviation point is equal, but these scores only have meaning relative to other people. In technical terms, these scores are not a ratio scale because there is no actual zero point. This is unlike quantitative units of weight or distance or liquid, which are ratio scales. IQ scores and their interpretation depend on having good normative groups. This is one reason that new norms are generated periodically for these tests. It is also why there is a separate version of the test for children called the Wechsler Intelligence Scale for Children (WISC).
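
  Because these are deviation scores, the same raw performance can translate into different IQ scores under different norms. Here is a minimal sketch of the underlying arithmetic, using made-up raw scores and norm values rather than actual WAIS scoring tables:

```python
# Deviation IQ as a standard score relative to a normative group (IQ metric:
# mean 100, SD 15). The raw score and norm values below are hypothetical.

def deviation_iq(raw_score: float, norm_mean: float, norm_sd: float) -> float:
    """Convert a raw test score into a deviation IQ relative to a norm group."""
    z = (raw_score - norm_mean) / norm_sd   # standing relative to the norm group
    return 100 + 15 * z                     # rescale to the conventional IQ metric

# The same raw score yields a different IQ if the norm group changes,
# which is why test norms are updated periodically.
print(deviation_iq(52, norm_mean=50, norm_sd=8))   # 103.75
print(deviation_iq(52, norm_mean=54, norm_sd=8))   # 96.25
```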

  Beyond the Full-scale IQ score, the WAIS can be divided into specific factors that closely resemble the pyramid structure of mental abilities shown in Figure 1.1. The individual subtests are grouped at the next level into factors of verbal comprehension, working memory, perceptual organization, and processing speed. These four specific factors are grouped into the more general factors of verbal IQ and performance IQ, and these two broad factors have a common general factor defined by the total IQ score, called Full-scale IQ. Full-scale IQ is based on several tests that sample a range of different mental abilities and, therefore, is a good estimate of the g-factor. Each of the factor scores can be used for other predictions, but Full-scale IQ is the most widely used score in research.
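
  The hierarchy just described can be pictured as a nested structure. The sketch below uses only the factor names given in the text; the pairing of the four specific factors under verbal and performance IQ follows the conventional grouping and is an assumption here, and the individual subtests are left as placeholders because they vary by test edition.

```python
# Illustrative sketch of the hierarchical WAIS structure described above.
# Factor names come from the text; the verbal/performance pairing is assumed,
# and subtest names are placeholders rather than an official listing.
wais_hierarchy = {
    "Full-scale IQ (best estimate of g)": {
        "Verbal IQ": {
            "Verbal comprehension": ["<subtests>"],
            "Working memory": ["<subtests>"],
        },
        "Performance IQ": {
            "Perceptual organization": ["<subtests>"],
            "Processing speed": ["<subtests>"],
        },
    },
}
```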

  1.7 Some Other Intelligence Tests

  So far, the IQ tests we have described are administered by a trained test-giver interacting with one individual at a time until the test is completed, often taking 90 minutes or more. Other kinds of psychometric intelligence tests can be given in a group setting or without direct interaction with the test-giver. Some tests are designed to assess specific mental abilities and others are designed to assess general intelligence. Typically, the more a test requires complex reasoning, the better it estimates the g-factor. Such tests have a “high g-loading.” Here briefly are three important high-g tests used in neuroscience studies in addition to IQ.

  1. The Raven’s Advanced Progressive Matrices (RAPM) test (named for its developer, Dr. Raven) can be given in a group format and usually has a time limit of 40 minutes. It’s regarded as a good estimate of the g-factor, especially because of the time constraint. Tests with a time limit tend to separate individuals better. It’s a non-verbal test of abstract reasoning. Figure 1.5 is an example of one item. In the large rectangle, you see a matrix of eight symbols and a blank spot in the lower-right corner. The eight symbols are not arranged randomly. There is a pattern or a rule linking them. Once you deduce the pattern or rule, you can decide which of the eight choices below the matrix completes the pattern or rule and goes in the lower-right corner.

  Figure 1.5 Simulated problem from the RAPM test. The lower-right symbol is missing from the matrix. Only one of the eight choices fits that spot once you infer the pattern or rule. In this case the answer is number 7 (add one row or column to the next) (courtesy Rex Jung).

  In this example, the answer is number 7. If you add the left column to the middle column in the matrix, you get the symbols in the right column. If you add the top row to the middle row, you get the bottom row (a toy sketch of this row/column rule appears after this list). The actual test items get progressively more difficult. The underlying pattern or rule can be quite hard to infer, and there are different versions of the test that vary in difficulty. However, because of its simple administration, this test has been used in many research studies. Performance on a test like this is fairly independent of education or culture. Scores are reasonable estimates of the g-factor, but they should not be mistaken for the g-factor itself (Gignac, 2015).

  2. Analogy tests also are very good estimators of g. For example, wing is to bird as window is to _____ (house). Or, helium is to balloon as yeast is to _____ (dough). Or, how about, Monet is to art as Mozart is to _______ (music). Analogy tests look like they could be easily influenced by education and culture so they have been dropped from many assessment test batteries despite the fact that, empirically, they are good estimates of g.

  3. The SAT, widely used for college admission, is an interesting example. Is it an achievement test, an aptitude test, or an intelligence test? Interestingly, the SAT originally was called the Scholastic Aptitude Test, was later renamed the Scholastic Assessment Test, and now the letters officially stand for nothing at all; it is simply the SAT. Achievement tests measure what you have learned. Aptitude tests measure what you might learn, especially in a specific area such as music or a foreign language. It turns out that the SAT, especially the overall total score, is a good estimator of g because the problems require reasoning (Frey & Detterman, 2004). Like IQ scores, SAT scores are normally distributed and best interpreted as percentiles. For example, people in the top 2% of the SAT distribution tend also to be in the top few percent of the IQ distribution. Sometimes this surprises people, but why should intelligence not be related to how much someone learns?
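
  Returning to the Raven’s example above, the row/column rule is easy to make concrete. The sketch below is a toy model only: each cell is represented as a set of hypothetical visual elements, “adding” two cells is taken to mean the union of their elements, and the answer choices are invented so that, by construction, number 7 completes the matrix, as in Figure 1.5.

```python
# Toy model of the "add one row or column to the next" rule from the Raven's
# example. Cells are sets of made-up visual elements; "adding" two cells means
# taking the union of their elements. This is not an actual test item.
matrix = [
    [{"|"},      {"-"},       {"|", "-"}],    # left cell + middle cell = right cell
    [{"/"},      {"\\"},      {"/", "\\"}],
    [{"|", "/"}, {"-", "\\"}, None],           # missing lower-right cell
]

choices = [{"|"}, {"-"}, {"/", "\\"}, {"|", "-"},
           {"\\"}, {"/"}, {"|", "-", "/", "\\"}, {"|", "/"}]

# Rule inferred from the two complete rows: right = left | middle.
expected = matrix[2][0] | matrix[2][1]
answer = choices.index(expected) + 1   # 1-based, matching the figure's numbering
print(answer)                          # 7, by construction of this toy example
```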

  Achievement, aptitude, and intelligence test scores are all related to each other. They are not independent. Remember, the g-factor is common to all tests of mental ability. It would be unusual if learning and intelligence were unrelated. So your performance on achievement tests is related to the general factor, just like IQ scores and aptitude test scores are related to g. It can be confusing because we all know examples of bright students who are underachievers, and students not so bright who are overachievers. However, such examples are the exception. In reality, there are some valid distinctions between achievement, aptitude, and intelligence testing. Each kind of test is useful in different settings, but they also are all related and the common factor is g.

  1.8 Myth: Intelligence Tests Are Biased or Meaningless

  Are intelligence test questions fair or do correct answers depend on an individual’s education, social class, or factors other than intelligence? A professor I had in graduate school used to say that most people define a fair question as one they can answer correctly. Is a question unfair or biased because you don’t know the answer?

  Just what do intelligence test scores actually mean? Low test scores result because a person doesn’t know the answers to many questions. There are many possible reasons for not knowing the answer to a question: never were taught it, never learned it on your own, learned it but forgot it a long time ago, learned it but forgot it during the test, were taught it but couldn’t learn it, didn’t know how to reason it out, or couldn’t reason it out. Most but not all of these reasons seem related to general intelligence in some way. High test scores, on the other hand, mean the person knows the answers. Does it matter how you came to know the answer? Is it better education, just good memory, or good test-taking skills, or good learning? The definitions of general intelligence combine all these things.

  Test bias has a specific meaning. If scores on a test consistently over- or underpredict actual performance, the test is biased. For example, if people in a particular group with high SAT scores consistently fail college courses, the test is overpredicting success and it is a biased test. Similarly, if people with low SAT scores consistently excel in college courses, the test is underpredicting success and it is biased. A test is not inherently biased just because it may show an average difference between two groups. A spatial ability test, for example, may have a different mean for men and women, but that does not make the test biased. If scores for men and for women predict spatial ability equally well, the test is not biased even if there is a mean difference. Note that a few cases of incorrect prediction do not constitute bias. For a test to be biased, there needs to be a consistent failure of prediction in the wrong direction. The lack of any prediction is not bias; it means the test is not valid.
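
  One common way to probe for bias in this sense is to check whether a single prediction line consistently over- or underpredicts outcomes for a particular group. Here is a minimal sketch of that idea, using simulated scores and outcomes rather than real data (all names and numbers are invented for illustration):

```python
import numpy as np

# Minimal sketch of a differential-prediction check with simulated data.
# scores: test scores; outcomes: later performance (e.g., course grades);
# group: a label for each person. Nothing here is real data.
rng = np.random.default_rng(0)
scores = rng.normal(100, 15, size=200)
outcomes = 2.0 + 0.01 * scores + rng.normal(0, 0.3, size=200)  # same relation for everyone
group = np.array(["A"] * 100 + ["B"] * 100)

# Fit one common regression line predicting the outcome from the test score.
slope, intercept = np.polyfit(scores, outcomes, 1)
residuals = outcomes - (intercept + slope * scores)

# If the test predicts equally well for both groups, mean residuals are near zero
# in each group. A consistently negative group mean would indicate overprediction
# (the test promises more than the group delivers); consistently positive,
# underprediction. Either pattern is what "test bias" means here.
for g in ("A", "B"):
    print(g, round(residuals[group == g].mean(), 3))
```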

  Decades of research on test bias show that this is not the case for IQ and other intelligence test scores (Jensen, 1980). Test scores do predict academic success irrespective of socioeconomic status (SES), age, sex, race, and other variables. Scores also predict many other important variables, including brain characteristics like regional cortical thickness or cerebral glucose metabolic rate, as we will detail in Chapters 3 and 4. If intelligence test scores were meaningless, they would not predict any other measures, especially quantifiable brain characteristics. In this context, “predict” also has a specific meaning. To say a test score predicts something refers only to a higher probability of that something occurring. No test is 100% accurate in its predictions, but the reason many psychologists consider intelligence tests a great achievement is that the scores are good predictors of success in many areas, and in some areas very good predictors. Before we review key research that is the basis for this conclusion, there is a fundamental problem to discuss.

  1.9 The Key Problem for “Measuring” Intelligence

  As briefly noted earlier in this chapter, the main problem with all intelligence test scores is that they are not on a ratio scale. This means there is no true zero, unlike measures for height and weight. For example, a person who weighs 200 pounds is literally twice the weight of a person who weighs 100 pounds because a pound is a standard unit on a scale with an actual zero point. Ten miles is twice the distance of five miles. This is not the case for IQ scores. A person with an IQ score of 140 is not literally twice as smart as a person with a score of 70. Even if you believe you have encountered at least one person with zero intelligence, zero is certainly not the case. For IQ, it’s the percentile that counts. Someone with an IQ of 140 is in the top 1% and someone with a score of 70 is in the bottom 2%. A person with an IQ of 130 is not 30% smarter than a person whose score is 100. The person with an IQ of 100 is at the 50th percentile and the person with an IQ of 130 is at the 98th percentile. No psychometric test score is based on a ratio scale. All IQ test scores have meaning only relative to other people.
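
  The percentile arithmetic above is easy to reproduce. A minimal sketch, assuming the conventional IQ scaling of mean 100 and standard deviation 15 and a normal distribution of scores:

```python
from statistics import NormalDist

# IQ scores are conventionally scaled to mean 100 and standard deviation 15.
iq_dist = NormalDist(mu=100, sigma=15)

def iq_percentile(iq: float) -> float:
    """Percentile rank of an IQ score under the normal-curve convention."""
    return 100 * iq_dist.cdf(iq)

# Ratios of IQ scores are meaningless; percentile standing is what the scores convey.
for iq in (70, 100, 130, 140):
    print(iq, round(iq_percentile(iq), 1))
# 70  -> about the 2nd percentile
# 100 -> the 50th percentile
# 130 -> about the 98th percentile
# 140 -> roughly the 99.6th percentile (within the top 1%)
```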

  Here’s the key point about this limitation of all intelligence test scores: they only estimate intelligence because we don’t yet know how to measure intelligence as a quantity the way we measure liquid in liters, weight in kilograms, or distance in feet (Haier, 2014). If you take an intelligence test when you are sick and unable to concentrate, your score may be a bad estimate of your intelligence. If you retake the test when you are well, your score is a better estimate. However, just because your score goes up does not mean your intelligence increased in the interval between the two tests. This becomes an issue in Chapter 5 when we talk about why claims of increasing intelligence are not yet meaningful.

  Despite this fundamental problem, researchers have made considerable progress. The main point is that measurement is required to do scientific research on intelligence. No one test may be a perfect measure of a single definition, but as research findings accumulate, both definition and measurement evolve and our understanding of the complexities increases. The empirical robustness of research on the g-factor essentially negates the myth that intelligence cannot be defined or measured for scientific study. It is this research base that allows neuroscience approaches to take intelligence research to the next level, as detailed in subsequent chapters. But first, we will summarize some compelling studies of intelligence test validity.

  1.10 Four Kinds of Predictive Validity for Intelligence Tests

  1.10.1. Learning Ability

  IQ scores predict general learning ability, which is central to academic and vocational success and to navigating the complexities of everyday life (Gottfredson, 2003b). For people with lower IQs around 70, simple learning typically is slow and requires concrete, step-by-step teaching with individual instruction. Learning complex material is quite difficult or not possible. People with IQs around 80–90 still require very explicit, structured individual instruction. When it comes to learning from written materials, IQs of at least 100 are usually required, and college-level learning usually works best at 115 and over. Higher IQs over 130 usually mean that more abstract material can be learned relatively quickly, and often independently.

  The US military has an IQ cutoff of around 90 for recruits, although it has been lowered a bit when recruitment is strained. Most graduate programs in the USA require tests like the Graduate Record Exam (GRE), the Medical College Admission Test (MCAT) for medical school, or the Law School Admission Test (LSAT) for law school. Cutoffs for these tests usually ensure that individuals with IQs over 120 are the most likely to be accepted, and the top programs have higher cutoffs that favor applicants in the top 1% or 2% of the normal distribution. This doesn’t mean that people with lower scores cannot complete these programs, but the higher-scoring students usually are more efficient, faster learners and more likely to finish the program successfully.

  Keep in mind, these are not perfect relationships and there are exceptions. The relationship between IQ scores and learning ability, however, is strong. Many people find this disturbing because it indicates a limitation on personal achievement that runs counter to a prevalent notion expressed in the phrase, “You can be anything you want to be if you work hard.” This is a restatement of another notion, “If you work hard you can be successful.” The latter may often be true because success comes in many forms, but the former is seldom true unless a caveat is added: “You can be anything you want to be if you work hard and have the ability.” Not everyone has the ability to do everything successfully, although, surprisingly, many students arrive at college determined to succeed but naïve about the role ability plays. Few students with low SAT math scores, for example, are successful majors in the physical sciences even if they are highly motivated and work hard.

  Given the powerful influence of g on educational success, it is surprising that intelligence is rarely considered explicitly in vigorous debates about why pre-college education appears to be failing many students. The best teachers cannot be expected to attain educational objectives beyond the capabilities of students. The best teachers can maximize a student’s learning, but the intelligence level of the student creates some limitations, although it is fashionable to assert that no student has inherent limitations. Many factors that limit educational achievement can be addressed, including poverty, poor motivation, lack of role models, family dysfunction and so on, but, so far, there is no evidence that alleviation of these factors increases g. As we will see in the next chapter, early childhood education has a number of beneficial effects, but increasing intelligence is not one of them. Imagine a pie chart with all the factors that influence a student’s school achievement. Surely the g-factor would deserve representation as a slice greater than zero. The strong correlations between intelligence test scores and academic achievement indicate the slice could represent a sizeable portion of the whole. In my view, this alone should justify more research on intelligence and how it develops.

  1.10.2. Job Performance

  In addition to academic success, IQ scores also predict job performance (Schmidt & Hunter, 1998, 2004), especially when jobs require complex skills. In fact, for complex jobs the g-factor predicts success more than any other cognitive ability (Gottfredson, 2003b). A large study conducted by the US Air Force, for example, found that g predicted virtually all the variance in pilot performance (Ree & Carretta, 1996; Ree & Earles, 1991). Most of us are not pilots, but in general, lower IQ is sufficient for jobs that require a minimum of complex, independent reasoning. Such jobs tend to follow specific routines, like assembling a simple product, food service, or nurse’s aide. IQs around 100 are necessary for more complex jobs like bank teller and police officer. Successful managers, teachers, accountants, and others in similar professions usually have IQs of at least 115. Professions like attorney, chemist, doctor, engineer, and business executive usually require higher IQs both to finish the necessary advanced schooling and to perform at a high level of complexity.

  Complex job performance is largely g-dependent, but of course there are other factors, including how well one deals with other people. This is the concept of emotional intelligence. Emotional intelligence, that is, a person’s social and personality skills, may give that person an edge over someone of equal g who lacks people skills. This does not diminish the importance of the g-factor. Typically, emotional intelligence can compensate for a lack of job-appropriate g for only so long, if at all.
  As with academic success, intelligence/job performance relationships are general trends and there are always exceptions. However, from a practical point of view, a person with an IQ under 100 is not very likely to complete medical school or engineering school. Of course, it’s possible, especially if the IQ score is not a good estimate of intelligence for that person, or if that person has a very specific ability like memorization to compensate for low or average general intelligence. Similarly, a high score does not guarantee success. This is why an IQ score by itself is not usually used to make education or employment decisions. IQ is usually considered in the context of other information, but a low score typically is a red flag in many areas that require complex, independent reasoning.

  Here’s another point about predicting job success. Some researchers suggest that expertise in any area requires at least 10,000 hours of practice. That’s 1,250 8-hour days, or about 3.4 years. This implies that expertise can be achieved in any field with this level of practice irrespective of intelligence or talent. Studies of chess grandmasters, for example, report the group average IQ is about 100. This suggests that becoming a grandmaster may depend more on practice of a specific ability like spatial memory than on general intelligence. Grandmasters may actually have a savant-like spatial memory, but the idea of a chess grandmaster being a super all-purpose giant intellect is not necessarily correct. Many studies refute the idea that 10,000 hours of practice can lead to expertise if there is no pre-existing talent to build on (Detterman, 2014; Ericsson, 2014; Grabner, 2014; Grabner et al., 2007; Plomin et al., 2014a, 2014b).

 
