This is a Useful Notes page.

IQ scores and IQ testing do not work the way they are depicted in most fiction. Since a couple of tropes deal with the two most common (and unfortunately flawed) portrayals, here's the lowdown on how IQ actually works.

Accuracy

There is still much debate about what an IQ test actually measures: whether it is a good measure of intelligence, a measure of only one part or type of intelligence, or mostly a measure of test-taking skills. TV assumes that even if IQ tests don't measure intelligence 100% accurately, they're at least measuring something basic and unchanging about a person. Research suggests not: intelligence seems to change over a person's lifetime, and certain types of training can help people boost their IQ. Additionally, social and psychological factors can skew a score: for example, some studies have suggested that African-Americans score better on IQ tests when they think their scores will only be compared to those of other African-Americans. Lastly, IQ scores have been subject to the Flynn effect: a slow but across-the-board rise over the past half-century (although it may now be dying out). To keep 100 as the mean, the tests themselves have to be made harder over time.

IQ tests are intended to measure a number of areas of intelligence, not just "can do mental arithmetic very fast". Mental arithmetic is a skill that (like, say, juggling) can be taught, and it has more to do with memorized tricks and log tables than with innate intelligence; most tests emphasize pattern recognition over math skills. There is also a hotly debated theory that there are many kinds of specialized intelligence, some of which an IQ test may not even bother assessing. IQ tests may also require skills that people with brain injuries have lost but which don't actually affect basic intelligence, such as the ability to do math in one's head or to understand perspective.

This takes us to the last point: with any tool--but especially with psychological tests--you need to be aware of what the thing is trying to measure and whether it actually accomplishes what it claims to. The original IQ test, the Stanford-Binet, was designed simply to compare the grade of school you should be in with the grade you were actually in, with an eye towards identifying kids with developmental disabilities. In other words, your Stanford-Binet IQ number gets smaller every year as you progress through numerically larger grades, and it stops being meaningful the moment you get your diploma. Obviously, a test that does not use school grades as a yardstick (e.g. the Wechsler Adult Intelligence Scale, or more recent versions of the Stanford-Binet) will not have this problem... but it has limits of its own. It turns out you can't give the WAIS to a child and expect meaningful results, and there is a Wechsler Intelligence Scale for Children for that exact reason. Psychological tests are designed to be used under very specific circumstances, and if you move outside them, the test and its results may not be applicable at all anymore.

Range

IQ test scores follow a normal (bell curve) distribution--meaning that, if an IQ test is normed at 100 and has a standard deviation of 15 points, about 68% of the population has an IQ between 85 and 115 (one standard deviation from the norm), and fully 95% of people are between 70 and 130. Mensa, the best-known international society for people with high IQs, requires a score at or above the 98th percentile: a 132 on the (older, SD 16) Stanford-Binet or about a 130 on the Wechsler tests. IQs over 145 (or under 55) occur in roughly one person in 750; IQs over 160 (or under 40), four standard deviations from the norm, in about one in thirty thousand. That far from the middle, test makers have trouble finding enough people to norm the tests and produce a reliable sample. Even then, the same person would likely achieve different scores each time due to differences in the tests, the specific questions, and the conditions under which the test was taken.
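For the curious, those figures can be sanity-checked in a few lines of Python. This is a minimal sketch that assumes an idealized normal distribution with mean 100 and standard deviation 15, which real norming samples only approximate:

```python
# Back-of-the-envelope check of the percentages above, assuming an
# idealized normal distribution with mean 100 and standard deviation 15.
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)

within_one_sd = iq.cdf(115) - iq.cdf(85)   # ~68% of people between 85 and 115
within_two_sd = iq.cdf(130) - iq.cdf(70)   # ~95% between 70 and 130
above_145 = 1 - iq.cdf(145)                # ~0.13%, i.e. roughly 1 in 740
above_160 = 1 - iq.cdf(160)                # ~0.003%, i.e. roughly 1 in 31,600
top_two_percent = iq.inv_cdf(0.98)         # ~130.8 on this scale (the 98th percentile)

print(f"85-115: {within_one_sd:.1%}")
print(f"70-130: {within_two_sd:.1%}")
print(f"over 145: about 1 in {1 / above_145:,.0f}")
print(f"over 160: about 1 in {1 / above_160:,.0f}")
print(f"98th percentile: {top_two_percent:.1f}")
```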

IQs were originally given as ratios: (mental age / chronological age) * 100 = IQ. Thus a 6-year-old who scored as well as the average 9-year-old would have an IQ of 150 (9 / 6 * 100). This system matches contemporary IQ scores quite well--until you get more than about three standard deviations from the mean, after which point ratio IQ scores stop being distributed on a normal curve. So if IQ is meant in this original sense, it is at least mathematically possible for a character to have an IQ of 300: they would simply have had to function at the level of a twelve-year-old at age four. At any rate, IQs haven't been ratio-based for sixty years.
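As a worked example of that original formula (a sketch of the long-obsolete ratio method, not of how any modern test is scored):

```python
# Old-style ratio IQ: (mental age / chronological age) * 100.
# Illustrative only; no modern test is scored this way.
def ratio_iq(mental_age: float, chronological_age: float) -> float:
    return mental_age / chronological_age * 100

print(ratio_iq(9, 6))    # 150.0: the six-year-old testing like a nine-year-old
print(ratio_iq(12, 4))   # 300.0: the only route to an "IQ of 300", and only on this scale
```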

Precision

A test that can successfully classify 95% of the population will be adequate for nearly all conceivable uses... but eventually it will run into a ceiling. If two people get a perfect score, the test provides no way to tell which of them is more intelligent. A more difficult test must be used, and this is not commonly done. Mensa and other high-IQ societies are among the few groups interested in quantifying scores at the extreme high end of the range. Since a special test must be used to give an accurate and precise score for these individuals, and since most people never need one, it is common for highly intelligent people not to know their exact IQ beyond a fuzzy range.

Similarly, if two people both get every single question wrong (or only as many right as random guessing would produce), the test does not tell you which one is less intelligent. Most standard IQ tests only go down to an IQ of 40 or 50. Unlike the high end, however, there is plenty of interest in specialized tests that can measure the lower ranges, since this information matters for special education programs and other services. As a result, several tests are designed specifically for IQs under 50. There are also adaptive behavior tests, which assess practical self-care skills and can be used to estimate the IQ of someone who is difficult to test.

The Score

People in TV shows never qualify their IQ scores in any way. This is akin to saying you got a 36 on "the college admissions test" without saying which one: a 36 could be a perfect score, a decent but far from perfect score, or below the lowest possible score, depending on whether you mean the ACT, the International Baccalaureate, or the SAT, respectively. Just giving a number assumes that all IQ tests have the same standard deviation and measure the same kinds of brainpower. They don't. Tests have standard deviations ranging from 10 to 20 or higher; for example, a 132 on the Stanford-Binet is equivalent to a 148 on the Cattell.

This may be partially justified by the use of the term IQ itself, which usually implies the bell-curve pattern described above (mean 100, standard deviation 15). What is certainly true is that raw scores from different tests cannot be compared in any meaningful way; only scores transformed onto a common scale (the same mean and standard deviation) even come close to being comparable. And since different tests are normed on different populations, differences in those norming samples can make even such transformed values only roughly compatible.
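To make that concrete, here is a minimal sketch of such a transformation: it converts a score from one deviation-IQ scale to another by matching z-scores, i.e. percentile ranks under the bell curve. The function and table are illustrative; the standard deviations listed are the commonly cited ones, and real tests vary by edition.

```python
# Convert a deviation IQ from one test's scale to another by matching
# z-scores (distance from the mean in standard deviations).
# Commonly cited standard deviations; the Stanford-Binet figure is for
# older editions, and actual tests vary.
SCALE_SD = {"wechsler": 15, "stanford_binet": 16, "cattell": 24}

def convert_iq(score: float, from_test: str, to_test: str) -> float:
    z = (score - 100) / SCALE_SD[from_test]   # standard deviations above the mean
    return 100 + z * SCALE_SD[to_test]        # same percentile on the target scale

print(convert_iq(132, "stanford_binet", "cattell"))   # 148.0
print(convert_iq(132, "stanford_binet", "wechsler"))  # 130.0
```

Both outputs describe the same person at the same percentile, which is exactly why a bare "148 IQ" with no test named tells you very little.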