Gwern, this whole subject is sloppy. It is trivially simple to improve IQ scores by practicing the types of questions IQ tests ask. There is no reliable way to measure IQ, therefore it's not clear that IQ is measuring anything reliably. What, exactly, are you talking about then?
It is trivially simple to show that your objection is worthless. 'A thermometer measures temperature when placed in your armpit. But I can take a thermometer and put it in hot water and the temperature goes up! There is no reliable way to measure temperature, therefore, it's not clear the temperature is measuring anything reliably. What, exactly, are you talking about then?'
IQ tests are notoriously hard to make. And they don't measure IQ, so much as what the tester considers IQ.
An amusing story: there was an IQ comparison between rural and urban areas, and the finding was that people in rural areas did worse on IQ tests. But one psychologist thought that this couldn't be true.
So he went back to the tests and realized that the IQ tests given to the rural and urban areas were the same. The people from the rural areas did worse on the written parts of the exams because the rural areas lacked schools and children started reading much later. So he revised the tests, removing most of the items that depended on reading and on the kinds of thinking rural children had less exposure to, and the gap shrank.
Conclusion: the old IQ test didn't measure intelligence, unless you can increase intelligence by going to a better school, in which case what's being measured isn't intelligence but knowledge.
There's another famous, possibly apocryphal, story about an IQ test with the question:
Which of the following doesn't belong:
a) Basketball
b) Polo
c) Hockey
d) Billiards
The answer is obviously Basketball, since it's the only sport that doesn't use a stick to hit anything. Except the answer is obviously Polo, since it's the only one with horses. Except the answer is obviously Hockey, since it's the only one with a puck instead of a ball. Except the answer is obviously Billiards, since it's the only one without a team.
In case you're curious about the RIGHT answer, the test found that Canadians and Americans living in colder climates were slightly smarter than average.
It's a funny story, but they screwed up the riddle. You need to know the answer to a yes/no question that this liar/truth-teller can answer. For example, which road should I take to get to the castle? You only have one question, so you can't ask are you a tree-frog, because then you wouldn't know which road to take.
> IQ tests are notoriously hard to make. And they don't measure IQ, so much as what the tester considers IQ.
They are notoriously hard to make, yes, but psychometricians have put in a ton of effort to make culture-fair tests and investigate claimed biases, so they're pretty good these days. We're a very long way from the WWI Alpha test.
It's a well established fact that thermometers measure temperature. It doesn't matter if it's a person's temperature or an object's, it is still going to measure temperature.
It is a highly contested assertion that IQ tests accurately measure I.Q. It's completely possible that if you take a reasonably smart person and make them study for a specific type of I.Q. test, they will score higher than they would have otherwise, even though their intellectual ability would remain unchanged over a broader spectrum of skills than the test involves. Therefore, it may not be a reliable measurement of I.Q.
Your example makes no sense whatsoever, and you should have already known that before you typed it out.
> It is a highly contested assertion that IQ tests accurately measure I.Q. It's completely possible that if you take a reasonably smart person and make them study for a specific type of I.Q. test, they will score higher than they would have otherwise, even though their intellectual ability would remain unchanged over a broader spectrum of skills than the test involves. Therefore, it may not be a reliable measurement of I.Q.
It is not contested in the area in question; psychologists routinely use IQ tests without a qualm, and it is a consensus in the field that they are meaningful. See for example the consensus paper on IQ released in the wake of _The Bell Curve_ controversy, or look at more recent review articles like Nisbett's "Intelligence: new findings and theoretical developments". Whatever the layman or politicians may think, the debate is over: if you make a good IQ test, it will accurately estimate general performance across all sorts of cognitive tasks. If you destroy the accuracy of a given IQ test by training and then measure performance on the original variety of cognitive tasks, this will immediately show up in the factorization and demonstrate how the IQ test's accuracy has been destroyed, and IQ tests built from different kinds of questions will not show the spurious increase, just as a second thermometer placed in the person's armpit would show a different reading from the one stuck in the mug of hot water.
> Your example makes no sense whatsoever, and you should have already known that before you typed it out.
My example is exactly analogous to the argument that was made.
Your example isn't remotely close to being analogous to the argument that was made. If you stick a thermometer in hot water, it is measuring the water's temperature accurately. If an individual studies for a specific type of IQ test, that particular IQ test fails to accurately measure his general intelligence.
The IQ test is generally accepted because there aren't any practical applications where a near-perfect measurement of a person's intelligence is going to be a matter of life and death; the same cannot be said for the measurement of temperature.
Basically, it doesn't matter that an IQ test isn't perfect. It is accepted because it does a merely adequate job in most cases, and even if it happened to be manipulated by someone who studied for a specific version of the test, no significant damage would be done to anything or anyone.
Just to be clear, I don't have a grudge against IQ tests, in fact, I scored in the 140s on a test administered by a psychologist. I'm just saying that they are highly susceptible to manipulation (Don't worry, I didn't even know I was taking it until I arrived at his office).
The entire field of psychology is in its infancy, it is one of the least developed of all sciences. We are making quite a bit of progress, but there's still a long way to go. There are a lot of really strange ideas that are accepted by experts that are going to be proven wrong as soon as our understanding of the mind matures sufficiently.
> Your example isn't remotely close to being analogous to the argument that was made. If you stick a thermometer in hot water, it is measuring the water's temperature accurately. If an individual studies for a specific type of IQ test, that particular IQ test fails to accurately measure his general intelligence.
Yes, it is. IQ tests are designed to reliably measure intelligence under certain reasonable, but not adversarial or universal, conditions. Just like a reading off a thermometer is a reliable way of measuring body temperatures under certain reasonable, non-adversarial, non-universal conditions. Memorizing the answers or training by taking the test repeatedly is akin to a kid pretending to be sick and dunking the thermometer in his hot chocolate to get out of school. It's still reporting something, but not what you think it is.
> The IQ test is generally accepted because there aren't any practical applications where a near-perfect measurement of a person's intelligence is going to be a matter of life and death,
Indeed. Your standard IQ test like a RAPM is not used in adversarial contexts (sadly, 'publish or perish' increasingly means that research is an adversarial context as well), and failure to understand this seems to be leading to a lot of confusion in these comments. If you want to handle even adversarial contexts, you need a procedure way more complex & costly than a 10 minute pen and paper RAPM - you need something like a proctored SAT or GRE.
> The entire field of psychology is in its infancy, it is one of the least developed of all sciences. We are making quite a bit of progress, but there's still a long way to go.
Intelligence and measuring it via IQ tests is some of the oldest and most conceptually & statistically deep and commonly-used parts of psychology, going back a century at this point, which is something few parts of psychology can claim. I wouldn't hold my breath waiting for psychology to abandon it because it's in 'its infancy' and 'is one of the least developed of all sciences'. At this point, it's roughly like expecting the spacing effect to go away.
Reminds me of the surefire way to totally cheat on your tests and always get a great score and never, ever get caught. All you have to do is spend a bunch of time beforehand going over all the material until you understand it.
Exactly: my logic professor always used to say that studying logic improves your IQ (btw, so does programming). Well, it does, because a bunch of the questions on an IQ test are logical riddles. Math and vocabulary have the same effect.
It was always my understanding that "good" IQ tests should have almost zero variance based on your vocabulary, or hell, even your mathematical abilities. A "good" IQ test simply tests your logic faculties.
Example "good" questions would show a sequence of 6 shapes and ask you what the next one in the series should look like. or perhaps a question like "given that all boogles are quinks and some quinks are flets, are all flets boogles?".
A question on a "bullshit" IQ test would be: "'Daring' is an appropriate adjective for: A. Rocks. B. Planets. C. Humans.". Another would be "x^2 = 36, solve for x"
Now, I know that some standardized tests (ACT, SAT, ITBS) will sometimes try to estimate your IQ by correlating your scores, but that is fraught with problems and shouldn't be taken as gospel.
> A "good" IQ test simply tests your logic facilities.
You can dramatically improve your logic faculties by taking a course in logic. Your boogles, quinks, and flets question is trivial if you know even rudimentary set theory.
Not even that - it's the kind of question that becomes dramatically easier to answer if you know how to trick your brain into thinking it's easy. "Given that all cats are animals and some animals are mice, are all mice cats?"
It's actually a harder question if you phrase it with cats, animals, and mice! The correct answer is "possibly" (either yes or no), but the cats, animals, and mice question predisposes you to answer with "no".
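If you want to see why "possibly" is the right answer rather than take it on faith, here's a quick brute-force sketch (mine, in Python, using the puzzle's made-up names) that enumerates every way three objects could be boogles/quinks/flets and checks which verdicts survive the premises:

```python
# Brute-force the boogles/quinks/flets question over a tiny universe to confirm
# that both "yes" and "no" are consistent with the premises.
from itertools import product

UNIVERSE = range(3)  # three objects are enough to find models either way

verdicts = set()
# Each object independently is or isn't a boogle, a quink, a flet.
for membership in product(product([False, True], repeat=3), repeat=len(UNIVERSE)):
    boogle = [m[0] for m in membership]
    quink = [m[1] for m in membership]
    flet = [m[2] for m in membership]

    all_boogles_are_quinks = all(quink[i] for i in UNIVERSE if boogle[i])
    some_quinks_are_flets = any(quink[i] and flet[i] for i in UNIVERSE)
    if not (all_boogles_are_quinks and some_quinks_are_flets):
        continue  # premises don't hold in this model, skip it

    verdicts.add(all(boogle[i] for i in UNIVERSE if flet[i]))

print(verdicts)  # {False, True}: the premises allow both answers
```

Both verdicts show up, so the premises genuinely underdetermine the conclusion.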
For me, that kind of comes under the category of "know how to answer the question": that is, "remember that you have produced an example, which does not constitute proof but merely evidence". You're right that it's still not easy to distinguish between "definitely yes" and "not {definitely no}", though I do still find it easier with nouns-which-I-know than made-up-words, I think. I should have put in an attempt to make the statement true - maybe "Given that all cats are cats, and some cats are cats, are all cats cats?"
I think logical reasoning can be improved by training too.
In my first year in CS at university, we had a logic course. I was definitely better at solving those kinds of problems after the course. They usually become easy once you formalize them, and you start to recognize patterns once you've seen enough of them.
The WAIS-III was multimodal when I took it. Most of it was as you described, but some of it was general knowledge testing as well.
Overall it was a fun experience since I like taking tests, but I felt it didn't accurately judge anything, regardless of my satisfaction with the result.
Indeed. Studying vocab is even the example I've used for years on the dual n-back mailing list to explain how a previously valid measure can be rendered meaningless by training: just memorize vocab and boost your scores on the vocab subtest. Of course, if you took a bunch of people who studied SAT/GRE words and looked at the factorization on a battery of tests, you'd find that the g-loading of the vocab subtest had gone way down, but that can't be done in this sort of case and so all you're left with is a meaningless measure.
This was in relation to Moody's critique of the original n-back study: he thought that all the n-back training, which involved tracking moving squares in a grid, was effectively training you to be able to manipulate shapes on a grid better than you would normally, and destroying part of the validity of the matrix tests. And indeed, in the later studies which used more than just matrix tests, the n-back training tends to show more limited gains. It's not officially out yet, but an example of this would be "Adaptive n-back training does not improve fluid intelligence at the construct level; gains on individual tests suggest training may enhance visuospatial processing", Colom et al 2013.
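To make the g-loading point concrete, here's a toy simulation (my own sketch, not anything from the mailing list; the subtest names and numbers are invented) that builds four subtests off a common latent factor, then lets most of the vocab score come from memorized word lists instead, and watches its loading on the first principal component fall:

```python
# Toy demonstration: training a single subtest destroys its g-loading.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
g = rng.normal(size=n)  # latent general factor

def subtest(loading):
    # score = loading * g + independent noise, scaled to unit variance
    return loading * g + np.sqrt(1 - loading**2) * rng.normal(size=n)

tests = {name: subtest(0.7) for name in ["vocab", "matrices", "arithmetic", "digit_span"]}

def first_pc_loadings(score_columns):
    X = np.column_stack(list(score_columns))
    corr = np.corrcoef(X, rowvar=False)
    vals, vecs = np.linalg.eigh(corr)                # eigenvalues in ascending order
    return np.abs(vecs[:, -1] * np.sqrt(vals[-1]))   # loadings on the first component

print("before:", dict(zip(tests, first_pc_loadings(tests.values()).round(2).tolist())))

# Everyone crams SAT/GRE word lists: most of the vocab score is now unrelated to g.
tests["vocab"] = 0.5 * tests["vocab"] + 2.0 * rng.normal(size=n)
print("after: ", dict(zip(tests, first_pc_loadings(tests.values()).round(2).tolist())))
```

The vocab loading drops sharply while the other subtests barely move, which is the pattern described above showing up in the factorization.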
My mother told me the story of a girl whose IQ she increased by 30 points in one day!
After observing the first IQ test, my mother recommended that the severely abused girl be tested by a woman instead of a man. Voila! Magic! The IQ increased.
(Both IQ tests would have been administered at Stanford back in the 1950s.)
> if you are a bright healthy young man or woman gifted with an IQ in the 130s, there is nothing you can do to increase your underlying intelligence 20 points.
That is to say, you want to make a significant improvement despite already doing really well as a baseline. The higher the baseline, the less practicing IQ questions helps and the harder each additional point is to gain.
I think it's more malleable than people give it credit for. Over time my performance on IQ tests changed from being much higher on math to much higher on verbal. (I largely stopped reading for pleasure in high school, and didn't pick it up again until after college)
I do agree with one point in the article - there are diminishing returns. I've heard Warren Buffett say that after someone is 130 IQ, it all comes down to character. I'm inclined to agree. If I interview someone with a 130 IQ, I'd much rather they have integrity and good social skills than another 5 IQ points.
Bench presses are measuring your ability to bench press. IQ tests are measuring your ability to do IQ tests (and not your IQ).
Edit: just to clarify, the fact that you can train to improve an IQ score means that it is not a very good measure of innate ability. You can't really talk about "having an IQ", all you can say is that you got 130 on your last IQ test, a bit like how you might tell someone what you got for your SATs.
If IQ doesn't measure anything then why is it correlated with job performance across a wide range of jobs (and pretty much the only property for which this is the case)?
You remind me of an issue that sort of weighs at the back of my mind.
Scientists hate the Myers-Briggs Type Indicator, mostly for perfectly good reasons. But I see a lot of claims that, with the exception of the introversion-extraversion axis, which is widely accepted, its dimensions are uninformative to the point of worthlessness. I've also read that along its T-F axis, 75% of men score T and 75% of women score F. To me, that seems to indicate that it's measuring something that's really there (if the T-F axis were meaningless, shouldn't all groups split 50-50?).
I guess in summary, a lot of people don't seem to agree that just because you can extract information from a test item, the test item must be informative. I'm with you though.
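For what it's worth, you can put a rough number on how far from "meaningless" that split is. A back-of-the-envelope sketch, taking the quoted 75%/25% figures at face value and assuming (my assumption, not the comment's) an equal number of men and women in the sample:

```python
# Phi coefficient between sex and T/F implied by the quoted 75%/25% split.
import math

p_t_given_man, p_t_given_woman = 0.75, 0.25  # figures quoted above
p_man = 0.5                                  # assumed 50/50 sample

# 2x2 cell probabilities
a = p_man * p_t_given_man                # man & T
b = p_man * (1 - p_t_given_man)          # man & F
c = (1 - p_man) * p_t_given_woman        # woman & T
d = (1 - p_man) * (1 - p_t_given_woman)  # woman & F

phi = (a * d - b * c) / math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
print(round(phi, 2))  # 0.5
```

A phi of 0.5 is a long way from the 0 you'd expect if the T-F axis were pure noise, which is the point: a split that lopsided means the scale is picking up something.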
Agreed, I don't think the conclusion is reliable because we have yet to understand what exactly IQ is. If you don't know what you're measuring, how do you know IQ can or can't be improved?
My layman's prediction is we will be able to measure IQ in the future by some means of scanning the brain's process of building connections and nerves. An aptitude test is simply measuring the side effects of IQ.
You're right that it doesn't tell the whole story, but one of the major findings in the last century of psychology is that the one-number summary actually contains a surprising amount of information.
"Mental tests may be designed to measure different aspects of cognition. Specific domains assessed by tests include mathematical skill, verbal fluency, spatial visualization, and memory, among others. However, individuals who excel at one type of test tend to excel at other kinds of tests, too, while those who do poorly on one test tend to do so on all tests, regardless of the tests' contents."
"[G factor] is a variable that summarizes positive correlations among different cognitive tasks, reflecting the fact that an individual's performance at one type of cognitive task tends to be comparable to his or her performance at other kinds of cognitive tasks."
> My layman's prediction is we will be able to measure IQ in the future by some means of scanning the brain's process of building connections and nerves.
There's some interesting imaging research over the past decade or two suggesting that if you want to reify IQ as something physical, it might wind up being something along the lines of global connectivity - how well and efficiently distant brain regions can communicate and coordinate activities.
Creativity is so hard to quantify or even describe that I don't think anyone can say anything meaningful about what it's useful for or what its relationship to evolution might be.
Just because they are trainable doesn't mean they aren't measuring anything or aren't useful. Besides, his arguments aren't for increasing IQ test scores, but for increasing intelligence in general, IQ being a rough measure of it.