IQ to INT equivalent

Xeriar said:
So aptitude has little or nothing to do with intelligence in your view?

I'm going to assume you were responding to me.

No, but it is only partially related to IQ. I have a good example in my life simply by comparing two of my sisters. One was a lot smarter than the other. If you gave both of them a problem to solve involving something they both knew, the smarter one would not only solve it faster but also see more beyond the problem. On the other hand, present new material for them to learn, and the other sister would learn it not only faster but much more completely than the smarter one.

This ability to learn new material is closer to what D&D Int represents than pure IQ. Look at what it is tied to. Skill-wise, most Int-based skills are ones where study and learning ability matter more than other abilities do. Skill points are used to learn things, not do them. For wizards, Int helps them learn more spells and handle magic better (bonus spells from Int).
 


Umbran said:
What's in a name? Well, in this case, the names stuck. So, we are left with an old, outmoded idea we cannot shake - that the correlation between test scores says something about innate ability, rather than about environment and upbringing.

Once again, many of the biggest problems people have with IQ tests stem from misunderstandings by laymen and the belief that major problems with the earliest tests still exist in current tests.

To a certain degree, it's like criticizing modern physics because earlier physicists believed in ether and phlogiston, or saying modern neurosurgery is quackery because in the past neurosurgeons did frontal lobotomies.

IQ tests do not in any way measure "innate genetic ability" in isolation from environment and upbringing. That's not a fault, it's a simple reality. A person's abilities are always influenced by environment. Would Mozart have been a fantastic composer if he hadn't been raised with music? Would Olympic runners have been such great athletes if they had been malnourished as children? Would great scientists have been so great if their mothers drank alcohol while they were in the womb, or if they went to bad schools? Not likely. That's not to say that there isn't a significant genetic component to IQ tests, just that in any individual person it's impossible to untangle the effects of genes and environment.

There is also a lot more to IQ tests than a single number. As others have mentioned, test reports are typically several pages long. (Good) IQ tests are composed of multiple subtests, each measuring performance on different tasks. Sometimes a person performs consistently on all these subtests, in which case the total IQ number is a reasonable approximation of their abilities. However, when subtest scores vary significantly, that provides insight into that person's areas of strength and weakness. Those who would criticize the fact that a person scoring 115 on both the verbal and non-verbal performance subtests has the same overall IQ score (115) as a person scoring 100 verbal and 130 performance are missing the point. In the latter case, the overall IQ score is essentially meaningless, and should not be given any weight.

Situational and historical factors must also be considered when interpreting IQ tests. The differences in scores between different cultures and regions that others have mentioned do exist, though they are much smaller than they used to be. When interpreting the results of IQ tests, these need to be taken into account, not ignored as if they don't exist.

As for the statistical basis of IQ tests, yes, many of the earlier ones were seriously flawed. So what? Modern tests have a much sounder statistical foundation.

Overall, Gould seriously overstates his case (as usual - Gould is a propagandist arguing for a specific position, rather than a scientific reviewer aiming to give a balanced view of all the positions). Gould's overemphasis on 'G' as a concept ignores the fact that, in practice, IQ tests are used for a lot more than just the overall IQ score. It's true that sometimes a full-scale IQ score is valid for a person, and sometimes it's not (as I described above), and any qualified tester would tell you that. Arguing that the variable validity of G as a measure is a fundamental flaw with testing seriously misses the point.
 

Michael Tree said:
Overall, Gould seriously overstates his case (as usual - Gould is a propagandist arguing for a specific position, rather than a scientific reviewer aiming to give a balanced view of all the positions). Gould's overemphasis on 'G' as a concept ignores the fact that, in practice, IQ tests are used for a lot more than just the overall IQ score.

Well, yes and no.

First, by today's standards Gould's position may be overstated, but his essay "Jensen's Last Stand" was written around 1979, I think, and The Mismeasure of Man came out back in 1981. His arguments stand against the concepts of IQ from 20+ years ago. And he was fighting against decades of misunderstanding, and speaking to laymen. All of those generally lead to overstatement for the sake of clarity and focus.

Mind you, the current public's conception of IQ tests is still older than Gould's arguments. Maybe the tests have moved forwards, but education about the tests has not. People still think in the old terms of a single test yielding a single number with a single meaning. If this weren't the case, you wouldn't have had to explain what modern tests are like. I see no problem with using a 20-year-old argument to dispel a 40+-year-old misconception.

In addition, as far as I've been able to discover, the tests are still only a predictor of performance on tests. They may be more sophisticated, but you cannot actually test "intelligence" until you can define intelligence, which psychometricians have yet to do to anyone's satisfaction. Unless you fall back on the old "intelligence is that which intelligence tests measure", but that is circular.

And lastly - I've yet to see a person who speaks on IQ tests who isn't a propagandist. There are few people who bother to speak on the quantification of human beings who don't have some axe to grind.
 

For those who were wondering:

Rolling all six ability scores with straight 3d6, the chance of your character having exactly

1 18 : 1:36.85
2 18s: 1:3168
3 18s: 1:510948
4 18s: 1:146.5 million
5 18s: 1:78.7 billion
6 18s: 1:102 trillion

Obviously, with 4d6 drop one, your odds are much better.

[Edit: Crap, forgot the factorial in the denominator.]
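A minimal Python sketch, in case anyone wants to check or extend these figures; it assumes six independent straight-3d6 ability scores and counts 18s with a binomial distribution.

Code:
from math import comb

# Chance that a single 3d6 roll totals 18 (all three dice showing 6): 1 in 216.
P18 = 1 / 216

# Chance of exactly k 18s among six independent 3d6 ability scores.
# comb(6, k) is the binomial coefficient - the "factorial in the denominator"
# mentioned in the edit note above.
for k in range(1, 7):
    p = comb(6, k) * P18**k * (1 - P18)**(6 - k)
    print(f"{k} 18(s): 1 in {1/p:,.2f}")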
 


Dreaddisease said:
Can someone post the relationship between the two.

There's always debate, but the best answer is simply Int = IQ/10. That was the clear implication of multiple passages by Gygax in the original AD&D books. It was also specified that way in an article back in Dragon Magazine #8.

Other, more complicated options generally have various problems, but they always have the coincidental side benefit of portraying D&D players with much higher Int scores than the simple answer does. :)

More on the topic: www.superdan.net/dndmisc/int_iq.html
 

Re: Re: IQ to INT equivalent

dcollins said:

Well, this is an interesting article, covering some of the things mentioned in this thread. But its conclusions are based on outdated and flawed arguments.

First, the author rejects the bell-curve theory because an IQ of 228 (the world record) would be Int 34 by a bell-curve conversion but only 22 by the 10x conversion, and 34 is way beyond the 3d6 range of Int scores. Well, in 3E, ability scores aren't quite so limited. Race and age modifiers apply, as well as level-up bonuses. An Epic Int score of 34 isn't at all strange.

from the article cited above
The simplest response to the "compare the bell curves" theory is that there’s no necessity for the fantasy population of a D&D world to exhibit the same deviation (or mean) in intelligence as real-world humans. In fact, there’s no strict requirement that the fantasy population actually matches the bell curve of a 3d6 random variable. It may in fact be a good idea to use a character-generation method that creates interesting, outside-the-norm, exceptional characters with greater frequency than exists in the actual population.

No, no, NO. If you adhere to an Int = IQ/10 conversion, you'd have an extraordinary number of super-geniuses and retarded people in your campaign world. This does not sound like ANY campaign world I've ever heard of. You DO want a more-or-less real-world representation of intelligence levels.

As I said in my last post, if IQ scores are just a rough approximation to you, then why bring in IQ scores at all? You're not adding realism by using an IQ/10 conversion, you're losing realism.
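To put rough numbers on that, here is a quick Python sketch of the mismatch; the assumption that real-world IQ is normally distributed with mean 100 and standard deviation 15 is mine (some tests use 16, which changes the exact figure but not the conclusion).

Code:
from statistics import NormalDist

p_int18 = 1 / 216                            # Int 18 on straight 3d6
p_iq180 = 1 - NormalDist(100, 15).cdf(180)   # IQ 180 or higher in the real world

print(f"Int 18 on 3d6:     1 in {1/p_int18:.0f}")
print(f"IQ 180+ (real IQ): 1 in {1/p_iq180:,.0f}")
# Under Int = IQ/10, the campaign world produces IQ-180 super-geniuses many
# orders of magnitude more often than the real world does.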
 

Re: Re: IQ to INT equivalent

dcollins said:


There's always debate, but the best answer is simply Int = IQ/10. That was the clear implication of multiple passages by Gygax in the original AD&D books. It was also specified that way in an article back in Dragon Magazine #8.

Other, more complicated options generally have various problems, but they always have the coincidental side benefit of portraying D&D players with much higher Int scores than the simple answer does. :)


Aside from the minor point that it's statistically wrong? :-p

Nor is the word of Gary Gygax law.

IQ 180 is well beyond genius by any measure. One in ten million have that kind of score, not one in 216.

It also causes roleplaying issues when you think of it as being that extreme. Have you ever met someone that smart?

Chances are pretty good that you don't know anyone who knows anyone of that calibre, even.

You just can't comprehend it that way. At the very least, a conversion of about 5 IQ points per point of Int is not only statistically accurate, but it allows for a much wider variety of scores to be played.
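Here is what that looks like as a rough Python sketch; the distribution parameters are my assumptions (IQ normal with mean 100 and SD 15; 3d6 with mean 10.5 and SD of about 2.96), not something from the rulebooks.

Code:
IQ_MEAN, IQ_SD = 100.0, 15.0
INT_MEAN, INT_SD = 10.5, 2.958      # standard deviation of 3d6 = sqrt(3 * 35/12)

def int_from_iq_linear(iq):
    # The Gygax / Dragon #8 rule: Int = IQ / 10.
    return iq / 10

def int_from_iq_bellcurve(iq):
    # Match z-scores, so equally rare IQ and Int values map to each other.
    # This works out to roughly 5 IQ points per point of Int.
    z = (iq - IQ_MEAN) / IQ_SD
    return INT_MEAN + z * INT_SD

for iq in (70, 100, 145, 180, 228):
    print(iq, round(int_from_iq_linear(iq), 1), round(int_from_iq_bellcurve(iq), 1))
# IQ 228 comes out as 22.8 linearly and about 35.7 on the bell curve
# (closer to 34 if you take the IQ standard deviation to be 16), which is
# the comparison made earlier in the thread.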
 

It also causes roleplaying issues when you think of it as being that extreme. Have you ever met someone that smart?

Chances are pretty good that you don't know anyone who knows anyone of that calibre, even.

Well now, I wouldn't say that's true... I can't actually recall anyone on a roleplaying message board ever claiming to have an IQ less than 150.

Funny, that.
:rolleyes:
 

Re: Re: Re: IQ to INT equivalent

Allow me to emphasize an important point in the article Chun-tzu quoted above:

In fact, there’s no strict requirement that the fantasy population actually matches the bell curve of a 3d6 random variable.

Do the demographics of the fantasy world's ability scores have to match the roll-3d6 method of determination? No, they do not -- that's a faulty assumption. Roll-3d6 might be used in some circumstances, perhaps for the common NPCs that players meet, without any requirement that it match the demographics of the population at large.

Remove that faulty assumption, and the whole shaky basis for the "match the bell curves" theory disappears. Hence we're left with the simple, easy-to-apply method actually specified by the authors of the game.
 
