
D&D 5E: So 5 Intelligence Huh

Maxperson

Morkus from Orkus
Since you're talking to real-world people, you might want to try using real-world definitions for the words you're saying. That way, we don't end up in a pointless conversation only to discover that the words you're saying don't mean the same things to you that they do to everyone else.

Or else we can all just understand that D&D changes definitions all the time. Why are you giving WotC a pass, but not me?
 


AaronOfBarbaria

Adventurer
+2 more than someone else is noticeable in the success/failure ratio. So if the best is 20 Int plus all the bells and whistles, then someone with a 17 and all the bells and whistles will be noticeably worse. That discrepancy will become more pronounced with each additional +1 of difference. I've just been using 16 since that's where +3 first comes into play, but really the line is at 20/17.
So I just batch rolled 100 checks and came out with the same number of successful results against a DC 15 check whether the modifier being added was X or X+2... I didn't get rolls outside what seems typical of rolling a d20... can you explain why I'm not even kind of seeing this line you claim to be pointing at?

I mean, yeah, +7 is less than +9 - but 68/100 rolls being successful isn't any less than 68/100 rolls being successful.
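
For what it's worth, here's a rough simulation of the roll being argued about (a minimal Python sketch, assuming a flat d20 against DC 15; the success_rate helper is just an illustrative name):

```python
import random

def success_rate(modifier, dc=15, trials=100_000, seed=1):
    """Estimate how often d20 + modifier meets or beats the DC."""
    rng = random.Random(seed)
    hits = sum(rng.randint(1, 20) + modifier >= dc for _ in range(trials))
    return hits / trials

# Exact odds: +7 needs an 8 or better (13/20 = 65%), +9 needs a 6 or better (15/20 = 75%).
print(success_rate(7), success_rate(9))  # roughly 0.65 vs 0.75 over a large sample

# With batches of only 100 rolls, sampling noise of a few points either way can
# easily make the two counts come out identical (e.g. 68/100 for both), even though
# the expected gap is about 10 successes per 100 rolls.
```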
 

BoldItalic

First Post
The Sherlock Holmes in the stories wasn't random. There was bias.

In the Sherlock Holmes stories, as in all fiction, the author didn't roll dice, he chose the results. Statistics don't apply. There is no hypothetical infinite population of normally-distributed detectives in the stories, randomly sometimes solving cases and sometimes failing, of which Sherlock Holmes is an outlier. There's just him. And he always succeeds because the author wants him to.

If we translate Sherlock Holmes into a 5e character and present detective mystery adventures that are designed to challenge him, allowing his success to be determined partly by d20 rolls, sometimes he will succeed and sometimes fail. This does not prove that we have used the wrong numbers on his character sheet; it proves that the DM was creating adventures that challenged the player who was playing Sherlock Holmes, which is what DMs are supposed to do in 5e: present scenarios where success is not trivially achieved.

Then, if we run the Sherlock Holmes character through a number of adventures and write up blogs of the ones where he succeeded but quietly forget about the others where he failed, we can make it seem as if he is brilliant.

You can role-play any set of stats as Sherlock Holmes if the DM is fair and you choose to report the game sessions selectively.

The analogue of the stories is not a character who is a statistical freak, it is a player who chooses the results of all his dice rolls and boasts afterwards about the adventures where he beat the DM.

Writers of fiction cheat. :lol:
 



BoldItalic

First Post
Here's a question: if 18 Int is the maximum possible, would that change anything in your 3d6 IQ numbers?
To map IQ scores onto Int scores (leaving aside the question of whether it means anything) you have to do something with the values that don't map exactly. You could say that IQ scores in such-and-such a range map onto this Int score and so on, in which case you would say that all IQ scores above a given number map onto Int 18. So, if you use x10, you could say that Int 18 represents all IQ scores above 175 whereas if you use x5 you might say that it represents all IQ scores above 137. So it can be done, it's just arbitrary.
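
A minimal Python sketch of that capped mapping (my own illustration, not anything official; the x5 scale is read here as IQ = 100 + 5*(Int - 10), and the rounding rule is an assumption):

```python
def iq_to_int(iq, scale=10):
    """Map an IQ score onto a 3-18 Int score, assuming a linear scale
    centred on IQ 100 = Int 10, with everything above the cap folded into 18."""
    raw = 10 + (iq - 100) / scale
    return max(3, min(18, round(raw)))

# With scale=10 (Int = IQ/10), IQ scores above roughly 175 all become Int 18;
# with scale=5, scores above roughly 137 do. The cap makes the cutoff arbitrary
# but workable.
print(iq_to_int(228), iq_to_int(140, scale=5))  # 18, 18: both land on the cap
```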
 

Maxperson

Morkus from Orkus
So I just batch rolled 100 checks and came out with the same number of successful results against a DC 15 check whether the modifier being added was X or X+2... I didn't get rolls outside what seems typical of rolling a d20... can you explain why I'm not even kind of seeing this line you claim to be pointing at?

I mean, yeah, +7 is less than +9 - but 68/100 rolls being successful isn't any less than 68/100 rolls being successful.

When I made that claim, I thought to myself, "I bet that someone is going to claim to have run it and get the exact same number for the result." I was right about that, though wrong about who.
 

Maxperson

Morkus from Orkus
To map IQ scores onto Int scores (leaving aside the question of whether it means anything) you have to do something with the values that don't map exactly. You could say that IQ scores in such-and-such a range map onto this Int score and so on, in which case you would say that all IQ scores above a given number map onto Int 18. So, if you use x10, you could say that Int 18 represents all IQ scores above 175 whereas if you use x5 you might say that it represents all IQ scores above 137. So it can be done, it's just arbitrary.

It's not arbitrary unless they flip a coin to get the result. Any choice made with reason is, by definition, not arbitrary. That means if I have a reason, ANY reason, such as the fact that Int x 10 = IQ was used in 1e and 3e (can't prove 2e, but I remember it being done then too) and I am continuing that in 5e, then it's not arbitrary.
 

Yardiff

Adventurer
Spent about a minute doing a web search and found this article... it was written when 3e/3.5 was new.

website 'superdan.net'


How Does Intelligence Relate to IQ?


Overview

Occasionally, D&D enthusiasts will discuss or debate how the game ability "Intelligence" score properly relates to the real-world measurement of "Intelligence Quotient" (IQ). This almost always spawns a heated debate, in large part due to the controversy over IQ scores in general (what they seek to measure, how valid the measurement is, whether testing procedures are fair, how the tests have changed over time, etc.), and aggravated because some find it problematic to measure IQ in fantasy non-humans, animals, and monsters.

The original AD&D ruleset actually addressed this in the core rulebooks. Each of the AD&D 1st Ed. Monster Manuals included the assertion that "Intelligence indicates the basic equivalent of human ‘IQ’" (MM p. 6, FF p. 7, MM2 p. 6, Deities & Demigods p. 6). Some slight restatement of this assertion appears in the later references; the DDG page adds a clause that the ratings specifically do apply "in monsters", while the FF page adds the parenthetical note "at least in concept even if IQ itself appears now to be much disgraced". Presumably this equivalence would be intended for adult IQ scores.

Granted that the original designer of D&D (E. Gary Gygax, who created the various ability scores such as Intelligence in the first place, and wrote the original Monster Manual) specified that Intelligence does indicate a certain IQ rating, and that 10 is an average human D&D Intelligence, while 100 is an average real-world IQ, the simplest relation is to assume that Int = IQ / 10. This assumption is in fact borne out by the existence of a somewhat light-hearted article published in Dragon Magazine, issue #8, for converting real-life players’ characteristics into D&D statistics, which asserts: "To determine your intelligence, look up the results of the most recent IQ test you have taken and divide the result by ten. This number is your intelligence rating." (Dragon #8, "So, You Want Realism in D&D?", by Brian Blume).

Finally, the same principle has been upheld for the most recent edition of the game. The Official D&D FAQ says this: "A character with an Intelligence score of 3 is smarter than most animals, but only barely... Ten points of IQ per point of Intelligence is a good rule of thumb, so your example character has an IQ of about 30." (D&D FAQ, Version 3.5; Update Version 09/28/05, p. 2).




Alternate Theories

Some D&D players are never satisfied with a rule as simple as the preceding. One of the most popular alternate theories by D&D gamers who are unaware of this history is to theorize that they should "compare the bell curves". That is, they contend that one should calculate a relation by considering what percentage of real-world people have a certain IQ range (constructed specifically with a 100 mean, and standard deviation of 16), and map that to a range of equal percentage likelihood when rolling 3 six-sided dice (range of 3-18, with a 10.5 mean, and standard deviation of 2.95). The end result is a formula such as Int = (IQ – 100) / 16 * 2.95 + 10.5.
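
For comparison, here is a small Python sketch (my own, not from the article) that applies both the straight x10 rule and this "compare the bell curves" formula to a few sample IQ scores:

```python
def int_linear(iq):
    """The simple x10 rule: Int = IQ / 10."""
    return iq / 10

def int_bell_curve(iq, iq_mean=100, iq_sd=16, dice_mean=10.5, dice_sd=2.95):
    """Rescale the IQ z-score onto the 3d6 distribution (mean 10.5, sd ~2.95)."""
    return (iq - iq_mean) / iq_sd * dice_sd + dice_mean

for iq in (60, 100, 141, 196, 228):
    print(iq, round(int_linear(iq), 1), round(int_bell_curve(iq), 1))

# Bell-curve results: IQ 60 -> Int ~3, IQ 141 -> ~18, IQ 196 -> ~28, IQ 228 -> ~34,
# matching the figures discussed below; the x10 rule keeps the last two at about
# 19.6 and 22.8 instead.
```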

A relation like this has the effect of scaling the extremes of IQ scores further out on the Int scale. This presents several practical problems: (1) animal-level intelligence would correspond to Int 1 = IQ 48 or Int 2 = IQ 53, which would be high enough to learn language; (2) the minimum for humans, Int 3 = IQ 60, is sufficiently high as to entirely miss several categories of real-world intelligence deficiencies (see below); and (3) the maximum for humans, Int 18 = IQ 141, is actually far below the results for some real-world people on standardized IQ tests.

The best example of this last problem is the Guinness record-holder for Highest IQ, Marilyn vos Savant, who reportedly has an IQ score of 228 (subject to some debate; see links at end). Under the "compare the bell curves" theory, this would translate to Int 34, which is wildly beyond the range possible in D&D by rolling 3d6 (in fact, beyond the range of most gods in D&D). Even if we are skeptical of this IQ score, considering the previous record holder’s IQ of 196 results in Int 28, again far beyond the 3-18 result achievable in D&D. In contrast, the simpler linear relation properly brings these scores into the more reasonable range of 19 and 22, which is in fact naturally achievable in D&D via several methods. (E.g., a roll of 18 plus a few age or level-based ability bonuses, reasonable point buy, etc. The "compare the bell curves" method is not remotely correctable even by maximizing such increases.)

One amusing "advantage" of the scaled curve system is that it strokes D&D players’ egos by making them look extremely smart in game terms. I’ve seen multiple online discussions of this topic in which everyone participating gleefully points out that their IQ scores translate to a D&D Intelligence of 18 or more under this model, and feel that that’s entirely reasonable.

The simplest response to the "compare the bell curves" theory is that there’s no necessity for the fantasy population of a D&D world to exhibit the same deviation (or mean) in intelligence as real-world humans. In fact, there’s no strict requirement that the fantasy population actually matches the bell curve of a 3d6 random variable. It may in fact be a good idea to use a character-generation method that creates interesting, outside-the-norm, exceptional characters with greater frequency than exists in the actual population.

Another alternate theory is that IQ shouldn’t be directly related to Intelligence at all, but rather that a formula should be generated that combines all the D&D mental abilities (Intelligence, Wisdom, and Charisma), weighting them in some fashion to generate the IQ score. This can be discounted mostly as a frustrated response to not having a clear real-world measurement or translation for the Wisdom or Charisma abilities. Clearly: (a) "Intelligence Quotient" is by definition a measurement of "Intelligence", and (b) the original designer comments (above) explicitly indicated an IQ:Int relationship and nothing else.


Int and IQ Categories

In each of the original AD&D monster books, a table of descriptive Intelligence categories was included, reproduced below. Among other details, note that Int 10 has always indicated "average (human) intelligence", while the maximum natural score of Int 18 is considered a "genius", and anything beyond that is apparently extra-human, titled "supra-genius" or "godlike".



Int     AD&D Rating
0       Non-intelligent or not ratable
1       Animal intelligence
2-4     Semi-intelligent
5-7     Low intelligence
8-10    Average (human) intelligence
11-12   Very intelligent
13-14   Highly intelligent
15-16   Exceptionally intelligent
17-18   Genius
19-20   Supra-genius
21+     Godlike intelligence


IQ ratings, however they are tested or generated, are scaled so that a score of 100 is average for a person taking the test (in a particular age category). The original IQ classifications by Terman are reproduced below (taken from http://members.shaw.ca/delajara/IQBasics.html). A few similarities can be noted to the table of AD&D Intelligence above: (1) the Int ratings usually go up in steps of 2 points, while the IQ categories usually rise in steps of 20 points; (2) at the upper end, an Int of 17-18 indicates a "genius", while a 170-180 IQ would also be categorized as "genius". This strongly indicates that the D&D system was in fact designed with an eye towards a x10 translation between IQ and Intelligence.



IQ        Stanford-Binet I Classification
to 20     Idiot
20-49     Imbecile
50-69     Moron
70-80     Borderline deficiency
80-90     Dullness
90-110    Normal or average intelligence
110-120   Superior intelligence
120-140   Very superior intelligence
140+      Genius or near genius




What you take from this is up to you.
 

BoldItalic

First Post
Up until 5e, the idea that Int scores were equivalent to IQ was okay-ish. But Int scores aren't used the same way in 5e because Int checks are modified by skills in a way that didn't happen in previous versions. I've said this often enough in this thread and given examples, so I won't labour the point, but it might be interesting if a cognitive psychologist looked at the 5e model of cognitive abilities and wrote a new article about it.
 
