D&D 5E So 5 Intelligence Huh

Maxperson

Morkus from Orkus
Up until 5e, the idea that Int scores were equivalent to IQ was okay-ish. But Int scores aren't used the same way in 5e because Int checks are modified by skills in a way that didn't happen in previous versions. I've said this often enough in this thread and given examples, so I won't labour the point, but it might be interesting if a cognitive psychologist looked at the 5e model of cognitive abilities and wrote a new article about it.

Given that IQ modifies nothing whatsoever, what's the big deal about int x 10 = IQ in 5e? Nothing changes with how int checks work. The changes in 5e are meaningless when talking about IQ.
 


ChrisCarlson

First Post
Given that IQ modifies nothing whatsoever, what's the big deal about int x 10 = IQ in 5e? Nothing changes with how int checks work. The changes in 5e are meaningless when talking about IQ.
For someone who continues to insist the Int=IQ argument was not his, you seem to bring it up and defend it an awful lot...
 

Ovinomancer

No flips for you!
Ok, so where is the cut-off between reasonable and unreasonable?
I can't say, but I can say that it exists somewhere between a 20 and a 10-11. 10 and 11 being the defined 'average' and all numbers above that being the defined 'above average.' As a rule of thumb, I'd place genius in the +3 and up crowd.

Oh, god.

Ok, once more into the breach.

The impact of a single +1 is only distinguishable with large data sets and statistical analysis; it's not something you will notice in play. You might think you notice it in play, perhaps because you miss that important roll by 1 point, but if you observe the successes & failures of another character, without seeing their rolls, you won't notice a pattern. Or, more accurately, any pattern you think you notice is more likely to be random noise than an actual pattern.
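
As a rough illustration of that point (the DC of 15 and the 100 checks per campaign are assumptions chosen purely to make the numbers concrete), here is a quick simulation of how little a single +1 shows up over a campaign's worth of d20 checks:

[CODE]
# Sketch: can a +1 difference be "noticed" over one campaign of checks?
# DC 15 and 100 checks are illustrative assumptions, not anything from the rules.
import random

def success_rate(bonus, dc=15, rolls=100):
    return sum(random.randint(1, 20) + bonus >= dc for _ in range(rolls)) / rolls

for trial in range(5):
    print(f"+5 character: {success_rate(5):.0%}   +4 character: {success_rate(4):.0%}")
# The true gap is 5 percentage points, but the run-to-run swing over 100 rolls
# is about that same size, so the gap is routinely swamped by noise.
[/CODE]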

But, as the saying goes, "The existence of dawn does not invalidate the difference between day and night." Just as minute by minute one cannot distinguish the lessening darkness, but the difference between night and day is still apparent, so too with ability bonuses: if your prime stat is a 5 then of course you are going to notice that rolls fail a lot.

The point, however, is that there is no single number at which you can rationally say "This is the Line of Death. The character concept is viable above this number, and non-viable below this number." I mean you can say it if you want, but it's not defensible mathematically. If you can roleplay a genius with Int 20, you can roleplay the genius with Int 18. And if you can do it with 18 you can do it with 16. And so on. At no point do the statistics of the game suddenly plunge off a cliff where it no longer works; it's a consistent step function the whole way down.
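
To put numbers behind the "consistent steps, no cliff" claim: against any fixed DC, each point of modifier moves the success chance by exactly one d20 face, i.e. 5 percentage points, with no special break anywhere in the range. A minimal sketch (the DC of 15 is an arbitrary illustrative choice):

[CODE]
# Success chance per modifier against a fixed DC: a flat 5-point step per +1.
DC = 15  # illustrative assumption
for mod in range(-3, 6):
    need = DC - mod                     # minimum d20 face that succeeds
    faces = max(0, min(20, 21 - need))  # how many of the 20 faces succeed
    print(f"mod {mod:+d}: success chance {faces / 20:.0%}")
[/CODE]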

If you want to interpret the language in the rule books in a certain way, you are free to do so, but there is nothing in the rules that either supports or contradicts it. It's just something you are allowed to interpret any way you like.
That's a very bad argument. You're using the small difference between steps to say that there isn't a noticeable difference between many steps. There is a difference, and that difference becomes much more noticeable over time as the step difference increases. Yes, I couldn't readily distinguish between a +5 and a +4 over the course of a game or even a campaign, but I could readily distinguish the difference between a +5 and a +2 or +0 and especially between the +5 and a -3.

But, even then, we're talking about distinction against a limited number of random rolls against variable DCs. That's always going to have some variance and can hide the difference. But hiding the difference isn't the same as saying that there isn't one. It's just as likely that the variance will accentuate the difference. You can't argue one side of variance and ignore the other. Your argument is simply that you won't get statistical significance over a small number of rolls. That's a valid argument, but it doesn't mean there isn't a difference, just that the tools of stats don't provide a clear answer.
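
For what it's worth, the "small number of rolls" point can be made quantitative. A back-of-the-envelope sample-size estimate (normal approximation, 95% confidence, 80% power, a 50% baseline success chance, and 5 points per modifier step are all assumptions) suggests a +1 gap needs on the order of a thousand comparable checks to detect, while a large gap shows up within a couple of dozen:

[CODE]
# Rough two-proportion sample-size sketch: rolls needed per character before a
# given modifier gap becomes statistically distinguishable. Baseline 50% success
# and 5% per modifier point are assumptions; z values are 95% conf. / 80% power.
def rolls_needed(gap_points, base=0.50, z_a=1.96, z_b=0.84):
    p1, p2 = base, base + 0.05 * gap_points
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_a + z_b) ** 2 * variance / (p1 - p2) ** 2

for gap in (1, 2, 4, 8):
    print(f"gap of +{gap}: roughly {rolls_needed(gap):.0f} rolls each")
[/CODE]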

The probability of any particular person existing in real life is billions to one against. Therefore, for all practical purposes, that person doesn't exist. This applies to everyone. Therefore no one exists.

It's just as I always suspected. You are all just figments of my imagination.

:D
Actually, the probability of any particular person existing in real life is 1. They exist, so the probability that they exist is 100%.

It is, though, by construction. You can dispute the validity of the model, but that doesn't change what the model is. I also don't see what bearing issues of validity have on this discussion, since what's being compared is a theoretical model and the distribution of ability scores in a fantasy world.
So long as you're absolutely good with the understanding that your comparison is as bunk as using rainbow-farting unicorns, we're good.



Does this mean you are recanting the statements you made up-thread to the effect that someone with an Intelligence score of 15 is three times as intelligent as someone with an Intelligence score of 5?

You mean the statement that I already said was a misstatement? Yes, it was a misstatement.

I agree with this. Remember that I did not introduce the idea of comparing Intelligence with IQ. That was [MENTION=23751]Maxperson[/MENTION]. I was merely pointing out that if you're going to do that, you might as well take into account what IQ actually is. Because IQ has a fixed standard deviation, that means that the proportion of the population who have IQs in a particular range is theoretically fixed. This would apply to an IQ-tested fantasy world population just as much as a real world one.
Tu quoque is rarely a good argument, and it's not one here. Regardless of who introduced the idea, my entry into it wasn't to validate either method, but to point out that comparison of IQ to 3d6 is as bunk as the multiplying of INT by 10. Bunk arguments are bunk. My main interest was addressing a misunderstanding of the stats used in your arguments.
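
For reference, the "fixed proportions" property described in the quote above is purely a matter of how the scale is constructed. A minimal sketch using the common mean-100, SD-15 scaling (the scaling itself is an assumption; some tests use SD 16):

[CODE]
# Fixed SD means fixed proportions, by construction: with IQ scaled to
# mean 100 / SD 15, the share of any tested population in a given band is the
# same whether that population is real or fictional.
from math import erf, sqrt

def share_below(iq, mean=100.0, sd=15.0):
    return 0.5 * (1 + erf((iq - mean) / (sd * sqrt(2))))   # normal CDF

print(f"IQ 85-115: {share_below(115) - share_below(85):.1%}")   # about 68%
print(f"IQ 130+:   {1 - share_below(130):.1%}")                 # about 2.3%
[/CODE]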


Validity, in the sense of accurately corresponding to things in the real world, has no bearing on my argument. IQ is what it is, whether valid or not.
That's a rather interesting statement. Do you also have a fondness for Humpty Dumpty?

But, okay, I'm taking this to mean that you understand that IQ has nothing to do with 3d6 rolls, which has the knock-on effect of having nothing to do with INT scores generated by 3d6 rolls. So long as we're okay that what you did was just random mathturbation, and has no meaning, we're good to go.


No, as I said above, it means that the proportion of the population who have IQs in a particular range is theoretically fixed. I'm not sure what you mean by "true meaning."
It's artificially fixed. It has no meaning outside of itself.


First, 3d6 doesn't follow a normal distribution. It has its own set of probabilities. None of what I've said has anything to do with a comparison of normal distributions.
Okay, I've no idea where you're veering now. While it's true that 3d6 isn't a normal distribution (it's a near-normal distribution, a class of things often represented by normals because it's very useful and understandable to do so), you did represent earlier that you were matching the 'normal' distribution of IQ to that of 3d6, and your justification was that they both had the same kind of distribution. I didn't mistake that. Now it seems that you want to change that tune and get more narrow? I can do that, if you'd like. So far I haven't, because introducing the concept of near-normal hasn't been a) relevant or b) useful, and I'm not sure how it would become so now.
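
For anyone curious how "near normal" cashes out, here is a quick comparison of the exact 3d6 probabilities against a normal with the same mean (10.5) and standard deviation (about 2.96). It only shows the shapes are close; it says nothing about whether either has anything to do with IQ:

[CODE]
# Exact 3d6 distribution vs. a matching normal (mean 10.5, SD sqrt(35/4)),
# with a continuity correction. Close, but not identical, which is all
# "near normal" means here.
from itertools import product
from collections import Counter
from math import erf, sqrt

counts = Counter(sum(dice) for dice in product(range(1, 7), repeat=3))
mean, sd = 10.5, sqrt(35 / 4)   # one d6 has variance 35/12, so 3d6 has 35/4

def norm_cdf(x):
    return 0.5 * (1 + erf((x - mean) / (sd * sqrt(2))))

for total in range(3, 19):
    exact = counts[total] / 216
    approx = norm_cdf(total + 0.5) - norm_cdf(total - 0.5)
    print(f"{total:2d}: exact {exact:.3f}   normal {approx:.3f}")
[/CODE]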

Second, if, as you say, a normal distribution must be continuous, and such continuity never happens in the real world, then any discussion of a normal distribution can be met with the issues of validity you have raised. It really doesn't add much to the discussion.
No, not the same issues, but yes, different issues of validity. If IQ were a real normal distribution, there would be some validity in comparing it to the 3d6 'normal', even though both are using the continuous normal to represent discrete events, which has its own set of issues, just not ones relevant to this discussion. And, as you may have noticed, I put that in as an aside and specifically said it wasn't helpful to the discussion. But I'm glad we're on the same page here.
 

BoldItalic

First Post
...
Actually, the probability of any particular person existing in real life is 1. They exist, so the probability that they exist is 100%.
...
You may have just proved that Sherlock Holmes really existed. :lol:

My post, though flippant, was intended to illustrate the pitfalls of confusing a priori probabilities with a posteriori. According to yesterday's weather forecast, the probability that it was going to have rained today was about 0.5, and yet it has just rained.
 


Guest 6801328

Guest
You're using the small difference between steps to say that there isn't a noticeable difference between many steps.

No, I'm not. Did you even read my post that you quoted? Did you miss the part that says "if your prime stat is a 5 then of course you are going to notice that rolls fail a lot"?

You, like Maxperson, are entirely missing the point. I literally have no idea how to explain a simple concept any more clearly. I'm afraid I will have to allow you to wallow in ignorance. Sorry.
 

AaronOfBarbaria

Adventurer
When I made that claim, I thought to myself, "I bet that someone is going to claim to have run it and get the exact same number for the result." I was right about that, though wrong about who.
...and your answer to my pointing out that the difference only shows up on the occasions a player rolls 2 numbers in particular on the d20 (anything higher than those two is a success regardless of the difference in modifiers, anything below those two is a failure regardless of the modifier, and there is no guarantee that those 2 numbers actually come up in the sample size of "this campaign") is what, exactly?

Or were you meaning to say that you realized you made a claim that doesn't actually prove what you claimed that it proves?
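
A tiny worked version of the "only 2 numbers" point above (the DC of 15 and the +2/+4 modifiers are hypothetical, chosen only to make the arithmetic concrete):

[CODE]
# With a gap of 2 between modifiers, only two raw d20 faces change outcome;
# every other face succeeds or fails for both characters alike.
# DC 15 and the +2/+4 modifiers are illustrative assumptions.
DC, low_mod, high_mod = 15, 2, 4
flipped = [r for r in range(1, 21) if (r + low_mod >= DC) != (r + high_mod >= DC)]
print(flipped)   # [11, 12]: the only faces where the modifier gap matters
[/CODE]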
 

Ovinomancer

No flips for you!
No, I'm not. Did you even read my post that you quoted? Did you miss the part that says "if your prime stat is a 5 then of course you are going to notice that rolls fail a lot"?

You, like Maxperson, are entirely missing the point. I literally have no idea how to explain a simple concept any more clearly. I'm afraid I will have to allow you to wallow in ignorance. Sorry.

Huh, somehow between hitting multiquote and responding I got your argument confused. Mea culpa. In my defense, I did just read 10+ pages of assorted arguments. To my detriment, I didn't re-read the one I happened to quote.

I recall now what I wanted to say. You postulate that you can't place a line because the difference between steps is too small to notice. That you can't say where 'genius' stops because the difference between steps is too small, so since you can't tell the difference between 20 and 18, you similarly can't tell the difference between 16 and 18, and so on down to between X-2 and X. While I fully agree each step lacks a clear definition, I disagree with your argument that there can be no line. Let me try to explain.

Let's agree that a +1 difference isn't noticeable. Let's further agree that there is some place where cumulative difference does become noticeable. For the sake of argument, let's say that's at a +4 difference. If two characters have a +4 difference in bonus, that will become noticeable at the table during normal play.

If that can hold, then I can place a line of difference. If I can say that 20 INT is the smartest of geniuses (barring oddities), and that it is the top end of the class I will call genius, then the bottom end is the point at which I can tell a distinct difference, i.e., below 14. Once the difference becomes noticeable, the classification can change: even if each step isn't distinguishable, the total can be. I can place a line with the total, the point at which it noticeably becomes different.

Now, we can argue as to where that line may be. It may be more or less than a +4, but the point is that you can draw a line, much like I can draw a line between nighttime and daytime even though I can't distinguish the minute-to-minute difference in light during dawn or dusk (presuming I also can't see the sun).
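
To put a rough number behind "noticeable at the table during normal play" (the DC of 15 and roughly 80 checks per campaign are assumed purely for illustration), here is a quick simulation of how often the character with the bigger bonus actually ends a campaign with more successes:

[CODE]
# How often does the higher-bonus character visibly come out ahead over one
# campaign's worth of checks? DC 15 and 80 checks are illustrative assumptions.
import random

def successes(bonus, dc=15, rolls=80):
    return sum(random.randint(1, 20) + bonus >= dc for _ in range(rolls))

ahead_with_gap4 = sum(successes(5) > successes(1) for _ in range(1000))
ahead_with_gap1 = sum(successes(5) > successes(4) for _ in range(1000))
print(f"+4 gap: ahead in about {ahead_with_gap4 / 10:.0f}% of simulated campaigns")
print(f"+1 gap: ahead in about {ahead_with_gap1 / 10:.0f}% of simulated campaigns")
[/CODE]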
 



Ovinomancer

No flips for you!
Gotcha! BoldItalic does not actually exist.

Responses:

1) I've done my best to stop believing in BoldItalic, yet, there you are.

2) Gotcha back! I was referring to the person behind the avatar of BoldItalic with my cannily unspecific usage of 'you'.

3) POOF! We'll miss BoldItalic.
 
