Ok, so where is the cut-off between reasonable and unreasonable?
I can't say, but I can say that it exists somewhere between a 20 and a 10-11, with 10 and 11 being the defined 'average' and all numbers above that being the defined 'above average.' As a rule of thumb, I'd place genius in the +3-and-up crowd.
Oh, god.
Ok, once more into the breach.
The impact of a single +1 is only distinguishable with large data sets and statistical analysis; it's not something you will notice in play. You might think you notice it in play, perhaps because you miss that important roll by 1 point, but if you observe the successes & failures of another character, without seeing their rolls, you won't notice a pattern. Or, more accurately, any pattern you think you notice is more likely to be random noise than an actual pattern.
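To put a rough number on "large data sets": here's a sketch, assuming d20 rolls against a hypothetical fixed DC of 12 and using the textbook two-sample size approximation (80% power, 5% significance). The DC and the z-values are illustrative choices, not anything from the rules.

```python
# Exact chance to hit DC 12 on a d20, with and without a +1 bonus.
def hit_chance(bonus, dc=12):
    # Succeed when d20 + bonus >= dc; the d20 is uniform on 1..20.
    return sum(1 for roll in range(1, 21) if roll + bonus >= dc) / 20

p0 = hit_chance(0)  # 0.45
p1 = hit_chance(1)  # 0.50

# Rough per-character sample size needed to detect that 5% gap
# (z-values hard-coded from standard normal tables).
z_alpha, z_beta = 1.96, 0.84
pbar = (p0 + p1) / 2
n = ((z_alpha + z_beta) ** 2 * 2 * pbar * (1 - pbar)) / (p1 - p0) ** 2
print(round(n))  # ~1564 rolls per character
```

So you'd need on the order of 1,500 observed rolls from each character before the +1 reliably stands out from noise, which is far more than any campaign provides.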
But, as the saying goes, "The existence of dawn does not invalidate the difference between day and night." Just as minute by minute one cannot distinguish the lessening darkness, but the difference between night and day is still apparent, so too with ability bonuses: if your prime stat is a 5 then of course you are going to notice that rolls fail a lot.
The point, however, is that there is no single number at which you can rationally say "This is the Line of Death. The character concept is viable above this number, and non-viable below this number." I mean you can say it if you want, but it's not defensible mathematically. If you can roleplay a genius with Int 20, you can roleplay the genius with Int 18. And if you can do it with 18 you can do it with 16. And so on. At no point do the statistics of the game suddenly plunge off a cliff where it no longer works; it's a consistent series of small steps the whole way down.
If you want to interpret the language in the rule books in a certain way you are free to do so, but there is nothing in the rules that either supports or contradicts it. It's just something you are allowed to interpret any way you like.
That's a very bad argument. You're using the small difference between steps to say that there isn't a noticeable difference between many steps. There is a difference, and that difference becomes much more noticeable over time as the step difference increases. Yes, I couldn't readily distinguish between a +5 and a +4 over the course of a game or even a campaign, but I could readily distinguish the difference between a +5 and a +2 or +0 and especially between the +5 and a -3.
But, even then, we're talking about distinction against a limited number of random rolls against variable DCs. That's always going to have some variance and can hide the difference. But hiding the difference isn't the same as saying that there isn't one. It's just as likely that the variance will accentuate the difference. You can't argue one side of variance and ignore the other. Your argument is simply that you won't get statistical significance over a small number of rolls. That's a valid argument, but it doesn't mean there isn't a difference, just that the tools of stats don't provide a clear answer.
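The scaling this post describes can be sketched with the same kind of back-of-the-envelope calculation (again assuming a hypothetical fixed DC of 12 and the standard 80%-power, 5%-significance approximation; the numbers are illustrative only):

```python
# Larger bonus gaps need far fewer rolls to tell apart.
def hit_chance(bonus, dc=12):
    # Succeed when d20 + bonus >= dc; the d20 is uniform on 1..20.
    return sum(1 for roll in range(1, 21) if roll + bonus >= dc) / 20

def rolls_to_detect(p_a, p_b, z_alpha=1.96, z_beta=0.84):
    """Rough per-character sample size for 80% power at 5% significance."""
    pbar = (p_a + p_b) / 2
    return ((z_alpha + z_beta) ** 2 * 2 * pbar * (1 - pbar)) / (p_b - p_a) ** 2

print(round(rolls_to_detect(hit_chance(0), hit_chance(1))))  # ~1564 rolls for +0 vs +1
print(round(rolls_to_detect(hit_chance(0), hit_chance(5))))  # ~61 rolls for +0 vs +5
```

A +0 vs +1 gap takes well over a thousand rolls to detect, while a +0 vs +5 gap shows up in a few dozen, which is well within the span of a campaign.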
The probability of any particular person existing in real life is billions to one against. Therefore, for all practical purposes, that person doesn't exist. This applies to everyone. Therefore no one exists.
It's just as I always suspected. You are all just figments of my imagination.
Actually, the probability of any particular person existing in real life is 1. They exist, so the probability that they exist is 100%.
It is though, by construction. You can dispute the validity of the model, but that doesn't change what the model is. I also don't see what bearing issues of validity have on this discussion, since what's being compared is a theoretical model and the distribution of ability scores in a fantasy world.
So long as you're absolutely good with the understanding that your comparison is as bunk as using rainbow-farting unicorns, we're good.
Does this mean you are recanting the statements you made up-thread to the effect that someone with an Intelligence score of 15 is three times as intelligent as someone with an Intelligence score of 5?
You mean the statement that I already said was a misstatement? Yes, it was a misstatement.
I agree with this. Remember that I did not introduce the idea of comparing Intelligence with IQ. That was [MENTION=23751]Maxperson[/MENTION]. I was merely pointing out that if you're going to do that, you might as well take into account what IQ actually is. Because IQ has a fixed standard deviation, that means that the proportion of the population who have IQs in a particular range is theoretically fixed. This would apply to an IQ-tested fantasy world population just as much as a real world one.
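The fixed-proportion point can be illustrated with the conventional mean-100, SD-15 normal model of IQ (the range endpoints below are just example choices):

```python
import math

def iq_proportion(lo, hi, mean=100.0, sd=15.0):
    """Fraction of a population with IQ in [lo, hi], assuming the
    conventional mean-100, SD-15 normal model of IQ scores."""
    cdf = lambda x: 0.5 * (1 + math.erf((x - mean) / (sd * math.sqrt(2))))
    return cdf(hi) - cdf(lo)

print(round(iq_proportion(85, 115), 3))   # ~0.683: within one SD of the mean
print(round(iq_proportion(130, 300), 4))  # ~0.0228: two SDs above, the "gifted" tail
```

Because the SD is fixed by construction, these fractions are the same for any population the test is normed on, real or fantasy.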
Tu quoque is rarely a good argument, and it's not one here. Regardless of who introduced the idea, my entry into it wasn't to validate either method, but to point out that comparison of IQ to 3d6 is as bunk as the multiplying of INT by 10. Bunk arguments are bunk. My main interest was addressing a misunderstanding of the stats used in your arguments.
Validity, in the sense of accurately corresponding to things in the real world, has no bearing on my argument. IQ is what it is, whether valid or not.
That's a rather interesting statement. Do you also have a fondness of Humpty Dumpty?
But, okay, I'm taking this to mean that you understand that IQ has nothing to do with 3d6 rolls, which has the knock-on effect of having nothing to do with INT scores generated by 3d6 rolls. So long as we're okay that what you did was just random mathturbation, and has no meaning, we're good to go.
No, as I said above, it means that the proportion of the population who have IQs in a particular range is theoretically fixed. I'm not sure what you mean by "true meaning."
It's artificially fixed. It has no meaning outside of itself.
First, 3d6 doesn't follow a normal distribution. It has its own set of probabilities. None of what I've said has anything to do with a comparison of normal distributions.
Okay, I've no idea where you're veering now. While it's true that 3d6 isn't a normal distribution (it's a near-normal distribution, which is a class of things that are often represented by normals because it's very useful and understandable to do so), you did represent earlier that you were matching the "normal" distribution of IQ to that of 3d6, and your justification was that they both had the same kind of distribution. I didn't mistake that. Now it seems that you're wanting to change that tune and get more narrow? I can do that, if you'd like. So far I haven't, because introducing the concepts of near-normal hasn't been (a) relevant or (b) useful, and I'm not sure how it would become so now.
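For reference, the exact 3d6 distribution is easy to enumerate directly; this is a sketch of its true probabilities, independent of any normal approximation:

```python
from itertools import product

# Exact 3d6 distribution, by brute-force enumeration of all 216 outcomes.
counts = {}
for dice in product(range(1, 7), repeat=3):
    s = sum(dice)
    counts[s] = counts.get(s, 0) + 1

total = 6 ** 3
pmf = {s: c / total for s, c in sorted(counts.items())}
mean = sum(s * p for s, p in pmf.items())
var = sum((s - mean) ** 2 * p for s, p in pmf.items())
print(round(mean, 2), round(var, 2))  # 10.5 8.75
print(pmf[10], pmf[11])               # 0.125 each: 27/216 for the two modal scores
```

It's symmetric and bell-shaped (hence "near-normal"), but it's a discrete distribution on 3..18, not a continuous normal.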
Second, if, as you say, a normal distribution must be continuous, and such continuity never happens in the real world, then any discussion of a normal distribution can be met with the issues of validity you have raised. It really doesn't add much to the discussion.
No, not the same issues, but yes, different issues of validity. If IQ was a real normal distribution, there would be some validity in comparing it to the 3d6 near-normal, even though both are using the continuous normal to represent discrete events, which has its own set of issues -- just not ones relevant to this discussion. And, as you may have noticed, I put that in as an aside and specifically said it wasn't helpful to the discussion. But, I'm glad we're on the same page here.