"Accident of Math"???

Sir Brennen said:
Except some DCs don't scale equally. ACs often lag behind to-hit modifiers. Saving throws (especially a class's "poor" category) are almost irrelevant at mid-levels compared to the spell DCs of even a moderately focused spellcaster.
But that's a different comparison. Comparing modifier progression by level with DC progression by level is interesting and instructive. Comparing the raw modifiers with the range of your die roll is not.
 


Oldtimer said:
Let me say that this reasoning (that I've also seen around the boards) is complete nonsense.

When hundreds of people have noted the same thing about something as objective as math, it would behoove you to be very certain of your argument before calling their collective reasoning nonsense.

Just try this thought experiment: Add +2000 to all skill modifiers and all DCs. Now you have modifiers that are a hundred times higher than what you can roll on a d20. Have you changed the probabilities of success? No, not a bit. It's a linear scale. All you've done is offset the possible outcomes and the target number equally.

Which is fine and true, but stops the reasoning several steps short of where you should be, Vizzini. As others have said, not all modifiers and DCs scale equally. If all modifiers were equal to character level, then your reasoning would be perfectly valid. But in fact, a modifier like to-hit is based on full character level for fighters and on 50% of character level for thieves. So, 'turned up to 2000', an AC of 2000 would be all right for fighters with their +2000 attack bonuses, but would completely overwhelm the range of the d20 for a wizard with his +1000 attack bonus.

Consider saving throws. They don't scale up evenly either. Your good saving throws are better than 1/2 character level, while your bad saving throws are about 1/3 character level. This typically produces a situation at high levels where the size of the modifier and the DC overwhelms the randomness of the d20 roll: characters with good saves practically cannot fail a typical DC, while characters with bad saves fail about half the time.

Consider skill checks. Typically either you have the skill or you don't. At high levels, to challenge a character with the skill, the DC must be quite high. But this means that characters without the skill have no real chance of success at all. A challenge of this sort has become binary. Instead of being 20% or 40% more likely to succeed than his unskilled comrades, the skilled character is 2000% more likely to succeed. Instead of being 20% or 40% more likely to hit than the wizard, the fighter is 2000% more likely to hit.

In brief, the larger the modifiers being applied, the larger the expected gap in the modifier between the strong and weak cases. As the modifiers grow, the variation in outcome produced by a random roll of the dice is completely overwhelmed by the variation in the size of the modifier.
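To put rough numbers on this claim, here is a minimal sketch. The scaling formulas are illustrative assumptions, not from the rulebooks: the fighter's attack bonus equals his level, the wizard's equals half his level, and a typical AC at a given level is level + 10.

```python
# Illustrative assumptions: fighter bonus = level, wizard bonus = level // 2,
# typical AC = level + 10. These exact formulas are not from the post; they
# just show how the modifier gap swamps the d20 as levels rise.
def hit_chance(bonus, ac):
    """Chance that d20 + bonus meets the AC; a natural 1 misses, a natural 20 hits."""
    needed = max(2, min(ac - bonus, 20))  # minimum d20 roll required, clamped
    return (21 - needed) / 20

for level in (1, 10, 20):
    ac = level + 10
    print(level, hit_chance(level, ac), hit_chance(level // 2, ac))
```

Under these assumptions the fighter sits at 55% throughout, while the wizard drops from 50% at 1st level to 30% at 10th and 5% at 20th: the d20's randomness no longer bridges the gap between the strong and weak cases.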
 

Oldtimer said:
But that's a different comparison. Comparing modifier progression by level with DC progression by level is interesting and instructive. Comparing the raw modifiers with the range of your die roll is not.
QFT. It doesn't matter how big the bonus to a die roll is as long as the bonus to the opposed die roll is in the same ballpark. The 'accident of math' in 3E is that (as Sir Brennen said) this is often not the case. For instance, in 3E, bonuses to attack rolls soon greatly eclipse ACs (unless someone has gone out of his way to one-trick-pony his character into an AC monster), leading to the 'only miss on a 1' syndrome.

So what this means is that the difference between bonuses on opposed die rolls should be (considerably) less than the range of the die being rolled (in 3E, this is a d20). A delta close to 20 means that the task being attempted is very easy. A delta of 20 or more is overkill. I hope this is the 'accident of math' the 4E developers are trying to fix!
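That claim is easy to check by straight enumeration. The only assumption beyond the post is how an opposed check resolves: higher total wins, ties to the defender.

```python
# Enumerate all 400 pairs of opposed d20 rolls to see how the attacker's
# winning probability depends on the net bonus difference ('delta').
# Assumption: ties go to the defender.
def opposed_win(delta):
    wins = sum(1 for a in range(1, 21) for d in range(1, 21) if a + delta > d)
    return wins / 400

for delta in (0, 5, 10, 15, 20):
    print(delta, opposed_win(delta))
```

A delta of 0 wins 47.5% of the time, a delta of 10 about 86%, and a delta of 20 wins outright: past 20 the die stops mattering entirely, which is exactly the "overkill" point above.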
 

As for what they mean, I think that they mean this.

Currently, between 1st and 2nd level, your hit points double. Between 19th and 20th level, your hit points only increase by about 5%. They want to smooth that out somewhat, so instead of expecting your HD to be equal to your level, your HD should be equal to something like 5 + 1/2 character level.

Thus your hit points increase by only about 18% between 1st level and 3rd, and by about 5% between 28th and 30th. This is a much more uniform increase in power.
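The smoothing can be checked directly. This sketch takes "5 + 1/2 character level" literally and uses HD count as a proxy for hit points (average hp per HD held constant):

```python
# Relative hit-point growth under the two HD schemes; average hp per HD is
# held constant, so HD count stands in for total hit points.
def old_hd(level):
    return level              # HD equal to character level

def smoothed_hd(level):
    return 5 + level / 2      # the suggested smoothed progression

for lo, hi in ((1, 2), (1, 3), (19, 20), (28, 30)):
    old_growth = old_hd(hi) / old_hd(lo) - 1
    new_growth = smoothed_hd(hi) / smoothed_hd(lo) - 1
    print(f"{lo}->{hi}: old +{old_growth:.0%}, smoothed +{new_growth:.0%}")
```

Under the old scheme, 1st to 2nd is +100%; under the smoothed one it is about +9%, and every step from 1st to 30th lands in the same few-percent band.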
 

I think it was actually another blog post or similar article that described a few features of the "sweet spot":
Basically, ACs/DCs, attack rolls, saving throw modifiers, and skill modifiers are all within a reasonable range of the d20.
Monsters hit a bit less than 50%, characters hit a bit more than 50%. That's the point where the math works well.
Beyond that range, the variance between the numbers is often too high. Many monsters automatically hit with their first attack. Some monsters are always hit by the first attack of a fighter, some barely ever.

If hit probabilities are low or high across the board, this might not affect balance, but it affects the gameplay a lot. If they are very high, it's easy to "accidentally" have one-round kills against the players, and in general it can easily turn into an initiative "war".
If attack chances are too low, the whole game becomes more dependent on lucky rolls, and fights drag out without any real benefit, because no tactical variation will affect the outcome much. This can also lead to spellcasters either expending more spells in the encounter or having to hold back longer (since there are more rounds in which they could cast spells, and if defenses are generally too high, the chances of the spells being effective are lower).
 

It's not just the d20 roll.

At 1st level, the d20 rolls are fairly balanced - so few modifiers, it's easy to get the AC in the right spot. No, what goes wrong there is the hit points. One hit - or one crit - and you're down. This stays with the group for a few levels. A 4th level rogue with a 10 Con has - on average - 16.5 hp, which means that an Ogre can quite easily take him or her down.

At about 5th-12th level, the hit points are finally enough to withstand most attacks, although there are a few monsters that deal too much damage.

At 13th+ level, you begin to run into the problem of cascading bonuses. A 15th level fighter has a +15 Base Attack, but could quite conceivably get another +15 from other sources (strength, magic weapon, spells, feats). Meanwhile, the 15th level rogue, who doesn't use all of those bonuses, starts on a +11, and then might get a +5 to +10 from those other sources. A 4 difference (from the BAB) looks good, but a 10 difference? Uh oh!

Suddenly, against good AC monsters, only the fighter can hit.
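A quick sketch with the post's figures makes the "only the fighter can hit" point concrete. The target AC of 40 is an assumed number for a "good AC" monster at that level, not from the post; the rogue's +18 is the mid-range of the +16 to +21 estimate above.

```python
# 15th-level gap: fighter at +30 total vs. rogue at +18 total (assumed
# mid-range figure), against an assumed good-AC monster at AC 40.
def hit_chance(bonus, ac):
    needed = max(2, min(ac - bonus, 20))  # nat 1 misses, nat 20 hits
    return (21 - needed) / 20

ac = 40
print("fighter:", hit_chance(30, ac))  # 0.55
print("rogue:  ", hit_chance(18, ac))  # 0.05 -- only on a natural 20
```

The fighter hits slightly better than half the time; the rogue needs a natural 20.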

Cheers!
 

Celebrim said:
When hundreds of people have noted the same thing about something as objective as math, it would behoove you to be very certain of your argument before calling their collective reasoning nonsense.



Which is fine and true, but stops the reasoning several steps short of where you should be, Vizzini. As others have said, not all modifiers and DCs scale equally. If all modifiers were equal to character level, then your reasoning would be perfectly valid. But in fact, a modifier like to-hit is based on full character level for fighters and on 50% of character level for thieves. So, 'turned up to 2000', an AC of 2000 would be all right for fighters with their +2000 attack bonuses, but would completely overwhelm the range of the d20 for a wizard with his +1000 attack bonus. Consider saving throws. They don't scale up evenly either. Your good saving throws are better than 1/2 character level, while your bad saving throws are about 1/3 character level. This typically produces a situation at high levels where the size of the modifier and the DC overwhelms the randomness of the d20 roll: characters with good saves practically cannot fail a typical DC, while characters with bad saves fail about half the time. Consider skill checks. Typically either you have the skill or you don't. At high levels, to challenge a character with the skill, the DC must be quite high. But this means that characters without the skill have no real chance of success at all. A challenge of this sort has become binary. Instead of being 20% or 40% more likely to succeed than his unskilled comrades, the skilled character is 2000% more likely to succeed. Instead of being 20% or 40% more likely to hit than the wizard, the fighter is 2000% more likely to hit.
You take exception to my use of the word "nonsense" for the modifier-to-die-range comparison and then you explain something entirely different.

I know that modifiers and DCs don't progress at the same rate. And that modifiers for different classes progress very differently. But that has nothing to do with the given comparison (modifier to die range).

In brief, the larger the modifiers being applied, the larger the expected gap in the modifier between the strong and weak cases. As the modifiers grow, the variation in outcome produced by a random roll of the dice is completely overwhelmed by the variation in the size of the modifier.
This is untrue, unless the modifier itself is drawn by random sampling from a real, normally distributed population. But it's not; it is calculated exactly according to the game rules. It's the variance inherent in those game rules that's interesting. It's the variance across the population of equal power level that should be compared to the range of the die roll, not the size of a given modifier. That just confuses the issue.

Maybe this is exactly what people mean, but it surely isn't what was stated.
 

Oldtimer said:
Which is completely wrong.

I wouldn't say completely wrong, but you're right that it misses out a key part of the situation:

At low levels, the total modifier to any dice roll that a really specialised character can muster is fairly low, and not hugely different from the total modifier that a non-skilled character can muster. So, a 1st level Fighter might have +3 or +4 to his attack rolls, whereas the Wizard might have +0 or +1. When compared with the d20 roll, this difference is fairly insignificant, rendering character roles very limited.

On the other hand, at high levels, the difference in modifiers is huge - a 20th level Fighter might easily have +40 or more with his primary weapon, while the Wizard has +12 or so. This means that any AC that the Wizard can hit at all is trivial for the Fighter, while any AC that is challenging for the Fighter is completely out of reach of the Wizard. This in itself wouldn't be a problem, except it applies to all areas of the game - attack rolls, damage output, saving throws, and so on and so forth.

In the middle range, however, the differences in the total modifiers are large enough to be significant, but not overwhelming. So, an AC that the Fighter might hit on a 5 would require a 15 from the Wizard - it's much less likely, but still doable.

Does that meet with your approval? Or is it still nonsense? :)
 

delericho said:
I wouldn't say completely wrong, but you're right that it misses out a key part of the situation:

At low levels, the total modifier to any dice roll that a really specialised character can muster is fairly low, and not hugely different from the total modifier that a non-skilled character can muster. So, a 1st level Fighter might have +3 or +4 to his attack rolls, whereas the Wizard might have +0 or +1. When compared with the d20 roll, this difference is fairly insignificant, rendering character roles very limited.

On the other hand, at high levels, the difference in modifiers is huge - a 20th level Fighter might easily have +40 or more with his primary weapon, while the Wizard has +12 or so. This means that any AC that the Wizard can hit at all is trivial for the Fighter, while any AC that is challenging for the Fighter is completely out of reach of the Wizard. This in itself wouldn't be a problem, except it applies to all areas of the game - attack rolls, damage output, saving throws, and so on and so forth.

On the other hand, in the middle range, the differences in the total modifiers are large enough to be significant, but not overwhelming. So, an AC that the Fighter might hit on a 5 would require a 15 from the Wizard - it's much less likely, but still doable.

Does that meet with your approval? Or is it still nonsense? :)
I think it's also worth pointing out that despite the fact that in most cases, the Wizard doesn't care what the AC of an enemy is, there are other characters like clerics and rogues who need to seriously optimize their combat efficiency in order to be able to hit enemies at high levels. If the fighter is only succeeding 40% of the time, it's likely that an unoptimized rogue is succeeding only on a natural 20. This has the effect of constraining all classes to dedicate themselves to maximal combat efficiency or else be rendered useless.

I'm hoping that what they're doing is collapsing the range, so that a 4th level fighter has a pretty decent chance of hitting a 15th level opponent, but will barely scratch it. That way, a 15th level rogue, cleric, or even wizard will probably be able to hit that 15th level opponent, but their ability to affect it will be reduced compared to a 15th level fighter. If everyone's AC and to-hit looks pretty similar across 30 levels, but the damage and effects get better, it would solve a lot of these problems.
 

delericho said:
I wouldn't say completely wrong, but you're right that it misses out a key part of the situation:

At low levels, the total modifier to any dice roll that a really specialised character can muster is fairly low, and not hugely different from the total modifier that a non-skilled character can muster. So, a 1st level Fighter might have +3 or +4 to his attack rolls, whereas the Wizard might have +0 or +1. When compared with the d20 roll, this difference is fairly insignificant, rendering character roles very limited.

On the other hand, at high levels, the difference in modifiers is huge - a 20th level Fighter might easily have +40 or more with his primary weapon, while the Wizard has +12 or so. This means that any AC that the Wizard can hit at all is trivial for the Fighter, while any AC that is challenging for the Fighter is completely out of reach of the Wizard. This in itself wouldn't be a problem, except it applies to all areas of the game - attack rolls, damage output, saving throws, and so on and so forth.

On the other hand, in the middle range, the differences in the total modifiers are large enough to be significant, but not overwhelming. So, an AC that the Fighter might hit on a 5 would require a 15 from the Wizard - it's much less likely, but still doable.

Does that meet with your approval? Or is it still nonsense? :)
Since you're talking about "the difference in modifiers" I agree with you completely.

Maybe it's the fact that the level range where the differences in modifiers are reasonable compared to the d20 range happens to coincide with the range where the modifiers themselves are of a size comparable to the d20 range. However, while the first comparison is important, the second one is a coincidence and of no import.
 
