Comparative defence scores - explain the math to me.

Colmarr

First Post
What you have to think about is not how often he hits per 20 rolls, but how often he hits before the change compared to how often he hits after. Taking a creature from 10 hits every 20 down to 9 hits every 20 has a fairly minimal impact, but taking a creature from 2 hits every 20 down to 1 every 20 halves its hit rate... same +1, completely different effect.

If you feel you can buy into debates over whether a class (or anything else) has a number that is too high (like the debates over avenger AC), it is ESSENTIAL that you understand this principle, because if you don't, you're just wasting everyone's time.

The above quote wasn't directed at me, but I'm curious.

I understand the logic behind Bob the Bob's mathematical position, but I'm just not convinced that it has practical application to D&D.

It's clear that changing a 10% hit chance to a 5% hit chance is both a 5 percentage point decrease in the probability of a hit and a 50% relative decrease in the chance of a hit, depending on how you compare the figures.

But when you look at it from a practical perspective, the 50% position just doesn't seem to make any sense.

If an enemy hits on a 10 and does 1d6+3 damage per hit, then over the course of 20 rolls you would expect to take 11d6+33 damage. If you increase your defences such that the enemy requires an 11, the expected damage drops to 10d6+30; a difference of 1d6+3.

If the same enemy originally needed a 15 to hit and you boost your defences so that the enemy needs a 16... a difference of 1d6+3.

If the same enemy originally needed a 19 to hit and you boost your defences so that the enemy needs a 20... a difference of 1d6+3.

So if the change in damage received is the same no matter what probability you started at, how can it be argued that the outside positions on the dice (e.g. 1, 2, 3, 18, 19, 20) are more valuable than the middle ones (e.g. 8, 9, 10, 11, 12)?
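As a quick sketch of that arithmetic (the only input is the 1d6+3 attack from the post, which averages 6.5 per hit; everything else follows from the d20):

```python
# Expected damage from 20 attack rolls against a d20 target number,
# using the post's example attack of 1d6+3 (average 6.5 per hit).
AVG_HIT_DAMAGE = 6.5  # average of 1d6+3

def expected_damage(target, rolls=20):
    """Expected total damage over `rolls` attacks that hit on `target` or higher."""
    hit_chance = (21 - target) / 20  # e.g. needing a 10 -> 11 faces hit -> 55%
    return rolls * hit_chance * AVG_HIT_DAMAGE

for target in (10, 15, 19):
    before = expected_damage(target)
    after = expected_damage(target + 1)
    print(f"need {target:>2} -> {target + 1:>2}: "
          f"{before:.1f} -> {after:.1f} expected damage (saved {before - after:.1f})")
# Every +1 to the target number saves the same 6.5 expected damage per
# 20 rolls (one hit's worth) -- the "linear" view of the bonus.
```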
 


Assume each hit does an average of 6 points of damage, the target has 60 hit points, and you start with a 50% chance to hit (needing an 11).

If you go from needing an 11 to hit to needing a 12, you go from an average of 3 hp per attack to 2.7 hp per attack, which means it goes from 20 attacks to 22.2 attacks to take the target down.

If you go from needing a 19 to hit to needing a 20, it goes from 100 attacks to take the target down to 200 attacks.
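A minimal sketch of that calculation, using the post's own model (60 hp, 6 average damage per hit, no crits):

```python
# Expected number of attacks to drop a 60 hp target when each hit
# averages 6 damage, for various hit chances.
HP, AVG_DMG = 60, 6

def attacks_to_kill(hit_chance):
    # Expected attacks = hp / (hit chance * average damage per hit)
    return HP / (hit_chance * AVG_DMG)

for p in (0.50, 0.45, 0.10, 0.05):
    print(f"hit chance {p:.0%}: {attacks_to_kill(p):.1f} attacks to kill")
# 50% -> 20.0, 45% -> 22.2, 10% -> 100.0, 5% -> 200.0: the same -5% step
# doubles survival at the low end but barely moves it at the midpoint,
# because time-to-kill scales with 1 / hit_chance.
```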
 

Let me see if I understand.

The difference between the two approaches is that the former looks primarily at the change in damage output (which is linear per point of attack/defence) while the latter looks primarily at the change in target survivability (which is proportional to 1/(hit chance), and so grows steeply per point near the extremes).

And given that we're talking about defences (and thus PC survivability), the latter is the appropriate viewpoint?
 

The perspective that makes the most intuitive sense to me is the following:

You can compare the change in the proportion of your damage you can expect to inflict. It's an unambiguous measure, and it's not a relative measure: whether you had +8 to hit or +13 to hit, another +2 on top of that always adds 10 percentage points to the proportion of your damage you can expect your attack to inflict.
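A minimal sketch of that measure, assuming a purely illustrative defence value of 20 (not from the thread) and a fixed damage expression:

```python
# Proportion of your full damage you can expect to inflict, as a function
# of attack bonus against a fixed defence.
DEFENCE = 20  # illustrative value, not from the thread

def damage_proportion(attack_bonus):
    """Chance to hit = proportion of your average damage you expect to land."""
    target = DEFENCE - attack_bonus                # natural roll needed
    return max(0.05, min(0.95, (21 - target) / 20))  # nat 1 / nat 20 clamp

for bonus in (8, 10, 13, 15):
    print(f"+{bonus}: expect {damage_proportion(bonus):.0%} of your damage")
# +8 -> 45%, +10 -> 55%, +13 -> 70%, +15 -> 80%: each +2 adds the same
# ten percentage points, which is why this measure treats every +1 alike.
```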

- - -

It seems that some people enjoy confusing themselves -- and others -- by blowing relative measures out of proportion. For example, if you previously missed only on a 2, but with this effect in play you miss on a 1, 2 or 3 -- OH MY GOD, YOU ARE MISSING 50% MORE THAN BEFORE! -- but seriously, who cares? You're still hitting on a 4, and looking at your expected damage expression before & after will clearly show that your expected awesomeness is still well within the range of butt-kicking.

Cheers, -- N
 

Absolute damage kills you. Absolute damage kills your enemies.

As a decent first-order approximation (completely leaving status effects and criticals aside, among other things), calculate the average (expected) damage you deal in a round, factoring in the chance to hit.

Calculate the average damage you take in a round from an enemy who attacks your AC the fraction of the time you expect AC to be attacked, Fortitude the fraction of the time you expect Fortitude to be attacked, etc.

A "pretty good" arrangement for your defenses is one that minimizes the average damage you take and for attacks, that maximizes the average damage you deal.

With this standard in mind, it's easy to see that the lower your chance to hit is, the more valuable +1 to hit is relative to +1 to damage.

Average damage = chance to hit * damage on a hit.

+1 to damage increases average damage by [chance to hit].

+1 to hit increases average damage by [damage on a hit]/20.

So the higher your chance to hit is (holding damage on a hit constant), the more valuable +1 to damage is relative to +1 to hit.
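The post's two formulas, worked through for a hypothetical attack dealing 10 damage on a hit:

```python
# Marginal value of +1 to hit versus +1 to damage, under the post's model:
# average damage per attack = hit_chance * damage_on_hit.
def marginal_gains(hit_chance, damage_on_hit):
    gain_from_damage = hit_chance       # +1 damage only pays off on hits
    gain_from_hit = damage_on_hit / 20  # +1 to hit adds one face in twenty
    return gain_from_hit, gain_from_damage

for p in (0.25, 0.50, 0.75):
    hit_gain, dmg_gain = marginal_gains(p, damage_on_hit=10)
    print(f"hit chance {p:.0%}: +1 to hit adds {hit_gain:.2f}, "
          f"+1 damage adds {dmg_gain:.2f} expected damage per attack")
# +1 to hit is worth a constant damage_on_hit/20 (here 0.50), while +1 to
# damage is worth the hit chance itself, so +1 damage overtakes +1 to hit
# once your hit chance exceeds damage_on_hit/20.
```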
 

In the perspective described above, any +1 is as valuable as any other, regardless of starting point (ignoring cases where your mods are so extreme as to enter hit-on-1/miss-on-20 territory). What's missing there is the length of combat. While it's true that a +1 to AC may represent saving 1d6+3 damage over the space of 20 rounds (if that's what a monster's hit does) regardless of whether you were initially hit 10% of the time or 90% of the time, if you're being hit 90% of the time you'll probably die before 20 rounds; conversely, if you're only hit 10% of the time, you'll survive far longer than 20 rounds.

Intuitively then, a +1 AC may be equally valuable per 20 rounds regardless of starting score, but since higher scores mean longer survival, the total sum of damage prevented by raising a high AC even higher is greater than that of raising a low AC.

Of course, this analysis gets a lot more wonky when multiple defenses and multiple creatures are involved.
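A rough way to quantify that, assuming the 1d6+3 hit from earlier (6.5 average) and a hypothetical defender with 30 hp:

```python
# Rounds a defender is expected to survive against one attacker, and how
# much a +1 AC (one fewer face hitting) extends that, at two hit chances.
HP, AVG_DMG = 30, 6.5  # hypothetical defender hp; 1d6+3 averages 6.5

def rounds_survived(hit_chance):
    return HP / (hit_chance * AVG_DMG)

for p in (0.90, 0.10):
    before, after = rounds_survived(p), rounds_survived(p - 0.05)
    print(f"hit chance {p:.0%} -> {p - 0.05:.0%}: survive "
          f"{before:.1f} -> {after:.1f} rounds (+{after - before:.1f})")
# 90% -> 85%: about +0.3 extra rounds; 10% -> 5%: about +46 extra rounds.
# The damage saved per 20 rounds is identical in both cases, but the
# low-hit-chance defender lives long enough to collect it many times over.
```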
 

Two opponents each have 100 hp and deal 20 damage with each attack, with a 50% chance to hit. This means they average 10 damage against each other each round, scoring a mutual (average) kill on round 10.

One of these has the option of taking +2 AC or +2 to hit.

Assuming he takes the +2 AC, the opponent's average damage per round drops from 10 to 8. After 10 rounds, he will have dealt 100 damage to his opponent while suffering 80 himself.

Assuming he takes the +2 to hit, his average damage goes from 10 to 12. After 9 rounds he has dealt 108 damage while suffering 90 himself. This is not quite as good a result as he got with the AC bonus, but the difference is very small.


Let's do this again, but starting with a 20% chance to hit. Each warrior now averages 4 damage per round, and the baseline fight lasts 25 rounds.

Assuming he takes the +2 to hit, his average damage goes to 6. It now takes 17 rounds to deal 102 damage, suffering 68 damage in return. With the +2 AC instead, the opponent's average drops to 2 damage per round, and he kills the enemy in 25 rounds after having taken 50 damage. Again, the defensive option is better, but slower.


What I did this for was to show something else, however: that the effect of the +2 bonus was significantly more important at a 20% base chance to hit than at a 50% base chance to hit. The winner's remaining hit points at the end of the 50% fight numbered 20; the remainder in the 20% fight was 50 - two and a half times as much! THIS is the most important result, and it is why it's called escalating returns - the same bonus is worth more as you get to more extreme values.
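Here is the same duel as a deterministic average-damage model in Python (rounds rounded up, as in the post's own round counts):

```python
# The duel above: 100 hp each, 20 damage per hit, one side trading
# +2 AC (foe's hit chance -10%) against +2 to hit (own hit chance +10%).
import math

HP, DMG = 100, 20

def duel(my_hit, foe_hit):
    """Rounds I need to win, and the average damage I take getting there."""
    rounds_to_win = math.ceil(HP / (my_hit * DMG))
    return rounds_to_win, rounds_to_win * foe_hit * DMG

for base in (0.50, 0.20):
    for label, mine, foes in (("baseline ", base, base),
                              ("+2 AC    ", base, base - 0.10),
                              ("+2 to hit", base + 0.10, base)):
        rounds, taken = duel(mine, foes)
        print(f"base {base:.0%} {label}: win in {rounds:>2} rounds, "
              f"taking {taken:.0f} damage")
# At a 50% base the two bonuses land close together (80 vs 90 taken);
# at a 20% base the gap widens (50 vs 68), matching the post's point that
# the same +2 matters more at extreme hit chances.
```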
 


Borrowing from another thread:

Suppose you currently get hit 10% of the time by AC attacks and 80% of the time by FRW attacks.

FRW attacks and AC attacks are equally common (not because opponents are endogenously targeting your strengths and weaknesses; it's essentially random, and you can't single out and more quickly kill the opponents that target FRW rather than AC), equally damaging and status-effect-inflicting, and tend to come in equally difficult encounters. Opponents would hit you on a natural 20 even without the automatic-hit rule, and miss on a natural 1 even without the automatic-miss rule.

Which do you prefer: +1 to AC or +4 to all FRWs?
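One way to put numbers on the choice, under the stated assumptions (attacks split evenly between AC and FRW, equal damage). The sketch deliberately leaves the verdict open, since the absolute and relative views disagree:

```python
# Expected hits taken per incoming attack, with attacks split 50/50
# between AC and the FRW defences (the scenario's stipulation).
def expected_hits(ac_hit, frw_hit):
    return 0.5 * ac_hit + 0.5 * frw_hit

base = expected_hits(0.10, 0.80)       # current defences
plus1_ac = expected_hits(0.05, 0.80)   # +1 AC: hit on AC 10% -> 5%
plus4_frw = expected_hits(0.10, 0.60)  # +4 FRW: hit on FRW 80% -> 60%

print(f"baseline: {base:.3f} hits/attack")
print(f"+1 AC:    {plus1_ac:.3f} (AC hits halved, saves 0.025/attack)")
print(f"+4 FRW:   {plus4_frw:.3f} (saves 0.100/attack)")
# On raw expected damage, +4 to FRW prevents four times as many hits;
# the relative-measure view instead notices that +1 AC halves AC hits.
# Which of those matters more is exactly what this thread is arguing about.
```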
 



Pets & Sidekicks

Remove ads

Top