D&D 5E The mathematics of D&D–Damage and HP

Asisreo

Fiendish Attorney
So, I often see DPR, damage comparisons, and health being discussed at length when playing D&D.

Oftentimes, in white-room theory-crafting scenarios, someone will talk about the damage a character can inflict on a target and compare that damage to the health of said target.

For example, someone could talk about how a single scorching ray kills a goblin because 2d6 averages 7 damage and a goblin's average HP is 7. Therefore, if the ray hits, it's essentially a guaranteed kill, right?

But we're forgetting that when the average damage is 7 against 7 HP, there's actually only a 58.33% chance to kill that enemy (the chance of rolling 7 or higher). While 7 is the most likely sum, it accounts for only 16.67% of the possible outcomes, and everything below it leaves the goblin standing.

So if you have a 65% chance to hit a goblin with 2d6 damage, you actually only have about a 38% chance of killing it, which is really low when it costs you a whole action. It's possible, but far from guaranteed.

Now, reverse that, but for HP. Imagine the DM decided he wanted to roll for health but he waited until after the grimlock first takes damage. The damage is rolled and comes up 11, so the DM rolls the grimlock's health, which is 2d8+2. The damage should kill, right? Well, the kill is actually only about a 56% proposition.
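For anyone who wants to check these numbers, here's a minimal sketch that enumerates the dice outcomes exactly (no simulation); the 65% to-hit figure is the assumption used above:

```python
from itertools import product

def kill_chance(num_dice, sides, modifier, target_hp):
    """Exact probability that num_dice d(sides) + modifier totals target_hp or more."""
    rolls = list(product(range(1, sides + 1), repeat=num_dice))
    return sum(1 for r in rolls if sum(r) + modifier >= target_hp) / len(rolls)

# One scorching ray (2d6) against a 7 HP goblin, with an assumed 65% chance to hit
p_damage = kill_chance(2, 6, 0, 7)
print(f"P(2d6 >= 7)     = {p_damage:.4f}")         # 0.5833
print(f"P(hit and kill) = {0.65 * p_damage:.4f}")  # ~0.38

# Rolled grimlock HP (2d8+2) against 11 damage already dealt:
# the grimlock drops only if its rolled HP is 11 or less, i.e. 2d8 <= 9
rolls = list(product(range(1, 9), repeat=2))
p_dead = sum(1 for r in rolls if sum(r) + 2 <= 11) / len(rolls)
print(f"P(2d8+2 <= 11)  = {p_dead:.4f}")           # 0.5625
```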

If you combine those two mathematical models, the actual percent chance of a given attack killing a creature with rolled dice becomes much swingier. Of course, most DMs don't roll health for their monsters, but it does lead to interesting probabilities.

I just wanted to discuss exactly how average damage can be a misleading factor when talking about its relation to HP.


TL;DR
When average damage = average health, it isn't a guaranteed kill. It's actually only around a 50-60% chance to kill. Be considerate of these facts when discussing DPR.
 
Last edited:


aco175

Legend
Do you think some of the theory discussion involves average damage and average HP? Some of the other threads we talk about have people saying that monsters should only deal average damage, to speed things up or aid the DM. I'm not sure how many players use average damage over rolling the damage.

I'm not big on math so I'm not sure how much of this thread will evolve to be over my head, but it sounds worth talking about.
 

Oofta

Title? I don't need no stinkin' title.
Not a math geek (at all) but I use average HD and average damage for monsters (except crits) because it's easier. I even allow players to use average damage* for those that can't do quick math in their heads.

But I personally don't pay a lot of attention to white-room analysis because it tends to focus too much on offense (and combat, for that matter), doesn't pay enough attention to flexibility or defenses, and the differences most people see are so minor you'll never notice them in an actual game.

Anyway carry on, I'll be interested to see what comes of the discussion, even if I only understand a third of it. That's understanding .25%, right? ;)

*Rounded up for PC damage, because I don't want them to feel penalized.
 

Not only is that just the mean damage, but the chance to hit a goblin isn't all that high at 2nd level. A typical caster will have +4 or +5 to attack, and a goblin has 15 AC. If you take the higher of the two values, your chance to kill the goblin with a single ray is 34%. With the lower, it's 31%.
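Those figures line up if crits are folded in (a crit doubles the ray's dice to 4d6, which all but guarantees the 7 HP kill). A quick enumeration under that assumption:

```python
from itertools import product

def p_at_least(num_dice, sides, target):
    """Exact probability that the sum of num_dice d(sides) is target or more."""
    rolls = list(product(range(1, sides + 1), repeat=num_dice))
    return sum(1 for r in rolls if sum(r) >= target) / len(rolls)

def kill_goblin(attack_bonus, ac=15, hp=7):
    """Chance one ray drops a goblin, counting crits as 4d6."""
    p_kill = 0.0
    for d20 in range(1, 21):
        if d20 == 1:
            continue                                   # natural 1: automatic miss
        if d20 == 20:
            p_kill += (1 / 20) * p_at_least(4, 6, hp)  # crit: 4d6
        elif d20 + attack_bonus >= ac:
            p_kill += (1 / 20) * p_at_least(2, 6, hp)  # normal hit: 2d6
    return p_kill

print(f"+5 to hit: {kill_goblin(5):.3f}")  # ~0.341
print(f"+4 to hit: {kill_goblin(4):.3f}")  # ~0.312
```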

 

jasper

Rotten DM
I am an Adventurers League DM. I use average damage and HP most of the time. Occasionally I use average damage on outgoing spells.
Back in 1e I generally did roll health for my monsters. Sometimes it got wacky.
 

6ENow!

I don't debate opinions.
FWIW considering average damage and hp:

As DM, I've always used average HP for monsters/NPCs, except BBEGs, who get max HP. Damage is also average, except if average damage would automatically down a PC, in which case I will roll to give the PC a chance to remain in the fight.

For the players, about 80% of the time they just accept the "average" HP when they level instead of rolling for hit points. Players can also just use average damage for their attacks and spells (like just using 28 for a fireball instead of rolling). Players use average damage probably about 80% of the time as well. Shrug.
 

Quartz

Hero
Imagine the DM decided he wanted to roll for health but he waited until after the grimlock first takes damage.

I don't know about other GMs but where possible I pre-roll monster HPs or just invoke the mook rule - one hit (or two / three / whatever hits) and they're dead.
 

Snarf Zagyg

Notorious Liquefactionist
So, I often see DPR, damage comparisons, and health being discussed at length when playing D&D.
(snip)

I just wanted to discuss exactly how average damage can be a misleading factor when talking about its relation to HP.

IME, basic math can be a useful tool when it comes to someone looking to compare things in isolation (aka, optimization or efficiency).

To use an easy example, if everything else is equal, which does more damage: 2d6 or 1d12? Easy, right? So if you're presented with an option for damage, with everything else being equal, you would choose the 2d6 option.

It's the same in many fields; we've seen increased reliance on these metrics in sports. The value of a three point shot will be higher than a two point shot (of course), and then you can look at the expected field goal percentage (the chance of "hitting" to use a D&D term) to see the expected "damage" from each shot (the expected points). Which is why, in basketball, a lot of teams now play for either the three-point shot or the dunk/close two, and eschew the long-range two point shot.*

Which gets to two separate issues:

1. What does white room theory have to say about any individual combat? Or, as you put it, how does it account for the "swinginess" of dice? And the answer is: it doesn't. Not at all. The process matters more than the results. Think of it by analogy: if someone shoots 40 percent from the three-point line and 50 percent from inside the arc (two points), then they should take the three-point shot (1.2 expected points from the three versus 1.0 from the two). Even if they happen to miss that particular three-pointer because of swinginess, it was still the correct decision; the theory was correct.

2. On the other hand, as many people point out, white room theory in D&D is often flawed. There is a lot of bad math. There is a failure to account for tradeoffs (AC, health, other effects) in combat. And it does not even try to math out "out of combat" effects in D&D. In short, from what I have seen, it tends to be very limited outside of comparing like things, such as DPR, without full context. No one to my knowledge has done a good, holistic, comprehensive statistic like "WAR" (from baseball) for D&D.





*Of course, as defenses shift, there is now a new efficiency in long twos, but that's a different issue.
 

turnip_farmer

Adventurer
So if you have a 65% chance to hit a goblin with 2d6 damage, you actually only have about a 38% chance of killing it, which is really low when it costs you a whole action. It's possible, but far from guaranteed.
You get three rays from a single casting of scorching ray. So your theoretical wizard will kill at least one goblin a large majority of the time, and has a little over a 5% chance of taking out three in one go.
 

Asisreo

Fiendish Attorney
You get three rays from a single casting of scorching ray. So your theoretical wizard will kill at least one goblin a large majority of the time, and has a little over a 5% chance of taking out three in one go.
To be precise, if you split the rays among three individual goblins, there's a 76% chance to kill at least one.

In contrast, there's a 5.48% chance of killing all three.

If you were to point them at an individual goblin, it would be a bit tougher to precisely calculate...
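Both figures check out, and the all-three-rays-at-one-goblin case is actually not too bad to enumerate either. A minimal sketch, ignoring crits and keeping the 65%-to-hit assumption from upthread:

```python
from itertools import product
from math import comb

def p_sum_at_least(n_dice, sides, target):
    """Exact P(sum of n_dice d(sides) >= target), by enumeration."""
    rolls = list(product(range(1, sides + 1), repeat=n_dice))
    return sum(1 for r in rolls if sum(r) >= target) / len(rolls)

# One ray per goblin, using the rounded 38% per-ray kill chance from upthread:
p = 0.38
print(f"At least one of three goblins dies: {1 - (1 - p) ** 3:.1%}")  # ~76%
print(f"All three goblins die:              {p ** 3:.2%}")            # ~5.5%

# All three rays at a single 7 HP goblin (65% to hit per ray, crits ignored):
p_hit = 0.65
p_kill_single = sum(
    comb(3, k) * p_hit**k * (1 - p_hit)**(3 - k) * p_sum_at_least(2 * k, 6, 7)
    for k in range(1, 4)   # k rays hit, so 2k d6 of damage must total 7+
)
print(f"Three rays at one goblin kill it:   {p_kill_single:.1%}")     # ~85%
```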
 

Ovinomancer

No flips for you!
So, I often see DPR, damage comparisons, and health being discussed at length when playing D&D.
(snip)

I just wanted to discuss exactly how average damage can be a misleading factor when talking about its relation to HP.
Generally, analysis is done statistically, not for individual cases. The moment you limit the analysis to a specific number of events, you can't directly apply the statistical results. It's like saying the mean roll of a d6 is 3.5 -- you can't actually roll 3.5. This is a good point and worth remembering.

I strongly encourage looking at the assumptions built into statistical models, as this OP does. Too often we do math and assume that since math was done it must be right, when, in reality, we've made an assumption in order to do the math. The assumption in most stat models is infinite trials. This is obviously incorrect, but can provide a useful model. Here, the unpacking of the assumption is that the model isn't telling us a single Scorching Ray will kill an average goblin, but that it will do so more often than not. That is, on average, over an infinite number of trials, the odds of killing the goblin are better than or equal to 50%.
 

Asisreo

Fiendish Attorney
We can also add economics into this discussion when analyzing individual cases.

For example, suppose you're a level 1 fighter with a 65% chance to hit and an average (expected) 10 damage on a hit. You expect to deal 6.5 DPR. That's the damage value of your action.

But when you take that action, there's a 35% chance you miss outright and roughly a 15% chance you hit but deal less than 6.5 damage. That means any individual attack can come out as a loss in the gamble.

Spellcasters are at an even greater disadvantage since they usually have to expend a spell slot for their damage. For example, a sorcerer casting Chromatic Orb does an expected 13.5 damage. With a 65% to-hit, that's 8.775 expected damage.

We can isolate the damage value of the spell slot itself by comparing against the best damage option that doesn't expend a slot. Let's say that option is fire bolt: an expected 5.5 damage on a hit, or 3.575 with a 65% chance to hit. Therefore, the spell slot expended on Chromatic Orb had a value of 8.775 - 3.575 = 5.2 damage per slot.

As this sorcerer, if your rolled damage is less than 8.775 but higher than 3.575, you didn't get full value for your spell slot. If your damage is less than 3.575, you lost the value of both your action and your spell slot.
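A small sketch of that bookkeeping, ignoring crits as the numbers above implicitly do:

```python
def expected_damage(avg_on_hit, p_hit):
    """Expected damage of a single attack roll, crits ignored."""
    return avg_on_hit * p_hit

p_hit = 0.65
chromatic_orb = expected_damage(13.5, p_hit)  # 3d8 averages 13.5 on a hit
fire_bolt = expected_damage(5.5, p_hit)       # 1d10 averages 5.5 on a hit

print(f"Chromatic Orb expected damage: {chromatic_orb:.3f}")  # 8.775
print(f"Fire bolt expected damage:     {fire_bolt:.3f}")      # 3.575
print(f"Damage value of the 1st-level slot: {chromatic_orb - fire_bolt:.1f}")  # 5.2
```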
 

2. On the other hand, as many people point out, white room theory in D&D is often flawed. There is a lot of bad math. There is a failure to account for tradeoffs (AC, health, other effects) in combat. And it does not even try to math out "out of combat" effects in D&D. In short, from what I have seen, it tends to be very limited outside of comparing like things, such as DPR, without full context. No one to my knowledge has done a good, holistic, comprehensive statistic like "WAR" (from baseball) for D&D.
I strongly encourage looking at the assumptions built into statistical models, as this OP does. Too often we do math and assume that since math was done it must be right, when, in reality, we've made an assumption in order to do the math. The assumption in most stat models is infinite trials. This is obviously incorrect, but can provide a useful model. Here, the unpacking of the assumption is that the model isn't telling us a single Scorching Ray will kill an average goblin, but that it will do so more often than not. That is, on average, over an infinite number of trials, the odds of killing the goblin are better than or equal to 50%.

The underlying problem is a common one. The unspoken assumption is that if something is impossible to measure or compute, it isn't relevant. To compute something analytically, you have to ignore so many important factors just to make the computation tractable that the final result isn't meaningful. This is why analytical probability ends up being an all but useless tool anywhere outside Vegas, and real-world applications use measured statistics.

Even in D&D analytics, people ignore monster AC & saves when comparing unlike things. For example, which spell does more damage, Finger of Death, or Disintegrate upcast to 7th level?

The correct answer is, "it depends on the monster's CON save, DEX save, how many hit points it has left, how many allied turns there are between you and the monster's next turn, and whether it has Magic Resistance or Legendary Resistance." Against a dragon in the first round or two of combat, the expected damage of Disintegrate is zero. Against a Purple Worm, it is much higher. Against a Marilith, Disintegrate's expected damage is 26, while Finger of Death is about 33.

FoD wins, right?

WRONG.

In D&D, I do not care primarily about the amount of damage I do. I care about killing the monster before it hurts the party more. If our Marilith is down to 70 hit points, the chance that Finger of Death will kill her is less than 1%, and the chance that Disintegrate will kill her is 30%. So, despite FoD doing more average damage, it is not necessarily the "theoretically correct choice" to use it. If the Marilith goes next, it's arguably smarter to use the spell with the most potential to end the threat.

(Usual caveat: I may have made errors)
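For anyone who wants to play with the numbers, here is a rough sketch of that comparison. The save bonuses (+10 CON, +5 DEX), the Magic Resistance advantage, and the DC 17 are assumptions plugged in to roughly reproduce the figures above; adjust them to taste:

```python
def fail_chance(save_bonus, dc, magic_resistance=True):
    """Chance the target fails the save; Magic Resistance modeled as advantage."""
    p_fail_once = min(max(dc - save_bonus - 1, 0), 20) / 20
    return p_fail_once ** 2 if magic_resistance else p_fail_once

def sum_dist(n, sides):
    """Exact distribution of the sum of n d(sides), via convolution."""
    dist = {0: 1.0}
    for _ in range(n):
        new = {}
        for total, p in dist.items():
            for face in range(1, sides + 1):
                new[total + face] = new.get(total + face, 0.0) + p / sides
        dist = new
    return dist

def p_kill(n, sides, bonus, hp, p_fail, half_on_save):
    """P(the spell's damage meets or exceeds hp) under the save model above."""
    dist = sum_dist(n, sides)
    p_full = sum(p for s, p in dist.items() if s + bonus >= hp)
    p_half = sum(p for s, p in dist.items() if (s + bonus) // 2 >= hp)
    return p_fail * p_full + ((1 - p_fail) * p_half if half_on_save else 0.0)

hp = 70
fod = p_kill(7, 8, 30, hp, fail_chance(10, 17), half_on_save=True)    # Finger of Death: 7d8+30, CON save for half
disn = p_kill(13, 6, 40, hp, fail_chance(5, 17), half_on_save=False)  # Disintegrate at 7th: 13d6+40, DEX save negates
print(f"Finger of Death kills: {fod:.1%}")   # just under 1%
print(f"Disintegrate kills:    {disn:.1%}")  # ~30%
```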
 

Blue Orange

Adventurer
All good points.

I'd add that it's beneficial to take average HP when going up a level since (I believe) it rounds up, so it will get you 0.5 extra HP on average.

A few statistics points:

The law of large numbers states that experimentally obtained sample averages converge to the population mean over time. (The exception is certain pathological distributions like the Cauchy or Pareto with alpha < 1, but those don't apply to dice rolls, which are discrete uniform. Though I'd love to see a monster that does Cauchy damage!) This means that the result of rolling 20d6 will be pretty close to 70...a lot more than 2d6 will be close to 7, relatively speaking.

The central limit theorem states that the sum of independent, identically distributed variables (like a large number of dice) converges to a normal distribution (the Gaussian 'bell curve') as you add more of them. 20d6 looks a lot like a normal distribution, 2d6 less so. Distributions for 2d6 and 1d6 are pretty easy to work out by hand, but after that they get increasingly complicated.
In general for a Gaussian, 68% of values will be within 1 standard deviation of the mean, and about 95% will be within 2 standard deviations. Die rolls aren't perfectly Gaussian, but they get closer to Gaussian the more dice you have. The mean of NdX is of course N(X+1)/2, and the standard deviation is sqrt(N(X^2 - 1)/12). What that means in practice is that 4d6 only varies twice as much as 1d6.

Also, the fewer dice you have, the more likely extreme values are, relatively speaking. 4d6 and 4*(1d6) have the same mean, but you are more likely to get a roll of 4 or 24 by rolling 1d6 and multiplying the result by 4 than by rolling 4d6.
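Those formulas are easy to play with directly. A minimal sketch of the mean/standard deviation of NdX and of the extreme-value comparison (4d6 versus a single d6 multiplied by 4):

```python
from itertools import product
from math import sqrt

def ndx_stats(n, x):
    """Mean and standard deviation of the sum of n dice with x sides each."""
    mean = n * (x + 1) / 2
    sd = sqrt(n * (x * x - 1) / 12)
    return mean, sd

for n in (1, 2, 4, 8, 20):
    mean, sd = ndx_stats(n, 6)
    print(f"{n:2d}d6: mean {mean:5.1f}, sd {sd:.2f}")
# sd(4d6) is exactly twice sd(1d6): 3.42 vs 1.71

# Extreme results: 4d6 versus one d6 multiplied by 4
p_4d6_extreme = sum(1 for r in product(range(1, 7), repeat=4)
                    if sum(r) in (4, 24)) / 6**4
p_1d6x4_extreme = 2 / 6          # roll a 1 or a 6 on the single die
print(f"P(4 or 24 on 4d6):     {p_4d6_extreme:.4f}")    # ~0.0015
print(f"P(4 or 24 on 1d6 x 4): {p_1d6x4_extreme:.4f}")  # ~0.3333
```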
 

Stalker0

Legend
The underlying problem is a common one. The unspoken assumption is that if something is impossible to measure or compute, it isn't relevant.
(snip)
This is true in the scenario where a wizard has both spells prepared, or has the chance to know ahead of time what scenario he is likely to deal with.

However, standard spell stats are useful for a general adventuring day. The wizard needs a spell and has one slot left to prepare: which spell does he pick?

Theory would say: barring any knowledge of circumstances, you pick the spell that is more effective more often. For damaging spells, average damage is the unit of effectiveness, so in your example, unless I were an undead lover, I would choose Disintegrate over FoD as the more general damage spell.
 

This is true in the scenario where a wizard has both spells prepared, or has the chance to know ahead of time what scenario he is likely to deal with.
(snip)

Right, so there are even more factors to consider. (Also the fact that Disintegrate can be cast twice, rather than once, if your highest spell slot is 7th).
 

Blue

Ravenous Bugblatter Beast of Traal
Once you are looking at multiple hits to kill, the total damage gets closer to its mean, and since D&D also grants HP with more Hit Dice, the HP total likewise gets closer to its mean. So while this is correct, there is relatively less variance once you get past the first few levels, to the point where you need multiple damage dice in total to overcome multiple Hit Dice.
 

Asisreo

Fiendish Attorney
I also want to touch on what using exclusively average damage does to the difficulty of the game.

To be sure, I also use average damage at times, but it's important to know that the more you use average damage, the easier the game becomes because of predictability.

If a wizard has 6 HP and a goblin does an average of 5 damage per hit, it doesn't take much to see that the goblin must hit twice before the wizard goes down. There is a 0% chance the wizard goes down on the first hit.

However, if you roll the damage, the probability of the goblin dealing 6 or more damage on a hit is 50%. If you assume the wizard's AC is 15, that's a 25% chance they go down on round 1. You might as well roll a d4 and knock them unconscious on a 1.
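A quick check of that 25% figure, using the Monster Manual goblin's scimitar (+4 to hit, 1d6+2 damage) and ignoring crits, as the rough numbers above do:

```python
# Goblin scimitar (+4 to hit, 1d6+2 damage) vs. a 6 HP wizard with AC 15.
p_hit = sum(1 for d20 in range(2, 21) if d20 + 4 >= 15) / 20  # 0.50 (nat 1 always misses)
p_enough = sum(1 for d6 in range(1, 7) if d6 + 2 >= 6) / 6    # 0.50
print(f"P(wizard drops on the goblin's first swing) = {p_hit * p_enough:.2f}")  # 0.25
```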
Once you are looking at multiple hits to kill, the total damage gets closer to its mean, and since D&D also grants HP with more Hit Dice, the HP total likewise gets closer to its mean. So while this is correct, there is relatively less variance once you get past the first few levels, to the point where you need multiple damage dice in total to overcome multiple Hit Dice.
No, actually! Surprisingly, as the number of dice increases, the variance of the dice also increases. It actually increases significantly too.

For reference, variance is a statistical term that measures how much the values of a data set differ from the mean. It tells you how spread out the data set is. For example, the set {1,2,3} has a smaller variance than the set {0,1,2,3,4} despite them having the same mean.

In other words, the more dice you roll, the less likely you are to land exactly on the average, and the further (in absolute terms) your rolls can stray from it.

Rolling 2d6 means you have an average of 7, and you'll hit that average exactly 16.67% of the time. The standard deviation is 2.42, so the variance is 5.83.

Rolling 8d6 means you have an average of 28, but only an 8.09% chance of hitting that average exactly. Its standard deviation is 4.83 and its variance grows to a whopping 23.33.
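For anyone who wants to verify those figures, a short sketch that builds the exact distributions by enumeration:

```python
from itertools import product
from math import sqrt

def sum_distribution(n, sides=6):
    """Exact distribution of the sum of n d6, built by enumeration."""
    counts = {}
    for roll in product(range(1, sides + 1), repeat=n):
        s = sum(roll)
        counts[s] = counts.get(s, 0) + 1
    total = sides ** n
    return {s: c / total for s, c in counts.items()}

for n in (2, 8):
    dist = sum_distribution(n)
    mean = sum(s * p for s, p in dist.items())
    var = sum((s - mean) ** 2 * p for s, p in dist.items())
    print(f"{n}d6: mean {mean:4.1f}, variance {var:5.2f}, sd {sqrt(var):.2f}, "
          f"P(exactly {int(mean)}) = {dist[int(mean)]:.4f}")
# 2d6: variance  5.83, sd 2.42, P(7)  = 0.1667
# 8d6: variance 23.33, sd 4.83, P(28) = 0.0809
```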

What does this mean, exactly? Does it mean that rolling an infinite amount of dice actually won't get you closer to the average of those dice?

Well, it's more complicated than that. The Law of Large Numbers would agree with you, but there's also the distinction between Relative Frequency and Cumulative Relative Frequency: Relative Frequency (RF) is what you actually rolled in an individual trial, while Cumulative Relative Frequency (CRF) is the running average of your results across all trials so far.

True: the CRF will converge towards the expected value of the dice, meaning each d6 will cumulatively average out to 3.5 across all trials. However, this is completely irrelevant to play. It doesn't matter that your dice converge to an average over many rolls, because the damage that matters isn't a cumulative value; it's based on the specific trial you're facing, independent of any previous trials.

TL;DR

The damage you did to the goblin at 1st level is completely unrelated to the damage you're doing to the dragon at 20th level. This means that having a lot of dice doesn't make you more likely to land exactly on the average; it makes you less likely, and the further into play you get, the less reliable your raw dice totals become (in terms of damage).
 

Blue

Ravenous Bugblatter Beast of Traal
I also want to touch on what using exclusively average damage does to the difficulty of the game.
(snip)

The damage you did to the goblin at 1st level is completely unrelated to the damage you're doing to the dragon at 20th level. This means that having a lot of dice doesn't make you more likely to land exactly on the average; it makes you less likely, and the further into play you get, the less reliable your raw dice totals become (in terms of damage).
You seem to be more fluent with the names of the math than I am, but I think you may be missing a point. I don't care if it EXACTLY hits the average, because the HP is most likely not an exact multiple of the attack damage anyway. So if mean damage is 12 and the foe has a mean of 113 HP, I have a high chance that ten hits will kill it. What I really care about is the tightness of the clustering.

And damage absolutely is cumulative. An individual damage roll has no meaning on its own; it only matters once the cumulative total meets or exceeds the foe's total HP. Looking at the rolls as individual results is misleading because they have no effect until they hit that threshold.

The only relevant quantity is the boolean (cumulative damage) >= (HP). And it's the total damage that gets compared, not any individual roll.
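A quick Monte Carlo version of that "ten hits versus 113 HP" intuition; the 2d8+3 here is just a hypothetical stand-in for any attack with a mean of 12 damage:

```python
import random
from collections import Counter

def hits_to_drop(hp, dice=2, sides=8, bonus=3, trials=100_000):
    """Monte Carlo: hits of (dice)d(sides)+bonus needed for cumulative damage to reach hp."""
    random.seed(1)
    results = []
    for _ in range(trials):
        total, hits = 0, 0
        while total < hp:
            total += sum(random.randint(1, sides) for _ in range(dice)) + bonus
            hits += 1
        results.append(hits)
    return results

# 2d8+3 is a stand-in for "mean 12 damage per hit"; 113 HP as in the example above.
results = hits_to_drop(113)
for hits, count in sorted(Counter(results).items()):
    print(f"{hits:2d} hits: {count / len(results):.1%}")
print(f"P(dead within 10 hits): {sum(1 for r in results if r <= 10) / len(results):.1%}")  # ~77%
```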
 

Blue Orange

Adventurer
I also want to touch on what using exclusively average damage does to the difficulty of the game.
(snip)

The damage you did to the goblin at 1st level is completely unrelated to the damage you're doing to the dragon at 20th level. This means that having a lot of dice doesn't make you more likely to land exactly on the average; it makes you less likely, and the further into play you get, the less reliable your raw dice totals become (in terms of damage).

All very true!

Bit of a nitpick, though: the standard deviation (which is the square root of the variance, and the more intuitive number since roughly two-thirds of a near-normal distribution falls within one standard deviation of the mean) increases as the square root of the number of dice. Quadruple the number of dice and your standard deviation doubles.

Now, two standard deviations for a normal distribution include about 95% of the distribution, roughly your odds of not rolling a 1 (or not rolling a 20) on a d20.

With 2d6, a mean of 7, and a standard deviation of 2.42, two standard deviations get you from 3 to 11 (just barely missing 2 and 12), indeed including about 94% of the distribution. It's almost certain to be between 3 and 11? Well, I knew that. How likely is the outer half of the distribution? Well, it's tricky because you can go out 5 on each side, but rounding down you'd include 2-4 and 10-12, and your odds of winding up in this range are about 1 in 3.
With 8d6, a mean of 28, and a standard deviation of 4.83, two standard deviations get you from 19 to 37 (by n=8 we may be able to get away with a normal distribution for gaming purposes, though in stats class they want at least 10). So it's almost certain to be between 19 and 37, but the theoretical range is 8-48. What this means is that the ranges 8-18 and 38-48 (theoretically half the range) actually only occupy about 5% of the distribution, roughly 2.4% on each side. Getting a result in the outer half on 8d6 is about as likely as rolling a natural 20.

So while with 8d6 you are less likely to hit the mean exactly, you are more likely to hit close to the mean than with 2d6. The relative variability decreases even as the absolute variability increases! Your fireball's damage is much more dependable than your greataxe's, and it's all thanks to the law of large numbers.
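A short sketch of that outer-half comparison, enumerating the exact distributions:

```python
from itertools import product

def tail_probability(n, sides=6):
    """Chance the sum of n d(sides) lands in the outer half of its theoretical range."""
    lo, hi = n, n * sides
    quarter = (hi - lo) / 4
    outer = sum(1 for r in product(range(1, sides + 1), repeat=n)
                if sum(r) <= lo + quarter or sum(r) >= hi - quarter)
    return outer / sides**n

print(f"2d6 outer half: {tail_probability(2):.1%}")  # ~33.3%
print(f"8d6 outer half: {tail_probability(8):.1%}")  # ~4.7%
```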
 
