Replacing 1d20 with 3d6 is nearly pointless

Ovinomancer

No flips for you!
Have you looked at the graphs I've been linking to, or not?

How math works is you do steps and results come out.

Do I have to take a screen shot? I have to take a screen shot. Fuck.
[Attached screenshot: the CDF of 1d20 overlaid with the CDF of the scaled and recentered 3d6.]
So here we have the CDF (cumulative distribution) of 1d20 and the CDF of 3d6, with the difference in their averages and standard deviations normalized away.

The 1d20 curve is a line. The 3d6 curve is the set of black points. Notice how the 3d6 curve is close to, but not exactly on, the 1d20 line. It only differs significantly at the 5% "critical hit/miss" cases that correspond to 1 and 20 on the d20 roll.

I horizontally scaled 3d6 by a factor of 2, which corresponds to "bonuses and penalties are twice as large, conceptually, in a 3d6 based situation".

So yes, that is how that works. The distributions are similar in CDF, because you can see it. Yes, one is a flat distribution and the other is a normal(ish) one, but we aren't playing "can you roll a 7", we are playing "can you roll a 7+" when we play D&D. And "can you roll a 7+" corresponds to the CDF (the integral) of the distribution.

And when you integrate things, the differences between a flat distribution and a curved one fade away pretty fast.

This isn't "mathturbation", because I actually checked my results. I even shared links to those results being checked. I am not sure why I expected people to actually click on those results before saying "this is bullshit".

Anyhow, here are the results inline.

Quite possibly a slightly different value than "2" would be more correct once we neglect tails -- a different value than "2" would correspond to a change in the slope of the 3d6 part of the graph, and making it slightly less steep might improve the match (except for the tails). But 2 is so close I really don't care.
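(A minimal sketch of that comparison in Python, by exact enumeration, for anyone who would rather check numbers than eyeball the graph; the -11 offset follows the graph's rounding of the exact mean-matching shift of 10.5.)

```python
from itertools import product

# Exact distribution of 3d6: number of ways to roll each total 3..18.
counts_3d6 = {}
for dice in product(range(1, 7), repeat=3):
    s = sum(dice)
    counts_3d6[s] = counts_3d6.get(s, 0) + 1

def p_at_least_scaled_3d6(target):
    """P(2*3d6 - 11 >= target): the stretched, recentered 3d6 tail probability."""
    return sum(n for s, n in counts_3d6.items() if 2 * s - 11 >= target) / 216

def p_at_least_d20(target):
    """P(1d20 >= target)."""
    return max(0, min(20, 21 - target)) / 20

for t in range(1, 21):
    print(f"{t:2d}  d20: {p_at_least_d20(t):.3f}  2*3d6-11: {p_at_least_scaled_3d6(t):.3f}")
```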
Sigh. The screenshot you presented should have clued you into a problem with what you did. The scaled and recentered 3d6 curve has values from -5 to 25 while the d20 is 1 to 20. All you've done is stretch a normal cumulative distribution and note that, if you stretch it enough, the middle part looks straight(ish). You're ignoring ~1/3 of the data points to do this.

This is like blowing up a circle to a large enough circumference that a close look at a tiny part of the arc looks like a straight line. But, despite doing this, a circle is not a straight line. This is why when you do math, you really need to understand what you're doing -- what assumptions are necessary. Just doing math doesn't mean you'll get the right answer. Especially with stats.
 

NotAYakk

Adventurer
Sigh. The screenshot you presented should have clued you into a problem with what you did. The scaled and recentered 3d6 curve has values from -5 to 25 while the d20 is 1 to 20.
The total probability of values from -5 to 1 on the 3d6 curve is under 5%.
The total probability of the values from 20 to 25 on the 3d6 curve is under 5%.

All you've done is stretch a normal cumulative distribution and note that, if you stretch it enough, the middle part looks straight(ish). You're ignoring ~1/3 of the data points to do this.
I'm ignoring the 5% most extreme values on both ends, and looking at the middle 90%.

With a d20 on an attack, a natural 1 already misses and a natural 20 already hits; in a sense, they correspond to -infinity and +infinity.

I talked about the outlying 5% cases already, and now you bring it up as if it was some big gotcha. Those are the crit/auto hit/miss mechanics.
This is like blowing up a circle to a large enough circumference that a close look at a tiny part of the arc looks like a straight line. But, despite doing this, a circle is not a straight line. This is why when you do math, you really need to understand what you're doing -- what assumptions are necessary. Just doing math doesn't mean you'll get the right answer. Especially with stats.
I assume we care about "does this attack hit/miss" experience at the table.

Given a game played with double-modifiers and d20, and another played with 3d6, I claim distinguishing between those games with a log of hit/misses (and not the rolls) will be a herculean task.

You'd basically have to find some creature whose chance of being hit is right on the edge of possible for the 3d6 case (in the "long tail" of 16-18) and tease out if the chance is different than 5%.

Suppose we want a 2 SD error bar on the sample. We have some random variable H. Its true value is either 1/20 or 1/216. How many samples do we need to distinguish that?

Quick napkin math (I think the right answer involves using Student's t? It is basically a polling problem.) gives me that it would take about 100 samples of "creature we know needs to be hit on an 18 on 3d6" to see a significant (2 SD, or a p-value around 0.03) difference between the d20 with double modifiers and auto-hit on a 20 and the 3d6 with normal modifiers.
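(A rough version of that napkin math as a sketch; it uses a normal approximation to the binomial rather than Student's t, which is close enough for this purpose.)

```python
import math

p_d20 = 1 / 20    # hit chance with doubled modifiers and auto-hit on a natural 20
p_3d6 = 1 / 216   # hit chance when only a natural 18 on 3d6 hits

def samples_to_separate(p1, p2, z=2.0):
    """Smallest sample size where the gap between p1 and p2 exceeds z standard
    errors of a proportion estimated at p1 (normal approximation to the binomial)."""
    gap = abs(p1 - p2)
    n = 1
    while z * math.sqrt(p1 * (1 - p1) / n) > gap:
        n += 1
    return n

print(samples_to_separate(p_d20, p_3d6))  # roughly 90-100 attack rolls against that creature
```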

Or, tl;dr, we really don't care about events with really low probability, as they don't happen often enough to care about them. And the entire "tail" you are pointing at adds up to a low probability event.
 

TheCosmicKid

Adventurer
I did an extensive analysis of 3d6 vs d20 a while ago when we considered going to 3d6. Like a lot of people have found, it works best for skill checks and even saving throws because they are simple "all-or-nothing" rolls. Using 3d6 increases the likelihood of "typical results" which is what I'd expect from a single-roll test.
This is sort of true, but you have to be careful how you understand it. A 50% probability check on a d20 -- say, a +4 roll against a DC of 15 -- is still a 50% probability check on 3d6. Your odds of rolling an 11 are higher, but your odds of rolling an 11 or above are still the same. What the normal distribution on 3d6 does is make the probability of success/failure "fall away" from 50% faster as your bonus or the DC changes.

I'm not attributing any fallacy to you in particular. Maybe you already know this and I'm preaching to the choir -- awesome! But in my experience a lot of people hear "increased likelihood of typical results" and think that it means they're more likely to hit these DC 15s on a +4 because they'll roll more 11s. So I'm just clarifying that the math doesn't work that way.
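(A quick check of that, as a sketch: meet-or-beat chances on each die for a few target naturals, by exact enumeration.)

```python
from itertools import product

rolls_3d6 = [sum(r) for r in product(range(1, 7), repeat=3)]

def p_at_least_3d6(target):
    return sum(1 for s in rolls_3d6 if s >= target) / len(rolls_3d6)

def p_at_least_d20(target):
    return (21 - target) / 20

for t in (11, 13, 15, 17):
    print(t, round(p_at_least_d20(t), 3), round(p_at_least_3d6(t), 3))
# 11 or better is 50% on both dice; by 15 it's 0.30 on the d20 but only ~0.09
# on 3d6 -- the success chance falls away from 50% faster on the bell curve.
```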
 

Blue

Ravenous Bugblatter Beast of Traal
Indeed. Not sure where this came from - the only place I've seen this argument is stat generation. 3d6 vs. 4d6 drop lowest vs. a straight d20 roll, etc.
A couple times a year someone announces that they want to move from d20 to 3d6 to "reduce swinginess". When really what it does is make that even the smallest modifier swings the success/failure chance a lot around the middle of the range, which is where bounded accuracy often puts us.

It's usually because people mistake large ranges of numbers for swinginess, when really the outcome is boolean, so success/failure is the only swinginess, and this exacerbates how much modifiers change that.
 

Flexor the Mighty!

18/100 Strength!
If I ever go back to 5e I may tinker with a 3d6 system. Crits IME are the driving force of most encounters and make combat crazy "swingy".
 

BrokenTwin

Explorer
If you're using 3d6 to make crits rarer, why not just make any triple a crit? That's a 1/36 chance, which is less than max on 1d20, but still significantly higher than the odds of rolling 18 on 3d6. It's a lot easier to remember than a range of "crit numbers", and requires zero math to realise at the table.

Plus, it makes reading the dice a bit more dynamic, and adds a bit of suspense for those buggers who insist on rolling one die at a time.
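(A sketch of the odds being compared, by enumerating all 216 outcomes of 3d6.)

```python
from itertools import product

outcomes = list(product(range(1, 7), repeat=3))
p_triple = sum(1 for a, b, c in outcomes if a == b == c) / len(outcomes)
p_nat_18 = sum(1 for r in outcomes if sum(r) == 18) / len(outcomes)

print(p_triple)   # 6/216 = 1/36, about 2.8%
print(p_nat_18)   # 1/216, about 0.5%
print(1 / 20)     # 5% for a natural 20 on 1d20
```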
 

Ovinomancer

No flips for you!
The total probability of values from -5 to 1 on the 3d6 curve is under 5%.
The total probability of the values from 20 to 25 on the 3d6 curve is under 5%.


I'm ignoring the 5% most extreme values on both ends, and looking at the middle 90%.

With a d20 on an attack, a natural 1 already misses and a natural 20 already hits; in a sense, they correspond to -infinity and +infinity.

I talked about the outlying 5% cases already, and now you bring it up as if it was some big gotcha. Those are the crit/auto hit/miss mechanics.

I assume we care about "does this attack hit/miss" experience at the table.

Given a game played with double-modifiers and d20, and another played with 3d6, I claim distinguishing between those games with a log of hit/misses (and not the rolls) will be a herculean task.

You'd basically have to find some creature whose chance of being hit is right on the edge of possible for the 3d6 case (in the "long tail" of 16-18) and tease out if the chance is different than 5%.

Suppose we want a 2 SD error bar on the sample. We have some random variable H. Its true value is either 1/20 or 1/216. How many samples do we need to distinguish that?

Quick napkin math (I think the right answer involves using Student's t? It is basically a polling problem.) gives me that it would take about 100 samples of "creature we know needs to be hit on an 18 on 3d6" to see a significant (2 SD, or a p-value around 0.03) difference between the d20 with double modifiers and auto-hit on a 20 and the 3d6 with normal modifiers.

Or, tl;dr, we really don't care about events with really low probability, as they don't happen often enough to care about them. And the entire "tail" you are pointing at adds up to a low probability event.
Well, we have progress, as now you're not saying the math is correct, but that your argument is correct despite you discarding data. That's good.

As for your argument that we can discard data because it's low probability, you're tossing 10% of the possible rolls. That means that you're discounting 1 out of every 10 rolls. That's not a negligible amount.

Now, if your argument is that you can move the target numbers and bonuses to adjust the needed rolls on a d20 so that it looks more like the center of the 3d6, then you've done a little bit of moving things around to prove something that's largely true without the effort -- the center of the 3d6 is pretty darned close already to the most common needed d20 numbers for most combat efforts. There's plenty of ways to discover this without abusing stats.
 

NotAYakk

Adventurer
I said what math I did, not that it was "correct" because of the math. You seem to be projecting.

I scaled them by the standard deviation. This, observably, had the effect I described. It wasn't "correct" because of the math I did; I did the math, then I described the results. I then described why those results aren't all that surprising: that dividing by the ratio of the 2nd moments and subtracting the difference in the 1st leaves only higher-order components, and those components are bounded in effect (small).

We can formalize that if you want, but my argument has not and never did rely on that formalization. It relied on the actual graphs which I posted and the probabilities on those graphs and what those probabilities mean. (To sketch the formalization argument using high school calc concepts: you'd basically mirror arguments like using the low order terms of a Taylor series, and how the tail of the series has a bounded contribution, so can be neglected if you are willing to accept a known error. Except with statistical moments instead of polynomials. I know this argument is plausible, but I am not claiming it is sufficient or necessary.)

You grabbed onto my description of why the results aren't surprising and started complaining, seemingly without even looking at the graphs, based on you changing your position once I posted screenshots.

I hope this clears things up for you. Getting things backwards can be confusing, and maybe a reread would help.

Have a nice day.
 

dnd4vr

Supercalifragilisticexpialidocious!
This is sort of true, but you have to be careful how you understand it. A 50% probability check on a d20 -- say, a +4 roll against a DC of 15 -- is still a 50% probability check on 3d6. Your odds of rolling an 11 are higher, but your odds of rolling an 11 or above are still the same. What the normal distribution on 3d6 does is make the probability of success/failure "fall away" from 50% faster as your bonus or the DC changes.

I'm not attributing any fallacy to you in particular. Maybe you already know this and I'm preaching to the choir -- awesome! But in my experience a lot of people hear "increased likelihood of typical results" and think that it means they're more likely to hit these DC 15s on a +4 because they'll roll more 11s. So I'm just clarifying that the math doesn't work that way.
Yeah, now I understand what you mean, but that wasn't really the issue I was talking about. My point was more about how the 3d6 vs d20 concept affects single-roll outcomes versus extended "contests" such as combat.

A skill check is (most often) a single roll, as are many saving throws. This means that with the linear d20, the "swinginess" makes your normal efforts as likely as your best effort and your worst effort. That isn't how most people's efforts are. The bell curve of the 3d6 better models the likelihood of "typical" results compared to worst and best results.

Combat becomes non-linear because there is a series of rolls involved to determine the outcome (at least in most cases). In an extreme case, you hit on every roll, representing your best effort. However the likelihood of that happening is pretty small (depending on your opponent's AC of course). More commonly, sometimes you will hit and other times you are going to miss. If you look at a particular distribution for a bonus vs. an AC, you see how it is a bell curve. So, you don't need to use 3d6 for combat to make it non-linear.
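(A sketch of that series-of-rolls effect; the 60% hit chance and 8 attacks per fight are made-up numbers, just for illustration.)

```python
from math import comb

p_hit = 0.60   # hypothetical per-attack chance to hit
attacks = 8    # hypothetical number of attack rolls in a fight

for k in range(attacks + 1):
    prob = comb(attacks, k) * p_hit**k * (1 - p_hit)**(attacks - k)
    print(f"{k} hits: {prob:.3f}")
# Total hits bunch up around 4-5 even though each individual d20 roll is flat;
# the bell curve comes from the repetition, not the die.
```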
 

Ovinomancer

No flips for you!
I said what math I did, not that it was "correct" because of the math. You seem to be projecting.

I scaled them by the standard deviation. This, observably, had the effect I described. It wasn't "correct" because of the math I did; I did the math, then I described the results. I then described why those results aren't all that surprising: that dividing by the ratio of the 2nd moments and subtracting the difference in the 1st leaves only higher-order components, and those components are bounded in effect (small).

We can formalize that if you want, but my argument has not and never did rely on that formalization. It relied on the actual graphs which I posted and the probabilities on those graphs and what those probabilities mean. (To sketch the formalization argument using high school calc concepts: you'd basically mirror arguments like using the low order terms of a Taylor series, and how the tail of the series has a bounded contribution, so can be neglected if you are willing to accept a known error. Except with statistical moments instead of polynomials. I know this argument is plausible, but I am not claiming it is sufficient or necessary.)

You grabbed onto my description of why the results aren't surprising and started complaining, seemingly without even looking at the graphs, based on you changing your position once I posted screenshots.

I hope this clears things up for you. Getting things backwards can be confusing, and maybe a reread would help.

Have a nice day.
Sigh. Okay, when I said you did mathturbation, you got mad, but that's when you do the wrong math and get confident you did something cool because of the wrong math. What you did -- scaling standard deviations and then thinking that made distributions similar? That's wrong math. It's bogus, utterly. That you saw graphs line up was coincidence -- it had nothing to do with what you did but the fact that you kept picking numbers until you managed to make the center 10 data points of the normal cumulative distribution of 3d6 look like a line with a slope of -1. Making post hoc choices with stats is always dangerous, because you're altering the assumptions that go into a statistical model but not altering the model to account for them. It leads you to assume you found something, when you did not.

If you look at the PDFs for d20 vs 3d6, you might note that you have 18% less chance of rolling a 14 or higher on 3d6, a 20.8% less chance of a 15, and a 20.4% less chance of a 16. Those numbers don't show up much, but that's a pretty big delta. In your stretched and recentered 2*3d6-11, that same point is rolling a 12 on the 3d6 part. That's what lines up with a 15 on the d20. The 15 on the 3d6 is over 20 on the d20. I have no idea why you thought these were even comparable. Lines on a graph don't matter much if one "line" is a zoomed in circle and the other is an actual line -- they aren't the same at all.
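(Those unscaled gaps can be checked by enumeration; a sketch:)

```python
from itertools import product

rolls_3d6 = [sum(r) for r in product(range(1, 7), repeat=3)]

for target in (14, 15, 16):
    p_d20 = (21 - target) / 20
    p_3d6 = sum(1 for s in rolls_3d6 if s >= target) / 216
    print(target, round(p_d20 - p_3d6, 3))
# Gaps of roughly 0.19 to 0.21 at targets of 14-16, in line with the figures above.
```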
 

Ovinomancer

No flips for you!
Yeah, now I understand what you mean, but that wasn't really the issue I was talking about. My point was more about how the 3d6 vs d20 concept affects single-roll outcomes versus extended "contests" such as combat.

A skill check is (most often) a single roll, as are many saving throws. This means that with the linear d20, the "swinginess" makes your normal efforts as likely as your best effort and your worst effort. That isn't how most people's efforts are. The bell curve of the 3d6 better models the likelihood of "typical" results compared to worst and best results.

Combat becomes non-linear because there is a series of rolls involved to determine the outcome (at least in most cases). In an extreme case, you hit on every roll, representing your best effort. However the likelihood of that happening is pretty small (depending on your opponent's AC of course). More commonly, sometimes you will hit and other times you are going to miss. If you look at a particular distribution for a bonus vs. an AC, you see how it is a bell curve. So, you don't need to use 3d6 for combat to make it non-linear.
The thing about 3d6 is that it makes easier challenges more likely to succeed and harder ones less likely. If you need to roll 12 to succeed, for instance, 3d6 is almost like (but not actually at all) a -1 penalty on a d20 roll for the same check. Needing a 12 isn't horribly uncommon.
 

Esker

Abventuree
The thing about 3d6 is that it makes easier challenges more likely to succeed and harder ones less likely. If you need to roll 12 to succeed, for instance, 3d6 is almost like (but not actually at all) a -1 penalty on a d20 roll for the same check. Needing a 12 isn't horribly uncommon.
But so does doubling modifiers and stretching out DCs. That's what @NotAYakk has been saying.
 

Esker

Abventuree
I feel like a lot of people are commenting on this based on "what they already know to be true" without really considering the argument being made.
 

Esker

Abventuree
Sigh. Okay, when I said you did mathturbation, you got mad, but that's when you do the wrong math and get confident you did something cool because of the wrong math.
I would have thought mathturbation was doing a bunch of math which is ultimately inconsequential in practice because the process is enjoyable... But that's neither here nor there.

What you did -- scaling standard deviations and then thinking that made distributions similar? That's wrong math. It's bogus, utterly. That you saw graphs line up was coincidence -- it had nothing to do with what you did but the fact that you kept picking numbers until you managed to make the center 10 data points of the normal cumulative distribution of 3d6 look like a line with a slope of -1.
It's not a coincidence, and the numbers weren't picked arbitrarily. The scaling was by the standard deviation, in order to match the first two moments of the distributions. Except that really it should have been 2*3d6-10.5, not 11, but @NotAYakk acknowledged that that was done because AnyDice doesn't like non-integers. I guess we should do 4*3d6-21 vs 2*1d20 and just halve the numbers on the axis. But it won't look hugely different.

Rescaling doesn't make the two distributions identical, but it does self-evidently make them more similar than before the scaling. Probabilists and statisticians of a more theoretical bent do this sort of thing all the time: approximate one distribution with another by matching lower order moments and then show that the error (measured by cumulative probabilities) is bounded by a function of the higher order moments.
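(A sketch of that moment matching: compute the mean and standard deviation of each roll and check the scale factor.)

```python
from itertools import product
from statistics import mean, pstdev

d20 = list(range(1, 21))
d3d6 = [sum(r) for r in product(range(1, 7), repeat=3)]

print(mean(d20), pstdev(d20))      # 10.5 and ~5.77
print(mean(d3d6), pstdev(d3d6))    # 10.5 and ~2.96
print(pstdev(d20) / pstdev(d3d6))  # ~1.95, the ratio that got rounded to 2
```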

If you look at the PDFs for d20 vs 3d6, you might note that you have 18% less chance of rolling a 14 or higher on 3d6, a 20.8% less chance of a 15, and a 20.4% less chance of a 16. Those numbers don't show up much, but that's a pretty big delta.
I'm not sure why the comparison to the non-standard-deviation-matched version of the distribution is relevant to the argument. (I assume also that you mean CDFs rather than PDFs? Not trying to be pedantic, just make sure I'm following you)

In your stretched and recentered 2*3d6-11, that same point is rolling a 12 on the 3d6 part. That's what lines up with a 15 on the d20. The 15 on the 3d6 is over 20 on the d20.
Here I have to confess that I don't follow what you are trying to say. What same point? You need 13 on 3d6 to have 2*3d6-11 equal 15, so that's presumably not where the 12 is coming from. Maybe you mean that 12 or higher on 3d6 is where you have about the same likelihood as 15 or higher on 1d20, and 15 or higher on 3d6 is less likely than 20 on d20. Ok, but what does that imply about the argument? Just that even after matching standard deviations, you still have a lower chance of getting a 20 with 3d6-11 than you do with 1d20? Everyone agrees on that point.

I have no idea why you thought these were even comparable. Lines on a graph don't matter much if one "line" is a zoomed in circle and the other is an actual line -- they aren't the same at all.
But nobody is arguing that the curve is actually a line... they're similar to the extent that the probabilities are similar. Now if you wanted to make an argument that linear comparison of probabilities isn't necessarily the right metric, you might have a point (I have at times argued that "rolls per success" is a more useful metric in some contexts than "successes per roll", for example), but that doesn't seem to be what you're doing.
 

Ovinomancer

No flips for you!
It's not a coincidence, and the numbers weren't picked arbitrarily. The scaling was by the standard deviation, in order to match the first two moments of the distributions. Except that really it should have been 2*3d6-10.5, not 11, but @NotAYakk acknowledged that that was done because AnyDice doesn't like non-integers. I guess we should do 4*3d6-21 vs 2*1d20 and just halve the numbers on the axis. But it won't look hugely different.
Because the relationship is accidental and selected, not by analysis and correct math, but by selection of values that cause parts (PARTS!) of the curves to look similar.


Rescaling doesn't make the two distributions identical, but it does self-evidently make them more similar than before the scaling. Probabilists and statisticians of a more theoretical bent do this sort of thing all the time: approximate one distribution with another by matching lower order moments and then show that the error (measured by cumulative probabilities) is bounded by a function of the higher order moments.
No, rescaling, arbitrary centering, eliminating 1/3 of the data points of one distribution, and then comparing 10 data points to 20 data points makes those look similar. The only decision made here that is remotely based on actual characteristics of the curves was the rescaling, which is questionable (because multiplying the 3d6 distribution gives data points spaced 2 apart which you then compare against data spaced 1 apart). After that it's literally making choices to achieve the goal of making the curves look similar.


I'm not sure why the comparison to the non-standard-deviation-matched version of the distribution is relevant to the argument. (I assume also that you mean CDFs rather than PDFs? Not trying to be pedantic, just make sure I'm following you)
Technically, what's been discussed is 1 - the cumulative probability function. Is this the point where we actually start using proper terminology in this thread? I figured having that argument wouldn't help understanding, so I've been working informally, using language similar to what's been used previously.

So, using probability density function is technically correct, if imprecise.


Here I have to confess that I don't follow what you are trying to say. What same point? You need 13 on 3d6 to have 2*3d6-11 equal 15, so that's presumably not where the 12 is coming from. Maybe you mean that 12 or higher on 3d6 is where you have about the same likelihood as 15 or higher on 1d20, and 15 or higher on 3d6 is less likely than 20 on d20. Ok, but what does that imply about the argument? Just that even after matching standard deviations, you still have a lower chance of getting a 20 with 3d6-11 than you do with 1d20? Everyone agrees on that point.
Typo.

It points out that the stretched distribution exhibits very different behavior due to being stretched, which makes relying only on the visual similarity in that range even less sound.

But nobody is arguing that the curve is actually a line... they're similar to the extent that the probabilities are similar. Now if you wanted to make an argument that linear comparison of probabilities isn't necessarily the right metric, you might have a point (I have at times argued that "rolls per success" is a more useful metric in some contexts than "successes per roll", for example), but that doesn't seem to be what you're doing.
Huh. The OP (and later posts) have relied on the fact that there's a similarity that supports the argument that stretching the d20 line is functionally similar to a stretched 3d6 line, therefore 3d6 and d20 aren't much different. My point is that the math suggested by such stretching and skewing is very badly founded and an improper use of math. The point that you can alter the math of 5e to move some breakpoints on the d20 is orthogonal to my point that the math of the graphs is absolutely wrong. The justification that relies on bad math is what I'm arguing against.

I mean, the first point the OP makes is that using 3d6 is the same as rolling a d20, if you change the target numbers and the bonus to the roll. What that example shows is really only that the likelihood of rolling at least a 16 on a d20 is close to the likelihood of rolling at least a 13 on 3d6. Cool, I guess. The reason this schema works isn't any real similarity in the curves of the d20 and the 3d6 but instead skewing the inputs to the d20 to stretch it. What the OP did was change the math of the bonuses so that you need a 16 instead of a 13 on a d20 to hit the new math AC with the new math attack bonuses.
 

Esker

Abventuree
rescaling, arbitrary centering, eliminating 1/3 of the data points of one distribution, and then comparing 10 data points to 20 data points makes those look similar. The only decision made here that is remotely based on actual characteristics of the curves was the rescaling, which is questionable (because multiplying the 3d6 distribution gives data points spaced 2 apart which you then compare against data spaced 1 apart). After that it's literally making choices to achieve the goal of making the curves look similar.
I'm not sure why you're focusing on the difference in spacing... In 5e's d20 system, the only thing that's relevant is your chances of meeting or exceeding some threshold; it doesn't matter at all how likely you are to roll any specific value, X, except insofar as that represents the difference in difficulty between a DC X and a DC X-1 roll.

The OP's suggestion was that using 3d6 is similar to a system where bonuses are doubled, DCs (and similarly, ACs) are transformed to be DC' = 10 + 2*(DC - 10), and we use a d20 to resolve outcomes.

Note that this is the same in practice as taking the new DC to be 10.5 + 2*(DC-10.5), where we've used the actual expected value of the 1d20 and 3d6 rolls, because this is 0.5 lower than 10+2*(DC-10), and so it yields success on the same integers.

If you need a natural X to succeed in the 3d6 system (that is, you have a +Y bonus and the DC is X+Y), that becomes a +2Y bonus and a DC of 10+2*(X-10)+2Y. So you need a natural 10+2*(X-10), or 2*X - 10 in the modified d20 system. So we could compare success rates for each value of X with the corresponding target natural rolls.

Alternatively we could leave bonuses and DCs the same and scale and shift the roll instead. Compare 1d20 to 2*3d6-10. Using a target of X on the 2*3d6-10 roll is equivalent to a target of ... 2*X-10 on the modified d20, just the same as if we'd rescaled bonuses and DCs.

So the OP was off by one in their graph in terms of illustrating the impact of their proposed system relative to using 3d6 with regular bonuses and DCs. But actually using -10 vs -11 only affects which system makes for easier rolls, not the sizes of the gaps, since 10 and 11 are equal distances from the mean, and so you're essentially just swapping successes and failures and inverting the labels on the x-axis.
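(A sketch of that correspondence; the bonus/DC pairs below are arbitrary examples, not anything from the thread.)

```python
def needed_natural_3d6(bonus, dc):
    """Lowest natural roll that succeeds in the plain 3d6 system."""
    return dc - bonus

def needed_natural_stretched_d20(bonus, dc):
    """Lowest natural roll that succeeds with doubled bonuses and DC' = 10 + 2*(DC - 10)."""
    return (10 + 2 * (dc - 10)) - 2 * bonus

for bonus, dc in [(2, 12), (4, 15), (7, 18)]:
    x = needed_natural_3d6(bonus, dc)
    print(bonus, dc, x, needed_natural_stretched_d20(bonus, dc), 2 * x - 10)
# The last two columns always agree: needing a natural X on 3d6 becomes needing
# a natural 2*X - 10 on the d20 in the doubled-modifier system.
```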
 

Charlaquin

Goblin Queen
Maybe I’m missing something because I’m Not A Math Gal, but... If the observation is that rolling 3d6 has a similar effect on the probability of success as doubling the modifiers on 1d20, then the conclusion that rolling 3d6 is nearly pointless does not follow logically from that observation. Doubling the modifiers on 1d20 would have a huge impact on the probability of success, so if rolling 3d6 would have similar results... Clearly rolling 3d6 must likewise have a huge impact on the probability of success, no?
 

Esker

Abventuree
Maybe I’m missing something because I’m Not A Math Gal, but... If the observation is that rolling 3d6 has a similar effect on the probability of success as doubling the modifiers on 1d20, then the conclusion that rolling 3d6 is nearly pointless does not follow logically from that observation. Doubling the modifiers on 1d20 would have a huge impact on the probability of success, so if rolling 3d6 would have similar results... Clearly rolling 3d6 must likewise have a huge impact on the probability of success, no?
Changing to 3d6 definitely has an impact on success rates: easy tasks get easier and harder ones get harder. The point of the OP was that if you want the effects of 3d6, you can achieve nearly the same effect (with added away-from-the-table math instead of at-the-table math) by sticking with 1d20, doubling modifiers, and stretching out DCs (away from 10).
 

Charlaquin

Goblin Queen
Changing to 3d6 definitely has an impact on success rates: easy tasks get easier and harder ones get harder. The point of the OP was that if you want the effects of 3d6, you can achieve nearly the same effect (with added away-from-the-table math instead of at-the-table math) by sticking with 1d20, doubling modifiers, and stretching out DCs (away from 10).
Which might be a useful observation if that wasn’t significantly more work than just using 3d6...
 

Esker

Abventuree
Which might be a useful observation if that wasn’t significantly more work than just using 3d6...
Well, it's trading one type of work for another. Considering the number of d20 rolls made during a session, having to add three dice and a modifier can slow the game down (especially for some players). Whereas altering bonuses and DCs is something you can do on character sheets / in DM notes. It might wind up being more time overall for the DM, but I think if you ask most people, table time is a more precious resource.
 
