D&D General: Replacing 1d20 with 3d6 is nearly pointless

Ovinomancer

No flips for you!
Ok, since you continue to be hung up on the fact that my graph ends before the 3d6 curve gets to the top and bottom, here:

[Graph: success chance vs. adjusted DC for 1d20 and rescaled 3d6, full range]


And for good measure, here's a graph of the differences in success probabilities at each adjusted DC (think of the x-axis of all of these graphs as the DC of the check minus the modifier).

[Graph: difference in success chance between the two methods at each adjusted DC]


So, across the range of adjusted DCs, the two methods yield success probabilities within 4.5% of each other; essentially, depending on the DC, switching from one to the other will give some characters the equivalent of somewhere between a -1 and +1.
Actually, the graph should look like this:
[Attached graph: only the discrete 2*3d6 data points]



And that's because you're graphing physical things - the 2*3d6 data DOES NOT EXIST except at certain points. I graphed the -11 instead of your correction for simplicity, and to avoid explaining how your correction causes this graph vs the -10 graph to exist half of the time, resulting in a bit of a Schrödinger's graph. It's all bad assumptions.

Now hopefully we can agree that I haven't tossed any data, as I'm showing the full range of possibilities.
You've still tossed half of the d20 data if you compare where the 2*3d6 curve actually exists. The 2*3d6 curve DOES NOT EXIST at half the data points you're comparing. It creates discrete data points spaced 2 apart. You can't use a model of a physical event non-physically and get coherent answers.


It exists as a target, not as a possible roll. If you have a 25 AC and are facing a monster with a +3 to hit, then the adjusted DC of their roll is 22. RAW they auto-hit you on a 20, but omitting that, they need a 22 to hit you, which means they can't.
Yes, it DOES NOT EXIST, yet you're using it as part of your comparison.


Yes. I'm comparing 0% to roughly 3% and saying those are close. You can object for gameplay reasons (essentially, using this approximation makes some tasks that would be very easy instead automatic, and some that should be very difficult impossible; or vice versa, depending on which you're treating as the reference method and which the approximation method). But it's not a mathematical error.
This is a game that often hinges on a 5% difference and you're willing to cavalierly ignore the impact of 3% (and it's larger than that) just because you did mathemagic and can't acknowledge that it's flawed. This is, of course, ignoring the parts where it's up to 95% different.


Yes, if you roll a 1, and apply 10 + (roll-10)/2 (rounding down when halving), you get 5. And if you roll a 20, and apply 10 + (roll-10)/2, you get 15. This isn't some purely theoretical exercise. You could, in principle, do that math with your rolls at the table. That's not what @NotAYakk was actually suggesting, but adjusting the die rolls like that is mathematically equivalent to doubling your bonus and doubling the DCs' distance from 10.
I bolded the problem in your thinking I've been trying to point out. If you round down when halving, then rolling a 2 is the same as rolling a 3, rolling a 4 is the same as rolling a 5, etc., etc. You've tossed half of your unique rolls using this method because you've ended up at the same result for comparison to rolls you aren't tossing on the 3d6.

And, yes, @NotAYakk had some small concessions that made their changes to modifiers make those fractions occasionally count (they didn't double attribute bonuses outright, and random dice could still produce odd results), but quite a number of modifiers fit the straight doubling model that results in losing half the numbers on the d20 due to rounding. And the graphs certainly lose the data.
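To make that collapse concrete, here's a quick Python sketch (mine, not taken from either of our graphs) that just applies the halving rule to every d20 face:

[CODE]
from collections import defaultdict

# Apply the rescaling 10 + (roll - 10)/2, rounding down when halving, to every
# face of a d20, and group the original faces by the value they land on.
mapped = defaultdict(list)
for roll in range(1, 21):
    rescaled = 10 + (roll - 10) // 2   # floor division = "rounding down when halving"
    mapped[rescaled].append(roll)

for value in sorted(mapped):
    print(f"rescaled {value:>2}: from d20 rolls {mapped[value]}")

# The twenty faces land on eleven distinct values (5 through 15): 1 and 20 map
# alone, and the other eighteen faces collapse into nine indistinguishable pairs.
[/CODE]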


I haven't ignored anything. I've been entirely up front all along (as was the OP) about what happens with extreme DCs. The approximation is still good at those extremes as measured by differences in probability. You might not consider approximating 3% with 0% or vice versa to be a good approximation, and that's fine. That's a matter of gaming priorities, not math.

You're losing 10% of the possible results, probability-wise. Not 3%. Focusing on a single delta as if it stands in for the total fidelity loss is not kosher.

You keep saying this but I haven't tossed out anything.
Except half the unique die rolls, again.


Right, because nobody is saying that 3d6 produces similar rolls to rescaled d20 (or vice versa). We are saying that if you use a suitable rescaling that (approximately) equalizes the variance of the two distributions, then the success probabilities are close, for any DC you want to set.
They are not. I provided an example earlier, a very slightly modified version of the OP example, that resulted in an infinite difference because it was still possible on 3d6 but wasn't possible at all on the modified d20 scale. I also showed how moving in the other direction (and following the maths as presented in the OP) we encountered ~10% deltas between outcomes. This is because you're moving up and down the probabilities at twice the rate -- it's not one for one from a given comparison start point. This is due to the scaling invalidating half of the die rolls possible by treating them functionally the same.


There's nothing unphysical about any of this. It's all something you could do in your game. Either (1) roll 3d6 to resolve checks, double the result, and subtract 10. If the result ties the DC, confirm success with a d2; or (2) roll 1d20 to resolve checks, as written. The claim is that these produce very similar success probabilities, regardless of the DC.
Absolutely, I can do that. What I can't do is do that and get a result of 11. That's impossible to do by rolling 3d6, doubling the amount rolled, and subtracting 10. So, a model that does that and includes 11 in its comparison is unphysical.
Alternatively, if you want luck to play less of a role in your game, you can either (1) roll 3d6 to resolve checks, confirming ties with a d2; or (2) roll 1d20, halve the distance from 10 and then add 10; or (3) double all bonuses and stretch DCs to be DC' = 10 + 2*(DC - 10). (2) and (3) are exactly identical; (1) is very close, at all DCs.

Any of these are things you could actually do; they're not impractical thought experiments.
Sure, I can use your method, but by doing so I can only get half of the results you're saying correlate to the full range of the other method. That's what's unphysical. You aren't actually fully understanding the data you're creating because the program you're using draws a line between the data points, and you've confused that line for data.
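And to be concrete about which totals "roll 3d6, double it, subtract 10" can actually produce, here's a short Python enumeration (a sketch; names and formatting are mine):

[CODE]
from itertools import product
from collections import Counter

# Every 3d6 outcome, doubled and shifted: the totals that 2*(3d6) - 10 can produce.
totals = Counter(2 * sum(dice) - 10 for dice in product(range(1, 7), repeat=3))

for total in sorted(totals):
    print(f"{total:>3}: {totals[total]:>2}/216")

# Only the even totals from -4 to 26 ever occur; every odd number in between has
# probability zero before any confirmation die gets involved.
[/CODE]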
 



Esker

Hero
Actually, the graph should look like this:

No, it doesn't look like that. It wouldn't look like that even if I weren't using the confirmation correction mechanism (which you don't seem to ever acknowledge), since we're only ever interested in the probability of getting at least some result. And so even if you can only ever get even numbered rolls, all that does is make the graph have a bunch of little flat segments, where an adjusted DC of 5 is the same difficulty as an adjusted DC of 6, 7 is the same difficulty as 8, etc. It doesn't make the success rates zero.

We can actually graph that: it looks like this:

[Graph: success chance vs. adjusted DC for 1d20 and 2*3d6-10 without the confirmation correction, showing the flat steps]


and here are the differences:

[Graph: differences between the two curves without the confirmation correction]


You can see that it's no longer centered, and as a result is a worse approximation, because without the confirmation correction we're comparing a roll with a mean of 11 to a roll with a mean of 10.5. But it's still never worse than a 9% difference.

With the confirmation correction, however, your objection disappears entirely, because we can distinguish between consecutive DCs: it's possible to tie even DCs but not odd ones. So even though we can only beat a DC 5 by rolling 6 or higher, DC 5 is a little easier than DC 6, because against DC 5 we don't have to worry about hitting the DC exactly and having to confirm.
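If anyone wants to check these curves rather than trust my plots, here's a short Python sketch (my own throwaway names; it enumerates the dice exactly) that computes the success chance at each adjusted DC three ways: plain d20, 2*3d6-10 with no correction, and 2*3d6-10 with the d2 confirmation on exact ties:

[CODE]
from itertools import product
from fractions import Fraction

ROLLS_3D6 = [sum(d) for d in product(range(1, 7), repeat=3)]   # all 216 outcomes

def p_d20(dc):
    """Chance a plain d20 meets or beats an (adjusted) DC."""
    return Fraction(sum(1 for r in range(1, 21) if r >= dc), 20)

def p_rescaled(dc, confirm=False):
    """Chance 2*3d6 - 10 meets or beats dc; with confirm=True an exact tie
    only succeeds half the time (the d2 confirmation)."""
    total = Fraction(0)
    for r in ROLLS_3D6:
        v = 2 * r - 10
        if v > dc:
            total += Fraction(1, 216)
        elif v == dc:
            total += Fraction(1, 216) * (Fraction(1, 2) if confirm else 1)
    return total

for dc in range(1, 22):
    print(f"DC {dc:>2}: d20 {float(p_d20(dc)):.3f}   "
          f"2*3d6-10 {float(p_rescaled(dc)):.3f}   "
          f"with confirmation {float(p_rescaled(dc, confirm=True)):.3f}")
[/CODE]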

What I can't do is do that and get a result of 11. That's impossible to do by rolling 3d6, doubling the amount rolled, and subtracting 10.

As I've explained, you get 11 if you roll 12 and fail your confirmation roll.

You aren't actually fully understanding the data you're creating because the program you're using draws a line between the data points, and you've confused that line for data.

One of us is neither understanding nor paying attention. But it's not me. I was interpolating in an earlier graph (the one that appeared to show a less than 100% chance of rolling a 1 or higher), but I added in the in between points after that, as I noted. Look at the tables I posted; every DC is there, not just even-numbered ones. Your objections are not only based on faulty thinking, they're based on incorrect information.
 

Ovinomancer

No flips for you!
No, it doesn't look like that. It wouldn't look like that even if I weren't using the confirmation correction mechanism (which you don't seem to ever acknowledge), since we're only ever interested in the probability of getting at least some result. And so even if you can only ever get even numbered rolls, all that does is make the graph have a bunch of little flat segments, where an adjusted DC of 5 is the same difficulty as an adjusted DC of 6, 7 is the same difficulty as 8, etc. It doesn't make the success rates zero.

We can actually graph that: it looks like this:

[Graph: success chance vs. adjusted DC for 1d20 and 2*3d6-10 without the confirmation correction, showing the flat steps]


and here are the differences:

[Graph: differences between the two curves without the confirmation correction]


You can see that it's no longer centered, and as a result is a worse approximation, because without the confirmation correction we're comparing a roll with a mean of 11 to a roll with a mean of 10.5. But it's still never worse than a 9% difference.
No, you get the graph I showed because the 2*3d6 only exist on even numbers, so the difference when only the d20 exists is vast. If you look at the dots very close to zero, they map to your graph -- I just unsmoothed it, since you're ignoring the fact that it's a discrete comparison of data points at places where one set does not exist.

The above is just another way for you to extrapolate values where they don't exist that's less smooth than your first presentation. It's still incorrect because data does not exist at those points.
With the confirmation correction, however, your objection disappears entirely, because we can distinguish between consecutive DCs: it's possible to tie even DCs but not odd ones. So even though we can only beat a DC 5 by rolling 6 or higher, DC 5 is a little easier than DC 6, because against DC 5 we don't have to worry about hitting the DC exactly and having to confirm.



As I've explained, you get 11 if you roll 12 and fail your confirmation roll.



One of us is neither understanding nor paying attention. But it's not me. I was interpolating in an earlier graph (the one that appeared to show a less than 100% chance of rolling a 1 or higher), but I added in the in between points after that, as I noted. Look at the tables I posted; every DC is there, not just even-numbered ones. Your objections are not only based on faulty thinking, they're based on incorrect information.
I've tried to not discuss your confirmation mechanic because it doesn't fix the underlying problem - it's another arbitrary kludge on bad math that tries to correct for the centering problem but doesn't address the lack-of-data problem. I was hoping, apparently forlornly, that you'd catch onto the missing data problem, but I underestimated just how proud you are of that kludge. It would actually be kind of clever if it weren't just putting lipstick on a pig.

You originally presented it as a centering correction because Anydice won't let you create an impossible set of rolls by having rolls be .5. I expect R is just fine with doing this, so I'm not sure why you didn't just center on 10.5 directly -- I'd guess that that put data points on the halves and you instinctively realized that might be a problem, but I might be a bit optimistic about that given how you're generally okay using smoothed curve values at non-existent data points when it suits your purpose. That said, you decided to create your confirmation correction mechanic with an eye towards having the distribution center on 10.5 by adjusting outcomes after they happen from the true center of 11. As such, your mechanic is somewhat clever, but it's a post hoc adjustment of data to fit a conclusion already formed, so bad practice and worse stats. What you seem to think solves the problems nicely does no such thing for any of the problems except the apparent centering at 10.5.

For instance, you suggest it corrects for no odd numbers in the 2*3d6-10 distribution. But it doesn't, because, even with your addition that a failed confirmation actually means you reduce the roll by one, the odds of rolling an 11 in this are the odds of rolling exactly a 12 when needing a 12 and failing the confirmation. Assuming only values that can be rolled are rolled, and assuming that the needed rolls are uniformly distributed (you can use other priors, up to you), then the probability of needing a 12 is 1 in 16 (there are 16 possible values on 3d6 from 3 to 18). The odds you actually roll a 12 are 27/216. The odds you fail to confirm are 1/2. So, that's 1/16*27/216*1/2 or 27/6912 or a tiny, tiny bit under 0.4%

So, you're claiming that your confirmation mechanic solves the no-odd-numbers problem because things will line up less than 1% of the time? Granted, that's based on an even distribution of target numbers across the possible values, which I actually limited to the ones you can roll rather than the ones in between the rolls (I didn't account for needing an 11, for instance, because you can't roll it). Feel free to make different assumptions, like maybe ALL target numbers are 12, in which case 11 exists 27/3456 or 0.2% of the time. That's best case for 11 existing, by the way, 0.2% of the time.

Yeah, I haven't much talked about your confirmation mechanic, largely because it's a pointless distraction that fixes the centering without correcting the serious flaws in your analysis. Centering was never the serious flaw. Misaligned data sets and truncated tails are.
 

Ovinomancer

No flips for you!
Look, here are the actual data points generated by the two methods, d20 vs 2*3d6-10. Note where data exists. These are the discrete events that occur, i.e., the actual numbers that are rolled and then scaled. Note how many rows there are where there's only one number for that value. The bits in between these data points do not exist and cannot be compared. If you infill by extrapolation, whether that's using a smoothed line or a confirmation kludge, you're creating data that does not exist in the real world and confusing yourself.

It's also been super easy in this discussion to confuse the varied discussions of different comparisons. The actual chart below shows that it's odd numbers that align, not the evens, as the discussion over the last few posts has been about. My bad, but the fundamental points stand; just swap odds and evens.

Value | d20 % | 2*3d6-10 %
  -5  |       |   100.00
  -4  |       |
  -3  |       |    99.54
  -2  |       |
  -1  |       |    98.15
   0  |       |
   1  |  100  |    95.37
   2  |   95  |
   3  |   90  |    90.74
   4  |   85  |
   5  |   80  |    83.80
   6  |   75  |
   7  |   70  |    74.07
   8  |   65  |
   9  |   60  |    62.50
  10  |   55  |
  11  |   50  |    50.00
  12  |   45  |
  13  |   40  |    37.50
  14  |   35  |
  15  |   30  |    25.93
  16  |   25  |
  17  |   20  |    16.20
  18  |   15  |
  19  |   10  |     9.26
  20  |    5  |
  21  |       |     4.63
  22  |       |
  23  |       |     1.85
  24  |       |
  25  |       |     0.46
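
If you want to reproduce the table yourself, here's a short Python sketch; note that the 2*3d6 column only lines up if you use the -11 shift (odd totals), per the odds/evens swap I mentioned above:

[CODE]
from itertools import product
from fractions import Fraction

ROLLS_3D6 = [sum(d) for d in product(range(1, 7), repeat=3)]

def p_d20_at_least(v):
    return Fraction(sum(1 for r in range(1, 21) if r >= v), 20)

def p_2x3d6_minus11_at_least(v):
    return Fraction(sum(1 for r in ROLLS_3D6 if 2 * r - 11 >= v), 216)

# Print a value only where that method can actually roll it, as in the table.
for v in range(-5, 26):
    d20_col = f"{float(p_d20_at_least(v)) * 100:6.2f}" if 1 <= v <= 20 else "   -  "
    scaled_col = f"{float(p_2x3d6_minus11_at_least(v)) * 100:6.2f}" if v % 2 else "   -  "
    print(f"{v:>3}  {d20_col}  {scaled_col}")
[/CODE]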
 

Esker

Hero
No, you get the graph I showed because the 2*3d6 only exist on even numbers, so the difference when only the d20 exists is vast.

Ok, we evidently need a little primer on probability.

I'm going to put it in a spoiler, because it got a little long:

When we talk about success chances, we are measuring the chance of some event occurring, right? In this case, the type of event we're interested in is "Our roll against DC X is successful". In vanilla 5e, a roll against DC X is successful if and only if the number we get on the die plus a modifier equals or exceeds X. So we're talking about the event R + M >= X, and we care about P(R+M >= X). For simplicity, we can move M to the other side and lump it in with the DC -- the event R >= X-M will be TRUE for exactly the same rolls that R+M >=X will be true, so we're not changing anything by doing that.

Now if I just have a mathematical statement, like R >= 3, I can write down a list of all the values of R for which that statement holds. If R comes from a d20, the list is 3, 4, 5, ..., 20. Since R can only take on one value, the sub-events R=3, R=4, R=5, ..., R=20 are non-overlapping, and so the probability that one of them occurs is the sum of the probabilities of each one.

Ok, but you knew that. Now what happens if I decide that I'm going to double the roll I get on the die? Now, my original event is 2R + M >= X, but we can still move M over, to get 2R >= X-M; no problem there. Just to simplify the notation, I'm going to define Y=X-M, so I can write things like 2R >= Y instead. Alright. So now I can either write down the list of values that 2R can take, or I can just write down the list of values that R can take, and for each one, decide whether it makes the event true or not.

So, on a d20, 2R can only take on even numbers: 2, 4, 6, ..., 40, and each of these has a 5% chance of occurring, because each one corresponds to exactly one value on the original d20, which (we presume) are all equally likely. So, how do I determine P(2R >= Y)? Well, if Y is, say, 10, I add up P(2R = 10) + P(2R = 12) + ... + P(2R = 40). So far so good.

What if Y is 11? Well, 2R can't actually equal 11, but we didn't ask what the probability that 2R = Y is, we asked what the probability that 2R >= Y is. What values of 2R satisfy that inequality? Well, they're almost the same ones as before, except we have to throw out 10. How about Y=12? Turns out it's the same values as Y=11 --- we would throw out 11 if 11 were a possible roll (if we're being very formal about it we still do get rid of P(2R = 11), but this is zero, so subtracting out its probability doesn't do anything). So we still have P(2R >= 12) = P(2R = 12) + P(2R = 14) + ... + P(2R = 40), the same as P(2R >= 11).

How about Y = -74? Well now, the list of values of 2R that satisfy 2R >= Y expands to... well... all of them. So we get P(2R >= -74) = P(2R = 2) + P(2R = 4) + ... + P(2R = 40). Did we do anything wrong or sneaky or break anything in either the Y = 11 or Y = -74 case? What about Y = 3.14159? That's fine too. The event and the probability are perfectly well defined; we just have to consider the set of possible rolls that make the statement true (4 and up in the case of pi), and add up the probabilities of each of those rolls. If Y goes "out of bounds" to the left we'll get 1, since every roll makes the event true; if Y goes out of bounds to the right, we get 0, since none of them do. And if Y sits in between two possible rolls, it still has a set of possible rolls to the right of it, so we just look at those; though that means that the success chance is flat for a bit between each pair of successive rolls.

If we made a function, f(Y) = P(2R >= Y), it would actually be well defined at every real number value for Y; it would just be flat between values of 2R, since only when y crosses those values do we actually add anything to the set of outcomes that make the event true.
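
If it helps to see that as code rather than prose, here's a throwaway Python sketch (the function name is mine):

[CODE]
# P(2R >= y) for R a fair d20, defined for any real y.
def f(y):
    doubled = [2 * r for r in range(1, 21)]               # 2, 4, ..., 40
    return sum(1 for v in doubled if v >= y) / len(doubled)

for y in (10, 11, 12, -74, 3.14159):
    print(y, f(y))

# f(11) == f(12), f(-74) == 1.0, and f(3.14159) just counts the doubled values
# 4 and up, exactly as described above.
[/CODE]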

That's why the first graph in my last post had that saw shape with the flat bits. I can still talk about a DC 11 check even if I can't roll an 11, it just has the same difficulty as a DC 12 check unless I make some adjustment... not being able to roll 11 doesn't make the chance of succeeding at a DC 11 check 0. The important distinction, which I think is what's causing the confusion, is that the x-axis in the graph isn't showing rolls, it's showing DCs. Well, adjusted DCs, where we're moving the modifier over to the DC side instead of the roll side.

This sort of thing happens with RAW too, by the way, just to reassure you that I'm not introducing any voodoo. Take a rogue with a stealth modifier of +11, trying to sneak around a monster with passive perception of 10. The unadjusted DC is 10, but the adjusted DC is -1. I can talk about the probability that that rogue succeeds at that check, and I can talk about it as the probability that the d20 roll is -1 or better. Of course that winds up being the same as the probability that the d20 is 1 or better, but it's not wrong to write P(R >= -1).

Ok, now what about the confirmation correction I keep talking about? Well, we could perfectly well just use 2*3d6 - 10 and be done with it, and get the step-shaped graph I posted for our success probabilities. The game would run fine, albeit with a loss of granularity in the DC distinctions that actually matter. But it's not only aesthetically unsatisfying that the graph is jagged like that, nor is it only aesthetically unsatisfying that the 2*3d6-10 curve is above the d20 curve more often than it is below it, it's also a worse approximation than if we make an adjustment. And in this case it's a worse approximation because the 2*3d6-10 distribution has (slightly) the wrong mean.

So how does the confirmation correction fix this? It actually isn't just a smoothing mechanic; it fixes the whole die roll distribution to have the right mean. As I said, you can really think of the confirmation correction as having a 50% chance of subtracting 1 from every roll; in essence subtracting 1/2 on average from every roll (in the sense that the number of heads in a single coin flip is "on average" 1/2), and therefore subtracting 1/2 from the mean. But because I wanted a system that was not only physical but practically efficient, I noted that you don't care whether or not you subtract 1 from your roll unless doing so changes the outcome. And this will only happen if your roll was exactly equal to the adjusted DC. So though I can see why this makes it seem like I'm adjusting one point in an ad hoc fashion, it only seems that way because I'm ignoring meaningless rolls.
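
Here's a quick sanity check of the centering claim in Python (a sketch; it enumerates outcomes exactly rather than rolling anything):

[CODE]
from itertools import product

# Mean of 2*3d6 - 10 over all 216 outcomes, and the same after a fair coin
# (the confirmation die) subtracts 1 half the time.
raw = [2 * sum(d) - 10 for d in product(range(1, 7), repeat=3)]
corrected = [v - flip for v in raw for flip in (0, 1)]   # both flips equally likely

print(sum(raw) / len(raw), sum(corrected) / len(corrected))   # 11.0 and 10.5
[/CODE]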

I'm not sure why you didn't just center on 10.5 directly.

I didn't do that precisely because I wanted a system to correspond to something physical -- something you could actually implement.

What you seem to think solves the problems nicely does no such thing for any of the problem except the apparent centering at 10.5.

Let's clarify something else here: the graphs of success probabilities aren't distributions at all; they're CDFs. The graphs (or the quantities we're depicting on the axes) don't have means or variances, as neither the DC nor the success chance is a random variable. So when I say we want to center a distribution at 10.5, I'm not talking about the DCs that are on the graph; this time I'm talking about the actual rolls. As it happens, if a symmetric distribution is centered at 10.5, then it also has its median at 10.5, meaning we are equally likely to get a value above the mean and below it.

The OP's original observation is that we could match the first two moments (the mean and variance) of the two roll distributions. I realized that since we started out noting that we wanted to double the 3d6 roll to match the variances, and since shifting by 10.5 was functionally identical to shifting by 11 as far as success probabilities go, we'd need to do something to "declump" the distribution in order to properly center it. The confirmation roll mechanic effectively turns the discrete roll distribution into a continuous one, making it easier to work with from a centering and scaling perspective (we could use percentile dice for the confirmation roll to enable us to set any fractional DC to a precision of 0.01, but that would be a little silly).

When I graph the success probability with the confirmation die factored in, I'm not just interpolating or smoothing; I'm actually showing you the probabilities of success at each DC (odd and even). Again, to find the success probability for DC 11, we can look at the rolls that satisfy 2R-10-(2-d2) >= 11. We can satisfy this if 2R-10 >= 12 -- that is, if 2R >= 22 (that is, if 2R = 22, 2R = 24, ..., 2R = 36), since for these rolls, subtracting (2-d2) at worst leaves us with 11, which is still a success. And actually that's the only way we can do it, since we can't get 2R-10 = 11, even though if we did, (2-d2) could be 0, satisfying the event. But if the DC is even (12, say), then there are two ways to get a success: either 2R-10 >= 13, regardless of the d2, or 2R-10 = 12 and the d2 comes up 2.
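
If you'd rather see those two cases enumerated than argued, here's a Python sketch (my names; it rolls the d2 on every check rather than only on ties, which changes nothing):

[CODE]
from itertools import product
from fractions import Fraction

def p_success(dc):
    """P(2*3d6 - 10 - (2 - d2) >= dc), enumerating all 216 * 2 outcomes."""
    wins = sum(1
               for dice in product(range(1, 7), repeat=3)
               for d2 in (1, 2)
               if 2 * sum(dice) - 10 - (2 - d2) >= dc)
    return Fraction(wins, 216 * 2)

print(p_success(11), float(p_success(11)))   # 1/2, the same as a d20 vs DC 11
print(p_success(12), float(p_success(12)))   # 7/16 = 0.4375, vs 0.45 on a d20
[/CODE]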

Assuming only values that can be rolled are rolled, and assuming that the needed rolls are uniformly distributed (you can use other priors, up to you), then the probability of needing a 12 is 1 in 16 (there are 16 possible values on 3d6 from 3 to 18). The odds you actually roll a 12 are 27/216. The odds you fail to confirm are 1/2. So, that's 1/16*27/216*1/2 or 27/6912 or a tiny, tiny bit under 0.4%.

This is a common mistake: you're conflating joint probabilities with conditional probabilities. The DC isn't a random variable, really, so talking about the probability P(DC = 12 & modified roll >= 12) isn't really meaningful. We care about the conditional probability, P(modified roll >= 12 | DC = 12). But if you don't believe me, apply your own calculation to a d20 roll. What are the odds that you need a 12 and roll one? By your reasoning, it would be (1/16)*(1/20), or 0.003 (that is, 0.3%). But that's not what we care about when we talk about the likelihood of rolling a 12.

Again, by the way, if the confusion is due to my suggestion that we only roll to confirm when the roll is exactly equal to the DC, that was only to avoid pointless rolls. If you roll the d2 on every roll, then the probability of rolling an 11 is P(roll 12) * P(d2 = 1). On 2*3d6-10, that's P(3d6 = 11) * 1/2, or 0.125 * 0.5 = 0.0625. Pretty close actually to the 0.05 chance you have on a d20.
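
Enumerated the same way (again just a sketch, rolling the d2 every time):

[CODE]
from itertools import product

# Rolling the d2 every time: the chance the modified roll comes out exactly 11,
# which requires 2*3d6 - 10 = 12 (i.e. 3d6 = 11) and a d2 that knocks off 1.
outcomes = [(sum(dice), d2)
            for dice in product(range(1, 7), repeat=3)
            for d2 in (1, 2)]
p_eleven = sum(1 for k, d2 in outcomes if 2 * k - 10 - (2 - d2) == 11) / len(outcomes)

print(p_eleven)   # 0.0625, next to the flat 0.05 for any single face of a d20
[/CODE]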

Misaligned data sets and truncated tails are.

Dude, read what I post if you're going to reply. I untruncated the tails for you. And I've been explaining at great length why nothing is misaligned.
 

Ovinomancer

No flips for you!
Ok, we evidently need a little primer on probability.

I'm going to put it in a spoiler, because it got a little long:


When we talk about success chances, we are measuring the chance of some event occurring, right? In this case, the type of event we're interested in is "Our roll against DC X is successful". In vanilla 5e, a roll against DC X is successful if and only if the number we get on the die plus a modifier equals or exceeds X. So we're talking about the event R + M >= X, and we care about P(R+M >= X). For simplicity, we can move M to the other side and lump it in with the DC -- the event R >= X-M will be TRUE for exactly the same rolls that R+M >=X will be true, so we're not changing anything by doing that.

Now if I just have a mathematical statement, like R >= 3, I can write down a list of all the values of R for which that statement holds. If R comes from a d20, the list is 3, 4, 5, ..., 20. Since R can only take on one value, the sub-events R=3, R=4, R=5, ..., R=20 are non-overlapping, and so the probability that one of them occurs is the sum of the probabilities of each one.

Ok, but you knew that. Now what happens if I decide that I'm going to double the roll I get on the die? Now, my original event is 2R + M >= X, but we can still move M over, to get 2R >= X-M; no problem there. Just to simplify the notation, I'm going to define Y=X-M, so I can write things like 2R >= Y instead. Alright. So now I can either write down the list of values that 2R can take, or I can just write down the list of values that R can take, and for each one, decide whether it makes the event true or not.

So, on a d20, 2R can only take on even numbers: 2, 4, 6, ..., 40, and each of these has a 5% chance of occurring, because each one corresponds to exactly one value on the original d20, which (we presume) are all equally likely. So, how do I determine P(2R >= Y)? Well, if Y is, say, 10, I add up P(2R = 10) + P(2R = 12) + ... + P(2R = 40). So far so good.

What if Y is 11? Well, 2R can't actually equal 11, but we didn't ask what the probability that 2R = Y is, we asked what the probability that 2R >= Y is. What values of 2R satisfy that inequality? Well, they're almost the same ones as before, except we have to throw out 10. How about Y=12? Turns out it's the same values as Y=11 --- we would throw out 11 if 11 were a possible roll (if we're being very formal about it we still do get rid of P(2R = 11), but this is zero, so subtracting out its probability doesn't do anything). So we still have P(2R >= 12) = P(2R = 12) + P(2R = 14) + ... + P(2R = 40), the same as P(2R >= 11).

How about Y = -74? Well now, the list of values of 2R that satisfy 2R >= Y expands to... well... all of them. So we get P(2R >= -74) = P(2R = 2) + P(2R = 4) + ... + P(2R = 40). Did we do anything wrong or sneaky or break anything in either the Y = 11 or Y = -74 case? What about Y = 3.14159? That's fine too. The event and the probability are perfectly well defined; we just have to consider the set of possible rolls that make the statement true (4 and up in the case of pi), and add up the probabilities of each of those rolls. If Y goes "out of bounds" to the left we'll get 1, since every roll makes the event true; if Y goes out of bounds to the right, we get 0, since none of them do. And if Y sits in between two possible rolls, it still has a set of possible rolls to the right of it, so we just look at those; though that means that the success chance is flat for a bit between each pair of successive rolls.

If we made a function, f(Y) = P(2R >= Y), it would actually be well defined at every real number value for Y; it would just be flat between values of 2R, since only when y crosses those values do we actually add anything to the set of outcomes that make the event true.
Good, all true.

That's why the first graph in my last post had that saw shape with the flat bits. I can still talk about a DC 11 check even if I can't roll an 11, it just has the same difficulty as a DC 12 check unless I make some adjustment... not being able to roll 11 doesn't make the chance of succeeding at a DC 11 check 0. The important distinction, which I think is what's causing the confusion, is that the x-axis in the graph isn't showing rolls, it's showing DCs. Well, adjusted DCs, where we're moving the modifier over to the DC side instead of the roll side.
And, as long as you're talking about a single PDF, you're still okay doing this. The problem is, of course, that we're not talking about looking at all possible events, but only those that occur in the reality we're using the probabilities to model. The scale of targets is relevant to the scale of the die rolls. We don't have half-step target numbers because we can't roll half-step numbers. I hope that sinks in, because it's important in a bit.
This sort of thing happens with RAW too, by the way, just to reassure you that I'm not introducing any voodoo. Take a rogue with a stealth modifier of +11, trying to sneak around a monster with passive perception of 10. The unadjusted DC is 10, but the adjusted DC is -1. I can talk about the probability that that rogue succeeds at that check, and I can talk about it as the probability that the d20 roll is -1 or better. Of course that winds up being the same as the probability that the d20 is 1 or better, but it's not wrong to write P(R >= -1).

Ok, now what about the confirmation correction I keep talking about? Well, we could perfectly well just use 2*3d6 - 10 and be done with it, and get the step-shaped graph I posted for our success probabilities. The game would run fine, albeit with a loss of granularity in the DC distinctions that actually matter. But it's not only aesthetically unsatisfying that the graph is jagged like that, nor is it only aesthetically unsatisfying that the 2*3d6-10 curve is above the d20 curve more often than it is below it, it's also a worse approximation than if we make an adjustment. And in this case it's a worse approximation because the 2*3d6-10 distribution has (slightly) the wrong mean.

So how does the confirmation correction fix this? It actually isn't just a smoothing mechanic; it fixes the whole die roll distribution to have the right mean. As I said, you can really think of the confirmation correction as having a 50% chance of subtracting 1 from every roll; in essence subtracting 1/2 on average from every roll (in the sense that the number of heads in a single coin flip is "on average" 1/2), and therefore subtracting 1/2 from the mean. But because I wanted a system that was not only physical but practically efficient, I noted that you don't care whether or not you subtract 1 from your roll unless doing so changes the outcome. And this will only happen if your roll was exactly equal to the adjusted DC. So though I can see why this makes it seem like I'm adjusting one point in an ad hoc fashion, it only seems that way because I'm ignoring meaningless rolls.
It only fixes the mean, and that by way of a kludged system that doesn't address anything but the mean. When you apply a change to fix the mean that also results in being able to have half-step results, you need to question what it is you've done.



I didn't do that precisely because I wanted a system to correspond to something physical -- something you could actually implement.
Fair enough.

Let's clarify something else here: the graphs of success probabilities aren't distributions at all; they're CDFs. The graphs (or the quantities we're depicting on the axes) don't have means or variances, as neither the DC nor the success chance is a random variable. So when I say we want to center a distribution at 10.5, I'm not talking about the DCs that are on the graph; this time I'm talking about the actual rolls. As it happens, if a symmetric distribution is centered at 10.5, then it also has its median at 10.5, meaning we are equally likely to get a value above the mean and below it.

The OP's original observation is that we could match the first two moments (the mean and variance) of the two roll distributions. I realized that since we started out noting that we wanted to double the 3d6 roll to match the variances, and since shifting by 10.5 was functionally identical to shifting by 11 as far as success probabilities go, we'd need to do something to "declump" the distribution in order to properly center it. The confirmation roll mechanic effectively turns the discrete roll distribution into a continuous one, making it easier to work with from a centering and scaling perspective (we could use percentile dice for the confirmation roll to enable us to set any fractional DC to a precision of 0.01, but that would be a little silly).

When I graph the success probability with the confirmation die factored in, I'm not just interpolating or smoothing; I'm actually showing you the probabilities of success at each DC (odd and even). Again, to find the success probability for DC 11, we can look at the rolls that satisfy 2R-10-(2-d2) >= 11. We can satisfy this if 2R-10 >= 12 -- that is, if 2R >= 22 (that is, if 2R = 22, 2R = 24, ..., 2R = 36), since for these rolls, subtracting (2-d2) at worst leaves us with 11, which is still a success. And actually that's the only way we can do it, since we can't get 2R-10 = 11, even though if we did, (2-d2) could be 0, satisfying the event. But if the DC is even (12, say), then there are two ways to get a success: either 2R-10 >= 13, regardless of the d2, or 2R-10 = 12 and the d2 comes up 2.



This is a common mistake: you're conflating joint probabilities with conditional probabilities. The DC isn't a random variable, really, so talking about the probability P(DC = 12 & modified roll >= 12) isn't really meaningful. We care about the conditional probability, P(modified roll >= 12 | DC = 12). But if you don't believe me, apply your own calculation to a d20 roll. What are the odds that you need a 12 and roll one? By your reasoning, it would be (1/16)*(1/20), or 0.003 (that is, 0.3%). But that's not what we care about when we talk about the likelihood of rolling a 12.

Again, by the way, if the confusion is due to my suggestion that we only roll to confirm when the roll is exactly equal to the DC, that was only to avoid pointless rolls. If you roll the d2 on every roll, then the probability of rolling an 11 is P(roll 12) * P(d2 = 1). On 2*3d6-10, that's P(3d6 = 11) * 1/2, or 0.125 * 0.5 = 0.0625. Pretty close actually to the 0.05 chance you have on a d20.

Dude, read what I post if you're going to reply. I untruncated the tails for you. And I've been explaining at great length why nothing is misaligned.
I strongly advise you do the same. All of your above talks about how to deal with a single rolling method, independent of others, and I have no real beef with it (except your patting yourself on the back for your cleverness about the confirmation mechanic, which is still a kludge to address the fact that you wanted to compare at a mean of 10.5 and couldn't commit to being unphysical to start with). The issue is, and has been, in the comparison. Recall what I said above, as it's now important.

We're talking about systems that do not do half-step increments in practice, nor do dice allow for half-step increments. So, when you compare, you MUST avoid half-step increments or you're not comparing the same things. When you compare a d20 incremented by 1 per step to 2*3d6 incremented by 2 per step, comparing anything in a half-step of the 2*3d6 isn't meaningful in any way. You're comparing a real outcome on d20 to an impossible outcome on 2*3d6. This goes exactly the same for comparing 3d6 to d20/2, no matter how you recenter, because d20/2 steps in .5 increments while 3d6 steps in increments of 1. If you compare a 6.5 on the d20/2, it doesn't match anything possible on the 3d6. This is what I mean when I say you toss half the data; you just ignore this because there's an extrapolation, and you're assuming it's a valid comparison at that point because you can derive a number. Inventing a confirmation method just lets you keep convincing yourself that you've created a system with half-step values when it does not have them.

You even missed the boat on the fact that your confirmation mechanic produces minuscule probabilities at the half steps (a fact you glossed over in your hurry to point out that you know the difference between conditional and joint probabilities -- I presented the joint probability when I set the first conditional to all, because I actually knew that was an argument to make against what I was saying). Your method sets the half-steps at half of the probability of the full step above it. This helps you recenter, but it doesn't create a useful comparison, because you've created data where it doesn't exist via a kludge.

I posted raw data above. You cannot compare the probabilities of rolling a 12 on d20 with the probability of rolling a 12 on 2*3d6-10, because a 12 does not exist with the latter. If you kludge it in with a post-hoc confirmation method that reduces the likelihood of rolling exactly a thirteen by half and gives that to 12, I question whether you've thought through what you've done or just arrived at a way to make 1+1 look like 2+2 and stopped thinking about it.

Maybe it's because I'm an engineer, so I always have to examine my models to see if they do what I assume they do, but the above bits about how the data doesn't align are glaringly obvious to me. You cannot compare data points where one set doesn't exist. Data is data. Statistics is often how you lie to yourself with math. Always check your assumptions against reality and run a test. Which is why I took the OP example and showed how a skew of 3 on the normal modifiers takes the near match to impossible in one method in one direction and a 10% delta in the other. That's not the hallmark of a stable system (and it does this because of the half-step problem: a skew of 3 on the 3d6 is a skew of 6 in the scaled d20 version). A little movement on the 3d6 curve is a lot of movement on the d20 curve, a fact I've been trying to point out to you for many posts and which you've just glossed over as if there's some fundamental basic I've failed to understand. I get the basics; I'm actually looking at what the models tell us while you're still looking at lines.
 

Esker

Hero
There is no statistics going on here; just probability. That's because there is no data; just calculations. Statistics is trying to find a good model to describe a set of observations from the world (the data). Probability is examining the properties of models in and of themselves. This is the latter. There is no comparison between assumptions and reality to be made here (except I guess for the basic assumptions of fair and independent dice that everybody takes as given) because there are only the probability models, no observations for them to fit.

We have two mechanical systems for stochastically producing successes and failures. All that matters as far as the game is concerned is whether the probabilities of success translate reasonably, which they do. The actual numbers that show up on the die are a means to adjudicating success or failure; they have no other purpose or meaning in themselves. I'm honestly not sure what you think I'm claiming that has you so riled up, except that you really seem to want the actual numbers on the dice to be the same.

You cannot compare the probabilities of rolling a 12 on d20 with the probability of rolling a 12 on 2*3d6-10, because a 12 does not exist with the latter.

Presumably you meant to say 11 there (12 happens if 3d6 = 11). The probability of rolling an 11 only matters insofar as it is normally the difference between the difficulty of a DC 11 check and a DC 12 check. That's it. And we don't necessarily even care that much about that, as long as the success rate for DC 11 and the success rate for DC 12 are individually about right (or, if we're comparing two systems that use different DC calculations, then that the corresponding probabilities match up). It still doesn't seem like you're getting that, since you keep hammering on the gaps in the dice distribution itself.
 

Ovinomancer

No flips for you!
There is no statistics going on here; just probability. That's because there is no data; just calculations. Statistics is trying to find a good model to describe a set of observations from the world (the data). Probability is examining the properties of models in and of themselves. This is the latter. There is no comparison between assumptions and reality to be made here (except I guess for the basic assumptions of fair and independent dice that everybody takes as given) because there are only the probability models, no observations for them to fit.

We have two mechanical systems for stochastically producing successes and failures. All that matters as far as the game is concerned is whether the probabilities of success translate reasonably, which they do. The actual numbers that show up on the die are a means to adjudicating success or failure; they have no other purpose or meaning in themselves. I'm honestly not sure what you think I'm claiming that has you so riled up, except that you really seem to want the actual numbers on the dice to be the same.
Earlier in the thread I cautioned against reification of the models, as that's a trap that's easy to fall into when using models, statistical or probabilistic. And, largely, we're doing a good bit of both here, with mean shifting, discussion of variance, discussion of deviation, and looking at how closely two probability models match, none of which true probability math cares about. What we're doing is building a model of a physical system where we plan to use the physical system. Thinking that we can look at the maths in the model and that tells us what reality is, or, even worse, thinking that truth exists because the models tell us something without validating it in the real world, is the sin of reification, which you latch onto here.


Presumably you meant to say 11 there (12 happens if 3d6 = 11). The probability of rolling an 11 only matters insofar as it is normally the difference between the difficulty of a DC 11 check and a DC 12 check. That's it. And we don't necessarily even care that much about that, as long as the success rate for DC 11 and the success rate for DC 12 are individually about right (or, if we're comparing two systems that use different DC calculations, then that the corresponding probabilities match up). It still doesn't seem like you're getting that, since you keep hammering on the gaps in the dice distribution itself.
Because you're determining the probability for an event that cannot happen and pretending that, because you can do the math, it does. Again, you're believing the model and not reality.

As you said above, scaling d20 DCs by 2 is mathematically the same as halving d20 rolls. If this is true, then either we use the original DCs and the halved d20, which means that half of the results on the d20 are in-between DCs, or we expand the DC range and use a normal d20, in which case half the results on the d20 are in-between available DCs. Both of these approaches shrink the useful d20 range by half, meaning the d20 range is half as useful as it was. Essentially, we're taking the d20 from 20 useful steps to 10 useful steps.

This halved d20 is then being compared to 3d6 -- not to the full 16-step range of 3d6, but to the central 10 values. Only, the comparison ignores the fact that the d20 range has been effectively halved from 20 to 10 steps, and we pretend that every step on the d20 still matters against the more widely spaced 10 steps of the central part of 3d6. Various reasons why this is okay are presented -- we can find probabilities, we can pretend those DCs exist, the part of the 3d6 we toss isn't that big, etc. -- each brought up and levied independently to defeat an objection and then forgotten when those become a challenge for another excuse. It's a circle of special pleading, always ignoring that the transformation of one of the die methods fundamentally alters the function of the game, just in time to compare to a truncated but unaltered other method.

In simpler words, when you scale the die method, you change the step size for DCs in that scale. You cannot compare to a different scale of DCs using a different die method and pretend you can use the same DC scale for both. This is the core failed assumption of the whole endeavour, and I've shown it to be so with the OP examples -- examples that have so far been ignored. The DCs scale differently in the different scales of die, and that matters.
 

Esker

Hero
Thinking that we can look at the maths in the model and that tells us what reality is, or, even worse, thinking that truth exists because the models tell us something without validating it in the real world, is the sin of reification, which you latch onto here.

Not sure what you mean here. The properties of the model follow directly from the basic assumption that we have fair independent dice. That's the connection between model and reality. If that holds, then the model is the reality, because everything else is derived using mathematical laws.

Because you're determining the probability for an event that cannot happen and pretending that, because you can do the math, it does.

This also doesn't make any sense. The probability of the event is the probability of the event. If it couldn't happen it would have a probability of zero. If there's an error in my math, please point it out, but again, the probability of the event follows from the assumption that when we roll a die, every side is equally likely, and that multiple dice rolls don't influence each other. That's all the reality check we need for the math to match the reality.

As you said above, scaling d20 DCs by 2 is mathematically the same as halving d20 rolls.

Well, halving d20 rolls and bonuses both, but yeah.

If this is true, then either we use the original DCs and the halved d20, which means that half of the results on the d20 are in-between DCs, or we expand the DC range and use a normal d20, in which case half the results on the d20 are in-between available DCs. Both of these approaches shrink the useful d20 range by half, meaning the d20 range is half as useful as it was. Essentially, we're taking the d20 from 20 useful steps to 10 useful steps.

Yes, that's true, but the impact of this is only that we're coarsening the granularity of the DC scale by, essentially, ignoring the differences between (adjusted) DC 3 and DC 6 checks, etc. and similarly between (adjusted) DC 16 and DC 19 checks. But we already mostly do that by switching to 3d6, since the difference between the chance of rolling a 2 or better vs a 6 or better is small (about 4.6% total over a 4 DC range), as is the difference between 15 or better and 19 or better. Even though we only have 10 useful steps to work with, we allocate those to distinguish within the most useful part of the DC range: i.e., between 6 and 15.

Keep in mind, the comparison here isn't between RAW and anything else, it's between a 3d6 system and a 1d20 modified system, so we've already upended the system. So we need to be careful not to fall into the trap of using our intuitions about how granular D&D normally is, since that's thrown out either way.

This halved d20 is then being compared to 3d6 -- not to the full 16-step range of 3d6, but to the central 10 values.

Again, keep in mind that there's no need to compare the actual roll distributions; just the success vs DC curves, since the roll distribution only matters to the extent that it affects success chances. We can compare those curves at any point, not just the middle 10 values. As I've said, if you think it makes a big difference for gameplay that the d20 version equalizes DCs that would otherwise differ by a little less than the equivalent of 1 point in the vanilla system, that's fine. It doesn't particularly bother me, except for special case crit mechanics, which we set aside from the start since they have to be dealt with separately.

Various reasons why this is okay are presented -- we can find probabilities, we can pretend those DCs exist, the part of the 3d6 we toss isn't that big, etc. -- each brought up and levied independently to defeat an objection and then forgotten when those become a challenge for another excuse. It's a circle of special pleading...

Where have I engaged in special pleading? I don't believe I've forgotten any of the points I've made. I presented two pairs of systems (one of which is vanilla d20) and showed that within each pair of systems, the two methods produce nearly identical outcomes in practice.

In simpler words, when you scale the die method, you change the step size for DCs in that scale. You cannot compare to a different scale of DCs using a different die method and pretend you can use the same DC scale for both.

Where did I do that? If I'm comparing one method to a different scale of DCs using another method, then I'm clearly not using the same DC scale for both. But what I am doing (which is only a slight modification on what @NotAYakk originally proposed) is preserving the same range of difficulties, even if I use different numbers to describe them. It doesn't matter if a "moderately difficult" task is described as DC 15 or DC 20, as long as the other elements (rolls and bonuses) are altered correspondingly to keep the success chance about the same across a range of possible characters trying to succeed at those tasks.
 

Esker

Hero
@Ovinomancer, here's an analogy for you. I'm curious what your intuitions are.

Suppose all this time 5e had used a percentile system to resolve checks, instead of the d20, and you had to roll at or under the DC to succeed. A medium difficulty task was DC 60, a really hard task was DC 10, etc. The proficiency bonus started at +20 and went up in increments of 10, and ability scores went from 0 to 100, with modifiers set to 0 at a 50 and going up by 10 whenever the tens digit goes up, so when you hit 60 you're at +10, 70 is a +20, etc., all the way up to +50 at 100. Also, instead of adding your bonus to the roll, you added it to the DC (thus making it easier to succeed).

Now someone comes along and says, "It's a pain to have to roll two dice for every check, and also wouldn't it be nice if the DM could keep some DCs secret without having to know everybody's bonuses? What if we scaled ability scores and bonuses down by a factor of 10, rolled a d10 instead of a d100 to resolve outcomes, added bonuses to the roll instead of the DC, and said that a success was rolling at or above the target instead of at or below it? To keep things comparable, we'll modify all the DCs to be DC' = 1 + (100 - DC)/10, so 60 becomes 5, 50 becomes 6, 40 becomes 7, and so on."

First question for you: if nobody ever used DCs that weren't multiples of 10, would this change have any effect on the outcomes in the game? (I'm not asking whether it would have an effect on how much work it is, just whether it would affect outcomes)

Second question: Suppose somebody objected to this change, saying: "You can't say that this won't have an impact! We used to have 100 increments, and now we only have 10!"

The designer shows the objector a line graph, with two different sets of labels on the x-axis: The first set of labels go from 0 to 100, representing DCs in the old (percentile) system. The second shows the corresponding DC in the new system: 0 is aligned with 11, 5 is aligned with 10.5, 10 is aligned with 10, 20 with 9, 30 with 8, etc. Then there are two lines. The one for the old system shows that a DC 50 check has a 50% success rate, a DC 55 has a 55% success rate, a DC 60 check has a 60% success rate, etc. The second only has points at whole numbers, but at those spots, lines up with the first one.

"Nobody uses DCs that aren't a multiple of 10," they say. "The graph lines up where it matters."

Supposing it's true that DCs are always multiples of 10, who is right?

Third: Suppose the objector were a DM who actually liked to use DCs in multiples of 5. They approach the redesigner, red-faced, saying: "I have a DC 65 check, which worked perfectly well before, but now you're telling me the DC is 4.5! You can't roll a 4.5!"

"You're right," says the designer. "How about this: round your DCs down to the nearest whole number, but keep track of whether it was a half originally. So your 4.5 becomes a 4. But if the player rolls exactly 4 on their d10 (after modifiers), have them then roll a d6. If they get 4 or more, they succeed, otherwise they fail."

They then go to their graph, and fill in points on the second line at 10.5, 9.5, etc., which sit at 5% success, 15% success, etc., explaining, "Your DC 65 check becomes almost like a DC 4 check, except it's a little more difficult because there's an extra step involved to succeed. A DC 4 check has a 70% chance of success, since you can roll anything but a 1, 2 or 3. In your case, they have one extra way to fail: by rolling a 4 and then rolling a 1, 2 or 3 on the d6. That happens (1/10) * (3/6) of the time, or 5%. So there's now a 35% chance of failing, and a 65% chance of succeeding, just like there would have been before."

The objector thinks for a minute and says, "That's a B.S. kludge. Those points you're drawing don't exist! You can't just say that you can have a DC 4.5 check, if you can't roll 4.5! What kind of statistics mumbo jumbo is this?"

Is the designer pulling a fast one? Does their suggested fix allow for 55 or 65 DCs, etc. to work as intended? Or is something wrong?
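
(If you want to check the designer's arithmetic before answering, here's a quick Python sketch; it enumerates the d10 and d6 exactly and ignores modifiers:)

[CODE]
from fractions import Fraction

# Old percentile system: succeed by rolling at or under the DC on a d100.
p_old = Fraction(65, 100)                                   # the DC 65 check

# New system: DC 4.5 is played as DC 4, but an exact 4 on the d10 must then
# roll 4 or more on a d6 to stick.
p_clear_roll = Fraction(sum(1 for r in range(1, 11) if r >= 5), 10)   # 5-10 on the d10
p_tie_confirm = Fraction(1, 10) * Fraction(3, 6)                      # a 4, then 4-6 on the d6
p_new = p_clear_roll + p_tie_confirm

print(p_old, p_new)   # both 13/20: a 65% success chance either way
[/CODE]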
 
