# D&D General: Replacing 1d20 with 3d6 is nearly pointless

#### Ovinomancer

##### No flips for you!
Ok, since you continue to be hung up on the fact that my graph ends before the 3d6 curve gets to the top and bottom, here: And for good measure, here's a graph of the differences in success probabilities at each adjusted DC (think of the x-axis of all of these graphs as the DC of the check minus the modifier). So, across the range of adjusted DCs, the two methods yield success probabilities within 4.5% of each other; essentially, depending on the DC, switching from one to the other will give some characters the equivalent of somewhere between a -1 and +1.
Actually, the graph should look like this: And that's because you're graphing physical things - the 2*3d6 data DOES NOT EXIST except at certain points. I graphed the -11 in place of your correction for simplicity, and to avoid explaining how your correction causes this graph vs the -10 graph to exist half of the time, resulting in a bit of a Schrodinger's graph. It's all bad assumptions.

Now hopefully we can agree that I haven't tossed any data, as I'm showing the full range of possibilities.
You've still tossed half of the d20 data if you compare where the 2*3d6 curve actually exists. The 2*3d6 curve DOES NOT EXIST at half the data points you're comparing. It creates discrete data points spaced 2 apart. You can't use a model of a physical event non-physically and get coherent answers.

It exists as a target, not as a possible roll. If you have a 25 AC and are facing a monster with a +3 to hit, then the adjusted DC of their roll is 22. RAW they auto-hit you on a 20, but omitting that, they need a 22 to hit you, which means they can't.
Yes, it DOES NOT EXIST, yet you're using it as part of your comparison.

Yes. I'm comparing 0% to roughly 3% and saying those are close. You can object for gameplay reasons (essentially, using this approximation makes some tasks that would be very easy instead automatic, and some that should be very difficult impossible; or vice versa, depending on which you're treating as the reference method and which the approximation method). But it's not a mathematical error.
This is a game that often hinges on a 5% difference and you're willing to cavalierly ignore the impact of 3% (and it's larger than that) just because you did mathemagic and can't acknowledge that it's flawed. This is, of course, ignoring the parts where it's up to 95% different.

Yes, if you roll a 1, and apply 10 + (roll-10)/2 (rounding down when halving), you get 5. And if you roll a 20, and apply 10 + (roll-10)/2, you get 15. This isn't some purely theoretical exercise. You could, in principle, do that math with your rolls at the table. That's not what @NotAYakk was actually suggesting, but adjusting the die rolls like that is mathematically equivalent to doubling your bonus and doubling the DCs' distance from 10.
I bolded the problem in your thinking I've been trying to point out. If you round down when halving, then rolling a 2 is the same as rolling a 3, rolling a 4 is the same as rolling a 5, etc. You've tossed half of your unique rolls using this method, because you've ended up at the same result for comparison with rolls you aren't tossing on 3d6.
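The collapse being argued about here is easy to check by enumeration. A minimal sketch (Python; the floor division stands in for "rounding down when halving", and the variable names are mine, not from the thread):

```python
# Map each d20 face r to 10 + (r - 10)/2, rounding down when halving.
# In Python, floor division (//) performs exactly that rounding.
mapping = {r: 10 + (r - 10) // 2 for r in range(1, 21)}

print(mapping[1], mapping[20])        # extremes map to 5 and 15
print(mapping[2] == mapping[3])       # True: 2 and 3 give the same result
print(len(set(mapping.values())))     # only 11 distinct outcomes survive
```

Twenty equally likely faces land on eleven distinct values, which is the pairing-up both posters are describing.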

And, yes, @NotAYakk made some small concessions in their changes to modifiers that let those fractions occasionally count (they didn't double attribute bonuses outright, and random dice could still produce odd results), but quite a number of modifiers fit the straight doubling model that results in losing half the numbers on the d20 due to rounding. And the graphs certainly lose the data.

I haven't ignored anything. I've been entirely up front all along (as was the OP) about what happens with extreme DCs. The approximation is still good at those extremes as measured by differences in probability. You might not consider approximating 3% with 0% or vice versa to be a good approximation, and that's fine. That's a matter of gaming priorities, not math.

You're losing 10% of the possible results, probability-wise. Not 3%. Focusing on a single delta as if it stands in for the total fidelity loss is not kosher.

You keep saying this but I haven't tossed out anything.
Except half the unique die rolls, again.

Right, because nobody is saying that 3d6 produces similar rolls to rescaled d20 (or vice versa). We are saying that if you use a suitable rescaling that (approximately) equalizes the variance of the two distributions, then the success probabilities are close, for any DC you want to set.
They are not. I provided an example earlier, a very slightly modified version of the OP example, that resulted in an infinite difference because it was still possible on 3d6 but wasn't possible at all on the modified d20 scale. I also showed how moving in the other direction (and following the maths as presented in the OP) we encountered ~10% deltas between outcomes. This is because you're moving up and down the probabilities at twice the rate -- it's not one for one from a given comparison start point. This is due to the scaling invalidating half of the die rolls possible by treating them as functionally the same.

There's nothing unphysical about any of this. It's all something you could do in your game. Either (1) roll 3d6 to resolve checks, double the result, and subtract 10. If the result ties the DC, confirm success with a d2; or (2) roll 1d20 to resolve checks, as written. The claim is that these produce very similar success probabilities, regardless of the DC.
Absolutely, I can do that. What I can't do is do that and get a result of 11. That's impossible to do by rolling 3d6, doubling the amount rolled, and subtracting 10. So, a model that does that and includes 11 in its comparison is unphysical.
Alternatively, if you want luck to play less of a role in your game, you can either (1) roll 3d6 to resolve checks, confirming ties with a d2; or (2) roll 1d20, halve the distance from 10 and then add 10; or (3) double all bonuses and stretch DCs to be DC' = 10 + 2*(DC - 10). (2) and (3) are exactly identical; (1) is very close, at all DCs.

Any of these are things you could actually do; they're not impractical thought experiments.
Sure, I can use your method, but by doing so I can only get half of the results you're saying correlate to the full range of the other method. That's what's unphysical. You aren't actually fully understanding the data you're creating because the program you're using draws a line between the data points, and you've confused that line for data.


#### Esker

##### Hero
Actually, the graph should look like this:

No, it doesn't look like that. It wouldn't look like that even if I weren't using the confirmation correction mechanism (which you don't seem to ever acknowledge), since we're only ever interested in the probability of getting at least some result. And so even if you can only ever get even numbered rolls, all that does is make the graph have a bunch of little flat segments, where an adjusted DC of 5 is the same difficulty as an adjusted DC of 6, 7 is the same difficulty as 8, etc. It doesn't make the success rates zero.

We can actually graph that: it looks like this: and here are the differences: You can see that it's no longer centered, and as a result is a worse approximation, because without the confirmation correction we're comparing a roll with a mean of 11 to a roll with a mean of 10.5. But it's still never worse than a 9% difference.
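The "never worse than a 9% difference" figure can be reproduced by exact enumeration rather than read off a graph. A sketch (Python): it compares success chances under a plain d20 against 2*3d6 - 10 with no confirmation correction; the function names and the scanned DC range are my choices, with the range picked wide enough to cover both tails:

```python
from itertools import product

# All 216 equally likely 3d6 totals
totals = [sum(t) for t in product(range(1, 7), repeat=3)]

def p_d20(dc):                     # P(d20 >= dc), clamped to [0, 1]
    return max(0, min(20, 21 - dc)) / 20

def p_scaled(dc):                  # P(2*3d6 - 10 >= dc), no confirmation
    return sum(1 for t in totals if 2 * t - 10 >= dc) / 216

worst = max(abs(p_d20(dc) - p_scaled(dc)) for dc in range(-10, 31))
print(round(worst, 4))             # worst gap across all adjusted DCs
```

On this enumeration the worst gap comes out at roughly 9.1% (at adjusted DC 8), consistent with the rough 9% figure quoted above.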

With the confirmation correction, however, your objection disappears entirely, because we can distinguish between consecutive DCs, due to the fact that it's possible to tie even DCs but not possible to tie odd ones, and so even though we can only beat a DC 5 by rolling 6 or higher, DC 5 is a little easier than DC 6 due to the fact that we don't have to worry about hitting the DC exactly and having to confirm.

What I can't do is do that and get a result of 11. That's impossible to do by rolling 3d6, doubling the amount rolled, and subtracting 10.

As I've explained, you get 11 if you roll 12 and fail your confirmation roll.

You aren't actually fully understanding the data you're creating because the program you're using draws a line between the data points, and you've confused that line for data.

One of us is neither understanding nor paying attention. But it's not me. I was interpolating in an earlier graph (the one that appeared to show a less than 100% chance of rolling a 1 or higher), but I added in the in between points after that, as I noted. Look at the tables I posted; every DC is there, not just even-numbered ones. Your objections are not only based on faulty thinking, they're based on incorrect information.


#### Ovinomancer

##### No flips for you!
No, it doesn't look like that. It wouldn't look like that even if I weren't using the confirmation correction mechanism (which you don't seem to ever acknowledge), since we're only ever interested in the probability of getting at least some result. And so even if you can only ever get even numbered rolls, all that does is make the graph have a bunch of little flat segments, where an adjusted DC of 5 is the same difficulty as an adjusted DC of 6, 7 is the same difficulty as 8, etc. It doesn't make the success rates zero.

We can actually graph that: it looks like this: and here are the differences: You can see that it's no longer centered, and as a result is a worse approximation, because without the confirmation correction we're comparing a roll with a mean of 11 to a roll with a mean of 10.5. But it's still never worse than a 9% difference.
No, you get the graph I showed because the 2*3d6 data only exists on even numbers, so the difference where only the d20 exists is vast. If you look at the dots very close to zero, they map to your graph -- I just unsmoothed it, since you ignored the fact that it's a discrete comparison of data points at places where one set does not exist.

The above is just another way for you to extrapolate values where they don't exist, one that's less smooth than your first presentation. It's still incorrect, because data does not exist at those points.
With the confirmation correction, however, your objection disappears entirely, because we can distinguish between consecutive DCs, due to the fact that it's possible to tie even DCs but not possible to tie odd ones, and so even though we can only beat a DC 5 by rolling 6 or higher, DC 5 is a little easier than DC 6 due to the fact that we don't have to worry about hitting the DC exactly and having to confirm.

As I've explained, you get 11 if you roll 12 and fail your confirmation roll.

One of us is neither understanding nor paying attention. But it's not me. I was interpolating in an earlier graph (the one that appeared to show a less than 100% chance of rolling a 1 or higher), but I added in the in between points after that, as I noted. Look at the tables I posted; every DC is there, not just even-numbered ones. Your objections are not only based on faulty thinking, they're based on incorrect information.
I've tried to not discuss your confirmation mechanic because it doesn't fix the underlying problem - it's another arbitrary kludge on bad math that tries to correct for the centering problem but doesn't address the lack-of-data problem. I was hoping, apparently forlornly, that you'd catch on to the missing-data problem, but I underestimated just how proud you are of that kludge. It's actually kind of clever, if it weren't just putting lipstick on a pig.

You originally presented it as a centering correction because Anydice won't let you create an impossible set of rolls by having rolls be .5. I expect R is just fine with doing this, so I'm not sure why you didn't just center on 10.5 directly -- I'd guess that that put data points on the halves and you instinctively realized that might be a problem, but I might be a bit optimistic about that given how you're generally okay using smoothed curve values at non-existent data points when it suits your purpose. That said, you decided to create your confirmation correction mechanic with an eye towards having the distribution center on 10.5 by adjusting outcomes after they happen from the true center of 11. As such, your mechanic is somewhat clever, but it's post hoc adjustment of data to fit a conclusion already formed, so bad practice and worse stats. What you seem to think solves the problems nicely does no such thing for any of the problems except the apparent centering at 10.5.

For instance, you suggest it corrects for no odd numbers in the 2*3d6-10 distribution. But it doesn't, because, even with your addition that a failed confirmation actually means you reduce the roll by one, the odds of rolling an 11 in this are the odds of rolling exactly a 12 when needing a 12 and failing the confirmation. Assuming only values that can be rolled are rolled, and assuming that the needed rolls are uniformly distributed (you can use other priors, up to you), then the probability of needing a 12 is 1 in 16 (there are 16 possible values on 3d6 from 3 to 18). The odds you actually roll a 12 are 27/216. The odds you fail to confirm are 1/2. So, that's 1/16*27/216*1/2 or 27/6912, a tiny bit under 0.4%.

So, you're claiming that your confirmation mechanic solves the no-odd-numbers problem because things will line up about 1% of the time? Granted, that's based on an even distribution of target numbers across the possible values, which I actually limited to the ones you can roll rather than the ones in between the rolls (I didn't account for needing an 11, for instance, because you can't roll it). Feel free to make different assumptions, like maybe ALL target numbers are 12, in which case 11 exists 27/3456 or 0.2% of the time. That's the best case for 11 existing, by the way: 0.2% of the time.

Yeah, I haven't much talked about your confirmation mechanic, largely because it's a pointless distraction that fixes the centering without correcting the serious flaws in your analysis. Centering was never the serious flaw. Misaligned data sets and truncated tails are.

#### Ovinomancer

##### No flips for you!
Look, here's the actual data points generated by the two methods, d20 vs 2*3d6-10. Note where data exists. These are the discrete events that occur, i.e., the actual numbers that are rolled and then scaled. Note how many lines there are where there's only one number for that value. The bits in between these data points do not exist and cannot be compared. If you infill by extrapolation, whether that's using a smoothed line or a confirmation kludge, you're creating data that does not exist in reality and confusing yourself.

It's also been super easy in this discussion to confuse the varied discussions of different comparisons. The actual chart below shows that it's odd numbers that align, not the evens as the last few posts have discussed. My bad, but the fundamental points stand; just swap odds and evens.

| Value | d20 % | 2*3d6-11 % |
|------:|------:|-----------:|
| -5 | | 100.00 |
| -4 | | |
| -3 | | 99.54 |
| -2 | | |
| -1 | | 98.15 |
| 0 | | |
| 1 | 100 | 95.37 |
| 2 | 95 | |
| 3 | 90 | 90.74 |
| 4 | 85 | |
| 5 | 80 | 83.80 |
| 6 | 75 | |
| 7 | 70 | 74.07 |
| 8 | 65 | |
| 9 | 60 | 62.50 |
| 10 | 55 | |
| 11 | 50 | 50.00 |
| 12 | 45 | |
| 13 | 40 | 37.50 |
| 14 | 35 | |
| 15 | 30 | 25.93 |
| 16 | 25 | |
| 17 | 20 | 16.20 |
| 18 | 15 | |
| 19 | 10 | 9.26 |
| 20 | 5 | |
| 21 | | 4.63 |
| 22 | | |
| 23 | | 1.85 |
| 24 | | |
| 25 | | 0.46 |
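The percentages in that chart can be regenerated exactly by enumeration. A sketch (Python); note that the scaled column lands on odd values, which corresponds to the 2*3d6 - 11 shift mentioned upthread rather than -10:

```python
from itertools import product

totals = [sum(t) for t in product(range(1, 7), repeat=3)]  # 216 outcomes

for v in range(-5, 26):
    d20 = f"{(21 - v) * 5}" if 1 <= v <= 20 else ""
    pct = 100 * sum(1 for t in totals if 2 * t - 11 >= v) / 216
    scaled = f"{pct:.2f}" if v % 2 else ""  # only odd values are rollable
    print(f"{v:>3} {d20:>4} {scaled:>7}")
```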

#### Esker

##### Hero
No, you get the graph I showed because the 2*3d6 data only exists on even numbers, so the difference where only the d20 exists is vast.

Ok, we evidently need a little primer on probability.

I'm going to put it in a spoiler, because it got a little long:

When we talk about success chances, we are measuring the chance of some event occurring, right? In this case, the type of event we're interested in is "Our roll against DC X is successful". In vanilla 5e, a roll against DC X is successful if and only if the number we get on the die plus a modifier equals or exceeds X. So we're talking about the event R + M >= X, and we care about P(R+M >= X). For simplicity, we can move M to the other side and lump it in with the DC -- the event R >= X-M will be TRUE for exactly the same rolls that R+M >=X will be true, so we're not changing anything by doing that.

Now if I just have a mathematical statement, like R >= 3, I can write down a list of all the values of R for which that statement holds. If R comes from a d20, the list is 3, 4, 5, ..., 20. Since R can only take on one value, the sub-events R=3, R=4, R=5, ..., R=20 are non-overlapping, and so the probability that one of them occurs is the sum of the probabilities of each one.

Ok, but you knew that. Now what happens if I decide that I'm going to double the roll I get on the die? Now, my original event is 2R + M >= X, but we can still move M over, to get 2R >= X-M; no problem there. Just to simplify the notation, I'm going to define Y=X-M, so I can write things like 2R >= Y instead. Alright. So now I can either write down the list of values that 2R can take, or I can just write down the list of values that R can take, and for each one, decide whether it makes the event true or not.

So, on a d20, 2R can only take on even numbers: 2, 4, 6, ..., 40, and each of these has a 5% chance of occurring, because each one corresponds to exactly one value on the original d20, which (we presume) are all equally likely. So, how do I determine P(2R >= Y)? Well, if Y is, say, 10, I add up P(2R = 10) + P(2R = 12) + ... + P(2R = 40). So far so good.

What if Y is 11? Well, 2R can't actually equal 11, but we didn't ask what the probability that 2R = Y is, we asked what the probability that 2R >= Y is. What values of 2R satisfy that inequality? Well, they're almost the same ones as before, except we have to throw out 10. How about Y=12? Turns out it's the same values as Y=11 --- we would throw out 11 if 11 were a possible roll (if we're being very formal about it we still do get rid of P(2R = 11), but this is zero, so subtracting out its probability doesn't do anything). So we still have P(2R >= 12) = P(2R = 12) + P(2R = 14) + ... + P(2R = 40), the same as P(2R >= 11).

How about Y = -74? Well now, the list of values of 2R that satisfy 2R >= Y expands to... well... all of them. So we get P(2R >= -74) = P(2R = 2) + P(2R = 4) + ... + P(2R = 40). Did we do anything wrong or sneaky or break anything in either the Y = 11 or Y = -74 case? What about Y = 3.14159? That's fine too. The event and the probability are perfectly well defined; we just have to consider the set of possible rolls that make the statement true (4 and up in the case of pi), and add up the probabilities of each of those rolls. If Y goes "out of bounds" to the left we'll get 1, since every roll makes the event true; if Y goes out of bounds to the right, we get 0, since none of them do. And if Y sits in between two possible rolls, it still has a set of possible rolls to the right of it, so we just look at those; though that means that the success chance is flat for a bit between each pair of successive rolls.
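The point that P(2R >= Y) is well defined at every Y, odd, negative, or fractional, can be made concrete in a few lines (Python; `f` is my name for the function, not one from the thread):

```python
# f(Y) = P(2R >= Y) for R a d20: defined for any real Y, with flat
# segments between consecutive even values of 2R.
def f(y):
    return sum(1 for r in range(1, 21) if 2 * r >= y) / 20

print(f(11), f(12))     # 0.75 0.75 -- DC 11 and DC 12 are equally hard
print(f(-74))           # 1.0 -- out of bounds left: every roll succeeds
print(f(3.14159))       # 0.95 -- the 2R values of 4 and up all qualify
```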

If we made a function, f(Y) = P(2R >= Y), it would actually be well defined at every real number value for Y; it would just be flat between values of 2R, since only when Y crosses those values do we actually add anything to the set of outcomes that make the event true.

That's why the first graph in my last post had that saw shape with the flat bits. I can still talk about a DC 11 check even if I can't roll an 11, it just has the same difficulty as a DC 12 check unless I make some adjustment... not being able to roll 11 doesn't make the chance of succeeding at a DC 11 check 0. The important distinction, which I think is what's causing the confusion, is that the x-axis in the graph isn't showing rolls, it's showing DCs. Well, adjusted DCs, where we're moving the modifier over to the DC side instead of the roll side.

This sort of thing happens with RAW too, by the way, just to reassure you that I'm not introducing any voodoo. Take a rogue with a stealth modifier of +11, trying to sneak around a monster with passive perception of 10. The unadjusted DC is 10, but the adjusted DC is -2. I can talk about the probability that that rogue succeeds at that check, and I can talk about it as the probability that the d20 roll is -2 or better. Of course that winds up being the same as the probability that the d20 is 1 or better, but it's not wrong to write P(R >= -2).

Ok, now what about the confirmation correction I keep talking about? Well, we could perfectly well just use 2*3d6 - 10 and be done with it, and get the step-shaped graph I posted for our success probabilities. The game would run fine, albeit with a loss of granularity in the DC distinctions that actually matter. But it's not only aesthetically unsatisfying that the graph is jagged like that, nor is it only aesthetically unsatisfying that the 2*3d6-10 curve is above the d20 curve more often than it is below it, it's also a worse approximation than if we make an adjustment. And in this case it's a worse approximation because the 2*3d6-10 distribution has (slightly) the wrong mean.

So how does the confirmation correction fix this? It actually isn't just a smoothing mechanic; it fixes the whole die roll distribution to have the right mean. As I said, you can really think of the confirmation correction as having a 50% chance of subtracting 1 from every roll; in essence subtracting 1/2 on average from every roll (in the sense that the number of heads in a single coin flip is "on average" 1/2), and therefore subtracting 1/2 from the mean. But because I wanted a system that was not only physical but practically efficient, I noted that you don't care whether or not you subtract 1 from your roll unless doing so changes the outcome. And this will only happen if your roll was exactly equal to the adjusted DC. So though I can see why this makes it seem like I'm adjusting one point in an ad hoc fashion, it only seems that way because I'm ignoring meaningless rolls.
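The mean-shift claim above checks out by enumeration; a sketch (Python) over all 216 x 2 equally likely (3d6, d2) combinations, with names of my choosing:

```python
from itertools import product
from fractions import Fraction

# Every corrected roll: 2*(3d6) - 10 - (d2 - 1), all outcomes equally likely
vals = [2 * sum(t) - 10 - (c - 1)
        for t in product(range(1, 7), repeat=3) for c in (1, 2)]

mean = Fraction(sum(vals), len(vals))
print(mean)   # 21/2 -- the d2 term pulls the mean from 11 down to 10.5
```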

I'm not sure why you didn't just center on 10.5 directly.

I didn't do that precisely because I wanted a system to correspond to something physical -- something you could actually implement.

What you seem to think solves the problems nicely does no such thing for any of the problem except the apparent centering at 10.5.

Let's clarify something else here: the graphs of success probabilities aren't distributions at all; they're CDFs. The graphs (or the quantities we're depicting on the axes) don't have means or variances, as neither the DC nor the success chance is a random variable. So when I say we want to center a distribution at 10.5, I'm not talking about the DCs that are on the graph; this time I'm talking about the actual rolls. As it happens, if a symmetric distribution is centered at 10.5, then it also has its median at 10.5, meaning we are equally likely to get a value above the mean and below it.

The OP's original observation is that we could match the first two moments (the mean and variance) of the two roll distributions. I realized that since we started out noting that we wanted to double the 3d6 roll to match the variances, and since shifting by 10.5 was functionally identical to shifting by 11 as far as success probabilities go, we'd need to do something to "declump" the distribution in order to properly center it. The confirmation roll mechanic effectively turns the discrete roll distribution into a continuous one, making it easier to work with from a centering and scaling perspective (we could use percentile dice for the confirmation roll to enable us to set any fractional DC to a precision of 0.01, but that would be a little silly).

When I graph the success probability with the confirmation die factored in, I'm not just interpolating or smoothing; I'm actually showing you the probabilities of success at each DC (odd and even). Again, to find the success probability for DC 11, we can look at the rolls that satisfy 2R-10-(d2-1) >= 11. We can satisfy this if 2R-10 >= 12 -- that is, if 2R >= 22 (that is, if 2R = 22, 2R = 24, ..., 2R = 36), since for these rolls, subtracting (d2-1) at worst leaves us with 11, which is still a success. And actually that's the only way we can do it, since we can't get 2R-10 = 11, even though if we did d2-1 could be 0, satisfying the event. But if the DC is even (12, say), then there are two ways to get a success: either 2R-10 >= 13, regardless of the d2, or 2R-10 = 12 and the d2 comes up 2.
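Those cases can all be enumerated directly. A sketch (Python) of the success chance at each integer adjusted DC under the confirmation mechanic, and the worst gap versus a plain d20; the function names and the scanned DC range are my choices:

```python
from itertools import product

# Corrected rolls: 2*(3d6) - 10 - (d2 - 1), all 432 outcomes equally likely
vals = [2 * sum(t) - 10 - (c - 1)
        for t in product(range(1, 7), repeat=3) for c in (1, 2)]

def p_conf(dc):                    # P(corrected roll >= dc)
    return sum(1 for v in vals if v >= dc) / len(vals)

def p_d20(dc):                     # P(d20 >= dc), clamped to [0, 1]
    return max(0, min(20, 21 - dc)) / 20

print(p_conf(11), p_conf(12))      # 0.5 0.4375: consecutive DCs now differ
worst = max(abs(p_conf(dc) - p_d20(dc)) for dc in range(-10, 31))
print(round(worst, 4))             # worst gap, out at the extreme DCs
```

On this enumeration the worst gap is about 4.6%, in line with the rough 4.5% figure quoted earlier, and consecutive odd and even DCs really do get distinct success chances.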

Assuming only values that can be rolled are rolled, and assuming that the needed rolls are uniformly distributed (you can use other priors, up to you), then the probability of needing a 12 is 1 in 16 (there are 16 possible values on 3d6 from 3 to 18). The odds you actually roll a 12 are 27/216. The odds you fail to confirm are 1/2. So, that's 1/16*27/216*1/2 or 27/6912, a tiny bit under 0.4%.

This is a common mistake: you're conflating joint probabilities with conditional probabilities. The DC isn't a random variable, really, so talking about the probability P(DC = 12 & modified roll >= 12) isn't really meaningful. We care about the conditional probability, P(modified roll >= 12 | DC = 12). But if you don't believe me, apply your own calculation to a d20 roll. What are the odds that you need a 12 and roll one? By your reasoning, it would be (1/16)*(1/20), or 0.003 (that is, 0.3%). But that's not what we care about when we talk about the likelihood of rolling a 12.

Again, by the way, if the confusion is due to my suggestion that we only roll to confirm when the roll is exactly equal to the DC, that was only to avoid pointless rolls. If you roll the d2 on every roll, then the probability of rolling an 11 is P(roll 12) * P(d2 = 1). On 2*3d6-10, that's P(3d6 = 11) * 1/2, or 0.125 * 0.5 = 0.0625. Pretty close actually to the 0.05 chance you have on a d20.

Misaligned data sets and truncated tails are.

Dude, read what I post if you're going to reply. I untruncated the tails for you. And I've been explaining at great length why nothing is misaligned.


#### Ovinomancer

##### No flips for you!
Ok, we evidently need a little primer on probability.

I'm going to put it in a spoiler, because it got a little long:

When we talk about success chances, we are measuring the chance of some event occurring, right? In this case, the type of event we're interested in is "Our roll against DC X is successful". In vanilla 5e, a roll against DC X is successful if and only if the number we get on the die plus a modifier equals or exceeds X. So we're talking about the event R + M >= X, and we care about P(R+M >= X). For simplicity, we can move M to the other side and lump it in with the DC -- the event R >= X-M will be TRUE for exactly the same rolls that R+M >=X will be true, so we're not changing anything by doing that.

Now if I just have a mathematical statement, like R >= 3, I can write down a list of all the values of R for which that statement holds. If R comes from a d20, the list is 3, 4, 5, ..., 20. Since R can only take on one value, the sub-events R=3, R=4, R=5, ..., R=20 are non-overlapping, and so the probability that one of them occurs is the sum of the probabilities of each one.
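That sum-of-disjoint-sub-events calculation can be checked mechanically (a minimal sketch using exact fractions from the standard library):

```python
from fractions import Fraction

# P(R >= 3) on a d20, computed as the sum of the disjoint sub-events
# R = 3, R = 4, ..., R = 20, each with probability 1/20.
p_at_least_3 = sum(Fraction(1, 20) for r in range(1, 21) if r >= 3)
print(p_at_least_3)  # 9/10 (18 of the 20 faces)
```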

Ok, but you knew that. Now what happens if I decide that I'm going to double the roll I get on the die? Now, my original event is 2R + M >= X, but we can still move M over, to get 2R >= X-M; no problem there. Just to simplify the notation, I'm going to define Y=X-M, so I can write things like 2R >= Y instead. Alright. So now I can either write down the list of values that 2R can take, or I can just write down the list of values that R can take, and for each one, decide whether it makes the event true or not.

So, on a d20, 2R can only take on even numbers: 2, 4, 6, ..., 40, and each of these has a 5% chance of occurring, because each one corresponds to exactly one value on the original d20, which (we presume) are all equally likely. So, how do I determine P(2R >= Y)? Well, if Y is, say, 10, I add up P(2R = 10) + P(2R = 12) + ... + P(2R = 40). So far so good.

What if Y is 11? Well, 2R can't actually equal 11, but we didn't ask what the probability that 2R = Y is, we asked what the probability that 2R >= Y is. What values of 2R satisfy that inequality? Well, they're almost the same ones as before, except we have to throw out 10. How about Y=12? Turns out it's the same values as Y=11 --- we would throw out 11 if 11 were a possible roll (if we're being very formal about it we still do get rid of P(2R = 11), but this is zero, so subtracting out its probability doesn't do anything). So we still have P(2R >= 12) = P(2R = 12) + P(2R = 14) + ... + P(2R = 40), the same as P(2R >= 11).

How about Y = -74? Well now, the list of values of 2R that satisfy 2R >= Y expands to... well... all of them. So we get P(2R >= -74) = P(2R = 2) + P(2R = 4) + ... + P(2R = 40). Did we do anything wrong or sneaky or break anything in either the Y = 11 or Y = -74 case? What about Y = 3.14159? That's fine too. The event and the probability are perfectly well defined; we just have to consider the set of possible rolls that make the statement true (4 and up in the case of pi), and add up the probabilities of each of those rolls. If Y goes "out of bounds" to the left we'll get 1, since every roll makes the event true; if Y goes out of bounds to the right, we get 0, since none of them do. And if Y sits in between two possible rolls, it still has a set of possible rolls to the right of it, so we just look at those; though that means that the success chance is flat for a bit between each pair of successive rolls.

If we made a function, f(Y) = P(2R >= Y), it would actually be well defined at every real number value for Y; it would just be flat between values of 2R, since only when Y crosses those values do we actually add anything to the set of outcomes that make the event true.
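A sketch of that function f(Y), defined for any real Y (the function name mirrors the post's notation; the specific test values are the ones discussed above):

```python
from fractions import Fraction

def f(y):
    # f(Y) = P(2R >= Y) for a doubled d20 roll R: well defined for any
    # real Y, and flat between consecutive even values of 2R.
    return sum(Fraction(1, 20) for r in range(1, 21) if 2 * r >= y)

print(f(11) == f(12))   # True: no possible roll sits between them
print(f(3.14159))       # 19/20 -- every roll except 2R = 2 succeeds
print(f(-74), f(50))    # 1 and 0: out of bounds left and right
```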
Good, all true.

That's why the first graph in my last post had that saw shape with the flat bits. I can still talk about a DC 11 check even if I can't roll an 11, it just has the same difficulty as a DC 12 check unless I make some adjustment... not being able to roll 11 doesn't make the chance of succeeding at a DC 11 check 0. The important distinction, which I think is what's causing the confusion, is that the x-axis in the graph isn't showing rolls, it's showing DCs. Well, adjusted DCs, where we're moving the modifier over to the DC side instead of the roll side.
And, as long as you're talking about a single PDF, you're still okay for doing this. The problem is, of course, that we're not talking about looking at all possible events, but only those that occur in the reality we're using the probabilities to model. The scale of targets is relevant to the scale of the die rolls. We don't have half-step target numbers because we can't roll half-step numbers. I hope that sinks in, because it's important in a bit.
This sort of thing happens with RAW too, by the way, just to reassure you that I'm not introducing any voodoo. Take a rogue with a stealth modifier of +11, trying to sneak around a monster with passive perception of 10. The unadjusted DC is 10, but the adjusted DC is -1. I can talk about the probability that that rogue succeeds at that check, and I can talk about it as the probability that the d20 roll is -1 or better. Of course that winds up being the same as the probability that the d20 is 1 or better, but it's not wrong to write P(R >= -1).

Ok, now what about the confirmation correction I keep talking about? Well, we could perfectly well just use 2*3d6 - 10 and be done with it, and get the step-shaped graph I posted for our success probabilities. The game would run fine, albeit with a loss of granularity in the DC distinctions that actually matter. But it's not only aesthetically unsatisfying that the graph is jagged like that, nor is it only aesthetically unsatisfying that the 2*3d6-10 curve is above the d20 curve more often than it is below it, it's also a worse approximation than if we make an adjustment. And in this case it's a worse approximation because the 2*3d6-10 distribution has (slightly) the wrong mean.

So how does the confirmation correction fix this? It actually isn't just a smoothing mechanic; it fixes the whole die roll distribution to have the right mean. As I said, you can really think of the confirmation correction as having a 50% chance of subtracting 1 from every roll; in essence subtracting 1/2 on average from every roll (in the sense that the number of heads in a single coin flip is "on average" 1/2), and therefore subtracting 1/2 from the mean. But because I wanted a system that was not only physical but practically efficient, I noted that you don't care whether or not you subtract 1 from your roll unless doing so changes the outcome. And this will only happen if your roll was exactly equal to the adjusted DC. So though I can see why this makes it seem like I'm adjusting one point in an ad hoc fashion, it only seems that way because I'm ignoring meaningless rolls.
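The mean-correction claim can be checked directly. Under the "subtract 1 from every roll with 50% chance" reading described above, the corrected roll's mean matches the d20's 10.5 exactly (a sketch, with exact fractions):

```python
from fractions import Fraction
from itertools import product

# Mean of the corrected roll 2*(3d6) - 10 - (d2 - 1) versus a plain d20.
corrected = [Fraction(2 * (a + b + c) - 10 - (d2 - 1))
             for a, b, c in product(range(1, 7), repeat=3)
             for d2 in (1, 2)]
mean_corrected = sum(corrected) / len(corrected)

mean_d20 = sum(Fraction(r) for r in range(1, 21)) / 20

print(mean_corrected, mean_d20)  # both 21/2, i.e. 10.5
```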
It only fixes the mean, and that by way of a kludged system that doesn't address anything but the mean. When you apply a change to fix the mean that also results in being able to have half-step results, you need to question what it is you've done.

I didn't do that precisely because I wanted a system to correspond to something physical -- something you could actually implement.
Fair enough.

Let's clarify something else here: the graphs of success probabilities aren't distributions at all; they're CDFs (strictly speaking, complementary CDFs, since they plot P(roll >= DC)). The graphs (or the quantities we're depicting on the axes) don't have means or variances, as neither the DC nor the success chance is a random variable. So when I say we want to center a distribution at 10.5, I'm not talking about the DCs that are on the graph; this time I'm talking about the actual rolls. As it happens, if a symmetric distribution is centered at 10.5, then it also has its median at 10.5, meaning we are equally likely to get a value above the mean and below it.

The OP's original observation is that we could match the first two moments (the mean and variance) of the two roll distributions. I realized that since we started out noting that we wanted to double the 3d6 roll to match the variances, and since shifting by 10.5 was functionally identical to shifting by 11 as far as success probabilities go, we'd need to do something to "declump" the distribution in order to properly center it. The confirmation roll mechanic effectively turns the discrete roll distribution into a continuous one, making it easier to work with from a centering and scaling perspective (we could use percentile dice for the confirmation roll to enable us to set any fractional DC to a precision of 0.01, but that would be a little silly).

When I graph the success probability with the confirmation die factored in, I'm not just interpolating or smoothing; I'm actually showing you the probabilities of success at each DC (odd and even). Again, to find the success probability for DC 11, we can look at the rolls that satisfy 2R-10-(d2-1) >= 11. We can satisfy this if 2R-10 >= 12 -- that is, if 2R >= 22 (that is, if 2R = 22, 2R = 24, ..., 2R = 36), since for these rolls, subtracting (d2-1) at worst leaves us with 11, which is still a success. And actually that's the only way we can do it, since we can't get 2R-10 = 11, even though if we did d2-1 could be 0, satisfying the event. But if the DC is even (12, say), then there are two ways to get a success: either 2R-10 >= 13, regardless of the d2, or 2R-10 = 12 and the d2 comes up 2.
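Enumerating outcomes confirms the DC 11 and DC 12 cases worked through above (a sketch; `success_chance` is an illustrative name, not from the thread):

```python
from fractions import Fraction
from itertools import product

def success_chance(dc):
    # Chance that 2*(3d6) - 10 - (d2 - 1) meets or beats an (adjusted) DC.
    outcomes = [2 * (a + b + c) - 10 - (d2 - 1)
                for a, b, c in product(range(1, 7), repeat=3)
                for d2 in (1, 2)]
    return Fraction(sum(1 for o in outcomes if o >= dc), len(outcomes))

print(success_chance(11))  # 1/2, matching the d20's 10/20 at DC 11
print(success_chance(12))  # 7/16 = 0.4375, close to the d20's 9/20 = 0.45
```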

This is a common mistake: you're conflating joint probabilities with conditional probabilities. The DC isn't a random variable, really, so talking about the probability P(DC = 12 & modified roll >= 12) isn't really meaningful. We care about the conditional probability, P(modified roll >= 12 | DC = 12). But if you don't believe me, apply your own calculation to a d20 roll. What are the odds that you need a 12 and roll one? By your reasoning, it would be (1/16)*(1/20), or 0.003 (that is, 0.3%). But that's not what we care about when we talk about the likelihood of rolling a 12.

Again, by the way, if the confusion is due to my suggestion that we only roll to confirm when the roll is exactly equal to the DC, that was only to avoid pointless rolls. If you roll the d2 on every roll, then the probability of rolling an 11 is P(roll 12) * P(d2 = 1). On 2*3d6-10, that's P(3d6 = 11) * 1/2, or 0.125 * 0.5 = 0.0625. Pretty close actually to the 0.05 chance you have on a d20.

Dude, read what I post if you're going to reply. I untruncated the tails for you. And I've been explaining at great length why nothing is misaligned.
I strongly advise you do the same. All of the above talks about how to deal with a single rolling method, independent of others, and I have no real beef with it (except your patting yourself on the back for your cleverness about the confirmation mechanic, which is still a kludge to address the fact that you wanted to compare at a mean of 10.5 and couldn't commit to being unphysical to start with). The issue is, and has been, in the comparison. Recall what I said above, as it's now important.

We're talking about systems that do not do half-step increments in practice, nor do dice allow for half-step increments. So, when you compare, you MUST avoid half-step increments or you're not comparing the same things. When you compare a d20 incremented by 1 per step to 2*3d6 incremented by 2 per step, comparing anything in a half-step of the 2*3d6 isn't meaningful in any way. You're comparing a real outcome on d20 to an impossible outcome on 2*3d6. This goes exactly the same for comparing 3d6 to d20/2, no matter how you recenter, because d20/2 steps in .5 increments while 3d6 steps in increments of 1. If you compare a 6.5 on the d20/2, it doesn't match anything possible on the 3d6. This is what I mean when I say you toss half the data; you just ignore this because there's an extrapolation, and you're assuming it's a valid comparison at that point because you can derive a number. That you invented a confirmation method just continues to let you confuse yourself into thinking you've created a system that has half-step values when it does not have them.

You even missed the boat on the fact that your confirmation mechanic produces minuscule probabilities at the half steps (a fact you glossed over in your hurry to point out you know the difference between conditional and joint probabilities -- I presented the joint probability when I set the first conditional to all, because I actually knew that was an argument to make against what I was saying). Your method sets the half-steps at half of the probability of the full step above it. This helps you recenter, but it doesn't create a useful comparison because you've created data where it doesn't exist via a kludge.

I posted raw data above. You cannot compare the probabilities of rolling a 12 on d20 with the probability of rolling a 12 on 2*3d6-10, because a 12 does not exist with the latter. If you kludge it in with a post-hoc confirmation method that reduces the likelihood of rolling exactly a thirteen by half and gives that to 12, I question whether you've thought through what you've done, or whether you've just arrived at a way to make 1+1 look like 2+2 and stopped thinking about it.

Maybe it's because I'm an engineer, so I always have to examine my models to see if they do what I assume they do, but the above bits about how the data doesn't align are glaringly obvious to me. You cannot compare data points where one set doesn't exist. Data is data. Statistics is often how you lie to yourself with math. Always check your assumptions against reality and run a test. Which is why I took the OP example and showed how a skew of 3 on the normal modifiers takes the near match to impossible in one method in one direction and a 10% delta in the other. That's not the hallmark of a stable system (and it does this because of the half-step problem: a skew of 3 on the 3d6 is a skew of 6 in the scaled d20 version). A little movement on the 3d6 curve is a lot of movement on the d20 curve, a fact I've been trying to point out to you for many posts and that you've just glossed over as if there's some fundamental basic I've failed to understand. I get the basics; I'm actually looking at what the models tell us while you're still looking at lines.

#### Esker

##### Hero
There is no statistics going on here; just probability. That's because there is no data; just calculations. Statistics is trying to find a good model to describe a set of observations from the world (the data). Probability is examining the properties of models in and of themselves. This is the latter. There is no comparison between assumptions and reality to be made here (except I guess for the basic assumptions of fair and independent dice that everybody takes as given) because there are only the probability models, no observations for them to fit.

We have two mechanical systems for stochastically producing successes and failures. All that matters as far as the game is concerned is whether the probabilities of success translate reasonably, which they do. The actual numbers that show up on the die are a means to adjudicating success or failure; they have no other purpose or meaning in themselves. I'm honestly not sure what you think I'm claiming that has you so riled up, except that you really seem to want the actual numbers on the dice to be the same.

You cannot compare the probabilities of rolling a 12 on d20 with the probability of rolling a 12 on 2*3d6-10, because a 12 does not exist with the latter.

Presumably you meant to say 11 there (12 happens if 3d6 = 11). The probability of rolling an 11 only matters insofar as it is normally the difference between the difficulty of a DC 11 check and a DC 12 check. That's it. And we don't necessarily even care that much about that, as long as the success rate for DC 11 and the success rate for DC 12 are individually about right (or, if we're comparing two systems that use different DC calculations, then that the corresponding probabilities match up). It still doesn't seem like you're getting that, since you keep hammering on the gaps in the dice distribution itself.

• pemerton

#### Ovinomancer

##### No flips for you!
There is no statistics going on here; just probability. That's because there is no data; just calculations. Statistics is trying to find a good model to describe a set of observations from the world (the data). Probability is examining the properties of models in and of themselves. This is the latter. There is no comparison between assumptions and reality to be made here (except I guess for the basic assumptions of fair and independent dice that everybody takes as given) because there are only the probability models, no observations for them to fit.

We have two mechanical systems for stochastically producing successes and failures. All that matters as far as the game is concerned is whether the probabilities of success translate reasonably, which they do. The actual numbers that show up on the die are a means to adjudicating success or failure; they have no other purpose or meaning in themselves. I'm honestly not sure what you think I'm claiming that has you so riled up, except that you really seem to want the actual numbers on the dice to be the same.
Earlier in the thread I cautioned against reification of the models, as that's a trap that's easy to fall into when using models, statistical or probabilistic. And, largely, we're doing a good bit of both here, with mean shifting, discussion of variance, discussion of deviation, and looking at how closely two probability models match, none of which true probability math cares about. What we're doing is building a model of a physical system where we plan to use the physical system. Thinking that we can look at the maths in the model and that tells us what reality is, or, even worse, thinking that truth exists because the models tell us something without validating it in the real is the sin of reification, which you latch onto here.

Presumably you meant to say 11 there (12 happens if 3d6 = 11). The probability of rolling an 11 only matters insofar as it is normally the difference between the difficulty of a DC 11 check and a DC 12 check. That's it. And we don't necessarily even care that much about that, as long as the success rate for DC 11 and the success rate for DC 12 are individually about right (or, if we're comparing two systems that use different DC calculations, then that the corresponding probabilities match up). It still doesn't seem like you're getting that, since you keep hammering on the gaps in the dice distribution itself.
Because you're determining the probability for an event that cannot happen and pretending that, because you can do the math, it does. Again, you're believing the model and not reality.

As you said above, scaling d20 DCs by 2 is mathematically the same as halving d20 rolls. If this is true, then either we use the original DCs and the half d20, which means that half of the results on the d20 are in-between DCs, or we expand the DC range and use a normal d20, in which case half the results on the d20 are in-between available DCs. Both of these approaches shrink the useful d20 range by half. Essentially, we're taking the d20 from 20 useful steps to 10 useful steps.

This halved d20 is then being compared not to the full 16-step range of 3d6, but to its central 10 values. Only, the comparison ignores the fact that the d20 range has been effectively halved from 20 to 10 steps, and we're asked to pretend that every step on the d20 still matters against the more widely spaced 10 steps of the central part of 3d6. Various reasons why this is okay are presented -- we can find probabilities, we can pretend those DCs exist, the part of the 3d6 we toss isn't that big, etc -- each brought up and levied independently to defeat an objection and then forgotten when those become a challenge for another excuse. It's a circle of special pleading, always ignoring that the transformation of one of the die methods fundamentally alters the function of the game just in time to compare to a truncated but unaltered other method.

In simpler words, when you scale the die method, you change the steps size for DCs in that scale. You cannot compare to a different scale of DCs using a different die method and pretend you can use the same DC scale for both. This is the core failed assumption to the whole endeavour, and I've shown it to be so with the OP examples -- examples that have so far been ignored. The DCs scale differently in the different scales of die and that matters.

#### Esker

##### Hero
Thinking that we can look at the maths in the model and that tells us what reality is, or, even worse, thinking that truth exists because the models tell us something without validating it in the real is the sin of reification, which you latch onto here.

Not sure what you mean here. The properties of the model follow directly from the basic assumption that we have fair independent dice. That's the connection between model and reality. If that holds, then the model is the reality, because everything else is derived using mathematical laws.

Because you're determining the probability for an event that cannot happen and pretending that, because you can do the math, it does.

This also doesn't make any sense. The probability of the event is the probability of the event. If it couldn't happen it would have a probability of zero. If there's an error in my math, please point it out, but again, the probability of the event follows from the assumption that when we roll a die, every side is equally likely, and that multiple dice rolls don't influence each other. That's all the reality check we need for the math to match the reality.

As you said above, scaling d20 DCs by 2 is mathematically the same as halving d20 rolls.

Well, halving d20 rolls and bonuses both, but yeah.

If this is true, then either we use the original DCs and the half d20, which means that half of the results on the d20 are in-between DCs, or we expand the DC range and use a normal d20, in which case half the results on the d20 are in-between available DCs. Both of these approaches shrink the useful d20 range by half. Essentially, we're taking the d20 from 20 useful steps to 10 useful steps.

Yes, that's true, but the impact of this is only that we're coarsening the granularity of the DC scale by, essentially, ignoring the differences between (adjusted) DC 3 and DC 6 checks, etc. and similarly between (adjusted) DC 16 and DC 19 checks. But we already mostly do that by switching to 3d6, since the difference between the chance of rolling a 2 or better vs a 6 or better is small (about 4.6% total over a 4 DC range), as is the difference between 16 or better and 20 or better. Even though we only have 10 useful steps to work with, we allocate those to distinguish within the most useful part of the DC range: i.e., between 6 and 15.
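Those tail differences can be verified by enumeration (a sketch; the helper name is illustrative):

```python
from fractions import Fraction
from itertools import product

def p_3d6_at_least(dc):
    # Exact chance that a 3d6 roll meets or beats dc.
    rolls = [a + b + c for a, b, c in product(range(1, 7), repeat=3)]
    return Fraction(sum(1 for r in rolls if r >= dc), len(rolls))

# How much success chance is spread across each 4-DC tail range:
low_gap = p_3d6_at_least(2) - p_3d6_at_least(6)
high_gap = p_3d6_at_least(16) - p_3d6_at_least(20)
print(low_gap, high_gap)  # 5/108 each, about 4.6%
```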

Keep in mind, the comparison here isn't between RAW and anything else, it's between a 3d6 system and a 1d20 modified system, so we've already upended the system. So we need to be careful not to fall into the trap of using our intuitions about how granular D&D normally is, since that's thrown out either way.

This halved d20 is then being compared not to the full 16-step range of 3d6, but to its central 10 values.

Again, keep in mind that there's no need to compare the actual roll distributions; just the success vs DC curves, since the roll distribution only matters to the extent that it affects success chances. We can compare those curves at any point, not just the middle 10 values. As I've said, if you think it makes a big difference for gameplay that the d20 version equalizes DCs that would otherwise differ by a little less than the equivalent of 1 point in the vanilla system, that's fine. It doesn't particularly bother me, except for special case crit mechanics, which we set aside from the start since they have to be dealt with separately.

Various reasons why this is okay are presented -- we can find probabilities, we can pretend those DCs exist, the part of the 3d6 we toss isn't that big, etc -- each brought up and levied independently to defeat an objection and then forgotten when those become a challenge for another excuse. It's a circle of special pleading...

Where have I engaged in special pleading? I don't believe I've forgotten any of the points I've made. I presented two pairs of systems (one of which is vanilla d20) and showed that within each pair of systems, the two methods produce nearly identical outcomes in practice.

In simpler words, when you scale the die method, you change the steps size for DCs in that scale. You cannot compare to a different scale of DCs using a different die method and pretend you can use the same DC scale for both.

Where did I do that? If I'm comparing one method to a different scale of DCs using another method, then I'm clearly not using the same DC scale for both. But what I am doing (which is only a slight modification on what @NotAYakk originally proposed) is preserving the same range of difficulties, even if I use different numbers to describe them. It doesn't matter if a "moderately difficult" task is described as DC 15 or DC 20, as long as the other elements (rolls and bonuses) are altered correspondingly to keep the success chance about the same across a range of possible characters trying to succeed at those tasks.

#### Esker

##### Hero
@Ovinomancer, here's an analogy for you. I'm curious what your intuitions are.

Suppose all this time 5e had used a percentile system to resolve checks, instead of the d20, and you had to roll at or under the DC to succeed. A medium difficulty task was DC 60, a really hard task was DC 10, etc. The proficiency bonus started at +20 and went up in increments of 10, and ability scores went from 0 to 100, with modifiers set to 0 at a 50 and going up by 10 whenever the ten's digit goes up, so, when you hit 60 you're at +10, 70 is a +20, etc., all the way up to +50 at 100. Also, instead of adding your bonus to the roll, you added it to the DC (thus making it easier to succeed).

Now someone comes along and says, "It's a pain to have to roll two dice for every check, and also wouldn't it be nice if the DM could keep some DCs secret without having to know everybody's bonuses? What if we scaled ability scores and bonuses down by a factor of 10, rolled a d10 instead of a d100 to resolve outcomes, added bonuses to the roll instead of the DC, and said that a success was rolling at or above the target instead of at or below it? To keep things comparable, we'll modify all the DCs to be DC' =1 + (100 - DC)/10, so 60 becomes 5, 50 becomes 6, 40 becomes 7, and so on."
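The proposed DC conversion is a one-liner; a sketch of it, using only the formula quoted above (the function name is mine):

```python
def convert_dc(dc):
    # Map an old roll-under percentile DC to the new roll-over d10 scale:
    # DC' = 1 + (100 - DC) / 10
    return 1 + (100 - dc) / 10

print([(dc, convert_dc(dc)) for dc in (60, 50, 40)])
# [(60, 5.0), (50, 6.0), (40, 7.0)], matching the examples above
```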

First question for you: if nobody ever used DCs that weren't multiples of 10, would this change have any effect on the outcomes in the game? (I'm not asking whether it would have an effect on how much work it is, just whether it would affect outcomes)

Second question: Suppose somebody objected to this change, saying: "You can't say that this won't have an impact! We used to have 100 increments, and now we only have 10!"

The designer shows the objector a line graph, with two different sets of labels on the x-axis: The first set of labels go from 0 to 100, representing DCs in the old (percentile) system. The second shows the corresponding DC in the new system: 0 is aligned with 11, 5 is aligned with 10.5, 10 is aligned with 10, 20 with 9, 30 with 8, etc. Then there are two lines. The one for the old system shows that a DC 50 check has a 50% success rate, a DC 55 has a 55% success rate, a DC 60 check has a 60% success rate, etc. The second only has points at whole numbers, but at those spots, lines up with the first one.

"Nobody uses DCs that aren't a multiple of 10," they say. "The graph lines up where it matters."

Supposing it's true that DCs are always multiples of 10, who is right?

Third: Suppose the objector were a DM who actually liked to use DCs in multiples of 5. They approach the designer, red-faced, saying: "I have a DC 65 check, which worked perfectly well before, but now you're telling me the DC is 4.5! You can't roll a 4.5!"

"You're right," says the designer. "How about this: round your DCs down to the nearest whole number, but keep track of whether it was a half originally. So your 4.5 becomes a 4. But if the player rolls exactly 4 on their d10 (after modifiers), have them then roll a d6. If they get 4 or more, they succeed, otherwise they fail."

They then go to their graph, and fill in points on the second line at 10.5, 9.5, etc., which sit at 5% success, 15% success, etc., explaining, "Your DC 65 check becomes almost like a DC 4 check, except it's a little more difficult because there's an extra step involved to succeed. A DC 4 check has a 70% chance of success, since you can roll anything but a 1, 2 or 3. In your case, they have one extra way to fail: by rolling a 4 and then rolling a 1, 2 or 3 on the d6. That happens (1/10) * (3/6) of the time, or 5%. So there's now a 35% chance of failing, and a 65% chance of succeeding, just like there would have been before."

The objector thinks for a minute and says, "That's a B.S. kludge. Those points you're drawing don't exist! You can't just say that you can have a DC 4.5 check, if you can't roll 4.5! What kind of statistics mumbo jumbo is this?"

Is the designer pulling a fast one? Does their suggested fix allow for 55 or 65 DCs, etc. to work as intended? Or is something wrong?
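The designer's 65% figure in the exchange above checks out by enumerating the d10 and d6 together (a sketch, assuming no modifiers):

```python
from fractions import Fraction
from itertools import product

# "DC 4.5" check: roll a d10, succeed outright on 5+; on exactly a 4,
# confirm with a d6, succeeding on a 4 or more.
wins = sum(1 for d10, d6 in product(range(1, 11), range(1, 7))
           if d10 >= 5 or (d10 == 4 and d6 >= 4))
p_success = Fraction(wins, 10 * 6)
print(p_success)  # 13/20, i.e. 65%, matching the old roll-under DC 65 check
```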

#### Ovinomancer

##### No flips for you!
@Ovinomancer, here's an analogy for you. I'm curious what your intuitions are.

Suppose all this time 5e had used a percentile system to resolve checks, instead of the d20, and you had to roll at or under the DC to succeed. A medium difficulty task was DC 60, a really hard task was DC 10, etc. The proficiency bonus started at +20 and went up in increments of 10, and ability scores went from 0 to 100, with modifiers set to 0 at a 50 and going up by 10 whenever the ten's digit goes up, so, when you hit 60 you're at +10, 70 is a +20, etc., all the way up to +50 at 100. Also, instead of adding your bonus to the roll, you added it to the DC (thus making it easier to succeed).

Now someone comes along and says, "It's a pain to have to roll two dice for every check, and also wouldn't it be nice if the DM could keep some DCs secret without having to know everybody's bonuses? What if we scaled ability scores and bonuses down by a factor of 10, rolled a d10 instead of a d100 to resolve outcomes, added bonuses to the roll instead of the DC, and said that a success was rolling at or above the target instead of at or below it? To keep things comparable, we'll modify all the DCs to be DC' =1 + (100 - DC)/10, so 60 becomes 5, 50 becomes 6, 40 becomes 7, and so on."

First question for you: if nobody ever used DCs that weren't multiples of 10, would this change have any effect on the outcomes in the game? (I'm not asking whether it would have an effect on how much work it is, just whether it would affect outcomes)

Second question: Suppose somebody objected to this change, saying: "You can't say that this won't have an impact! We used to have 100 increments, and now we only have 10!"

The designer shows the objector a line graph, with two different sets of labels on the x-axis: The first set of labels go from 0 to 100, representing DCs in the old (percentile) system. The second shows the corresponding DC in the new system: 0 is aligned with 11, 5 is aligned with 10.5, 10 is aligned with 10, 20 with 9, 30 with 8, etc. Then there are two lines. The one for the old system shows that a DC 50 check has a 50% success rate, a DC 55 has a 55% success rate, a DC 60 check has a 60% success rate, etc. The second only has points at whole numbers, but at those spots, lines up with the first one.

"Nobody uses DCs that aren't a multiple of 10," they say. "The graph lines up where it matters."

Supposing it's true that DCs are always multiples of 10, who is right?

Third: Suppose the objector were a DM who actually liked to use DCs in multiples of 5. They approach the designer, red-faced, saying: "I have a DC 65 check, which worked perfectly well before, but now you're telling me the DC is 4.5! You can't roll a 4.5!"

"You're right," says the designer. "How about this: round your DCs down to the nearest whole number, but keep track of whether it was a half originally. So your 4.5 becomes a 4. But if the player rolls exactly 4 on their d10 (after modifiers), have them then roll a d6. If they get 4 or more, they succeed, otherwise they fail."

They then go to their graph, and fill in points on the second line at 10.5, 9.5, etc., which sit at 5% success, 15% success, etc., explaining, "Your DC 65 check becomes almost like a DC 4 check, except it's a little more difficult because there's an extra step involved to succeed. A DC 4 check has a 70% chance of success, since you can roll anything but a 1, 2 or 3. In your case, they have one extra way to fail: by rolling a 4 and then rolling a 1, 2 or 3 on the d6. That happens (1/10) * (3/6) of the time, or 5%. So there's now a 35% chance of failing, and a 65% chance of succeeding, just like there would have been before."
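The designer's arithmetic is easy to brute-force. Here is a minimal sketch (my addition, not part of the designer's pitch), enumerating all d10 and d6 outcomes for the old DC 65 check under the new rules:

```python
from itertools import product

# Old system: d100, succeed at or below DC 65 -> 65%.
# New system: DC' = 1 + (100 - 65)/10 = 4.5, played as a 4, with a
# confirmation d6 on an exact 4 (confirm on a d6 of 4 or higher).
success = 0
for d10, d6 in product(range(1, 11), range(1, 7)):
    if d10 > 4:                    # clear success; the d6 never matters
        success += 1
    elif d10 == 4 and d6 >= 4:     # tied the DC: confirm on a 4+
        success += 1

print(success / 60)  # -> 0.65
```

Out of the 60 equally likely (d10, d6) pairs, 39 succeed, recovering the original 65%.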

The objector thinks for a minute and says, "That's a B.S. kludge. Those points you're drawing don't exist! You can't just say that you can have a DC 4.5 check, if you can't roll 4.5! What kind of statistics mumbo jumbo is this?"

Is the designer pulling a fast one? Does their suggested fix allow for 55 or 65 DCs, etc. to work as intended? Or is something wrong?
This hypo just shows you haven't grasped my argument at all. If you scale both the resolution method and the target numbers, fine, you've done the same things to both sides. The problem occurs when you change the scale on only one side and then compare to a completely different resolution at the original scale to claim similarity; then you've done something wrong.

This is the problem, as I keep explaining.

#### Esker

##### Hero
This hypo just shows you haven't grasped my argument at all. If you scale both the resolution method and the target numbers, fine, you've done the same things to both sides. The problem occurs when you change the scale on only one side and then compare to a completely different resolution at the original scale to claim similarity; then you've done something wrong.

This is the problem, as I keep explaining.

I posted the hypothetical for the purposes of removing complexity and isolating one aspect of the comparison at a time. Discovering that you are fine with that hypothetical, where we've rescaled DCs and rolls, helps us narrow down the source of your discomfort, by ruling out the unequal spacing as a cause (since a d10 and a d100 have unequal spacing). It also confirms that you're comfortable using a confirmation die or similar mechanism to fill in the loss in distinctions between consecutive DCs created by unequal spacing (as I do when comparing ordinary 1d20 to 2*3d6-10).

There are only a couple of differences between that hypothetical and the original scenario (well, there are two original scenarios, differing by how much of a role you want luck to play, but let's focus on the 1d20 vs 2*3d6-10 one for now, since 1d20 is vanilla and thus hopefully has good intuition behind it, which should be easier than comparing two unfamiliar schemes to each other).

In the 1d20 vs 2*3d6-10 case, we're rolling against the same DCs, so that removes yet one more point of complexity that we have to deal with in the 3d6 vs rescaled 1d20 scenario.

Ok. So, with a regular 1d20, the distinct (adjusted) DC ranges are: [-infinity to 1], 2, 3, ..., 20, and [21 to infinity]. There are 21 functionally distinct DCs here. Anything with an adjusted DC less than 1 is functionally equivalent to an adjusted DC of 1, since there are no additional ways to succeed on, say, an adjusted DC of 0 that don't also succeed on an adjusted DC of 1.

With 2*3d6-10, on its own, we have the following sets of DCs that can be distinguished: [-infinity to -4], {-3,-2}, {-1,0}, {1,2}, ..., {21,22}, {23,24}, {25,26}, [27 to infinity]. There are only 17 of these, and they don't line up with the ones in the 1d20 system. Because we can't roll odd numbers, -2 is no harder than -3, 0 is no harder than -1, etc. The confirmation mechanic breaks this up further (just as the confirmation mechanic in the d10 system in the hypothetical): since the confirmation mechanic only affects even DCs, we can now distinguish the following sets: [-infinity to -5], -4, -3, -2, -1, 0, 1, ..., 25, 26, [27 to infinity]. We've actually created more distinctions than we had with 1d20 -- a total of 33 -- because [-infinity, 1] is subdivided into seven different sets, as is [21 to infinity].
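These counts are easy to verify by brute force. A quick sketch (my addition, not from the thread; `p_plain` and `p_confirmed` are hypothetical helper names), counting how many distinct success probabilities each mechanic can produce across a wide DC range:

```python
from itertools import product
from fractions import Fraction

rolls = [a + b + c for a, b, c in product(range(1, 7), repeat=3)]  # all 216 3d6 outcomes

def p_plain(dc):      # 2*3d6 - 10, no confirmation die
    return Fraction(sum(1 for r in rolls if 2 * r - 10 >= dc), len(rolls))

def p_confirmed(dc):  # 2*3d6 - 10 - (d2 - 1)
    outs = [2 * r - 10 - c for r in rolls for c in (0, 1)]
    return Fraction(sum(1 for v in outs if v >= dc), len(outs))

def p_d20(dc):
    return Fraction(sum(1 for r in range(1, 21) if r >= dc), 20)

dcs = range(-6, 29)  # wide enough to reach auto-success and auto-fail at both ends
print(len({p_d20(dc) for dc in dcs}),        # 21 functionally distinct DCs
      len({p_plain(dc) for dc in dcs}),      # 17
      len({p_confirmed(dc) for dc in dcs}))  # 33
```

The three counts (21, 17, 33) match the sets enumerated above.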

Is this a problem? Well, it depends on your tolerance for approximation. In the 1d20 system, every DC from -infinity to 1 has the same difficulty, whereas in the 2*3d6-10 system, we get slight increases in difficulty when we go from -5 to -4, -4 to -3, from -3 to -2, etc., because we are losing ways to succeed: rolling a 3 on the 3d6 definitely succeeds if the DC is -5 or less, but only a 50% chance if the DC is -4, and 0% if it is -3. Rolling a 4 definitely succeeds if the DC is -3, but it only has a 50% chance of succeeding if it is -2, and a 0% chance if it is -1. And so on.

There are two ways to look at the effects of this discrepancy. First, we can ask what happens if you have two characters facing a DC of 1, but one is rolling d20 and the other is rolling 2*3d6-10-(d2-1). The first character is guaranteed to succeed; they don't even need to roll. The second might fail, because they might roll a 5 or below on the 3d6 (corresponding to 0 or below after the transformation). This has a 4.6% chance of occurring. I'm not pretending this is nothing --- it's not, clearly, as it's only slightly less likely than rolling a 1 on the d20. But this is actually the worst the comparison ever gets. At DC 0 they still might fail (whereas the 1d20 character can't, obviously), by rolling a 3 or 4, or rolling a 5 and failing their confirmation roll. But this is less likely. And so on. The same thing happens on the other end of the spectrum, at those very high adjusted DCs.
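The 4.6% worst case can be confirmed by enumerating both mechanics (a sketch of mine, assuming meet-or-beat against the adjusted DC):

```python
from itertools import product
from fractions import Fraction

def p_d20(dc):
    return Fraction(sum(1 for r in range(1, 21) if r >= dc), 20)

def p_336(dc):  # 2*3d6 - 10 - (d2 - 1), the confirmation variant
    outs = [2 * (a + b + c) - 10 - (d - 1)
            for a, b, c in product(range(1, 7), repeat=3) for d in (1, 2)]
    return Fraction(sum(1 for v in outs if v >= dc), len(outs))

# Signed gap in success chance at each adjusted DC, over the full range.
gaps = {dc: p_d20(dc) - p_336(dc) for dc in range(-6, 29)}
worst = max(gaps, key=lambda dc: abs(gaps[dc]))
print(worst, round(float(abs(gaps[worst])), 4))  # -> 1 0.0463
```

The largest absolute gap is 10/216 (about 4.63 percentage points), hit at adjusted DC 1 (and, mirrored, at DC 21).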

The other way you can look at the discrepancy is in terms of the value of a +1. If you have a character whose bonus puts them at an adjusted DC of 1 on a particular check, we can ask, what is the impact if that character gets an additional +1? Well, regardless of the roll mechanic, that +1 reduces their adjusted DC to 0. Now, if they're using a d20 that does nothing; they were already at 100% success. If using 2*3d6-10 though, it gives them a little bit of a boost: about a 1.4 percentage point increase in their success chance (i.e., the probability of both rolling a 5 and succeeding in confirming).
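The value-of-a-+1 numbers can be reproduced the same way (again my sketch, not the original plotting code):

```python
from itertools import product
from fractions import Fraction

def p_d20(dc):
    return Fraction(sum(1 for r in range(1, 21) if r >= dc), 20)

def p_336(dc):  # 2*3d6 - 10 - (d2 - 1), the confirmation variant
    outs = [2 * (a + b + c) - 10 - c2
            for a, b, c in product(range(1, 7), repeat=3) for c2 in (0, 1)]
    return Fraction(sum(1 for v in outs if v >= dc), len(outs))

# Value of a +1 at adjusted DC dc: success chance at dc-1 minus at dc.
gains = {dc: (p_d20(dc - 1) - p_d20(dc), p_336(dc - 1) - p_336(dc))
         for dc in (1, 2)}
for dc, (g20, g336) in gains.items():
    print(dc, float(g20), round(float(g336), 4))
# -> 1 0.0 0.0139
# -> 2 0.05 0.0231
```

At DC 1 the d20 gains nothing while the 2*3d6 variant gains 3/216 (the ~1.4 points above); at DC 2 the d20 gains 5% while the variant gains only 5/216.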

Here, the graphs don't look nearly so similar, but the actual magnitudes of the discrepancies are still pretty small. The worst discrepancy in the value of a +1 is about 2.7 percentage points, which happens if you're currently sitting at a DC of 2: with 1d20 a +1 is always worth 5% within the range of 2 to 21 (because at each of these we add a new way to succeed), but at DC 2, a +1 is only worth about half that. The same at DC 21. Here are the graphs (since you didn't like the fact that I was interpolating between points before, I'm just plotting the points this time):

#### Ovinomancer

##### No flips for you!
I posted the hypothetical for the purposes of removing complexity and isolating one aspect of the comparison at a time. Discovering that you are fine with that hypothetical, where we've rescaled DCs and rolls, helps us narrow down the source of your discomfort, by ruling out the unequal spacing as a cause (since a d10 and a d100 have unequal spacing). It also confirms that you're comfortable using a confirmation die or similar mechanism to fill in the loss in distinctions between consecutive DCs created by unequal spacing (as I do when comparing ordinary 1d20 to 2*3d6-10).
You assume too much. I'm fine with a rescaled system of resolution and targets within that system ONLY. Once you begin comparisons, the gaps become important.

And, the kludge is still a kludge -- it's a grafted-on mechanic to correct a failure in the original system. It's not clever, and adding a kludge is admitting the original system failed, so you need another system on top of it to try to correct your failure. The problem with your kludge is that you're using it to address target numbers that don't exist in the scaled schema, which only comes up when you try to compare to a different schema.

There are only a couple of differences between that hypothetical and the original scenario (well, there are two original scenarios, differing by how much of a role you want luck to play, but let's focus on the 1d20 vs 2*3d6-10 one for now, since 1d20 is vanilla and thus hopefully has good intuition behind it, which should be easier than comparing two unfamiliar schemes to each other).

In the 1d20 vs 2*3d6-10 case, we're rolling against the same DCs, so that removes yet one more point of complexity that we have to deal with in the 3d6 vs rescaled 1d20 scenario.
This, right here, is the error. You are NOT using the same DCs in each system. The scaled system uses DCs stepped by 2, because it's scaled. The d20 isn't. You cannot compare these things without making an error, because possibilities exist for one that do not for the other.

Ok. So, with a regular 1d20, the distinct (adjusted) DC ranges are: [-infinity to 1], 2, 3, ..., 20, and [21 to infinity]. There are 21 functionally distinct DCs here. Anything with an adjusted DC less than 1 is functionally equivalent to an adjusted DC of 1, since there are no additional ways to succeed on, say, an adjusted DC of 0 that don't also succeed on an adjusted DC of 1.

With 2*3d6-10, on its own, we have the following sets of DCs that can be distinguished: [-infinity to -4], {-3,-2}, {-1,0}, {1,2}, ..., {21,22}, {23,24}, {25,26}, [27 to infinity]. There are only 17 of these, and they don't line up with the ones in the 1d20 system. Because we can't roll odd numbers, -2 is no harder than -3, 0 is no harder than -1, etc. The confirmation mechanic breaks this up further (just as the confirmation mechanic in the d10 system in the hypothetical): since the confirmation mechanic only affects even DCs, we can now distinguish the following sets: [-infinity to -5], -4, -3, -2, -1, 0, 1, ..., 25, 26, [27 to infinity]. We've actually created more distinctions than we had with 1d20 -- a total of 33 -- because [-infinity, 1] is subdivided into seven different sets, as is [21 to infinity].
Yes, you can determine the probability of rolling greater than a 2 in the 2*3d6 system, and it's mathematically the same as the probability of rolling a 3 or greater. However, 2 as a target number DOES NOT EXIST in the 2*3d6-10 system. This is the reification sin -- you confuse being able to compute a probability for an event with that event existing in the system. Here, a target number of 2. Comparing a probability of greater than a number is not the same analysis as greater than or equal to, but you mix and match these to fool yourself into thinking 2's actually exist in the 2*3d6 system.

2 exists for d20, though, which is why you can't compare these systems. One has even DCs, the other doesn't (except below 0, which is an artifact of the recentering).

Is this a problem? Well, it depends on your tolerance for approximation. In the 1d20 system, every DC from -infinity to 1 has the same difficulty, whereas in the 2*3d6-10 system, we get slight increases in difficulty when we go from -5 to -4, -4 to -3, from -3 to -2, etc., because we are losing ways to succeed: rolling a 3 on the 3d6 definitely succeeds if the DC is -5 or less, but only a 50% chance if the DC is -4, and 0% if it is -3. Rolling a 4 definitely succeeds if the DC is -3, but it only has a 50% chance of succeeding if it is -2, and a 0% chance if it is -1. And so on.

There are two ways to look at the effects of this discrepancy. First, we can ask what happens if you have two characters facing a DC of 1, but one is rolling d20 and the other is rolling 2*3d6-10-(d2-1). The first character is guaranteed to succeed; they don't even need to roll. The second might fail, because they might roll a 5 or below on the 3d6 (corresponding to 0 or below after the transformation). This has a 4.6% chance of occurring. I'm not pretending this is nothing --- it's not, clearly, as it's only slightly less likely than rolling a 1 on the d20. But this is actually the worst the comparison ever gets. At DC 0 they still might fail (whereas the 1d20 character can't, obviously), by rolling a 3 or 4, or rolling a 5 and failing their confirmation roll. But this is less likely. And so on. The same thing happens on the other end of the spectrum, at those very high adjusted DCs.

The other way you can look at the discrepancy is in terms of the value of a +1. If you have a character whose bonus puts them at an adjusted DC of 1 on a particular check, we can ask, what is the impact if that character gets an additional +1? Well, regardless of the roll mechanic, that +1 reduces their adjusted DC to 0. Now, if they're using a d20 that does nothing; they were already at 100% success. If using 2*3d6-10 though, it gives them a little bit of a boost: about a 1.4 percentage point increase in their success chance (i.e., the probability of both rolling a 5 and succeeding in confirming).
When you recenter the mean of the method, you must recenter the mean of the DCs, or your system is very, very much not the same as what you started with. This is like saying that needing a 3 on 3d6 is the same as needing a 3 on 2*3d6-10. It's not. The same value on 2*3d6-10 as a 3 on 3d6 is -4. This is the other half of the fundamental reason you can't compare the systems as you're doing -- you're comparing DC values that do not align, but, because it graphs, you've fooled yourself into thinking they do.
Here, the graphs don't look nearly so similar, but the actual magnitudes of the discrepancies are still pretty small. The worst discrepancy in the value of a +1 is about 2.7 percentage points, which happens if you're currently sitting at a DC of 2: with 1d20 a +1 is always worth 5% within the range of 2 to 21 (because at each of these we add a new way to succeed), but at DC 2, a +1 is only worth about half that. The same at DC 21. Here are the graphs (since you didn't like the fact that I was interpolating between points before, I'm just plotting the points this time):

Dear god, but you've graphed two different PDFs on top of each other as if they're the same thing. You've graphed the PDF for greater than x on the half steps, and greater than or equal to on the whole steps. For someone who lectured on the basics of probability and made semantic arguments while I tried to keep this jargon-free, this must be an embarrassing error -- graphing two different probability questions on the same graph and pretending they're the same thing. And that doesn't even get to the other system you're graphing and the issues I've outlined above.

And, the worst discrepancy is still where I can roll 2*3d6-10 and can't roll a d20. Surely, this must sink in sometime? I'm losing hope. I'm sure the response will continue to not get the problem I've been rephrasing for many, many posts now -- you have different scales of both die outcomes AND DCs, but you're treating the DC scale as if it's the same. It is not.

#### Esker

##### Hero
This, right here, is the error. You are NOT using the same DCs in each system. The scaled system uses DCs stepped by 2, because it's scaled. The d20 isn't. You cannot compare these things without making an error, because possibilities exist for one that do not for the other.

No. The set of DCs is the same. I am looking at trying to meet DCs in the range -5 to 27, and comparing apples to apples.

Yes, you can determine the probability of rolling greater than a 2 in the 2*3d6 system, and it's mathematically the same as the probability of rolling a 3 or greater. However, 2 as a target number DOES NOT EXIST in the 2*3d6-10 system.

If I'm a rogue with a +12 to stealth and I need to beat a passive perception of 11, then my target on a d20 is 0. That's the minimum value I could roll and succeed. I can't actually roll 0, but that's still my target number. And I can still find the probability of getting 0 or better. It just happens to be the same as the probability of getting 1 or better.

Comparing a probability of greater than a number is not the same analysis as greater than or equal to

It's the same exactly when the probability of "equal to" is zero. I'm starting to think that you don't think it's valid to talk about events with probability zero; that the event doesn't exist or something? Is that what's happening?

When you recenter the mean of the method, you must recenter the mean of the DCs, or your system is very, very much not the same as what you started with. This is like saying that needing a 3 on 3d6 is the same as needing a 3 on 2*3d6-10. It's not. The same value on 2*3d6-10 as a 3 on 3d6 is -4.

No, I've subtracted 10 from the rolls so that I can compare same DC to same DC. You roll 3d6, multiply it by 2, and subtract 10. If the result after all of that is above your DC, you succeed. So when I consider a DC of -4, that's a DC of -4 for either method: a rogue with a +15 in thieves tools trying to pick a DC 10 lock. I need to roll a -4 or better to succeed. If I'm rolling 1d20 this is a guarantee, and it's the same difficulty as a target of 1. If I'm rolling 2*3d6-10 and have to confirm on ties, it's almost a guarantee, but not quite, since I could roll -4 exactly and then fail to confirm. But this is extremely unlikely.

Dear god, but you've graphed two different PDFs on top of each other as if they're the same thing. You've graphed the PDF for greater than x on the half steps, and greater than or equal to on the whole steps.

Nope, not what this shows. Actually none of the points are either of those things. If you read the paragraph above the plot, or even looked at the Y axis label, you'd see that all the points are the change in the probability of success if I'm trying to hit a DC X and I get an extra +1 to my roll (effectively making the DC X-1). This works out to be the probability of hitting X-1 exactly. You can relabel the Y axis P(roll = X-1). (That's why the d20 probabilities are 0.05 from 2 to 21 and not 1 to 20, since if I start at DC 2, I gain 5% success if I get +1, but if I start at DC 1, I gain nothing if I get an extra +1)

I'm graphing the same thing for both evens and odds. The reason there are points at odds on the 2*3d6-10 curve is that I'm really rolling 2*3d6-(d2-1), and so I really can hit both odd and even numbers.

And, the worst discrepancy is still where I can roll 2*3d6-10 and can't roll a d20.

I mean, it depends on how you measure the discrepancy. I'm actually measuring the discrepancy at those values (as a simple difference in probability) and including it in the plot. Not sure why you think I haven't taken it into account. If you want to measure discrepancy as a ratio, well then yeah, it's big, since one of the terms is zero. But you've never in this whole massive thread suggested that your problem was with the way I was comparing probabilities.

#### Ovinomancer

##### No flips for you!
No. The set of DCs is the same. I am looking at trying to meet DCs in the range -5 to 27, and comparing apples to apples.

If I'm a rogue with a +12 to stealth and I need to beat a passive perception of 11, then my target on a d20 is 0. That's the minimum value I could roll and succeed. I can't actually roll 0, but that's still my target number. And I can still find the probability of getting 0 or better. It just happens to be the same as the probability of getting 1 or better.

It's the same exactly when the probability of "equal to" is zero. I'm starting to think that you don't think it's valid to talk about events with probability zero; that the event doesn't exist or something? Is that what's happening?

No, I've subtracted 10 from the rolls so that I can compare same DC to same DC. You roll 3d6, multiply it by 2, and subtract 10. If the result after all of that is above your DC, you succeed. So when I consider a DC of -4, that's a DC of -4 for either method: a rogue with a +15 in thieves tools trying to pick a DC 10 lock. I need to roll a -4 or better to succeed. If I'm rolling 1d20 this is a guarantee, and it's the same difficulty as a target of 1. If I'm rolling 2*3d6-10 and have to confirm on ties, it's almost a guarantee, but not quite, since I could roll -4 exactly and then fail to confirm. But this is extremely unlikely.

Nope, not what this shows. Actually none of the points are either of those things. If you read the paragraph above the plot, or even looked at the Y axis label, you'd see that all the points are the change in the probability of success if I'm trying to hit a DC X and I get an extra +1 to my roll (effectively making the DC X-1). This works out to be the probability of hitting X-1 exactly. You can relabel the Y axis P(roll = X-1). (That's why the d20 probabilities are 0.05 from 2 to 21 and not 1 to 20, since if I start at DC 2, I gain 5% success if I get +1, but if I start at DC 1, I gain nothing if I get an extra +1)

I'm graphing the same thing for both evens and odds. The reason there are points at odds on the 2*3d6-10 curve is that I'm really rolling 2*3d6-(d2-1), and so I really can hit both odd and even numbers.

I mean, it depends on how you measure the discrepancy. I'm actually measuring the discrepancy at those values (as a simple difference in probability) and including it in the plot. Not sure why you think I haven't taken it into account. If you want to measure discrepancy as a ratio, well then yeah, it's big, since one of the terms is zero. But you've never in this whole massive thread suggested that your problem was with the way I was comparing probabilities.
In your hypo, you make a clear point of scaling both the rolls and the DCs. You do this because it would be immediately obvious you were talking about different systems if you did not. Yet, your entire argument here is that you can do this and it's the same.

Again, the argument I have issue with is that d20 does not differ significantly from 3d6, proved by scaling 3d6 by 2 and recentering, but still using the original DC scheme. This is improper. I've tried that argument cleanly, I've tried it by showing the mismatch in range for the cumulatives, and I've tried by pointing out the gaps in the probabilities, all to try show this. So far I've failed, so here's a final go:

Explain how the same set of DCs generate the same results if I use a d20 in one and 2*d20-10 (with or without your kludge as you please). If you cannot, please revisit your argument that the same DCs used for d20 vs 3d6 comparisons are still valid for 2*3d6-10 (kludged as you wish) vs d20 comparisons.

#### Esker

##### Hero
Have you been under the impression this entire time that I was claiming that 1d20 was similar to 3d6? I've never claimed that. The claim is that 1d20 is similar to 2*3d6-10. Of course 3d6 and 2*3d6-10 yield different results vs the same DCs.

The scaling and shifting were never part of the proof, they are part of the system.

#### Ovinomancer

##### No flips for you!
Have you been under the impression this entire time that I was claiming that 1d20 was similar to 3d6? I've never claimed that. The claim is that 1d20 is similar to 2*3d6-10. Of course 3d6 and 2*3d6-10 yield different results vs the same DCs.
But, they are not, because the DCs are at different scales in each. DCs exist only in steps of 2 in the 2*3d6 scheme, because you cannot roll at half-step intervals. That you can imagine, and even compute, a different probability question for the half steps doesn't mean you can suddenly roll those numbers. And, as you note in your hypo, rolling non-existent numbers is a bit of a problem. You can't use a resolution mechanic that's at a lower resolution (heh) than your targets.

In other words, while you can imagine and do math to get a probability for a half-step target number, the functional result of this is that the half step doesn't exist -- the probability of rolling above the half step is the same event as rolling the next highest full step or greater. It's not a separate event; it is the same event with a slightly rephrased probability question.

Example: if I ask what the odds are of rolling greater than a 12 on 2*3d6-10, that is the same event as asking what the odds are of rolling 13 or greater. It's not a separate event -- it's the same exact thing. Yet, you've asked these two as if they are different and plotted them in the same PDF as if they are the same. You've completely missed this, even when your PDF no longer sums to 1.

Half steps don't exist in the scaled 3d6 scheme, just like fractional steps don't exist in the unscaled 3d6 scheme.

#### Esker

##### Hero
You can't use a resolution mechanic that's at a lower resolution (heh) than your targets.

I mean, sure you can, you just lose some distinctions between DCs. But that's what the confirmation mechanic corrects for. Here's another simplified hypothetical to zoom in on a particular aspect of the situation.

Suppose I don't have any d20s on hand, and so for the night I decide I'm going to roll a d10 and double the result. First: do you agree that if the target number I need is odd, then this produces the same results as rolling a d20? If I would have needed a 19 or 20, now I need a 10 (which becomes 20). If I would have needed a 17,18,19 or 20, now I need a 9 or 10 (which become 18 or 20).

So far so good?

Now, on even DCs my success chances are too high. If I would have needed a 20, which should have a 5% chance, now I will get a 20 with a 10% chance. So, to correct for that, I roll a d6 whenever I tie the DC, and subtract 1 on a 3 or lower. Now what are my chances of getting that 20 I need? Well, the only way to do it is to roll a 10 on the d10, and a 4 or higher on the d6. That's a 10% * 50% = 5% chance. What are the chances of hitting a target of 18 or better? I can either: roll a 10 on the d10 (ignoring the d6), which happens 10% of the time, or I can roll a 9 on the d10 and a 4 or higher on the d6, which happens 10% * 50% = 5% of the time. In total, 15%. Just as on a d20.
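This equivalence can be checked exhaustively; a sketch I'm adding (not part of the post), comparing the doubled d10 with a d6 demotion against a plain d20 for every target:

```python
from itertools import product
from fractions import Fraction

def p_d20(target):
    return Fraction(sum(1 for r in range(1, 21) if r >= target), 20)

def p_d10_doubled(target):
    # Roll a d10 and double it; on a d6 of 3 or lower, knock 1 off the result.
    # (Applying the d6 to every roll rather than only on ties changes nothing:
    # the subtraction only matters when the doubled roll exactly ties the target.)
    outs = [2 * d10 - (1 if d6 <= 3 else 0)
            for d10, d6 in product(range(1, 11), range(1, 7))]
    return Fraction(sum(1 for v in outs if v >= target), len(outs))

match = all(p_d20(t) == p_d10_doubled(t) for t in range(1, 22))
print(match)  # -> True
```

The demoted doubled d10 is exactly uniform on 1..20, so every target number behaves identically to a d20.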

So, if I lost all of my d20s and used this system instead, would it affect the game at all?

Example: if I ask what the odds are of rolling greater than a 12 on 2*3d6-10, that is the same event as asking what the odds are of rolling 13 or greater. It's not a separate event -- it's the same exact thing.

I agree. And?

Yet, you've asked these two as if they are different and plotted them in the same PDF as if they are the same. You've completely missed this, even when your PDF no longer sums to 1.

I explained in my last post what I was plotting. It actually does give you the same values as the PMF, just shifted by 1. Did you add up the values? They actually do sum to 1.
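The "sums to 1" claim follows from telescoping: the plotted quantity at each DC is P(success at DC-1) - P(success at DC), so summing across a wide enough range collapses to P(auto-success) - P(impossible) = 1. A quick sketch of mine checking both systems:

```python
from itertools import product
from fractions import Fraction

def p_d20(dc):
    return Fraction(sum(1 for r in range(1, 21) if r >= dc), 20)

def p_336(dc):  # 2*3d6 - 10 - (d2 - 1), the confirmation variant
    outs = [2 * (a + b + c) - 10 - (d - 1)
            for a, b, c in product(range(1, 7), repeat=3) for d in (1, 2)]
    return Fraction(sum(1 for v in outs if v >= dc), len(outs))

# Sum the per-DC increments over a range that covers both saturation ends.
total_20  = sum(p_d20(dc - 1) - p_d20(dc) for dc in range(-10, 40))
total_336 = sum(p_336(dc - 1) - p_336(dc) for dc in range(-10, 40))
print(total_20, total_336)  # -> 1 1
```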

#### Esker

##### Hero
I should add: the last red and blue graph only corresponds to the shifted-by-one PMF if you roll the d2 and subtract on a 1 after every roll, not just when you tie the target. But even if you don't do that it's still valid for what it was constructed to be: the increase in success chance if you gain an extra +1 on your roll.