
D&D 5E Advantage vs. Disadvantage: What's the Math?

doctorbadwolf

Heretic of The Seventh Circle
I still wish there was something smaller than Ad/Dis in 5e. Even if it's just the ability to choose to reroll the roll but you have to take the second result. One of the best things, IMO, about Star Wars Saga Edition is that most +x bonuses are replaced with "reroll take second" and "reroll take either". I wish they had made even greater use of such bonuses, even for skill training. Like adding d6s for being trained or focused in a skill, rather than the flat bonus, for instance.
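
As a rough illustration of what "reroll, take either" is worth compared to a flat bonus (this is just my own sketch, not anything from Saga Edition's text; it assumes the player always keeps the better of the two d20 results, which makes the mechanic work like advantage on a simple meet-or-beat check, and the class name and the +1-per-5-points conversion are mine):

// Compares "reroll, take either" (keep the better of two d20s) with a flat bonus.
// Each +1 is worth 5 percentage points on a d20 when natural 1s and 20s aren't capping the roll.
public class RerollVsFlatBonus {
    public static void main(String[] args) {
        for (int target = 2; target <= 20; target++) {
            double single = (21 - target) / 20.0;                   // one d20 meets the target
            double rerollTakeEither = 1 - Math.pow(1 - single, 2);  // the better of two d20s meets it
            double equivalentFlatBonus = (rerollTakeEither - single) / 0.05;
            System.out.printf("Need %2d+: single %3.0f%%, reroll %5.2f%%, roughly +%.1f equivalent%n",
                    target, single * 100, rerollTakeEither * 100, equivalentFlatBonus);
        }
    }
}

The equivalent bonus peaks around the middle target numbers and shrinks toward both extremes, which is exactly why it feels smaller than a flat +x in some spots and bigger in others.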
 


clearstream

(He, Him)
And most actual combats will involve a lot less than 20 rolls per player, so you won't notice any difference. You're still not seeing the forest, but it seems like you're noticing the outline of the underbrush :)
Heh! I can appreciate a shrubbery with the best of them :) Try it this way. Let's say that through DM generosity or session duration each player makes 10 rolls that can have advantage per session (potentially representing one combat), and they play one session per week. For 11+, each week they will experience about 3 failures instead of about 5. For 2+, each year they will experience about 1 failure instead of 26. Or to put it another way, instead of failing once a fortnight they fail once a year!

What I'm drawing attention to, of course, is that terms like absolute difference can be taken more than one way.
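
To make that concrete, here's a minimal sketch that reproduces those failure counts, assuming the same 10 advantage-eligible rolls per weekly session as above (the class name and the 52-week year are mine):

public class FailuresPerWeekAndYear {
    public static void main(String[] args) {
        int rollsPerWeek = 10;  // advantage-eligible rolls per weekly session
        int weeksPerYear = 52;
        for (int target : new int[] {11, 2}) {
            double failNormal = (target - 1) / 20.0;            // a single d20 misses the target
            double failWithAdvantage = failNormal * failNormal; // with advantage, both d20s must miss
            System.out.printf("Need %2d+: %.1f vs %.2f expected failures per week, %.0f vs %.1f per year%n",
                    target,
                    failNormal * rollsPerWeek, failWithAdvantage * rollsPerWeek,
                    failNormal * rollsPerWeek * weeksPerYear, failWithAdvantage * rollsPerWeek * weeksPerYear);
        }
    }
}

For a target of 11+ that prints 5.0 vs 2.5 failures per week, and for 2+ it prints 26 vs 1.3 failures per year, which is where the "once a fortnight versus once a year" comparison comes from.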
 


Dausuul

Legend
I really wouldn't count the extremes. Unless you're a level 1 trying to hit a tarrasque, you won't need a 2 or 19 to succeed. And if you are a level 1 trying to hit a tarrasque, advantage is the least of your problems.

Instead, I would only count rolls of 5 to 15.
A 19 to succeed is unheard-of in the MM, but a 2 to succeed is not. There are monsters in the MM that even low-level PCs can hit on a 2: zombies, for example.
 

nexalis

Numinous Hierophant
Here is a mathematically precise and succinct way to compute your chance to hit with advantage and disadvantage. You can turn this into a set of simple functions in a programming language or Excel without the need to perform simulations or permutations of any sort.

// Inputs: attackModifier, targetArmorClass, and criticalHitOn (e.g. 20, or 19 for Improved Critical)

// First, compute the basic probability of a hit
double probabilityOfHit = (21 + attackModifier - targetArmorClass) / 20.0;
probabilityOfHit = Math.min(0.95, probabilityOfHit); // a natural 1 always misses, so cap the chance at 95%
probabilityOfHit = Math.max(0.05 * (21 - criticalHitOn), probabilityOfHit); // rolls at or above your critical threshold always hit, so that sets the floor

// From the probability of a hit, derive the probability of a miss
double probabilityOfMiss = 1 - probabilityOfHit;

// Compute the probability of a hit with advantage (hit on the first roll, or miss it and hit on the second)
double probabilityOfHitWithAdvantage = probabilityOfHit + (probabilityOfMiss * probabilityOfHit);

// Compute the probability of a hit with disadvantage (one minus the chance that either roll misses)
double probabilityOfHitWithDisadvantage = 1 - (probabilityOfMiss + (probabilityOfHit * probabilityOfMiss));
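
As a quick sanity check, with some example inputs I made up (nothing prescribed about them): a +5 attack bonus against AC 15, critting only on a natural 20, works out like this:

// Example inputs, chosen arbitrarily for illustration
int attackModifier = 5;
int targetArmorClass = 15;
int criticalHitOn = 20;

// Running the calculation above with these inputs gives:
// probabilityOfHit                 = 0.55    (a 10 or better on the die)
// probabilityOfHitWithAdvantage    = 0.7975
// probabilityOfHitWithDisadvantage = 0.3025  (which is just 0.55 squared)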

 


nexalis

Numinous Hierophant
Just to have fun with the concept we can also refute that. Rolling a d20 creates 20 possible universes, of which at 11+ half (50%) contain failures. Rolling two d20s creates 400 possible universes, of which at 11+ one hundred (25%) contain failures. At 2+, rolling a d20 leaves only one of twenty possible universes containing a failure (5%), whereas rolling two d20s leaves only one of 400 possible universes containing a failure (0.25%).

25% goes twice into 50%, whereas 0.25% goes twenty times into 5%. So, measured as the share of possible universes that contain failures, the magnitude of the improvement is greatest at high, not medium, chances of success.

Looking at universes containing failures, the jump from 1/20 to 1/400 is bigger than the jump from 10/20 to 100/400.

This is precisely the way to look at it. If you focus only on the difference as it applies to a single roll, a 0.25% chance of failure doesn't look much different from a 5% chance of failure. If you focus on all of the rolls you will make over an entire year, however, it is the difference between failing once per week as opposed to failing once per year. The graph is deceptive in that it makes advantage/disadvantage appear inconsequential at the extremes of the DC, when this is not the case at all, as you rightly point out. The relative difference is just as important as the absolute difference when you have a reliable source of advantage. In other words, you are seeing the forest just fine!
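
To see both views at once, here's a small sketch (mine, not anything from the earlier posts; the class name is made up) that prints the absolute and the relative change in failure chance for every target number:

public class AbsoluteVsRelative {
    public static void main(String[] args) {
        for (int target = 2; target <= 20; target++) {
            double failNormal = (target - 1) / 20.0;            // one d20 misses the target
            double failWithAdvantage = failNormal * failNormal; // with advantage, both d20s miss
            double absoluteDrop = failNormal - failWithAdvantage; // percentage points of failure removed
            double relativeDrop = failNormal / failWithAdvantage; // how many times less often you fail
            System.out.printf("Need %2d+: fail %5.2f%% -> %6.3f%% (absolute -%5.2f points, relative /%4.1f)%n",
                    target, failNormal * 100, failWithAdvantage * 100, absoluteDrop * 100, relativeDrop);
        }
    }
}

The absolute drop peaks in the middle (needing 11+, advantage removes 25 points of failure), while the relative drop is largest at the easy end (needing 2+, you fail twenty times less often), which is the whole disagreement in a nutshell.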
 

Heh! I can appreciate a shrubbery with the best of them :) Try it this way. Let's say that through DM generosity or session duration each player makes 10 rolls that can have advantage per session (potentially representing one combat), and they play one session per week. For 11+, each week they will experience about 3 failures instead of about 5. For 2+, each year they will experience about 1 failure instead of 26. Or to put it another way, instead of failing once a fortnight they fail once a year!

What I'm drawing attention to, of course, is that terms like absolute difference can be taken more than one way.
Your numbers are certainly correct, but I prefer to take the terms in a way that is relevant to actual D&D 5e play.
 

TwoSix

"Diegetics", by L. Ron Gygax
This is precisely the way to look at it. If you focus only on the difference as it applies to a single roll, a 0.25% chance of failure doesn't look much different from a 5% chance of failure. If you focus on all of the rolls you will make over an entire year, however, it is the difference between failing once per week as opposed to failing once per year. The graph is deceptive in that it makes advantage/disadvantage appear inconsequential at the extremes of the DC, when this is not the case at all, as you rightly point out. The relative difference is just as important as the absolute difference when you have a reliable source of advantage. In other words, you are seeing the forest just fine!
You're absolutely right. In a game with 2-3 combats per session, rolling attacks normally will probably give you a 1 about once a session. Rolling attacks with advantage gives you a 1 about once a campaign.
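
For anyone who wants to check that: assuming roughly 20 attack rolls per session and a 20-session campaign (my own assumed numbers, nothing official), the expected counts of natural 1s come out like this:

public class NaturalOneFrequency {
    public static void main(String[] args) {
        int rollsPerSession = 20;      // assumed: 2-3 combats' worth of attack rolls
        int sessionsPerCampaign = 20;  // assumed campaign length
        double oneNormally = 1.0 / 20;        // a single d20 comes up 1
        double oneWithAdvantage = 1.0 / 400;  // with advantage, both d20s come up 1
        System.out.printf("Per session:  %.2f natural 1s normally, %.2f with advantage%n",
                rollsPerSession * oneNormally, rollsPerSession * oneWithAdvantage);
        System.out.printf("Per campaign: %.0f natural 1s normally, %.0f with advantage%n",
                rollsPerSession * sessionsPerCampaign * oneNormally,
                rollsPerSession * sessionsPerCampaign * oneWithAdvantage);
    }
}

That's about 1 natural 1 per session normally, versus about 1 per 20-session campaign with advantage.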
 

vostygg

Explorer
You're not seeing the forest for the trees. In both cases you'll be failing almost all the time, so in actual play you won't notice much difference from having advantage or disadvantage if you need to roll a 20 to succeed. The absolute difference in success/failure rates is what matters, not the relative difference.

Funny, but from where I'm sitting, it looks like you are the one who is failing to see the forest for the trees. The "forest" is the impact of consistent advantage applied across multiple sessions or even an entire campaign, as @vonklaude has pointed out. The "trees" are the impact of advantage applied to a single roll or a small set of rolls, which is what you seem to be focused on. Looking at both means looking at both the relative and the absolute differences between advantage and lack of advantage. That is what gives you the complete picture.

Thank you, @vonklaude, for pointing this out.
 
