
Why don't 3e and 4e use percentile dice for skills?

Cadfan

First Post
Essentially what you're looking for is the number of times something doesn't happen (with probability 1-p), until it actually happens (with probability p).
That's not what he was looking for. He could have looked for that, but he didn't. He wasn't looking at the average number of times something doesn't happen before it does, he was looking at the point where the likelihood of something happening at least once was greater than .50.

To calculate that, all you have to do is calculate the chance of something not happening at all given n trials, and increase n until the result passes .5.
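The brute-force procedure described above can be sketched directly: compute the chance of at least one success over n trials via 1 - (1-p)^n, and increment n until it crosses 0.5. A minimal sketch (function name is my own):

```python
# Find the smallest number of trials n for which an event with per-trial
# probability p has at least a 50% chance of occurring at least once.
# P(at least once in n trials) = 1 - (1 - p)**n
def trials_for_half_chance(p):
    n = 0
    prob = 0.0
    while prob < 0.5:
        n += 1
        prob = 1 - (1 - p) ** n
    return n

# The two per-roll probabilities discussed in the thread:
print(trials_for_half_chance(0.15))  # -> 5
print(trials_for_half_chance(0.05))  # -> 14
```

Since n must be a whole number of rolls, this returns the ceiling of the fractional crossover points computed later in the thread (4.27 and 13.51).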
Technically the figures in that sequence should be multiplied by 0.15, in order to satisfy the probability theory axiom that the probabilities of all exhaustive possible outcomes add up to 1.
Except that this wasn't a list of exhaustive possible outcomes, or even the start of one. It's a list of separate, distinct probabilities for sets of trials of varying length.

Although I will note that I was wrong in one assumption- I figured you asked because you didn't understand the math, and you clearly did. The issue is more about what he was looking to calculate than how.
 


Elder-Basilisk

First Post
Cadfan is correct about what I was calculating and how I calculated it.

I'm not sure whether the geometric distribution, which gives the average number of rolls before the bonus matters once, would be more appropriate for what I'm trying to communicate. It would require more advanced statistics :)
 

ggroy

First Post
That's not what he was looking for. He could have looked for that, but he didn't. He wasn't looking at the average number of times something doesn't happen before it does, he was looking at the point where the likelihood of something happening at least once was greater than .50.

To calculate that, all you have to do is calculate the chance of something not happening at all given n trials, and increase n until the result passes .5.

Now that you mention it, this is a binomial distribution: n trials, with i instances of something happening, each with probability p.

Binomial distribution - Wikipedia, the free encyclopedia

To be more precise, the Poisson distribution approximation to the Binomial distribution is what I'm thinking of.

Poisson distribution - Wikipedia, the free encyclopedia

P(i) = exp(-x) x^i/i! , i = 0, 1, 2, 3, ...

where x = np, and i are the same as defined above.

The probability of something happening at least once (i.e. i >= 1) over n trials is,

P(i>=1) = 1 - P(i=0) = 1 - exp(-x) = 1 - exp(-np)

So for P(i>=1) = 0.50, we get exp(-np) = 0.5 which is the same as n = ln(2)/p

(ln(2) is the natural logarithm of 2, which is approximately 0.6931).

In the two examples elaborated by Cadfan, n = ln(2)/p = 0.6931/p gives:

(1) p=0.15, gives n = 4.62
(2) p=0.05, gives n = 13.86

which is consistent with Elder-Basilisk's estimates.
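The Poisson-approximation derivation above reduces to a one-liner, n = ln(2)/p. A quick check of the two worked examples (function name is my own):

```python
import math

# Poisson approximation: P(at least one success in n trials) = 1 - exp(-n*p),
# so the n at which this crosses 0.5 is n = ln(2)/p.
def poisson_half_life(p):
    return math.log(2) / p

print(round(poisson_half_life(0.15), 2))  # -> 4.62
print(round(poisson_half_life(0.05), 2))  # -> 13.86
```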

Except that this wasn't a list of exhaustive possible outcomes, or even the start of one. It's a list of separate, distinct probabilities for sets of trials of varying length.

Summing the Poisson probabilities P(i) over i from 0 to infinity gives 1. (Hint: use the infinite series 1 + x + x^2 /2! + x^3 /3! + ... = exp(x).)

Although I will note that I was wrong in one assumption- I figured you asked because you didn't understand the math, and you clearly did. The issue is more about what he was looking to calculate than how.

Then you probably know why someone with such a background may be a stickler for details and consistency checks.
 

ggroy

First Post
(Doing the calculation again with the binomial distribution, without the Poisson approximation).

The probability of i instances of something happening with probability p, out of n trials is:

P(i) = [n!/i!(n-i)!] p^i (1-p)^(n-i)

The case of something happening at least once (i>=1) over n trials is

P(i>=1) = 1 - P(i=0) = 1 - (1-p)^n

So for P(i>=1) = 0.50, we get (1-p)^n = 0.5 which is the same as n = ln(0.5)/ln(1-p)

In the two examples elaborated by Cadfan, n = ln(0.5)/ln(1-p) gives:

(1) p=0.15, gives n = 4.27
(2) p=0.05, gives n = 13.51

which is consistent with Elder-Basilisk's estimates.

(It can also be shown that the binomial probability P(i) summed i over 0 to n, will give you 1).
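The exact binomial version, n = ln(0.5)/ln(1-p), together with a numerical check that the binomial pmf sums to 1, can be sketched as follows (function name is my own):

```python
import math

# Exact calculation: (1-p)^n = 0.5  =>  n = ln(0.5)/ln(1-p)
def exact_half_n(p):
    return math.log(0.5) / math.log(1 - p)

print(round(exact_half_n(0.15), 2))  # -> 4.27
print(round(exact_half_n(0.05), 2))  # -> 13.51

# Sanity check: the binomial pmf P(i) = C(n,i) p^i (1-p)^(n-i)
# summed over i = 0..n equals 1.
n, p = 10, 0.15
total = sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n + 1))
print(abs(total - 1) < 1e-12)  # -> True
```

Note the exact answers (4.27, 13.51) are slightly smaller than the Poisson approximations (4.62, 13.86), since the approximation is loosest for larger p.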
 

Cadfan

First Post
1 - (1-p)^n
Bingo. The important part.

Personally, I find the psychological aspect of it all to be the most interesting. How we decide what counts as a "meaningful" change in statistics. Personally, I think a +3 is pretty significant because it's enough to get me to change how I make decisions. Even if it doesn't come up that often in terms of the actual dice, it's likely to come up frequently in my own mind when I run the numbers. If I'm deciding whether to have my character leap over a pit or climb a wall, I'm going to notice the difference between an estimated 70% chance of success and an estimated 85% chance.
 

ggroy

First Post
Note that this reasoning only works for a single check versus a static DC where the degree of failure is irrelevant.

If you are making an opposed check (do these still exist?), the math is different.

Actually the math is very similar even in the case of opposed checks where the degree of failure is irrelevant. It's not much harder.
 

ggroy

First Post
For the case of an opposed roll: the attacker rolls a d20 and adds a bonus of +3, while the defender rolls an opposed d20 (with no bonus), which becomes the "DC" against that attacker for the round. In this setup, the probability that the attacker's +3 bonus actually matters is p=0.135 (excluding critical failures on a d20 roll of 1). If one includes critical failures on an attacker's d20 roll of 1, the probability the attacker's +3 bonus matters reduces to p=0.1275.

The rest of the setup and argument is identical, once one knows what the p's are.
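The opposed-roll probabilities quoted above can be verified by enumerating all 400 pairs of d20 results. This sketch assumes the attacker wins ties, which is what reproduces the 0.135 figure; the thread does not state the tie rule explicitly:

```python
from itertools import product

# Count the (attacker d20, defender d20) pairs where a +3 bonus turns a
# loss into a win, i.e. attacker < defender <= attacker + 3.
# Assumption: the attacker wins ties (success means a + 3 >= d).
BONUS = 3
matters = sum(1 for a, d in product(range(1, 21), repeat=2)
              if a < d <= a + BONUS)
print(matters, matters / 400)  # -> 54 0.135

# Treating the attacker's natural 1 as an automatic failure, those pairs
# are removed (the bonus can never matter on a nat 1).
matters_no_crit = sum(1 for a, d in product(range(1, 21), repeat=2)
                      if a > 1 and a < d <= a + BONUS)
print(matters_no_crit, matters_no_crit / 400)  # -> 51 0.1275
```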
 

ggroy

First Post
Personally, I find the psychological aspect of it all to be the most interesting. How we decide what counts as a "meaningful" change in statistics.

Meaningful change can be very subjective.

More generally, the human mind doesn't seem to process probabilities easily. One just has to see how much $$$ is spent on things like lottery tickets, casinos, horse races, sports betting, etc.
 

