[Math] Developing methods for determining the balance and reliability of dice.

Umbran said:


Well, the distribution for a fair single die should be completely flat. Standard deviations are intended to measure the width of a peak. The results of a d20 should have no peak whatsoever. So standard deviation will probably fail to be a good measure.
SD isn't a measure of the width of any peak, and strictly speaking it isn't even a measurement: when you compute it from data, it's an estimator of a parameter, not the parameter itself. SD is the square root of the variance, which is the expectation of the squared deviation from the mean (also called the 2nd central moment):
sqrt(E[(X-μ)<sup>2</sup>]). For certain unimodally peaked distributions like the normal, this does correspond to the width of the peak, but that's neither the definition nor the underlying sense of what SD is. As the name roundaboutly implies, it represents how much the random variable or process tends to deviate from the mean.
Uniform distributions have a standard deviation too, and quite a large one.
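To make that concrete, here's a minimal check in Python of the definition above, computing sqrt(E[(X-μ)<sup>2</sup>]) directly for a fair d20. The distribution is perfectly flat, yet the SD comes out well-defined and large:

```python
# Standard deviation of a fair d20 straight from the definition:
# the distribution is flat, but outcomes still deviate widely from the mean.
faces = range(1, 21)
n = len(faces)
mean = sum(faces) / n                            # 10.5
var = sum((x - mean) ** 2 for x in faces) / n    # E[(X - mu)^2] = 33.25
sd = var ** 0.5                                  # about 5.77
print(mean, var, round(sd, 4))
```

This agrees with the closed form (N<sup>2</sup>-1)/12 for 1dN: (400-1)/12 = 33.25 for a d20.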
 


circles?

I'm really not sure that any of the other stuff is needed: GnomeWorks' suggestion of a chi-square test seems like the right test to me. What we want to do is take the expected distribution for a die and compare the actual distribution against it for signs of significant deviation. I can't think of anything that fits more appropriately.
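For the record, the chi-square goodness-of-fit idea can be sketched in a few lines of Python. The counts below are made-up example data, not real rolls, and the p-value is estimated by Monte Carlo against a simulated fair die rather than the chi-square distribution itself, so no statistics library is needed:

```python
import random

# Chi-square goodness-of-fit statistic for a d6 against the flat (fair)
# distribution. The observed counts are hypothetical example data.
observed = [9, 11, 8, 12, 10, 10]        # made-up counts from 60 rolls
n, k = sum(observed), len(observed)
expected = n / k                          # fair die: equal counts expected
stat = sum((o - expected) ** 2 / expected for o in observed)

# Monte Carlo p-value: how often does a genuinely fair die produce a
# statistic at least this extreme?
random.seed(1)

def chisq_of_fair_sample():
    counts = [0] * k
    for _ in range(n):
        counts[random.randrange(k)] += 1
    return sum((c - expected) ** 2 / expected for c in counts)

sims = 2000
p = sum(chisq_of_fair_sample() >= stat for _ in range(sims)) / sims
print(stat, round(p, 2))
```

A large p-value (as here) means the observed counts are entirely consistent with a fair die; a tiny one would be evidence of bias.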
 

tarchon said:

I think it's
s^2 = 1/3 (N+1)(N-1) for 1dN
If you know the sum of squares and the sum for 1 to N, it works out pretty easily from the definition of the variance.

Actually, that should be a 1/12. But to do a statistical test, I don't just need the variance, I need the distribution of the variance. That is, if I roll a fair die 50 times and compute the variance, and then repeat that whole process 100 times, what is the distribution of those 100 values going to be? You can then use that to determine how unlikely the observed variance of a given die is under the assumption that it's fair. If it's highly unlikely, you say the die is not fair.
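The repeated-experiment idea described above is easy to simulate. This is a sketch with arbitrary choices (1d6, 50 rolls per experiment, 1,000 repetitions rather than 100, for a smoother picture); it shows the sample variance scattering around the theoretical value (6<sup>2</sup>-1)/12 ≈ 2.9167:

```python
import random
import statistics

# Roll a fair d6 50 times, compute the sample variance, and repeat many
# times to see how the variance statistic itself is distributed under
# the fair-die null hypothesis.
random.seed(0)

def sample_variance(rolls=50, sides=6):
    xs = [random.randint(1, sides) for _ in range(rolls)]
    return statistics.variance(xs)   # unbiased (n-1 denominator) estimator

variances = [sample_variance() for _ in range(1000)]

# Theoretical variance of 1d6 is (6*6 - 1) / 12, about 2.9167.
print(round(statistics.mean(variances), 2))
print(round(min(variances), 2), round(max(variances), 2))
```

The spread between the minimum and maximum is exactly what a test based on the variance would have to account for: a single suspicious-looking variance can easily come from a fair die.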
 

Re: circles?

anonystu said:
I'm really not sure that any of the other stuff is needed: I think GnomeWorks suggestions of a chi-square test is the right test to me: it seems like exactly what we want to do is take an expected distribution for a die, and then compare the actual distribution against it for signs of significance. I can't think of anything that fits that more appropriately.

Well, the problem with the chi-squared test is that it has low power. That is, it's not very good at telling you that an unfair die is unfair, which means you have to roll the die a tremendous number of times in order to be confident in your result. I was just thinking that a combined test of the mean and the standard deviation might have better power. Then it would be an easier test to conduct, while still being a reasonable test of fairness.
 

Using the Bernoulli trial (true/false) sample size formula for estimating p,
n = z<sup>2</sup><sub>&alpha;/2</sub>p(1-p)/e<sup>2</sup>,
where e is the error (and assuming the true p is close to the ideal value of p)...
For 1d6, treating each outcome separately as a Bernoulli trial, I get a required sample size of 213 to estimate p within 5% with 95% confidence.
For 1d20, I get 73 trials for 5% error, 95% confidence.
Because the outcomes are mutually exclusive, there is probably a more efficient way to do it than treating each one as a separate Bernoulli trial, but this would do the job.
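The sample sizes above can be reproduced directly from the stated formula. A sketch, taking z<sub>&alpha;/2</sub> = 1.96 for 95% confidence and e = 0.05, and rounding to the nearest integer to match the thread's figures (rounding up, the conservative choice, would give 214 for the d6 case):

```python
# Sample size from the Bernoulli formula n = z^2 * p * (1 - p) / e^2,
# with e = 0.05 and 95% confidence (z = 1.96).
z = 1.96
e = 0.05

def sample_size(p):
    return round(z ** 2 * p * (1 - p) / e ** 2)

print(sample_size(1 / 6))    # 1d6:  p = 1/6
print(sample_size(1 / 20))   # 1d20: p = 1/20
```

This gives 213 for the d6 and 73 for the d20, matching the numbers quoted above. The d20 needs fewer trials per face because p(1-p) is smaller when p is far from 1/2.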
 
