D&D (2024) The WotC Playtest Surveys Have A Flaw

Cadence

Legend
Supporter
That's not how statistics works either. This level of responses, given this field of consumers, has a 99% confidence level that these results accurately reflect the consumer population for these products. I.e., that 70%+ of the consumers of these products like the results of this playtest MORE than they liked the 2014 version (parts of which were now scoring in the 20% region).

* Apologies if you hit these other issues already *

That would be how statistics would work if this was a random sample from "the consumer population of these products" (or however you want to say it) in which everyone responded. If it's not a random sample and the nonrespondents' missingness isn't just right... well, then it's a whole sea of assumptions and it could almost all be garbage. (See non-presidents Dewey and Landon for some nice reads.)
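To make the "garbage" worry concrete, here's a toy simulation (all numbers invented for illustration, nothing from WotC) of how self-selection can produce a badly wrong estimate even with a huge pile of responses:

```python
import random

random.seed(42)

# Hypothetical population: 60% actually like the playtest packet.
population = [True] * 600_000 + [False] * 400_000

# Suppose fans of the changes are three times as likely to bother
# responding as people who dislike them (a pure assumption).
def responds(likes_it: bool) -> bool:
    return random.random() < (0.15 if likes_it else 0.05)

sample = [likes for likes in population if responds(likes)]

print(f"responses: {len(sample):,}")                          # ~110,000
print(f"observed approval: {sum(sample) / len(sample):.1%}")  # ~81.8%, not 60%
```

Roughly 110,000 responses, a very tight interval around 82% approval, and the true number is 60%. No sample size fixes that if the response mechanism is skewed.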

And even if it was all the nice stuff, if they saw exactly 70% of their sample liked it, that would mean they were, say, 99% confident the population percent liking it was between 70% minus something small and 70% plus something small, for a usual two-sided interval. The one-sided version would be 99% confidence that the population percent was more than 70% minus something small.
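For anyone who wants the arithmetic behind "something small," a minimal sketch of the usual normal-approximation interval (the sample size is my invention; plug in your own):

```python
from math import sqrt

p_hat = 0.70   # observed sample approval
n = 40_000     # hypothetical number of respondents

se = sqrt(p_hat * (1 - p_hat) / n)   # standard error of the proportion

z_two = 2.576  # critical value, two-sided 99% interval
print(f"two-sided 99% CI: {p_hat - z_two * se:.4f} to {p_hat + z_two * se:.4f}")

# One-sided version: 99% confident the population percent is above this.
z_one = 2.326  # critical value, one-sided 99% bound
print(f"one-sided 99% lower bound: {p_hat - z_one * se:.4f}")
```

With numbers like these the margin is well under a percentage point, which is exactly why the random-sampling assumption, not the sample size, is where such claims live or die.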
 


Maxperson

Morkus from Orkus
This thread is forked from another, unrelated thread. Things started to drift off-topic there, but I felt it's a good topic for discussion.


This is a big one.

I belong to three different gaming groups. And of those 18 people in total, I'm the only one who is following the development of the game at all. I'm the only one who knew that Wizards of the Coast is working on a new rules revision, and my fellow gamers get really defensive when I mention it. One guy will actually growl at me every time I bring up the playtest: "We are not changing editions again!" They weren't even aware of the OGL debacle earlier this year, and that was supposed to have blown up the Internet.

These are people I game with every week. We schedule and coordinate our games through social media, so it's not like they live under a rock either. But at one person out of eighteen, only about 6% of the active 5E gamers I know are even aware of these playtests. I imagine the number of folks who are both aware and interested is even lower. How much smaller, then, is the number of people who are (a) interested enough to (b) download the material, (c) read it, (d) playtest it, and (e) provide feedback?

And that fraction of a fraction of a fraction of people that made it all the way to Step (e) is supposed to be everyone's voice in the room.

I don't have a better idea, but still. That's a big ask.
My group is similar. I'm the only one who goes online or follows the gaming news. They only know about those things because I fill them in.
 

Maxperson

Morkus from Orkus
Maybe we're the weird ones? And the desired demographic for the game probably isn't a bunch of older gamers who spend their time on an internet forum complaining about 5e?
And yet it seems we make up a large chunk of the forum junkies who actually fill out the surveys. Seems counterproductive to give us an outsized amount of influence if we're not the target.
 

Mistwell

Crusty Old Meatwad (he/him)
* Apologies if you hit these other issues already *

That would be how statistics would work if this was a random sample from "the consumer population of these products" (or however you want to say it) in which everyone responded. If it's not a random sample and the nonrespondents' missingness isn't just right... well, then it's a whole sea of assumptions and it could almost all be garbage. (See non-presidents Dewey and Landon for some nice reads.)

And even if it was all the nice stuff, if they saw exactly 70% of their sample liked it, that would mean they were, say, 99% confident the population percent liking it was between 70% minus something small and 70% plus something small, for a usual two-sided interval. The one-sided version would be 99% confidence that the population percent was more than 70% minus something small.
There is no self-selection bias here that would impact the approval rating. Self-selection is of course an issue in general, though it's almost never a "garbage"-level issue. But for this kind of product, it's a non-issue: a self-selector is equally likely to be biased toward liking or disliking what's being surveyed. This kind of survey is very common for consumer products, and self-selection bias was accounted for in these kinds of surveys long ago.
 

Mistwell

Crusty Old Meatwad (he/him)
And yet it seems we make up a large chunk of the forum junkies who actually fill out the surveys. Seems counterproductive to give us an outsized amount of influence if we're not the target.
There are about 25 of us actively talking about it. That's... not even a rounding error against the number of people filling out the surveys. The overwhelming majority are ordinary DNDBeyond users.
 

Cadence

Legend
Supporter
There is no self-selection bias here that would impact the approval rating. Self-selection is of course an issue in general, though it's almost never a "garbage"-level issue. But for this kind of product, it's a non-issue: a self-selector is equally likely to be biased toward liking or disliking what's being surveyed. This kind of survey is very common for consumer products, and self-selection bias was accounted for in these kinds of surveys long ago.
I certainly believe they are very common types of surveys. I would be curious to see the studies used to justify that it usually goes away because it is equally likely to go both ways for this kind of thing. Do you have a favorite article or textbook? (I'm a statistician, certainly not working in consumer surveys, but I would find it edifying).
 

Mistwell

Crusty Old Meatwad (he/him)
I certainly believe they are very common types of surveys. I would be curious to see the studies used to justify that it usually goes away because it is equally likely to go both ways for this kind of thing. Do you have a favorite article or textbook? (I'm a statistician, certainly not working in consumer surveys, but I would find it edifying).
I don't. I got that information from someone who works in the surveying industry, but I have no idea what their sources would be.
 

UngainlyTitan

Legend
Supporter
* Apologies if you hit these other issues already *

That would be how statistics would work if this was a random sample from "the consumer population of these products" (or however you want to say it) in which everyone responded. If it's not a random sample and the nonrespondents' missingness isn't just right... well, then it's a whole sea of assumptions and it could almost all be garbage. (See non-presidents Dewey and Landon for some nice reads.)

And even if it was all the nice stuff, if they saw exactly 70% of their sample liked it, that would mean they were, say, 99% confident the population percent liking it was between 70% minus something small and 70% plus something small, for a usual two-sided interval. The one-sided version would be 99% confidence that the population percent was more than 70% minus something small.
They can cross-check whether the UA surveys are good samples by running smaller, more focused surveys.
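One way that cross-check could look, sketched with invented numbers: compare the approval rate from the open UA survey against a small random sample (say, of D&D Beyond accounts) using a two-proportion z-test.

```python
from math import erf, sqrt

def two_prop_z(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    """Two-sided two-proportion z-test; returns (z, p_value)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal tail
    return z, p_value

# Hypothetical: 70% approval from 40,000 open UA responses vs. 64% from
# a 500-person random sample. Both figures are made up for illustration.
z, p = two_prop_z(28_000, 40_000, 320, 500)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p hints the UA sample is off
```

If the two rates disagree by more than chance allows, the open survey's self-selection is doing real work; if they agree, that's at least some reassurance.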
 


Cadence

Legend
Supporter
They can cross-check whether the UA surveys are good samples by running smaller, more focused surveys.

And if those smaller ones are done right... I just don't usually have faith that the extra effort is made.

And they may not care how well they're estimating the actual audience percentage. They may just want to get lots of written feedback to make sure they haven't missed anything, get a summary value that gives a general feeling for which way the wind is blowing, and have an easy-to-implement cut-off that sounds good to the masses.
 
