mamba
> What is your proof that they are self selected?

the fact that they are self-selected: any survey you post on the internet for people to come and fill out is. That is basically the definition of self-selection.
> the fact that they are self-selected: any survey you post on the internet for people to come and fill out is. That is basically the definition of self-selection.

Ah, my bad, I misunderstood.
> no, that is my assessment of their methodology and their handling of feedback. Their approach is about as good at identifying improvements and then sticking to working on them as tossing a coin would be.
>
> Where it considerably beats a coin toss is at identifying total duds and discarding them. It is very reliable there, but pretty useless for anything else. I have been saying this during the UA phase too.
>
> I am honestly surprised they stuck with such a flawed methodology for so long. The only conclusion I can draw is that they do not care about identifying improvements, only about weeding out complete duds, and that the UA was mostly about marketing / engagement / raising awareness of the new product rather than actually improving the game.

I think the true reason is far simpler: a less flawed methodology requires a far more elaborate setup and many more moving parts than they could hope to handle, especially under the typical deadlines for "make a new edition".
> I think the true reason is far simpler: a less flawed methodology requires a far more elaborate setup and many more moving parts than they could hope to handle, especially under the typical deadlines for "make a new edition".

not really, all it needs is better answer options, so the survey can tell the difference between 'do not like it' and 'like it, but it needs some work' rather than leaving WotC to guess which of the two it is.
> Seriously, think about how many people you would need to make a representative sample!

a lot fewer than they have; this is the least of my concerns. I have not once used this as a reason why their polls are flawed.
> How valuable is feedback based just on reading some rule suggestion actually? If it requires a play-test, how long until survey participants have organized a group to play?

given that they are not concerned about balance, I'd say about as good as feedback from someone who played it.
> That's almost as complicated as testing new medical drugs!

it really isn't
> All of that is before you consider the impact of statistical insanity, like the time Crawford told us all that when they saw things like a class getting high marks on individual features but a poor grade for the class as a whole, they assumed respondents forgot what they voted and used the higher value.

Are you sure that's what was said? I remember a video where Crawford mentioned that exact same scenario, and they were making an entirely different assumption: that individually these features were hitting the mark, but collectively the class was missing something fundamental. The features weren't a problem, they just weren't enough.
> I kinda laugh when people claim polls are flawed. Biggest D&D edition ever. Translation: they don't like something and got outvoted.

not really, I don't see much correlation between 5e's success and their polling ability. Also, I like 5e and I still think their polling methodology is about as bad as it can be.
Fortunately, I don't need a survey to know that I know better than everyone else.
> I think the true reason is far simpler: a less flawed methodology requires a far more elaborate setup and many more moving parts than they could hope to handle, especially under the typical deadlines for "make a new edition".
>
> Seriously, think about how many people you would need to make a representative sample! How do you get those "casuals" to possibly even do an actual playtest? How valuable is feedback based just on reading some rule suggestion actually? If it requires a play-test, how long until survey participants have organized a group to play? How many factors are actually influencing the outcome of such a play-test that might be unrelated to the topic of the survey?
>
> How long are they supposed to wait until that feedback is there, and how do they control for all those variables and factors?!
>
> That's almost as complicated as testing new medical drugs! With far lower stakes.

You create a robot duplicate indistinguishable from an actual teenager and send it out into the wild to invite real teenagers to play D&D and record their responses.
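On the "how many people" point specifically, the required sample size is smaller than it sounds. A quick sketch of the standard textbook formula for estimating a proportion (my own illustration; WotC has never published how they size their surveys):

```python
import math

def sample_size(margin_of_error: float, z: float = 1.96, p: float = 0.5) -> int:
    """Minimum simple-random-sample size to estimate a proportion to
    within +/- margin_of_error at the confidence level implied by z
    (z = 1.96 is 95%; p = 0.5 is the worst-case variance)."""
    return math.ceil(z**2 * p * (1 - p) / margin_of_error**2)

print(sample_size(0.03))  # ±3 points at 95% confidence → 1068
print(sample_size(0.05))  # ±5 points at 95% confidence → 385
```

Roughly a thousand respondents gets you within a few points at 95% confidence, regardless of how many millions play the game. The catch is the word "random": no sample size, however large, compensates for a self-selected pool, which is the actual objection being made in this thread.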