As I said, you cannot extrapolate, but the fact that you detected this in such a small sample makes it worth looking into. You are simply wrong to dismiss it just because the sample is small.
It isn't just because the sample is small. It is also because we fully expect people to misunderstand written communication. Your one person who misunderstood this isn't some anomaly that indicates anything; it is 100% expected by anyone who understands sending out written questions to large groups of humans.
What percentage do we expect to stumble? Should we maybe work on making it harder to misunderstand the survey?
I don't know the percentage. There are dozens of academic papers written on the subject, and studies on the feasibility of fixing it. Here are some links. Go wild.
I'm sure your utter brilliance will outshine the people who have been studying this for decades.
Sure, a sample size of 10 is usually too small to reliably detect a problem; it's just that we managed to find one despite the small size. If we had surveyed a thousand and found the problem, it would still be a problem; we just found it with 10.
Do you think they would say 'oh, it is only 10 out of the 1000 bulbs, we can ignore that'?
And here again, you don't seem to understand why sample size actually matters.
Let us say you find 1 out of 10 lightbulbs has a problem. That's bad, right? But then you take a larger, more relevant sample and find that it is actually 1 out of 1000. Then that ISN'T bad; it can be safely ignored. This is why companies that do this sort of research into their products ALWAYS START with a significant sample size.
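The lightbulb numbers above can be made concrete with a quick sketch. Using a standard 95% Wilson score interval (the failure counts are the hypothetical ones from the example, not real data), 1 bad bulb out of 10 is consistent with a true failure rate anywhere from under 2% to over 40%, while 1 out of 1000 pins the rate down near 0.1%:

```python
import math

def wilson_interval(failures, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = failures / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - margin, centre + margin

# 1 bad bulb out of 10: the interval is enormous -- the sample tells us
# very little about the true failure rate.
print(wilson_interval(1, 10))    # roughly (0.018, 0.40)

# 1 bad bulb out of 1000: the rate is tightly pinned near 0.1%.
print(wilson_interval(1, 1000))  # roughly (0.0002, 0.0056)
```

The same observed "1 failure" means wildly different things depending on the denominator, which is the whole point of starting with a significant sample size.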
What you are doing is the equivalent of noticing the third lightbulb you put into your home burnt out too quickly, calling the company to demand to know why they sell such obviously faulty products that one in three of them burns out, and demanding they investigate the obvious problem they clearly have.
You are ignoring the fact that they likely did quality testing before you "noticed something". You are ignoring that they likely have better data than you. You are ignoring that they have absolutely sent out thousands of other products without any issue. You are assuming incompetence because you noticed a statistically insignificant event in a shallow sample size.
I gave a rationale, you are basically saying 'you have motive, you have opportunity, you have circumstantial evidence, but you have no DNA at the crime scene, so it could have been anyone'. I have no access to the proverbial crime scene... If you want to dispute the circumstantial evidence, be my guest.
What motive does WotC have to ruin their own playtest with garbage data? How could that in any possible way achieve their goals?
All you have is opportunity, and one guy who says "I swear that guy commits crimes". You don't even HAVE a crime, you want to go looking for a crime scene. In police work, you are doing the equivalent of demanding a fishing expedition. And I don't need evidence to prove that there is no crime scene, when there is no reason to assume that there is.
Sure, but it still pales in comparison to all responses, and the percentage is the aggregate of all of them, so the few written opinions have only a small influence on the result.
Can you prove that? You threw out 5% with no evidence. What if 50% of people leave comments, then what? What if ratings with comments are weighted at double the impact? What if they figure that at least 20% of non-comment responses likely had similar opinions to the comments?
All of that would make a difference. So show me what WotC's process for sifting through their data is. Prove they don't know what they are doing.
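Those "what ifs" are easy to make concrete. A toy sketch (every number here is invented for illustration; nothing in it is WotC's actual process) showing how a simple weighting rule for rated-with-comment responses shifts the aggregate score:

```python
# Hypothetical satisfaction ratings on a 1-5 scale.
plain_ratings = [4] * 950      # 950 responses without comments
comment_ratings = [2] * 50     # 50 responses with critical comments

# Straight average: the 50 comments barely move the needle.
unweighted = (sum(plain_ratings) + sum(comment_ratings)) / 1000
print(f"unweighted mean: {unweighted:.2f}")          # 3.90

# If responses with comments count double, the aggregate shifts down.
weighted_total = sum(plain_ratings) + 2 * sum(comment_ratings)
weighted_n = len(plain_ratings) + 2 * len(comment_ratings)
print(f"comments weighted x2: {weighted_total / weighted_n:.2f}")  # 3.81
```

Whether any such weighting happens is exactly the unknown being argued about here; the sketch only shows that the choice of rule changes the answer.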
Yeah, that is your claim, but 'unfortunately' correlation is not the same as causation, so you still have to show that this is due to how great the playtest is working.
I don't need to show that their success is because of the playtest working. Firstly, the product this playtest is for isn't even out yet. Kind of hard to show the playtest gave us a successful product when the product isn't released.
Secondly, I CAN show that this same survey method can lead to successful products, because... we have multiple successful products that have been released that followed this survey method (Tasha's, Xanathar's, etc.)
Thirdly, even if I cannot show that the survey led to those successes... since they are successes, I can extrapolate that the survey didn't HURT them. It was not a negative. And neutral is just as bad for your position, because if the surveys are neutral, then they are not causing harm, and your argument falls apart again.
No more or less than mine, or rather, if anything it is less so, because you are not making a case like I did; you just made a claim. And yet you are very comfortable with dismissing mine. Guess I feel the same way about yours.
I doubt we will get to an agreement here. How about we turn this around? Why are you so opposed to improving the process? After all that is all I am asking for here...
Because I'd rather they work on the game than theoretically improve a process that might theoretically not be perfect into a version that might theoretically be slightly less imperfect. They do not have infinite time and infinite money, after all.
Let's see if we can agree on something here...
1) What is WotC really interested in answering? To me it is: A) Do you like this idea better than what we have today (never mind the balancing)? B) Do you like the execution enough for us to add it as is, or does it need improvement?
Do you agree / disagree? If you disagree, what are they looking for?
I disagree that that is what they seem to be looking for. They especially do not seem to be asking us if they should improve their ideas or not.