D&D (2024) Class spell lists and pact magic are back!

Sure, they could have used a synthetic population model, combined with iterative proportional fitting and propensity weighting.
nice try, I can google too… now explain what that does and especially how that can be applied here and how it helps with improving the result of WotC’s survey
 


So was flight. So was space flight. So was splitting the atom. So was developing cures to diseases. So was building an island.
Wow. One, two, three, four, five... five False Equivalences in one short span of text. That's amazing. Unfortunately, math is math, so unless you think math is wrong (and you'd have to prove that), it's not possible.
The problem is, you are convincing yourself the number must be accurate to within 0.0001, but people aren't like that.
Nah. +/- 5% is standard and this isn't even that close.
So like... most political policy polls?
I've never seen one take 4 completely different categories and try to come up with a single percentage. Can you link me a poll that does that? See, if WotC was saying that they needed 80% of voters to vote "Very Satisfied", they could figure THAT out, but that's not what they are doing.
I mean, how do we know what percentage of people approve of same-sex marriage?
Because they ask a binary question, not 4 different classifications.
How do we know what percentage of people think that the new education policy is worth pursuing? How do we know what percentage of people approve of marijuana?
Same.
All of these have vague categories, go out to thousands of people, and then have a final percentage tally.
Nope. None of them try and come up with a single percentage from 4 different broad classifications. They ask binary questions OR if they ask 4 different classifications, only tell us the percentage for each individual classification.

NONE of them try what WotC is trying (and failing) to do.
I'm sorry that you are not aware of exactly how they work
I am. You apparently aren't.
 

no, you constantly replying with whatever pops into your head has led to this discussion.

I started by pointing out flaws in their methodology; that was before this interview was done / before I was aware of it.


It doesn't even matter, as I already told you. I am complaining about the flaws, not about something I wanted failing to make it in. The flaws exist regardless of that, and as I said, if my stuff did not make it in and I considered the methodology sound, I would not be complaining (and that Crawford then comes in and tramples all over the survey result based on personal bias, as that interview showed, is just the icing on the cake).

This is about the methodology, not about what makes it in or not, no matter how much you try to spin the opposite.

The flaws to me are pretty obvious and I explained them, several times. Take it or leave it, I am not interested in rehashing the same (or new) unfounded claims of yours for another month.

Except you have no evidence of these flaws beyond "I can't believe this gives them accurate data". That's it. Personal incredulity that they are doing something effective. And I'm not convinced by your personal incredulity that people can have surveys that give accurate data.

Also, I notice once more that you simply skipped large portions of my post, especially the parts where I pointed out you grossly misunderstood the article and what it said. Which, again, if you can't even understand an article interviewing the designer, why should I trust that you have the secret sauce for understanding the failures of the survey?
 

nice try, I can google too… now explain what that does and especially how that can be applied here and how it helps with improving the result of WotC’s survey

I don't fully understand it, again, I'm not a survey expert.

I know that a synthetic population model allows them to take multiple population polls and compare biases and data across those populations in order to fill in gaps within a combined data set.

I know that iterative proportional fitting is an incredibly common tool in which survey weights are adjusted until the sample matches known population totals, to more accurately reflect the data. Do I understand exactly how it works? No; again, this isn't something I have a degree in, but it is commonly used by researchers to adjust the numbers based on known values.
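For the curious, here is a toy sketch of what iterative proportional fitting (raking) actually does. This is not WotC's pipeline (which is not public); it is just the textbook idea with invented numbers: scale a sample cross-tab back and forth until its row and column totals match known population margins.

```python
# Minimal illustration of iterative proportional fitting (raking):
# adjust the cells of a sample cross-tab until its row/column totals
# match known population margins. All numbers here are invented.

def ipf(table, row_targets, col_targets, iters=100):
    """Scale a 2D table so its margins match the target totals."""
    for _ in range(iters):
        # Scale each row to match its target total
        for i, row in enumerate(table):
            s = sum(row)
            if s:
                table[i] = [v * row_targets[i] / s for v in row]
        # Scale each column to match its target total
        for j in range(len(table[0])):
            s = sum(row[j] for row in table)
            if s:
                for i in range(len(table)):
                    table[i][j] *= col_targets[j] / s
    return table

# Hypothetical sample counts by (age group x region); the margins
# are the "known" population totals the sample is raked towards.
sample = [[30.0, 10.0], [20.0, 40.0]]
fitted = ipf(sample, row_targets=[50.0, 50.0], col_targets=[60.0, 40.0])

print([round(sum(r), 1) for r in fitted])                       # row totals
print([round(sum(r[j] for r in fitted), 1) for j in range(2)])  # column totals
```

After enough passes the row totals land on [50.0, 50.0] and the column totals on [60.0, 40.0], while the relative structure inside the table is preserved. That matching-to-known-margins step is the whole trick.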

Propensity weighting is similar, but tends to work mainly on known biases within a population: essentially knowing that a population will generally lean one way or the other, and adjusting the data to account for that lean. Not to change the values, but to account for things like how people in a certain demographic might use different phrases to express the same information.
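To make that concrete, here is a toy sketch of the inverse-propensity idea: respondents from groups that are over-represented among opt-in survey takers get smaller weights. The group labels and opt-in rates below are entirely invented; nothing here is WotC's actual data.

```python
# Toy propensity-weighting example with invented numbers: 80 of 100
# respondents are heavy forum users, but each group's assumed
# probability of opting in to the survey differs a lot.

respondents = ["forum_regular"] * 80 + ["casual_player"] * 20

# Assumed (made-up) probability that a member of each group opts in
opt_in_rate = {"forum_regular": 0.8, "casual_player": 0.2}

# Inverse-propensity weights, normalised to sum to the sample size
raw = [1.0 / opt_in_rate[g] for g in respondents]
scale = len(respondents) / sum(raw)
weights = [w * scale for w in raw]

# Weighted share of casual players rises from the raw 20%
casual_share = sum(w for g, w in zip(respondents, weights)
                   if g == "casual_player") / sum(weights)
print(round(casual_share, 2))  # 0.5
```

With these made-up rates, the casual players' weighted share moves from 20% to 50%: the weighting compensates for the fact that they were far less likely to show up in the sample at all.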


And before you claim that my surface-level understanding means all of this is impossible: I also only have a surface-level understanding of rocket science, brain surgery, and architecture. My personal lack of detailed knowledge in a subject does not mean that experts in the field don't know this stuff or can't do it.
 

Except you have no evidence of these flaws beyond "I can't believe this gives them accurate data". That's it.
No, I explained the issues I have; feel free to show where I went wrong... You never even attempted that. All you did was dismiss them and make unfounded claims that WotC is somehow infallible. Personal incredulity that they could make any mistakes at all.
 

Wow. One, two, three, four, five... five False Equivalences in one short span of text. That's amazing. Unfortunately, math is math, so unless you think math is wrong (and you'd have to prove that), it's not possible.

Yes Max, math is math.

Sociology is not math, it is sociology.

A survey about people's opinions on a subject uses some level of math... but it is more related to the field of sociology. A non-math field.

Nah. +/- 5% is standard and this isn't even that close.

Yes, it is that close. I literally proved that when I used the sample size calculator a few days ago. You just don't think it can be that close, because you think they just made up random numbers. Which they didn't.
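For reference, the ±5% figure and online "sample size calculators" both come from the standard margin-of-error formula for a proportion. The sketch below assumes a simple random sample, which an opt-in survey is not; that gap is exactly what the two sides of this thread are arguing about.

```python
# Standard margin of error for a proportion at 95% confidence,
# assuming a simple random sample (an opt-in survey is not one,
# which is the crux of the disagreement in this thread).
import math

def margin_of_error(p, n, z=1.96):
    """z * sqrt(p(1-p)/n): half-width of the confidence interval."""
    return z * math.sqrt(p * (1 - p) / n)

# Worst case is p = 0.5: roughly +/-3.1% at n = 1000,
# and +/-5% needs only about n = 385 respondents.
print(round(margin_of_error(0.5, 1000), 3))  # 0.031
print(round(margin_of_error(0.5, 385), 3))   # 0.05
```

Note that this formula quantifies sampling noise only; it says nothing about self-selection bias, which no sample size fixes.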

I've never seen one take 4 completely different categories and try to come up with a single percentage. Can you link me a poll that does that? See, if WotC was saying that they needed 80% of voters to vote "Very Satisfied", they could figure THAT out, but that's not what they are doing.

Because they ask a binary question, not 4 different classifications.

Same.

Nope. None of them try and come up with a single percentage from 4 different broad classifications. They ask binary questions OR if they ask 4 different classifications, only tell us the percentage for each individual classification.

NONE of them try what WotC is trying (and failing) to do.

Here is an article talking about Likert scale questions, with surveys and examples. The exact type of surveying WotC is doing.



I am. You apparently aren't.

No, you really don't.
 

No, I explained the issues I have; feel free to show where I went wrong... You never even attempted that. All you did was dismiss them and make unfounded claims that WotC is somehow infallible. Personal incredulity that they could make any mistakes at all.

Strawman fallacy AGAIN.

I never said they were infallible. I never said they are incapable of making mistakes.
I HAVE said that they are bloody professionals, using a well-worn and trusted style of survey for a decade.

Same article I just gave Max. Likert scale questions, survey and examples | QuestionPro

This is COMMON STUFF. They didn't make this up; this is an industry-standard type of survey. It is quite literally an integral part of market research for every single industry. It doesn't suddenly turn useless just because WotC is using it instead of Apple or Google.
 

I don't fully understand it, again, I'm not a survey expert.
Fair enough; neither am I. I will base the following entirely on


so let's see where we agree and disagree

First, what this method is intended to do is to reduce the bias in a self-selecting poll (like WotC's). The bias it tries to adjust for is introduced by the people answering the poll not being an exact proportional match for the entire population, and by different groups within the population holding different opinions to varying degrees.

Second, it attempts this by determining the deviation of the people answering the poll from the overall population and then adjusting the poll results towards what is known about the overall population (from well-established factors that are known for the entire population, e.g. from government surveys), based on demographic factors such as sex, age, race and ethnicity, educational attainment, and geographic region.
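The adjustment described in the two points above can be sketched in a few lines. This is a deliberately simplified toy (one demographic variable, invented shares and approval figures), just to show the mechanics: weight each demographic cell by (population share / sample share), then recompute the result.

```python
# Toy demographic-weighting example with invented numbers.
# Each cell: (share of the sample, share of the population,
#             fraction approving within that cell).
cells = {
    "18-34": (0.60, 0.30, 0.40),
    "35-54": (0.30, 0.40, 0.60),
    "55+":   (0.10, 0.30, 0.70),
}

# Unweighted: each cell counts by its share of the opt-in sample
unweighted = sum(s * a for s, _, a in cells.values())

# Weighted: each cell counts by its share of the real population
weighted = sum(p * a for _, p, a in cells.values())

print(round(unweighted, 2))  # 0.49 -- raw opt-in result
print(round(weighted, 2))    # 0.57 -- after reweighting
```

In this made-up example the raw opt-in poll says 49% approval, but once the over-represented young respondents are weighted down to their population share, the estimate moves to 57%. Whether that move reduces bias or adds it depends entirely on how well the cells predict the opinion, which is the paper's caveat quoted below.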

Feel free to disagree and say where I misunderstand this.

Based on this, I'd say it cannot be applied by WotC. For one, they do not ask for enough demographic factors for this to be feasible, and for another, there are no established facts known for the entire population (by such demographic factors) that would be of any use here.

The paper then goes on to discuss that this approach is not really working reliably in the first place ("But are they sufficient for reducing selection bias in online opt-in surveys? Two studies that compared weighted and unweighted estimates from online opt-in samples found that in many instances, demographic weighting only minimally reduced bias, and in some cases actually made bias worse."), but that is moot here, as this is not a workable approach for WotC's survey to begin with, for the reasons I gave above.
 

I never said they were infallible. I never said they are incapable of making mistakes.
I grant you this, so we can move on from it; it is not relevant.

I HAVE said that they are bloody professionals, using a well-worn and trusted style of survey for a decade.
I don't think I disagreed with this point, in that they are using that survey method. That does not address any of the issues with their survey that I brought up, however. You can use a widely used methodology wrong too...

This does not address any of the issues I raised; it just explains what I already know, i.e. what WotC is doing.

This is COMMON STUFF. They didn't make this up; this is an industry-standard type of survey. It is quite literally an integral part of market research for every single industry. It doesn't suddenly turn useless just because WotC is using it instead of Apple or Google.
No, it is not based on the company asking; it is based on the questions being asked, the information you as a participant have available while answering, and the answers the survey is intended to give the people who came up with it. In theory Google can screw this up just as much as WotC.

I gave specific issues I have; you cannot just handwave them away by saying 'others manage to have different surveys with a similar approach where they do not run into these problems'. If I tell you my car broke down because of X, you cannot just say 'no, it didn't break down; the proof of that is that many cars never break down'... Either finally address them directly, or move on.

EDIT: @Maxperson is going into more detail in their post right below, I am tired of repeating my points over and over again, just for you to always ignore them anyway. They should be clear by now, or at a minimum you can find them if you actually want to address them for a change
 

Yes Max, math is math.

Sociology is not math, it is sociology.
Very good. Now understand that the percentage you reach from the voting is not sociology, it's math. And that math cannot be reached via the method WotC is using.
A survey about people's opinions on a subject uses some level of math...
Correct. The percentage (math) that they like it.
but it is more related to the field of sociology. A non-math field.
Not to achieve the percentage it doesn't.
Yes it is that close. Literally proved that a while back when I used the sample size calculator a few days ago. You just don't think it can be that close, because you think they just made up random numbers. Which they didn't.
No, you didn't. You didn't do what they are doing, which is taking four broad categories, assigning arbitrary percentages to each one (none of which are correct), and then trying to come to an accurate total.
Here is an article talking about Likert Scale questions, with surveys and examples. The exact type of surveying WoTC is doing.

Dude, that doesn't say what you think it says. At no point is it trying to come up with a percentage based on multiple broad categories like WotC is doing.

What that sort of survey is good for:

1. X% are very satisfied, X% are satisfied, etc. It doesn't tell you what either of those two broad categories means specifically, though.
2. If you decide to weight satisfied as equal to very satisfied and dissatisfied as equal to very dissatisfied, you can get a percentage of people who are satisfied and a percentage who are not. And while that can help them find the 80% satisfaction rate, it fails miserably at giving them a percentage that includes people who are not satisfied, but want another version of the ability in question.

Number 2 above means that they cannot accurately gauge whether they should or should not make a new version of an ability. For example, say 67% of voters are satisfied or very satisfied. If they only use that number, they won't make another incarnation, because it failed to hit the 70% mark. However, there are certainly a bunch of people who were not satisfied with the current version, but would like another version to look at and vote on. It's impossible for them to know how many, so either they get it wrong by sticking to the 67% number and making no further incarnations of the ability, or they assign some arbitrary number to the dissatisfied votes and get it wrong that way.
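The disputed step here is easy to show with numbers. Below is a toy sketch (vote shares and scoring schemes invented, not WotC's actual ones) of how the same 4-point Likert results produce different single "satisfaction" percentages depending on how the categories are collapsed:

```python
# Same invented 4-point Likert results, two ways of collapsing them
# into one number. Neither scheme is claimed to be what WotC does.

votes = {"very_satisfied": 0.30, "satisfied": 0.37,
         "dissatisfied": 0.23, "very_dissatisfied": 0.10}

# Scheme A: simply count satisfied + very satisfied as "satisfied"
binary = votes["very_satisfied"] + votes["satisfied"]

# Scheme B: score the scale 1.0 / 0.75 / 0.25 / 0.0 and average
scores = {"very_satisfied": 1.0, "satisfied": 0.75,
          "dissatisfied": 0.25, "very_dissatisfied": 0.0}
scored = sum(votes[k] * scores[k] for k in votes)

print(round(binary, 2))   # 0.67
print(round(scored, 3))
```

Scheme A gives 67% while scheme B gives roughly 63.5% from the exact same votes, which is the core of the objection: the single percentage depends on a scoring choice the published number doesn't reveal.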
No, you really don't.
Says the guy who linked something that isn't what WotC is doing. They've modified what that link shows and are using it in ways unintended by that style of survey and which are guaranteed to be wrong.
 
