D&D (2024) Class spell lists and pact magic are back!

Fair enough, neither am I; I will base the following entirely on the paper you linked, so let's see where we agree and disagree.

First, what this method is intended to do is to reduce the bias in a self-selecting poll (like WotC's). The bias it tries to adjust for is introduced by the people answering the poll not being an exact proportional match for the entire population, and by different groups within the population holding different opinions to varying degrees.

Second, it attempts this by determining how the people answering the poll deviate from the overall population, and then adjusting the poll results towards what is known about the overall population (from well-established figures that are known for the entire population, e.g. from government surveys), based on demographic factors such as sex, age, race and ethnicity, educational attainment, and geographic region.
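To make that concrete, the adjustment looks roughly like this (a minimal sketch with made-up numbers, weighting on age alone; real pollsters weight on several factors at once):

```python
# Post-stratification sketch: respondents in each demographic cell are
# re-weighted so the sample matches known population shares.
# All numbers below are invented for illustration.
population_share = {"18-29": 0.20, "30-44": 0.30, "45-64": 0.35, "65+": 0.15}
sample_share     = {"18-29": 0.45, "30-44": 0.35, "45-64": 0.15, "65+": 0.05}

# A respondent's weight reflects how under- or over-represented their cell is.
weights = {cell: population_share[cell] / sample_share[cell]
           for cell in population_share}

# Hypothetical per-cell approval rates observed in the poll.
approval = {"18-29": 0.80, "30-44": 0.70, "45-64": 0.55, "65+": 0.40}

raw      = sum(sample_share[c] * approval[c] for c in approval)
adjusted = sum(sample_share[c] * weights[c] * approval[c] for c in approval)
print(f"raw: {raw:.1%}, weighted: {adjusted:.1%}")  # roughly 70.8% vs 62.3%
```

The point being: the adjustment only works if you actually know the population shares from somewhere outside the poll itself.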

Feel free to disagree and say where I misunderstand this.

Okay

Based on this, I'd say it cannot be applied by WotC. For one, they do not ask for enough demographic factors for this to be feasible, and for another, there are no established facts that are known for the entire population (by such demographic factors) that would be of any use here.

Disagree. They have asked for plenty of demographic information. And they have plenty of established facts known about the entire population. It is not all asked for in this precise survey, but they have been surveying the community for a decade, asking these questions, and utilizing the market research of Hasbro and other data points for the community. DnD Beyond uses Google to sign in, which probably gives them access to Google's research on the DnD Beyond community. They have the metrics of their YouTube channel, which gives them quite a lot of demographic information as well.

Just because it is not in this single survey does not mean they do not have a large amount of data on the community. Hence the mention of the synthetic population model to COMBINE and REFERENCE their population information from multiple sources.

The paper then goes on to discuss that this approach is not really working reliably in the first place ("But are they sufficient for reducing selection bias in online opt-in surveys? Two studies that compared weighted and unweighted estimates from online opt-in samples found that in many instances, demographic weighting only minimally reduced bias, and in some cases actually made bias worse."), but that is moot here, as this is not a workable approach for WotC's survey to begin with, for the reasons I gave above.

Which is why you don't use just one method.

I grant you this, so we can move on from it, it is not relevant

I don't need you to grant me my own statements. I need you to stop making up these strawmen to try and discredit me.

I don't think I disagreed with this point, in that they are using that survey method. That does not address any of the issues with their survey that I brought up, however. You can use a widely used methodology wrong too...

Really, you don't think you disagreed with it? What was this statement then? "and make unfounded claims that WotC somehow is infallible." Just a slip of the keyboard for the fifth time?

And yes, you CAN use a widely used methodology incorrectly... mind proving that they've actually done that? Instead of using it exactly as it can be used?

This does not address any of the issues I raised; it just explains what I already know, i.e. what WotC is doing.

If you understand what they are doing, then why do you keep insisting they are doing it wrong? Because everything I've found out about the situation points to them using a well-known method, in well-known ways, over the course of a decade, with access to large amounts of data, and utilizing well-known methods for reducing exactly the issues you and Max are fixated on.

But they still, somehow, have to be wrong, because you, somehow, have to be right.

No, it is not based on the company asking; it is based on the questions being asked, the information you as a participant have available while answering, and the answers the survey is intended to give the people coming up with the survey. In theory Google can screw this up just as much as WotC.

I gave specific issues I have; you cannot just handwave them away by saying 'others manage to have different surveys with a similar approach where they do not run into these problems'. If I tell you my car broke down because of X, you cannot just say 'no it didn't break down, the proof for that is that many cars never break down'... Either finally address them directly, or move on.

EDIT: @Maxperson is going into more detail in their post right below, I am tired of repeating my points over and over again, just for you to always ignore them anyway. They should be clear by now, or at a minimum you can find them if you actually want to address them for a change

The questions they are asking are the exact questions they intend to ask. Your interpretation of their goals with those questions is flawed, as I have stated repeatedly.

The information you have as a participant is fully sufficient to answer the questions they intend to ask, in the manner they intend to be answered.

The issue is not the survey. The issue is you insisting that they are trying to do something they are not trying to do, then declaring the survey broken because it can't do what you are imagining. This isn't "my car broke" and "here is evidence cars don't break"; it is "My car broke down, because [X] doesn't do quality control of their vehicles, because if they did it wouldn't have broken down" and "No, they do do quality control, but some things break anyway, despite that quality control, and quality control is not designed to catch every single possible issue."
 


Very good. Now understand that the percentage you reach from the voting is not sociology, it's math. And that math cannot be reached via the method WotC is using.

but the sociology can be achieved by the method WotC is using, because it uses a form of math that accounts for things like "not every human being conceives of things in an identical manner." And therefore those percentages CAN be reached with the method WotC is using, because they have been reached for decades by multiple different organizations.

No you didn't. You didn't do what they are doing, which is taking four broad categories and assigning arbitrary percentages to each one, none of which are correct, and then trying to come to an accurate total.

They are not assigning arbitrary percentages to each category. That is something you made up because you cannot conceive of any other explanation of how they are doing their work.

This is the fundamental problem. You insist they must be doing something wrong, despite having no evidence they are doing that thing. Because you, a layman with no expertise, cannot think of any other way to do it.

Dude that doesn't say what you think it says. At no point is it trying to come up with a percentage based on multiple broad categories like WotC is doing.

What that sort of survey is good for:

1. X% are very satisfied, X% are satisfied, etc. It doesn't tell you what either of those two broad categories means specifically, though.
2. If you decide to weight satisfied as equal to very satisfied and dissatisfied as equal to very dissatisfied, you can get a percentage of people who are satisfied and a percentage who are not. And while that can help them find the 80% satisfaction rate, it fails miserably at giving them a percentage that includes people who are not satisfied, but want another version of the ability in question.

Number 2 above means that they cannot accurately gauge whether they should or should not make a new version of an ability. For example, say 67% of voters are satisfied or very satisfied. If they only use that number they won't make another incarnation because it failed to hit the 70% mark. However, there are certainly a bunch of people who were not satisfied with the current version, but would like another version to look at and vote on. It's impossible for them to know how many, so either they get it wrong by sticking to the 67% number and make no further incarnations of the ability, or they assign some arbitrary number to the dissatisfied votes and get it wrong that way.
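To put made-up numbers on point 2 (just a sketch of the collapse I'm describing, not WotC's actual figures):

```python
# Hypothetical vote shares; collapsing four categories to "satisfied vs. not".
votes = {"very satisfied": 0.30, "satisfied": 0.37,
         "dissatisfied": 0.20, "very dissatisfied": 0.13}

satisfied = votes["very satisfied"] + votes["satisfied"]
print(f"satisfaction: {satisfied:.0%}")  # 67% -- misses the 70% mark

# Nothing in the ballot says how the remaining 33% splits between
# "scrap it entirely" and "show me another version" -- that wasn't asked.
```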

You literally say they can't do something... then show them doing it.

And yes, if something only reaches 67%, then it doesn't reach 70%. HOWEVER, 67% doesn't get thrown in the trash. They WILL make another incarnation of it, BECAUSE THEY BLOODY TOLD US THEY WOULD DO THAT! Seriously, 70% is the keep range; 60% is the "has problems but can be salvaged" range. How am I supposed to believe you cracked the code to show how foolishly wrong-headed WotC is when you can't even get their actual positions correct?!

And what do you mean by accurately? Seriously, what is accurately to you? Because they can trivially get within +/- 3 points with the size of their sample. Because my calculator wasn't about the questions, it was about sample size compared to population size and how that demonstrates your likely margin of error.
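For reference, that calculator is doing the standard margin-of-error math (a sketch; the sample size below is a placeholder, since WotC doesn't publish theirs):

```python
import math

# 95% margin of error for a proportion: z * sqrt(p*(1-p)/n), worst case p = 0.5.
def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    return z * math.sqrt(p * (1 - p) / n)

# Placeholder sample size -- anything around a thousand respondents or more
# already gets you to roughly +/- 3 points.
print(f"{margin_of_error(1100):.1%}")  # ~3.0%
```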

Your issue is you think it is impossible for WotC to read their own data, because they are just making things up out of thin air. Yes, this simple, free, basic version of a Likert scale doesn't show exactly what WotC is doing. But don't you think that a company that does these professionally and offers their services to a company like Hasbro has more advanced versions of this? It is like looking at Windows Notepad and declaring Microsoft Word cannot possibly have templates, because Notepad doesn't demonstrate that technology.

Says the guy who linked something that isn't what WotC is doing. They've modified what that link shows and are using it in ways unintended by that style of survey and which are guaranteed to be wrong.

They modified it. Who do you think did that? Maybe a... survey company? People whose professional lives revolve around the creation and implementation of surveys?

I offered you an example from a free site. I never thought it was going to match 1-to-1; it was to demonstrate how common a tool it is and how it is used for exactly the type of data collection WotC is using it for. Yes, they likely have modified it, but just because it was modified doesn't mean it is suddenly worthless and broken. If it did, they wouldn't have hired a company to make those modifications!
 

but the sociology can be achieved by the method WotC is using, because it uses a form of math that accounts for things like "not every human being conceives of things in an identical manner." And therefore those percentages CAN be reached with the method WotC is using, because they have been reached for decades by multiple different organizations.
Seriously? Those surveys/polls are horribly inaccurate, and have been for decades. Hell, those same surveys had a presidential candidate winning by an overwhelming margin, who instead lost by an overwhelming margin.

Why are they so inaccurate? Because they assume math instead of doing math. If you are arbitrarily guessing at numbers, you're going to be wrong a lot.
You literally say they can't do something... then show them doing it.
No. If you think I showed them doing what I'm saying they can't do, then you don't understand what I'm saying in addition to not understanding how those polls work.
And yes, if something only reaches 67%, then it doesn't reach 70%. HOWEVER, 67% doesn't get thrown in the trash. They WILL make another incarnation of it, BECAUSE THEY BLOODY TOLD US THEY WOULD DO THAT! Seriously, 70% is the keep range; 60% is the "has problems but can be salvaged" range. How am I supposed to believe you cracked the code to show how foolishly wrong-headed WotC is when you can't even get their actual positions correct?!
Then make it 57% 🤦‍♂️

Respond to what I was clearly saying, not the technically incorrect numbers. You are of course technically correct, and we all know that's the best kind of correct, especially when used as a Red Herring.
Your issue is you think it is impossible for WotC to read their own data, because they are just making things up out of thin air. Yes, this simple, free, basic version of a Likert scale doesn't show exactly what WotC is doing. But don't you think that a company that does these professionally and offers their services to a company like Hasbro has more advanced versions of this? It is like looking at Windows Notepad and declaring Microsoft Word cannot possibly have templates, because Notepad doesn't demonstrate that technology.
They can read their data just fine. They just are bad at interpreting it because they are trying to get numbers not intended to be reached via the method they are using.
I offered you an example from a free site. I never thought it was going to match 1-to-1; it was to demonstrate how common a tool it is and how it is used for exactly the type of data collection WotC is using it for. Yes, they likely have modified it, but just because it was modified doesn't mean it is suddenly worthless and broken. If it did, they wouldn't have hired a company to make those modifications!
It's worthless because of the math, which you consistently ignore in favor of guesswork.
 

Disagree. They have asked for plenty of demographic information. And they have plenty of established facts known about the entire population.
Facts that help with adjusting, e.g., what people think about a Druid feature in 5e? You even trying to make that case is ridiculous; this is not something they have data for, unlike, say, 'old white dudes vote disproportionately Republican, so if we have too many / too few of these in our survey, we need to account for that'...

What happened to
I don't fully understand it, again, I'm not a survey expert.
First you didn't even attempt to explain what it does or how it would help, and now you want to convince me that it definitely can help...


All this does is show why it is pointless to have a discussion with you. I am done
 

Seriously? Those surveys/polls are horribly inaccurate, and have been for decades. Hell, those same surveys had a presidential candidate winning by an overwhelming margin, who instead lost by an overwhelming margin.

Why are they so inaccurate? Because they assume math instead of doing math. If you are arbitrarily guessing at numbers, you're going to be wrong a lot.

And those same surveys have been used to make corporations hundreds of millions of dollars.

Yes, it turns out polling people in a constantly shifting situation like a presidential election can end up being wrong. They aren't omniscient. But if they are so horrifically inaccurate to be worthless, then they wouldn't be used by companies as market research to determine how to make more money.

And guess what! People with a college education and decades of experience in a field know that guessing numbers is going to make you wrong too! That's why they don't arbitrarily guess numbers, and instead use methods to determine those numbers that are actually more accurate.

No. If you think I showed them doing what I'm saying they can't do, then you don't understand what I'm saying in addition to not understanding how those polls work.

You said they can't get percentages of broad categories, then make a single percentage from that. Then you talked about them doing just that. Seriously, stop assuming people are idiots and you are the only intelligent man in the world. The people who do these surveys and collect this data aren't fools who don't understand math.

Then make it 57% 🤦‍♂️

Respond to what I was clearly saying, not the technically incorrect numbers. You are of course technically correct, and we all know that's the best kind of correct, especially when used as a Red Herring.

It isn't about technically incorrect numbers; it is about continual misunderstandings. You and Mamba have both made statements about WotC or Crawford that have been factually incorrect. You both continue to make assumptions and assertions based upon these fallacies, then act indignant when I point out that you are wrong.

But fine, I will fix your example for you.

WotC has determined that a single feature of a single subclass rated a total of 57%; therefore they will abandon it. Now, somehow you think that that 57% ONLY includes half their data set. It... it doesn't. It isn't just made up of the people who voted Satisfied and Very Satisfied. They determine that 57% using the entirety of their data set. So, that is a second part of your example that is flawed.

But, your point is that by creating that 57% they have "no idea" how many people might have wanted them to give the feature a second chance by redoing it again. And, I suppose you are technically correct. After all, WotC isn't asking that question. They also don't know how many people would prefer a 2d10 system over a 1d20 system. Because they aren't asking that question.

They asked "do you like it". They didn't ask, "Should we try and make it better". So, you are correct, they don't have accurate data on how many people want them to make features better. That isn't the goal of the survey. So, yet again, you have demonstrated that you don't understand the actual process, because if you did, you wouldn't act like information they didn't ask for is somehow a slam dunk that they aren't getting the information they asked for.

They can read their data just fine. They just are bad at interpreting it because they are trying to get numbers not intended to be reached via the method they are using.

It's worthless because of the math, which you consistently ignore in favor of guesswork.

No, they aren't. They aren't trying to get numbers the method isn't intended to produce. They are getting the numbers, then they are USING those numbers to determine policy. You have to separate the two. WotC isn't making a survey to ask people how much they want them to edit and iterate on features. They are making a survey to ask people how much they like the features presented. Then utilizing that data to make determinations on their next steps.
 

Facts that help with adjusting, e.g., what people think about a Druid feature in 5e? You even trying to make that case is ridiculous; this is not something they have data for, unlike, say, 'old white dudes vote disproportionately Republican, so if we have too many / too few of these in our survey, we need to account for that'...

What happened to

First you didn't even attempt to explain what it does or how it would help, and now you want to convince me that it definitely can help...


All this does is show why it is pointless to have a discussion with you. I am done

It's almost as if you wouldn't listen the first time, so I had to dig deeper and deeper into what little I did know, and do research into the rest, to try and show you what you are missing.

And your example is hilariously inaccurate to what I said. I said they had data, I didn't say they were looking at who votes for what political party.
 

It's almost as if you wouldn't listen the first time, so I had to dig deeper and deeper into what little I did know, and do research into the rest, to try and show you what you are missing.
all you showed is what little understanding you have of the topic

And your example is hilariously inaccurate to what I said. I said they had data, I didn't say they were looking at who votes for what political party.
It's like you understood nothing at all of what I posted, not really surprising at this point... this was not an example of what data they are looking for, but one of what kind of data is available for these adjustments. It clearly does not help them; that was the point.
 

all you showed is what little understanding you have of the topic


It's like you understood nothing at all of what I posted, not really surprising at this point... this was not an example of what data they are looking for, but one of what kind of data is available for these adjustments. It clearly does not help them; that was the point.

How does their DnD Beyond data showing what people are buying, including (to my knowledge) sometimes being able to buy specific rules "parts", not help them understand what things their customers prioritize?

How does having access to Google's metadata on which DnD terms are searched for most heavily not help them understand what their customers are discussing and thinking about?

How does knowing which DnD videos get the most attention on YouTube not help them understand what their customers engage with?

The company I just started working for sells software to other businesses. One of the software products we sell can show our customers where their customers got the link or the phone number they used to call in and shop at their stores. I don't know if WotC uses similar technology, but can you at least imagine how it might help to be able to trace how the customers who buy your items learned about those items, and use that to add clarity to what content they engage with most heavily?

See, there is plenty of data they could use to figure out important metrics of the community.

And how do you know that this data "clearly does not help them" when... they seem to be succeeding? Playtest 7 is a massive success by most metrics; most people are thrilled with the content and the changes. How can we look at this amount of positive discourse and say that the surveys are clearly failing the company?
 

And those same surveys have been used to make corporations hundreds of millions of dollars.


WotC surveys make a number of those mistakes.
Yes, it turns out polling people in a constantly shifting situation like a presidential election can end up being wrong. They aren't omniscient. But if they are so horrifically inaccurate to be worthless, then they wouldn't be used by companies as market research to determine how to make more money.
Because playtests are super stable and don't shift. ;)
You said they can't get percentages of broad categories, then make a single percentage from that. Then you talked about them doing just that. Seriously, stop assuming people are idiots and you are the only intelligent man in the world. The people who do these surveys and collect this data aren't fools who don't understand math.
No. I talked about them getting percentages for each category independently, and only as a broad "X percent voted this way," which doesn't give a hard satisfaction rating. I.e., you can say that 44% of survey takers said they were satisfied, but you can't say that there was an 80% satisfaction rating, because they can't know how satisfied the customers were with their votes of "satisfied" or "very satisfied."

You might be "very satisfied" at 66% and I might be "very satisfied" at 88%.
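Here's what I mean, with made-up numbers: feed the exact same ballots through two different assumptions about what the labels "mean", and you get two different "satisfaction rates".

```python
# Same hypothetical ballots, two invented mappings from label to intensity.
votes = {"very satisfied": 40, "satisfied": 30,
         "dissatisfied": 20, "very dissatisfied": 10}

mapping_a = {"very satisfied": 0.88, "satisfied": 0.60,
             "dissatisfied": 0.35, "very dissatisfied": 0.10}
mapping_b = {"very satisfied": 0.66, "satisfied": 0.55,
             "dissatisfied": 0.30, "very dissatisfied": 0.05}

total = sum(votes.values())
for name, mapping in (("A", mapping_a), ("B", mapping_b)):
    rate = sum(votes[label] * mapping[label] for label in votes) / total
    print(f"mapping {name}: {rate:.0%}")  # A: 61%, B: 49% -- same ballots
```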
It isn't about technically incorrect numbers; it is about continual misunderstandings. You and Mamba have both made statements about WotC or Crawford that have been factually incorrect. You both continue to make assumptions and assertions based upon these fallacies, then act indignant when I point out that you are wrong.
Because we are not wrong and you haven't actually countered what we are saying. You keep misunderstanding things and trying to counter things we aren't saying with links and statements that aren't accurate about what WotC is actually doing.
WotC has determined that a single feature of a single subclass rated a total of 57%; therefore they will abandon it. Now, somehow you think that that 57% ONLY includes half their data set. It... it doesn't. It isn't just made up of the people who voted Satisfied and Very Satisfied. They determine that 57% using the entirety of their data set. So, that is a second part of your example that is flawed.
Good God! 🤦‍♂️

They CAN'T do that. It's not possible, because none of their specific satisfaction percentages are accurate. It's quite literally impossible for them to know how many of the "unsatisfied" and "very unsatisfied" customers want the idea scrapped completely and how many want another iteration of the ability.

Just repeating "They use the entirety of it!" doesn't counter diddly.
But, your point is that by creating that 57% they have "no idea" how many people might have wanted them to give the feature a second chance by redoing it again. And, I suppose you are technically correct. After all, WotC isn't asking that question.
Yes they are. Not directly, but if they are saying that if satisfaction hits X percentage they will give it another go, they are asking it indirectly. If they weren't, they wouldn't be doing it.
They asked "do you like it". They didn't ask, "Should we try and make it better". So, you are correct, they don't have accurate data on how many people want them to make features better.
And yet they announced a percentage of satisfaction that, if hit, will result in trying to make a successful incarnation of the ability. You've just admitted they don't have accurate data to determine that. Thanks for finally conceding one of the points that @mamba and I have been making.
 

And how do you know that this data "clearly does not help them" when... they seem to be succeeding?
that is your real answer for everything. You have never had anything else to say for four weeks, no matter what the topic was.

Never think about anything, never engage with anything, just say whatever pops into your mind and call it a fact because ‘WotC make no mistake, WotC big’, that is all the ‘proof’ you ever have for anything you say

Now you are even applying that to something that you have no idea whether they even do. You just googled something and ran with the first answer you found, and that you barely understand the basics of (and that is me being generous). And yet that too obviously is something they do and that is working great for them because ‘how could it not’…
 
