D&D 5E (2024): A critical analysis of 2024's revised classes

I mean, there's also the question of whether the poll was actually representative, or whether it was full of partisans.

Given how radically D&D has grown in the past ten years, we have plenty of reason to believe that that sample is, at absolute bare minimum, very much non-representative now. And there were plenty of reasons to question whether it was representative back during the "D&D Next" playtest.

What is the logical reason to believe the sample is not representative of players of D&D?

I am not discounting that it is possible the UA results are or were a non-representative sample, but I don't see what evidence there is to support that. I would also tend to think it is not partisans but rather "serious" players who are skewing it, as compared to "casual" players who did not bother with, or even know about, the UA.

There does not seem to have been any sort of organized campaign to skew the results, as far as I am aware. So if we are to accept the hypothesis that the sample is skewed by partisans, it means it was skewed at random, with partisans of a certain idea participating and partisans of the counter philosophy not participating. That seems unlikely to me.

I can say anecdotally that I commented on a lot of things and weighed my comments against the commentary here in a non-scientific fashion. It seems the things I asked to eliminate in the surveys that were popular on this board (boosts to the Monk, weapon mastery, spellcasting restrictions on Rage, multiclass minimum abilities) survived until the final product and were not cut in accordance with my preferences. Meanwhile, things I did not like, and that most others seemed not to like as well (Warlock half caster, making everything the Wizard does a spell, Wild Shape stat blocks), were scrapped. I can only think of a couple of things where what they implemented seemed like an unpopular choice (e.g. the changes to Twinned Spell).
 



1. Representative of what? For the most part, playtesting is only done by the most dedicated groups that already play the current version of the game. I don't think it was that way for 2014, as we were coming off a fairly controversial edition, but going into 2024 the game was still highly successful and they were promising some form of backward compatibility, so we knew massive sweeping changes were most likely out.

2. I don't believe the groups most dedicated to 2014 D&D would be particularly representative, though groups approaching that level probably buy more WotC stuff than the more casual end of the spectrum, so it's probably still a decent financial bet to listen to them. I don't know why anyone would think it was remotely possible that people matching that particular fact pattern would be representative of D&D as a whole. It's against all logic and common sense.
 


The statement was: "We have plenty of reason to believe that that sample is, at absolute bare minimum, very much non-representative."

What are the specific reasons to believe this?

The points you make above suggest the playtesting is done by "dedicated" players. This is exactly what I said in the second paragraph you quoted, but it offers no evidence that the results are non-representative of the larger player base, especially when there is a lot of diversity of opinion among "dedicated" players.

I don't know why anyone would think it was remotely possible that people matching that particular fact pattern would be representative of D&D as a whole. It's against all logic and common sense.

Why? I accept that dedicated players could bias the results, but I think it is illogical to believe they automatically do bias the results without any evidence of that at all. This is especially true when many people on this board (i.e. "dedicated" players) have so many problems and issues with the playtest results and have such a wide diversity of opinion as to what is good and what isn't.

If you are sampling players, the baseline assumption is that those who answer want to make the game better, regardless of their experience level or dedication to the game.

If there were evidence that dedicated players like "a thing" more or less than casual players, I could accept that the results with respect to "that thing" would be biased, but there is no evidence of that as far as I can see. No one has provided a single example of where that alleged difference of opinion exists, where "dedicated" players like something casual players don't like.
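That is really the statistical crux of the whole argument, and it is easy to illustrate. The sketch below uses entirely invented population shares, response rates, and approval numbers (nothing here comes from WotC or the actual UA data); it just shows that a survey dominated by dedicated respondents only produces a skewed result when the two groups actually disagree about the question being asked.

```python
# Hypothetical illustration only: all shares, response rates, and approval
# numbers below are invented for the example.

def survey_approval(pop_share, response_rate, approval):
    """Expected approval rate among the people who actually answer the survey."""
    respondents = {g: pop_share[g] * response_rate[g] for g in pop_share}
    total = sum(respondents.values())
    return sum(respondents[g] * approval[g] for g in respondents) / total

def population_approval(pop_share, approval):
    """True approval rate across the whole player base."""
    return sum(pop_share[g] * approval[g] for g in pop_share)

pop_share = {"dedicated": 0.2, "casual": 0.8}        # invented split of the player base
response_rate = {"dedicated": 0.30, "casual": 0.02}  # dedicated players far more likely to respond

# Case 1: both groups feel the same way about a feature.
same_opinion = {"dedicated": 0.70, "casual": 0.70}
print(population_approval(pop_share, same_opinion))             # 0.70
print(survey_approval(pop_share, response_rate, same_opinion))  # 0.70 -> no bias despite the skewed sample

# Case 2: the groups disagree about the feature.
split_opinion = {"dedicated": 0.80, "casual": 0.50}
print(population_approval(pop_share, split_opinion))             # 0.56
print(survey_approval(pop_share, response_rate, split_opinion))  # ~0.74 -> result drifts toward the dedicated view
```

In case 1 the sample is wildly unrepresentative in composition (dedicated players make up roughly 79% of respondents while being only 20% of the player base), yet the headline number is exactly right. The bias only appears in case 2, which is why "name a topic where the two groups would have answered differently" is the evidence being asked for.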
 

Dedicated players wouldn't be representative.

Most players are casual. They lack the knowledge/experience to do deep dives on crunch. They do know what they like, however, and they are the biggest percentage of the players.
Or were, at least.
 

Most importantly, that is not how real surveys work: you don't run a survey and then say that, since no one can prove the sample is non-representative, it is just as likely to be representative as not. Unless one has evidence the study is representative, one shouldn't act as though it is, or likely is, representative.
 


Dedicated players would be more likely to participate in surveys. Imho

They made 5.5 more complicated, when 5.0's big selling point was its relative simplicity.

I'm expecting a shorter edition run than 10 years. Not sure how short; I'd be shocked at 4 years, but I'm not expecting 10.
 

Agreed. The survey barrier to entry was high. I never participated in any but one. It's really hard to get unbiased results with a high barrier to entry.

Good bet, but I think it will be longer. I think they want a stable tabletop D&D to push other D&D-branded products. I could see them mostly doing small revisions to it again. It really depends on the competition.
 


D&D Beyond is the great unknown, imho.
 

Most importantly, that is not how real surveys work: you don't run a survey and then say that, since no one can prove the sample is non-representative, it is just as likely to be representative as not. Unless one has evidence the study is representative, one shouldn't act as though it is, or likely is, representative.

Likewise, if you have no idea how representative a sample is, you should not assume it is not representative either, yet that is exactly what people are doing here. An absence of evidence confirming the quality of the survey is not the same as "plenty of reasons" to doubt it.

Without any specific information on the test design or metrics, you can only draw conclusions from the data you have, and we have no reason to believe the sample is not representative of D&D players. No one who claims with confidence that it is not representative has provided even one specific topic that would have scored differently if the barrier were lower, nor presented any evidence that the players who did take the survey (i.e. dedicated players) have different opinions, in aggregate, than players who didn't (i.e. casual players).

Moreover, the people commenting here and saying it is not representative were not the ones who built the survey. Yet we have someone on this thread who was actually part of the team that designed and executed the survey, and they are not saying this. The "it is not representative" argument started as a counterpoint to the one person on this thread who actually does know how the survey was designed and who posted about the survey results and rules design.

I can walk through a city and ask people whether they prefer orange juice or apple juice. If I only poll people inside one city, it is possible the results would differ from those of the population at large, since I did not poll people living in the suburbs, in rural areas, or in different cities. But the fact that the results could possibly be different does not constitute evidence that they would be different.
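To put rough numbers on that analogy (all of them invented for illustration): a city-only poll is badly skewed in who it reaches, but whether the answer is skewed depends entirely on whether city dwellers and everyone else actually prefer different juice.

```python
# Hypothetical juice-poll numbers, invented purely to illustrate the analogy.
city_share, other_share = 0.4, 0.6   # share of the total population
city_oj, other_oj = 0.55, 0.55       # fraction preferring orange juice

city_only_poll = city_oj                                           # 0.55
whole_population = city_share * city_oj + other_share * other_oj   # 0.55
# Same answer: the sample's composition is skewed, the result is not.

# Only if preference varies by location does the city poll mislead:
city_oj2, other_oj2 = 0.70, 0.40
whole_population2 = city_share * city_oj2 + other_share * other_oj2  # 0.52 vs. a 0.70 city-only poll

print(city_only_poll, whole_population, city_oj2, whole_population2)
```

The possibility of a gap is real, but demonstrating the gap requires showing that the preference actually differs between the groups, which is the same standard being asked for with dedicated versus casual players.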
 


I think the surveys are representative enough.

Probably not too many casuals vote. I don't, because iirc they locked it behind D&D Beyond these days.

The problem with UA surveys is the lack of time to playtest the content. It's essentially voting on vibes. Still better than no survey, though.

People know what they like in any event. CME made it through the surveys to print.
 
