
D&D (2024) GenCon 2023 - D&D Rules Revision panel

I just don't get people's acceptance of the status quo.

I get accepting that what we get is what we get. I will accept whatever form of D&D WotC puts out, because I have little, if any, influence over that. But that doesn't mean we can't keep aspiring to better solutions to problems, already solved or not. It seems antithetical to any artist, let alone any game designer, to constantly settle. There's a certain amount of settling a corporation has to do, but that doesn't mean we can't keep asking them to push harder. Innovation is how we got from B/X to the 5E of today: not by accepting things as they are, but by pushing to see what else we can do, how else we can crack this nut.
I think it all goes back to: one man's trash is another man's treasure.

Then there’s also the notion that quantity has a quality all of its own.

And also that everything has trade-offs.
 


That was not the point I am making; to me it is an objectively bad way to gather data, regardless of the outcome, and I have been consistently distinguishing between those two things. Bad methodology can lead to outcomes I like, and good methodology can lead to outcomes I do not like; that changes nothing about the quality of the methodology.

That I think it got in the way of things I would have preferred is just the icing on the cake, and probably a reason why I took a closer look at it, but it does not inform my judgement of the methodology. I have always given reasons and examples for why I think it is bad; argue those if you want to. If anyone is arguing from the outcome to the methodology, it is you, when you say ‘D&D is selling great, so how they test must be working’.
I'm with mamba here. I don't KNOW yet whether their data-gathering methodology is going to result in things that I like or things that I don't like. That's not really the point. My point, when criticizing their survey style, is that I don't think that they're getting very useful data if their goal is actually to find out what people like and what they don't.

Heck, when I'm filling out the surveys, I don't know how to use them to TELL THEM what I like and what I don't like. I can't imagine how they'll understand me. (And that's not even getting into the fact that I don't believe that everything I like should get "five stars" and everything I don't like should get "one star", which is so common in our modern world. I'm simply not that kind of extremist.) The write-in parts don't help much either, because while I could write a book about my opinion on the rules of D&D, I simply don't have the time to do so for every playtest survey. And while I believe that they read them, I don't think that MY opinion is going to sway them enough to make it worth my while to type it out (beyond a few sentences, at least).
 

I recently came across this discussion of writing from comedian Bill Hader. The takeaway: "When people give you notes on something... when they tell you it's wrong, they're usually right. When they tell you how to fix it, they're usually wrong." The surveys are really good at telling the design team what's wrong; it may be that they're not interested in our ideas about how to fix it. And while they are getting the information they need, they are also successfully keeping in the conversation a game whose reputation was so badly tarnished only eight months ago. That's good marketing.
I watched a video game designer give a talk making much the same point: players are good at identifying and finding problems, but terrible at coming up with solutions.
 

I'm with mamba here. I don't KNOW yet whether their data-gathering methodology is going to result in things that I like or things that I don't like. That's not really the point. My point, when criticizing their survey style, is that I don't think that they're getting very useful data if their goal is actually to find out what people like and what they don't.
Maybe they just need a balance scale with "balanced" in the middle, "too bad" on the left, and "too good" on the right. And maybe an extra button for "unfitting".
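A minimal sketch of how that answer set might be encoded and tallied, purely as one reading of the suggestion (the names and data below are invented, not anything from WotC's surveys):

```python
from enum import Enum

# Hypothetical encoding of the suggested "balance scale" answer set.
class BalanceRating(Enum):
    TOO_BAD = -1      # left end of the scale: underpowered
    BALANCED = 0      # the midpoint
    TOO_GOOD = 1      # right end of the scale: overpowered
    UNFITTING = None  # the extra button: off the scale entirely

# Invented sample votes for one playtest item.
votes = [BalanceRating.BALANCED, BalanceRating.TOO_GOOD, BalanceRating.UNFITTING]

# Only on-scale votes contribute to the balance lean; "unfitting"
# is reported separately as a count.
on_scale = [v for v in votes if v is not BalanceRating.UNFITTING]
mean_lean = sum(v.value for v in on_scale) / len(on_scale)

print(f"lean: {mean_lean:+.2f} (positive = too good)")  # lean: +0.50 (positive = too good)
print(f"unfitting votes: {len(votes) - len(on_scale)}")  # unfitting votes: 1
```

One nice property of this shape is that it separates "badly tuned" from "doesn't belong", which a single 1-5 satisfaction score cannot do.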
 

I think it all goes back to: one man's trash is another man's treasure.

Then there’s also the notion that quantity has a quality all of its own.

And also that everything has trade-offs.
That covers why we should accept the result, provided we had faith in the way it was arrived at.

It does not apply to the methodology itself; we should always try to improve that.
 

(And that's not even getting into the fact that I don't believe that everything I like should get "five stars" and everything I don't like should get "one star", which is so common in our modern world. I'm simply not that kind of extremist.)
You should; it is the best, and perhaps only, way for you to convey what you want in a way that WotC will probably pick up on.
 

Maybe they just need a balance scale with "balanced" in the middle, "too bad" on the left, and "too good" on the right.
What does "too good" mean? There should not be such a thing.

As I wrote yesterday (?), I’d go with two yes/no questions instead of a scale:

1) Do you like this approach better than the current alternative?
2) If yes, do you think it needs improvements?

No 70% threshold that does not work the way WotC tells us it does, no second-guessing by WotC over whether a 2 meant ‘I like templates, but these ones suck’ or ‘I hate templates’, and no more of us wondering ‘if I give it a 3, do they understand that I want them to improve it, or will they throw it out?’

At this point I am rating everything 1 or 5 (since we do not judge balance anyway): giving a 5 even if I do not like the current version means I at least have a chance of seeing a version of it I actually like; anything else means I never will.
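For what it's worth, here is a minimal sketch of how that two-question scheme could be tallied; the field names and sample data are invented for illustration:

```python
# Each response: Q1 "prefer the new approach over the current alternative?",
# Q2 "if yes, does it need improvements?" (only asked on a "yes" to Q1).
responses = [
    {"prefers_new": True,  "needs_improvement": True},
    {"prefers_new": True,  "needs_improvement": False},
    {"prefers_new": False, "needs_improvement": None},  # Q2 skipped
]

total = len(responses)
prefers = [r for r in responses if r["prefers_new"]]
wants_fixes = [r for r in prefers if r["needs_improvement"]]

print(f"Prefer the new approach: {len(prefers) / total:.0%}")                 # 67%
print(f"...of whom want improvements: {len(wants_fixes) / len(prefers):.0%}") # 50%
```

The appeal is that the two numbers cannot be conflated: "keep it but fix it" and "throw it out" land in different buckets instead of both showing up as a middling score.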
 

I'm with mamba here. I don't KNOW yet whether their data-gathering methodology is going to result in things that I like or things that I don't like. That's not really the point. My point, when criticizing their survey style, is that I don't think that they're getting very useful data if their goal is actually to find out what people like and what they don't.

Heck, when I'm filling out the surveys, I don't know how to use them to TELL THEM what I like and what I don't like. I can't imagine how they'll understand me. (And that's not even getting into the fact that I don't believe that everything I like should get "five stars" and everything I don't like should get "one star", which is so common in our modern world. I'm simply not that kind of extremist.) The write-in parts don't help much either, because while I could write a book about my opinion on the rules of D&D, I simply don't have the time to do so for every playtest survey. And while I believe that they read them, I don't think that MY opinion is going to sway them enough to make it worth my while to type it out (beyond a few sentences, at least).
You've identified exactly how they are getting useful data about what people like and don't like: general data from the rankings, which can be given very quickly, and then specific feedback from the written comments. That's how you "TELL THEM what [you] like and what [you] don't like."

Your point about the written comments taking time you can't spare describes a feature of this type of methodology, not a flaw. They don't really want written feedback from folks who aren't deeply invested in a particular issue; they want it from folks who feel strongly enough to find the time. For example, on the last survey I whipped through most of the responses, giving just a ranking and no comment, but on a few specific points (the monk's basic design, the Moon Druid subclass, etc.) I gave significant, detailed written feedback.

This is a very standard design for a survey intended to 1) gauge overall reactions at a broad scale, 2) identify specific pressure points, and 3) generate more specific feedback and suggestions on those pressure points. My workplace, for instance, runs a very similarly constructed employee survey every year and uses it to identify management priorities; it is a widespread methodology. WotC didn't just throw something together at the last minute: this is a meticulously constructed survey that is very much up to current industry standards, they have clearly invested substantial resources in the process, and it is obviously being conducted by industry professionals.

Also, WotC has masses of data that we lack, which allows them (or, more accurately, the professionals conducting the survey) to analyze the responses in aggregate and work out what the rankings mean in context. This is how they have established that a proposal falling below the 70% satisfaction level is not currently worth pursuing for this project (one way such a threshold could be computed is sketched below). That doesn't mean the idea is thrown in the trash: Ardlings, for example, fell well below that threshold, yet WotC has stated that they intend to keep working with the basic idea.

I think most of us lack the frame of reference or the information to make a really informed criticism of how this survey is being conducted or analyzed. We tend to reach for what we know and understand, which leads to a lot of false assumptions (e.g., that the satisfaction numbers are equivalent to letter grades in school, or that the onerous nature of written feedback is a flaw).
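For concreteness, here is one plausible reading of that 70% threshold, as a hedged sketch only: WotC has not published its exact formula, so the assumption below that a 4 or 5 counts as "satisfied" is mine, and the survey data is invented.

```python
THRESHOLD = 0.70  # assumed cutoff, per WotC's stated 70% satisfaction target

def satisfaction(ratings: list[int]) -> float:
    """Fraction of respondents rating the item 4 or 5 (an assumption)."""
    return sum(1 for r in ratings if r >= 4) / len(ratings)

def pressure_points(items: dict[str, list[int]]) -> list[str]:
    """Items falling below the threshold, i.e. candidates for rework."""
    return [name for name, ratings in items.items()
            if satisfaction(ratings) < THRESHOLD]

# Invented example data, not real survey results.
survey = {
    "Monk core design": [5, 4, 4, 3, 5, 2, 4, 5],  # 75% -> passes
    "Ardling species":  [3, 2, 4, 1, 3, 5, 2, 3],  # 25% -> flagged
}
print(pressure_points(survey))  # ['Ardling species']
```

Under this reading, a sea of 3s ("it's fine, I guess") drags an item below the cutoff just as surely as 1s do, which is exactly the ambiguity some posters above are worried about.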
 

I'm with mamba here. I don't KNOW yet whether their data-gathering methodology is going to result in things that I like or things that I don't like. That's not really the point. My point, when criticizing their survey style, is that I don't think that they're getting very useful data if their goal is actually to find out what people like and what they don't.

Heck, when I'm filling out the surveys, I don't know how to use them to TELL THEM what I like and what I don't like. I can't imagine how they'll understand me. (And that's not even getting into the fact that I don't believe that everything I like should get "five stars" and everything I don't like should get "one star", which is so common in our modern world. I'm simply not that kind of extremist.) The write-in parts don't help much either, because while I could write a book about my opinion on the rules of D&D, I simply don't have the time to do so for every playtest survey. And while I believe that they read them, I don't think that MY opinion is going to sway them enough to make it worth my while to type it out (beyond a few sentences, at least).
This is why, for packet 6, I only did the monk survey and the rules-glossary breakdown post, despite having done all parts of all the surveys, plus a post with a full breakdown, for the first five packets. There is a severe breakdown in the process somewhere, and some of it only makes sense if I assume that some element of the process between whiteboarding ideas and pushing them out for testing and survey makes the surveys an irrelevant waste of my time by design, or if I make much worse assumptions.

I don't know how to answer without adding self-defeating noise to the results, and even if I do, I'm skeptical it would matter.
 

This is why, for packet 6, I only did the monk survey and the rules-glossary breakdown post, despite having done all parts of all the surveys, plus a post with a full breakdown, for the first five packets.
Same; I skipped over 95% of it this time, gave a few 5s and 1s, and wrote something for every one of those. Anything else feels like a waste of my time at this point. And if that is what I must conclude from the experience so far, it does not fill me with confidence that any of this is actually worth my time. It feels like reading tea leaves on both sides.

Inception, the survey ;)
 
