
D&D (2024) GenCon 2023 - D&D Rules Revision panel

You should; it is the best / only way for you to convey what you want in a way that WotC probably will pick up on.
Maybe, but I can't stand that approach. What's the point of "stars" 2-4 if everyone only uses 1 & 5? It seems to be the standard these days. I can't tell you how many times I've had employees ask me to "rate them" on a 1-10 scale who've told me that if they don't get AT LEAST 8's, they will get in trouble with their boss. SIX should mean "did well". 10 should be "I can't imagine how anyone could possibly do better" (a scenario that I don't think even exists).

I think most of us lack the frame of reference or information to really make an informed criticism of how this survey is being conducted or analyzed. We tend to reach for what we know and understand, which is leading to a lot of false assumptions (e.g. that the satisfaction numbers are equatable to letter grades in school, or that the onerous nature of written feedback is a flaw).
Well, that's for sure. And I admit that I'm not trained in survey design, but seeing as I don't feel like I can use their system to tell them what I think (and it seems likely to me that I'm not the only one) then I don't know how they can get any useful data out of it. In particular when you have examples like them wanting to know whether we like Wildshaping Templates or not, but the Wildshaping Template examples they gave were so terrible. How can they tell whether any sort of majority doesn't like templates, or just doesn't like THOSE templates? I simply can't imagine.
 


Maybe, but I can't stand that approach. What's the point of "stars" 2-4 if everyone only uses 1 & 5?
I have no idea about the point (granularity, I suppose), but the result is to add confusion to WotC’s interpretation of the data.

I resigned myself to 1 and 5 as ‘no’ and ‘yes’ to my first question (do you like it better than what we have, at a conceptual level? Never mind the details / balance; we will never get around to those, you will have to trust us that we know what we are doing there. But if you do not go with 5, we will just throw it away and stick to what we have… you learned that the hard way recently, and unfortunately much too late to still do this playtest much good. Oh well, see you again in 10 years; hope you still remember this then, and that there isn’t a whole new generation that has to learn it and screws this up again for everyone involved).

I can't tell you how many times I've had employees ask me to "rate them" on a 1-10 scale who've told me that if they don't get AT LEAST 8's, they will get in trouble with their boss. SIX should mean "did well". 10 should be "I can't imagine how anyone could possibly do better" (a scenario that I don't think even exists).
It’s the norm; sometimes they ask me to rate them 5 stars (out of 5) because anything less gets them into trouble / counts as a fail.

seeing as I don't feel like I can use their system to tell them what I think (and it seems likely to me that I'm not the only one) then I don't know how they can get any useful data out of it
this, 100%
 

It's weird; as a federal technician working within the military, our evaluation system is a 1-5. A 3 is "you're doing your job, no complaints"; this is the standard "grade". If you're rated a 5, then leadership wants documentation and proof you've been an outstanding worker. A good supervisor or flight chief will definitely go to bat for you if they feel you deserve the 5. On top of that, getting 5s is the only way you can be considered for incentives.
 

It's weird; as a federal technician working within the military, our evaluation system is a 1-5. A 3 is "you're doing your job, no complaints"; this is the standard "grade". If you're rated a 5, then leadership wants documentation and proof you've been an outstanding worker. A good supervisor or flight chief will definitely go to bat for you if they feel you deserve the 5. On top of that, getting 5s is the only way you can be considered for incentives.
Same at my job, wouldn't be surprised if that was adapted from military usage.
 

What does "too good" mean? There should not be such a thing.

As I wrote yesterday (?), I’d go with two yes/no questions instead of a scale:

1) Do you like this approach better than the current alternative?
2) If yes, do you think it needs improvements?

No 70% threshold that does not work the way WotC tells us it does, no second-guessing by WotC whether a 2 meant ‘I like templates, but these ones suck’ or ‘I hate templates’. No more of us wondering ‘if I give it a 3, do they understand that I want them to improve it, or will they throw it out?’

At this point I am rating 1 or 5. Since we do not judge balance anyway, giving a 5 even if I do not like the current version means I at least have a chance of seeing a version of it I actually like. Anything else means I never will.
You are correct. Maybe: keep, improve, drop?
 

There's a certain amount of settling that has to be done by a corporation, but that doesn't mean we can't keep asking for them to push harder.
Well, as you just said, there IS an acceptable amount of settling; the devil's in the details of how much.

We all have to remember that, simply put, the majority of the fanbase is pretty gosh darn happy with 5e. Sales continue to be strong, and the brand continues to get stronger. Fixing what ain't broke seems pretty silly in the face of that.

Now, should they continue to innovate? They should... and they are, just not in the "revamp the whole edition" way that we might have expected at first glance. And for those that are basically looking for a 5.75, that is disappointing. People wanting more of a 5.25 are likely pretty happy here: some tweaks to the really bad stuff, some polish, cleanups in a few key areas, reimagining of the bad subclasses, etc.

The ultimate test of these changes is whether they justify new books. At this point, we do seem to have more of a 5.25 than a 5.5 (going by our 3.5 standard of change), so will that mean people go out and buy these books, or just remain perfectly content with their current books?
 

It's weird; as a federal technician working within the military, our evaluation system is a 1-5. A 3 is "you're doing your job, no complaints"; this is the standard "grade". If you're rated a 5, then leadership wants documentation and proof you've been an outstanding worker.
Same at my job; they also want details on a 1 or 2.

That does not change what service people ask me to do / tell me when it comes to rating my experience with them ('anything below 5 out of 5 is rated a failure; I am telling you specifically because you probably are not used to that').
 

The ultimate test of these changes is whether they justify new books. At this point, we do seem to have more of a 5.25 than a 5.5 (going by our 3.5 standard of change), so will that mean people go out and buy these books, or just remain perfectly content with their current books?
WotC is probably banking more on fresh art and hundreds of new monster stat blocks to sell the books than on PC options... and if the books remain compatible, they keep people playing, and eventually someone who refuses to buy a new PHB in 2024 might in 2026 because others are.
 

You identify how they are getting useful data about what people like and don't like - they are getting general data from the ranking, which can be done very quickly, and then specific feedback from written comments. That's how you "TELL THEM what [you] like and what [you] don't like."

Your point about the written comments taking time that you couldn't spare is a feature, not a flaw, of this type of methodology. They don't really want written feedback from folks who aren't deeply invested in that particular issue. They want written feedback from folks who feel strongly enough to find the time. For example, on the last survey I whipped through most of the responses, just giving a ranking and no comment. However, on a few specific points (monk basic design, Moon druid subclass, etc.) I gave significant, detailed written feedback.

This is a very standard design for a survey intended to 1) gauge overall reactions at a broad scale, 2) identify specific pressure points, 3) generate more specific feedback and suggestions on those pressure points. My work, for instance, does a very similarly constructed employee survey every year and uses it to identify management priorities - this is widespread methodology. WotC didn't just throw something together at the last minute; this is a meticulously constructed survey that is very much up to current industry standards, and they have clearly invested substantial resources into this process. It is obviously being conducted by industry professionals.

Also, WotC has masses of data that we lack, which allows them (or more accurately, the professionals conducting the survey) to analyze the responses in aggregate and identify what the rankings, etc. mean in context. This is how they have established that a proposal that has fallen below the 70% satisfaction level is not currently worth pursuing for this project. That doesn't mean the idea is thrown in the trash - Ardlings, for example, fell well below that threshold, yet WotC has stated that they intend to keep working with the basic idea.

I think most of us lack the frame of reference or information to really make an informed criticism of how this survey is being conducted or analyzed. We tend to reach for what we know and understand, which is leading to a lot of false assumptions (e.g. that the satisfaction numbers are equatable to letter grades in school, or that the onerous nature of written feedback is a flaw).
Completely agree here. While there may be parts of the survey structure I don't like, at the end of the day I am not a data scientist or a survey professional.

I trust that ultimately WotC would like to succeed, and getting accurate feedback about how their products appeal to their customers is a key way to do that. As such, I expect that as a serious company, WotC is going to use professionals to generate and accumulate their user data. Unless a survey expert wants to come on these forums and break down why the survey structure is objectively bad, I'm just going to assume the professional company that has a monetary stake in getting it right is going to know more about the topic than a few forum-goers filling out a survey.
 

