
An "open letter" to WotC staff on survey design

kevtar

Thank you for adopting an "open playtest" approach as you develop the next iteration of D&D. I've enjoyed many of the articles on the WotC website and the conversations they have sparked in forums like this one. However, as a researcher, I've been frustrated with the surveys accompanying many of the articles and blog entries. The poorly designed surveys lead me to seriously question the data they collect, the decisions made from that data, and the staff's original intent in including the surveys in the first place. Many of the polls are poorly designed in terms of both the language used and the constructs found in each instrument. With this in mind, I offer a simple guideline on creating surveys that produce accurate, usable data.


The following is a simplified list of steps one might take in good survey design:
  • Set project goals: clear goals = better data
  • Determine sample
  • Develop questions
  • Pretest questions
  • Collect data
  • Conduct analysis

In many instances, WotC is lacking in determining clear-cut goals for what data they want to collect with each survey. They also seem unable to design questions/constructs that produce good, usable data. For example, the surveys WotC uses can be described as "fixed response" surveys: a question is asked, and a set of fixed responses is provided for the respondent. However, for the survey to maintain any internal or construct validity (i.e., intrinsic reliability of the instrument, and consistency and coherence between the questions/constructs in the instrument), the responses must cover all possible alternatives, and the questions must be unique, with no conceptual overlap. One thing WotC has been doing well is providing an option for respondents to clarify their responses in the "comments" section. Whether or not those responses are included in the analysis remains to be seen.
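The two validity requirements above — options that cover all alternatives, with no overlap between them — can even be checked mechanically. Here is a minimal sketch of such a check; the rules and the catch-all phrases are my own illustrative assumptions, not anything WotC uses:

```python
# Toy validator for a fixed-response option set: flags duplicated/empty
# options (not mutually exclusive) and the absence of a catch-all choice
# such as "Other" (not exhaustive). Illustrative assumptions only.

CATCH_ALL_PHRASES = ("other", "none of the above", "no opinion", "no preference")

def check_options(options):
    """Return a list of problems found in a fixed-response option set."""
    problems = []
    normalized = [opt.strip().lower() for opt in options]
    # Mutually exclusive: no duplicated or empty choices.
    if len(set(normalized)) != len(normalized):
        problems.append("duplicate options")
    if any(not opt for opt in normalized):
        problems.append("empty option")
    # Exhaustive: a catch-all lets respondents outside the fixed set answer.
    if not any(phrase in opt for opt in normalized for phrase in CATCH_ALL_PHRASES):
        problems.append("no catch-all option (e.g. 'Other')")
    return problems

check_options(["Satisfied", "Unsatisfied"])   # flags the missing catch-all
check_options(["Yes", "No", "No opinion"])    # passes: exhaustive and distinct
```

A real pretest would of course rely on human review rather than string matching, but even a trivial checklist like this would catch several of the polls' recurring problems.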

Unfortunately, the majority of the polls I've seen suffer from a number of errors. I describe these errors below:

Inconsistency
Consistency is important. All of the responses should be similar and consistent so that no single response stands out to the respondent other than the one response that feels “true” for them. Inconsistent answer choices may lead respondents to a particular answer. Inconsistency also can make it difficult for respondents to recognize a choice that is best suited for them.

Irrelevancy
Irrelevant responses add unnecessary answer choices for the respondents and can cause distractions which can affect accurate data collection.

Poorly Designed Ranking Questions
When asking respondents to rank their responses, surveys should use the most direct language. Avoid asking “in the negative” or reverse ranking. Help the respondent in making a clear choice by keeping things intuitive.

Multiple-Construct Questions (or "double-barreled" questions)
Sometimes surveys include questions that combine two constructs. For example, consider the following:

“How do players and DMs feel about the recent survey?”
(a) satisfied
(b) unsatisfied

The question is inadequate because the respondent cannot address both “players” and “DMs.” In this instance, the question should be split into two questions: one for players and one for DMs.

Bias
This is pretty self explanatory, but I’ll provide an example anyway.

“Do you think 4e offers a better magic system than the older editions?”
(a) Yes
(b) No
(c) No Opinion

This is a leading question that suggests the 4e magic system is better than the magic system from previous editions. A non-leading question could be:

“How do you feel about the 4e magic system compared to other editions?”
(a) I prefer the 4e system
(b) I prefer “Vancian magic”
(c) I have no preference for any of the magic systems
(d) Other. I’ll explain my comments below in the comments section.

The “unanswerable question”
Too often, researchers ask respondents questions they cannot answer. This is especially prevalent in D&D surveys. Often, we see questions like: “Now that we have described this option we are considering for D&D Next, would you like it to be included in the new version of D&D?” While some respondents may answer the question, they are making a decision based on the description provided to them, not on actual game experience. Many respondents will be frustrated by this type of question because they want to actually play with the concept before making a decision. Any response that doesn’t involve actual play experience is simply a guess. It may be informed by the article’s description, but the accuracy of the data collected is still compromised because the players have no gaming experience to inform their choice. A better question might be, “Here is concept ‘X’ – based on this description, is this an option that should be considered for the next iteration of the game?” or "Would you be interested in playtesting this concept?"

General Tips
Here are a few suggestions I’d like to offer to WotC staff when designing online surveys:
• Clearly state the goal of the survey at the beginning of the survey.
• Include clear instructions on how to complete the survey. Keep it simple.
• Keep the questions short & concise.
• Only ask one question at a time.
• If you have more than 5 or 6 questions, consider grouping them into categories.
• Finally – test the questionnaire before “going public.”

I wish you all the best of luck as you attempt to design an iteration of D&D that captures the "essence" of the D&D experience while providing players the option of tailoring that experience to their own particular style of play.

Kind regards,

A fellow player
 


While I think (hope?) that the surveys aren't being used for meaningful analysis, clearer communication is always preferable.

To that end, I add my support to this letter.
 

I think you (and others) are assuming that these little surveys are being used for serious market analysis and research. While I could be wrong, it seems to me that they are more of an effort to generate reader participation, involvement, and debate. It may be that the results are merely glanced at, or used mainly to show readers past results. I do believe WOTC has done serious market research in the past, and knows, largely, how to conduct it.
 

I think you (and others) are assuming that these little surveys are being used for serious market analysis and research. While I could be wrong, it seems to me that they are more of an effort to generate reader participation, involvement, and debate. It may be that the results are merely glanced at, or used mainly to show readers past results. I do believe WOTC has done serious market research in the past, and knows, largely, how to conduct it.

Considering that the latest L&L references earlier polls to "get an idea of how and what people like to play", I wouldn't be surprised if they were.
 

I agree that WotC has conducted serious market research in the past, but I believe these polls exist outside of that particular type of market research. In the past, WotC has employed a marketing firm to conduct online surveys, and those surveys were well constructed. With the current polls, it seems the authors writing the articles are also creating the polls, and while they may be good game designers, they aren't particularly good researchers - at least in regard to survey design. This is simply a letter meant to identify the areas where they can improve and to offer some advice on how to create surveys that result in good data and don't frustrate their respondents.

On a side note, I think it would be helpful for WotC to clearly state how they intend to use the data (if they haven't already done so), but even then - even if this is only for "fun" - it should still be done well.
 

I'm not sure that any of these surveys, even if properly constructed, could provide viable data purely due to only surveying people who seek out the polls in the first place.

Which is an interesting slice of the D&D demographic, I'm sure.
 

I'm not sure that any of these surveys, even if properly constructed, could provide viable data purely due to only surveying people who seek out the polls in the first place.

Which is an interesting slice of the D&D demographic, I'm sure.

This is an interesting point and something a researcher should consider when designing the instrument. This is where we think about the sample: Who do we want to participate? How do we get them to participate? What are the limitations in selecting our sample? And so on.

If we adopt the "maximized purposeful sampling" approach, we can argue that we are specifically targeting visitors to the WotC website who seek out the articles. We can make certain assumptions about the group, and if we were interested in publishing our results, we would have a robust discussion about the characteristics of the sample and our justification for using this type of sample selection. I don't think WotC is worried about those factors, but it is interesting to think about who decides to participate in these surveys and why. That line of questioning is directly related to the purpose of the surveys. It seems logical that people participate because they want their voices "heard" - and more specifically, heard and considered in the development of the new edition of D&D. With that in mind, I think WotC should be more specific about what the purpose of the surveys is and, if they inform decisions that affect the game, about how these data are analyzed and interpreted and how the recommendations are put forward.
 

I'm not sure that any of these surveys, even if properly constructed, could provide viable data purely due to only surveying people who seek out the polls in the first place.

Which is an interesting slice of the D&D demographic, I'm sure.

While this does create a bias, a large enough sample should mitigate that.

Besides, these days almost everyone has internet access, so the pool in question is "people interested enough in 5e to be following the discussion and willing to answer polls" - which, while no doubt containing a few outliers, is the core of their target audience.

In any targeted survey there are always bias risks, but I don't think they are likely to be too bad in this case, as there are unlikely to be large sections of the D&D market segment who lack internet access and are unaware of the ongoing 5e design process. While there is a risk of deliberate mischief in polls like this, there are ways to detect and filter out that bad data. Not that they seem to be employing them...
 

I do believe WOTC has done serious market research in the past, and knows, largely, how to conduct it.

I conduct market research interviews. I see good surveys and a lot of bad surveys written by clients (e.g., ambiguous questions, leading questions, a lack of basic skip patterns, or no choice for "don't know/refused," which locks up the survey when customers are unable to provide an answer from the choices given).

I have had the opportunity to take several of WOTC's customer surveys and the majority have fallen into the bad survey category (there have been a few that were good).

I remember a survey about one of their products (I don't recall which one). It asked me to rate the overall book, to rate specific sections, and to rate certain content on how much I wanted to see more of it. All of the questions were phrased to encourage me to rate things highly (which was annoying).
I rated the overall book poor to below average. I rated the sections and content asked about poor to mediocre, and I had no desire to see any of the content inquired about in future products. I never got to rate the few things I felt were exceptional and wanted to see more of, because that specific content was never addressed in the survey.

I had another survey in which I answered a question that should have triggered a skip pattern to bypass a few questions. Instead, I was stuck with questions I could not answer, since I had already indicated in a previous section that I had no familiarity with the product they covered.
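Skip patterns like the one described are just branching logic over earlier answers. A minimal sketch, with questions and flow entirely invented for illustration:

```python
# Toy skip-pattern sketch (question ids and branching invented for
# illustration - not any actual WotC survey). A "no" to the screener
# question skips the product-detail questions instead of forcing
# unanswerable ones on the respondent.

def run_survey(answers):
    """Walk a tiny branching questionnaire; answers maps question id -> response."""
    asked = ["familiar_with_product"]
    if answers["familiar_with_product"] == "no":
        return asked  # skip pattern: bypass the product-detail questions
    asked.append("rate_product")
    asked.append("would_buy_again")
    return asked

run_survey({"familiar_with_product": "no"})
# only the screener question is asked
```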
 

I'm not sure that any of these surveys, even if properly constructed, could provide viable data purely due to only surveying people who seek out the polls in the first place.

Which is an interesting slice of the D&D demographic, I'm sure.


I agree with this. The people who frequent the WotC boards will be the ones still using their products or the ones actively seeking out the polls.

I can imagine their results will be skewed toward a particular demographic, compared to a poll held in other locations.
 
