Thank you for adopting an "open playtest" approach as you develop the next iteration of D&D. I've enjoyed many of the articles on the WotC website and the conversations they have sparked in forums like this one. However, as a researcher, I've been frustrated with the surveys accompanying many of the articles and blog entries. Many of the polls are poorly designed in terms of both the language used and the constructs found in each instrument, which leads me to seriously question the data they collect, the decisions made from that data, and the staff's original intent in including the surveys in the first place. With this in mind, I offer these simple guidelines on creating surveys that produce accurate, usable data.
The following is a simplified list of steps one might take in good survey design:
- Set project goals: clear goals = better data
- Determine sample
- Develop questions
- Pretest questions
- Collect data
- Conduct analysis
In many instances, WotC is lacking in determining clear-cut goals for what data they want to collect with each survey. They also seem incapable of designing questions/constructs that produce good, usable data. For example, the types of surveys WotC uses can be described as "fixed response" surveys: a question is asked, and a set of fixed responses is provided for the respondent. However, for the survey to maintain any internal or construct validity (i.e., the intrinsic reliability of the instrument, and the consistency and coherence between questions/constructs within it), the responses must cover all possible alternatives, and the questions must be unique, with no conceptual overlap. One thing WotC has been doing well is giving respondents the option to clarify their responses in the "comments" section. Whether or not those responses are included in the analysis remains to be seen.
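To make the reliability point concrete: once fixed responses are coded numerically, the internal consistency of a group of related items can be estimated with a statistic such as Cronbach's alpha. Below is a minimal sketch in Python; the cronbach_alpha helper and the response data are hypothetical, and it assumes a block of related items all scored on the same 1-5 scale:

```python
# Minimal sketch: estimating internal consistency (Cronbach's alpha)
# for a block of related survey items. All data here are hypothetical.

def cronbach_alpha(items):
    """items: one list of numeric responses per survey item;
    position i in every list belongs to the same respondent."""
    k = len(items)                 # number of items
    n = len(items[0])              # number of respondents

    def variance(xs):
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs) / (len(xs) - 1)

    item_variance_sum = sum(variance(item) for item in items)
    # Total score per respondent across all items
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - item_variance_sum / variance(totals))

# Hypothetical 1-5 responses to three items meant to measure the same construct.
responses = [
    [4, 5, 4, 2, 5],  # item 1
    [4, 4, 5, 2, 4],  # item 2
    [3, 5, 4, 1, 5],  # item 3
]
print(f"alpha = {cronbach_alpha(responses):.2f}")  # ~0.92 for this sample
```

Values around 0.7 or above are a common rule of thumb for acceptable consistency; a low alpha suggests the items are not actually measuring the same construct and should be reworded or regrouped.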
Unfortunately, the majority of the polls I've seen suffer from a number of errors. I describe these errors below:
Inconsistency
Consistency is important. All of the response options should be similar in form and tone, so that no single option stands out to the respondent other than the one that feels “true” for them. Inconsistent answer choices may lead respondents toward a particular answer, and they can also make it difficult for respondents to recognize the choice best suited to them.
Irrelevancy
Irrelevant response options add unnecessary answer choices and can distract respondents, which undermines accurate data collection.
Poorly Designed Ranking Questions
When asking respondents to rank their responses, surveys should use the most direct language possible. Avoid asking “in the negative” or using reverse ranking (e.g., asking respondents to rank from least preferred to most preferred). Help the respondent make a clear choice by keeping things intuitive.
Multiple-Construct Questions (or “Double-Barreled” Questions)
Sometimes surveys include questions that combine two constructs. For example, consider the following:
“How do players and DMs feel about the recent survey?”
(a) satisfied
(b) dissatisfied
The question is inadequate because a single response cannot address both “players” and “DMs.” In this instance, the question should be split into two: one for players and one for DMs.
Bias
This is pretty self-explanatory, but I’ll provide an example anyway.
“Do you think 4e offers a better magic system than the older editions?”
(a) Yes
(b) No
(c) No Opinion
This is a leading question: it suggests that the 4e magic system is better than the magic systems of previous editions. A non-leading question could be:
“How do you feel about the 4e magic system compared to other editions?”
(a) I prefer the 4e system
(b) I prefer “Vancian magic”
(c) I have no preference for any of the magic systems
(d) Other. I’ll explain in the comments section below.
The “unanswerable question”
Too often, researchers ask respondents questions that they cannot answer, and this is especially prevalent in D&D surveys. We often see questions like: “Now that we’ve described this option we are considering for D&D Next, would you like it to be included in the new version of D&D?” While some respondents may answer the question, they are making a decision based on the description provided to them rather than on actual game experience. Many respondents will be frustrated by this type of question because they want actual play experience with the concept before making a decision. Any response that doesn’t involve actual play experience is simply a guess; it may be informed by the article’s description, but the accuracy of the data collected is still compromised because players have no gaming experience to inform their choice. A better question might be: “Here is concept ‘X.’ Based on this description, is this an option that should be considered for the next iteration of the game?” or "Would you be interested in playtesting this concept?"
General Tips
Here are a few suggestions I’d like to offer to WotC staff when designing online surveys:
• Clearly state the goal of the survey at the beginning of the survey.
• Include clear instructions on how to complete the survey. Keep it simple.
• Keep the questions short & concise.
• Only ask one question at a time.
• If you have more than 5 or 6 questions, consider grouping them into categories.
• Finally – test the questionnaire before “going public.”
I wish you all the best of luck as you attempt to design an iteration of D&D that captures the "essence" of the D&D experience while providing players the option of tailoring that experience to their own particular style of play.
Kind regards,
A fellow player