An "open letter" to WotC staff on survey design
Thank you for adopting an "open playtest" approach as you develop the next iteration of D&D. I've enjoyed many of the articles on the WotC website and the conversations they have raised in forums like this one. However, as a researcher, I've been frustrated with the surveys accompanying many of the articles and blog entries. Overall, the poorly designed surveys cause me to seriously question the data they collect, the decisions made following analysis, and the staff's original intent in including the surveys in the first place. Many of the polls are poorly designed in terms of both the language used and the constructs found in each instrument. With this in mind, I offer this simple guideline for creating surveys that produce accurate, usable data.

The following is a simplified list of the steps in good survey design:

- Set project goals: clear goals = better data
- Determine the sample
- Develop questions
- Pretest questions
- Collect data
- Conduct analysis

In many instances, WotC lacks clear-cut goals for what data each survey is meant to collect. The staff also seem unable to design questions and constructs that produce good, usable data. The surveys WotC uses can be described as "fixed response" surveys: a question is asked, and a set of fixed responses is provided to the respondent. However, for the survey to maintain internal or construct validity (i.e., intrinsic reliability of the instrument, and consistency and coherency between the questions/constructs in the instrument), the responses must cover all possible alternatives, and each question must be unique, with no conceptual overlap. One thing WotC has been doing well is giving respondents the option to clarify their responses in the "comments" section; whether those responses are included in the analysis remains to be seen.
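To make the "cover all alternatives, no conceptual overlap" requirement concrete, here is a minimal sketch of what a pretest check on a single fixed-response question might look like. It is illustrative only: the `Question` class and the `CATCH_ALL` keyword list are invented for this example, and the overlap test only catches literal duplicate options, since genuine conceptual overlap still requires a human reviewer.

```python
from dataclasses import dataclass

# Hypothetical representation of one fixed-response survey question.
@dataclass
class Question:
    text: str
    options: list[str]

# Assumed keywords that signal a catch-all option, which keeps the fixed
# responses collectively exhaustive even when the listed choices miss a case.
CATCH_ALL = ("other", "none", "no opinion", "no preference")

def check_options(q: Question) -> list[str]:
    """Flag the two failures described above: overlapping (here, duplicated)
    options, and a response set with no catch-all alternative."""
    warnings = []
    normalized = [opt.strip().lower() for opt in q.options]
    if len(set(normalized)) != len(normalized):
        warnings.append("options overlap: at least two choices are identical")
    if not any(key in opt for opt in normalized for key in CATCH_ALL):
        warnings.append("options are not exhaustive: add an 'Other' or "
                        "'No opinion' choice (or a comments field)")
    return warnings

# Example: the response set covers only two attitudes, so the check
# suggests adding a catch-all choice.
print(check_options(Question(
    text="How do you feel about the recent survey?",
    options=["Satisfied", "Unsatisfied"],
)))
```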
Unfortunately, the majority of the polls I've seen suffer from a number of errors, which I describe below.

Inconsistency
Consistency is important. All of the responses should be similar and consistent, so that no single response stands out to the respondent other than the one that feels "true" for them. Inconsistent answer choices may lead respondents toward a particular answer, and they can make it difficult for respondents to recognize the choice that best suits them.

Irrelevancy
Irrelevant responses add unnecessary answer choices and can distract respondents, which undermines accurate data collection.

Poorly Designed Ranking Questions
When asking respondents to rank their responses, surveys should use the most direct language possible. Avoid asking "in the negative" or using reverse ranking. Help the respondent make a clear choice by keeping things intuitive.

Multiple-Construct Questions (or "double-barreled" questions)
Sometimes surveys include questions that contain two constructs. For example, consider the following:

"How do players and DMs feel about the recent survey?"
(a) satisfied
(b) unsatisfied

The question is inadequate because the respondent cannot address both "players" and "DMs." In this instance, it should be split into two questions: one for players and one for DMs.

Bias
This is pretty self-explanatory, but I'll provide an example anyway.

"Do you think 4e offers a better magic system than the older editions?"
(a) Yes
(b) No
(c) No opinion

This is a leading question: it suggests that the 4e power system is better than the magic system of previous editions. A non-leading version could be:

"How do you feel about the 4e magic system compared to other editions?"
(a) I prefer the 4e system
(b) I prefer "Vancian magic"
(c) I have no preference for any of the magic systems
(d) Other (I'll explain below in the comments section)

The "Unanswerable Question"
Too often, researchers ask respondents questions they cannot answer. This is especially prevalent in D&D surveys, where we often see questions like: "Now that we've described this option we are considering for D&D Next, would you like it to be included in the new version of D&D?" While some respondents may answer, they are deciding based on the description provided to them rather than on actual play experience, and many will be frustrated because they want to play the concept before making a decision. Any response that doesn't involve actual play experience is simply a guess; it may be informed by the article's description, but the accuracy of the data is still compromised. A better question might be: "Here is concept 'X' – based on this description, is this an option that should be considered for the next iteration of the game?" or "Would you be interested in playtesting this concept?"

General Tips
Here are a few suggestions I'd like to offer WotC staff when designing online surveys; a sketch of how some of them might be checked automatically follows this list.
• Clearly state the goal of the survey at the beginning.
• Include clear instructions on how to complete the survey. Keep it simple.
• Keep the questions short and concise.
• Only ask one question at a time.
• If you have more than 5 or 6 questions, consider grouping them into categories.
• Finally, test the questionnaire before "going public."
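Several of these tips could even be checked mechanically before a survey "goes public." The sketch below uses the same invented style as the earlier example: `Survey` and `pretest` are hypothetical names, and the double-barreled test is a crude keyword heuristic, not a substitute for piloting the questionnaire with real respondents.

```python
from dataclasses import dataclass, field

# Hypothetical survey container: a stated goal, instructions, and questions.
@dataclass
class Survey:
    goal: str
    instructions: str
    questions: list[str] = field(default_factory=list)
    categories: list[str] = field(default_factory=list)

def pretest(survey: Survey) -> list[str]:
    """Apply the general tips above as rough lint rules."""
    warnings = []
    if not survey.goal.strip():
        warnings.append("state the goal of the survey up front")
    if not survey.instructions.strip():
        warnings.append("include clear instructions for completing the survey")
    for q in survey.questions:
        if len(q.split()) > 25:
            warnings.append(f"question may be too long: {q!r}")
        # Crude double-barreled heuristic: 'and' possibly joining two subjects.
        if " and " in q.lower():
            warnings.append(f"possible double-barreled question: {q!r}")
    if len(survey.questions) > 6 and not survey.categories:
        warnings.append("more than 6 questions: consider grouping into categories")
    return warnings

# Example: one double-barreled question, and 7 ungrouped questions.
demo = Survey(
    goal="Gauge satisfaction with the playtest packet",
    instructions="Answer each question; use the comments box to elaborate.",
    questions=["How do players and DMs feel about the recent survey?"] +
              [f"Question {i}?" for i in range(2, 8)],
)
print(pretest(demo))
```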
[QUOTE="kevtar, post: 5869566, member: 27098"] Thank you for adopting an "open playtest" approach as you develop the next iteration of D&D. I've enjoyed many of the articles on the WotC website and the conversations they have raised in forums like this. However, as a researcher, I've been frustrated with the surveys accompanying many of the articles and blog entries. Overall, the poorly designed surveys cause me to [U]seriously [/U]question the data they collect, the decisions made following analysis, and their original intent of staff in including the surveys in the first place. Many of the polls are poorly designed in terms of the language used and the constructs found in each instrument. With this in mind, I offer this simple guideline on creating surveys that produce accurate, usable data. The following is a simplified list of steps one might take in good survey design: [LIST] [*]Set project goals: clear goals = better data [*]Determine sample [*]Develop questions [*]Pretest questions [*]Collect data [*]Conduct analysis [/LIST] In many instances, WotC are lacking in determing clear-cut goals for what data they want to collect with each survey. They also seem incapable of desiging questions/constructs that produce good, usable data. For example, the types of surveys WotC uses can be described as "fixed response" surveys. A question is asked, and a set of fixed responses are provided for the respondent. However, in order for the survey to maintain any internal or construct validity (i.e. intrinsic reliability of the instrument and consistency and coherency between questions/constructs in the instrument), the responses must provide options for all possible alternatives and that the questions are unique with no conceptual overlap. One thing WotC has been doing well is providing an option for respondents to clarify their responses in the "comments" section. Whether or not these responses are included in their analysis remains to be seen. Unfortunately, the majority of the polls I've seen suffer from a number of errors. I describe these errors below: [B]Inconsistency[/B] Consistency is important. All of the responses should be similar and consistent so that no single response stands out to the respondent other than the one response that feels “true” for them. Inconsistent answer choices may lead respondents to a particular answer. Inconsistency also can make it difficult for respondents to recognize a choice that is best suited for them. [B]Irrelevancy[/B] Irrelevant responses add unnecessary answer choices for the respondents and can cause distractions which can affect accurate data collection. [B]Poorly Designed Ranking Questions[/B] When asking respondents to rank their responses, surveys should use the most direct language. Avoid asking “in the negative” or reverse ranking. Help the respondent in making a clear choice by keeping things intuitive. [B]Multiple construct Questions[/B] (or “doubled barreled” questions) Sometimes surveys include questions that include two constructs. For example, consider the following: “How do players and DMs feel about the recent survey?” (a) satisfied (b) unsatisfied The question is inadequate because the respondent cannot address both “players” and “DMs.” In this instance, the question should be split into two questions: one for players and one for DMs. [B]Bias[/B] This is pretty self explanatory, but I’ll provide an example anyway. 
“Do you think the 4e offers a better magic system than the older editions?” (a) Yes (b) No (c) No Opinion This is a leading question that suggests that the 4e power system is better than the magic system from previous editions. A non-leading question could be: “How do you feel about the 4e magic system compared to other editions?” (a) I prefer the 4e system (b) I prefer “Vancian magic” (c) I have no preference for any of the magic systems (d) Other. I’ll explain my comments below in the comments section. [B]The “unanswerable question”[/B] Too often, researchers ask respondents questions that they cannot answer. This is especially prevalent in D&D surveys. Often, we see questions like: “Now that we described this option we are considering for D&D next, would you like that to be included in the new version of D&D?” While some respondents may answer the question, they are making a decision based on what the description provided to them and not on actual game experience. Many respondents will be frustrated by this type of question because they want to be able to have the actual experience in playing the concept before making a decision. Any response that doesn’t involve actual play experience is simply a guess. It may be informed through the description of the article, but the accuracy of the data collected by the question is still compromised because the players do not have actual gaming experience to inform their choice. A better question may be, “Here is concept ‘X,’ – based on this description, is this an option that should be considered for the next iteration of the game?” or "Would you be interested in playtesting this concept?" [B]General Tips[/B] Here are a few suggestions I’d like to offer to WotC staff when designing online surveys: • Clearly state the goal of the survey at the beginning of the survey. • Include clear instructions on how to complete the survey. Keep it simple. • Keep the questions short & concise. • Only ask one question at a time. • If you have more than 5 or 6 question, consider group them into categories. • Finally – test the questionnaire before “going public.” I wish you all the best of luck as you attempt to design an iteration of D&D that captures the "essence" of the D&D experience while providing players the option of tailoring that experience to their own particular style of play. Kind regards, A fellow player [/QUOTE]