Interesting Ryan Dancey comment on "lite" RPGs
<blockquote data-quote="Steve Conan Trustrum" data-source="post: 2497829" data-attributes="member: 1620"><p>Don't need to. You, as a researcher, should know that problems of methodology can be determined by looking at the results.</p><p></p><p><a href="http://www.rpg.net/news+reviews/wotcdemo.html" target="_blank">http://www.rpg.net/news+reviews/wotcdemo.html</a></p><p></p><p>For example, the information provided by WotC admits it didn't poll existing demographics (35+) and speculated about what those demographics would yield by generalizing the results. And then we have this nugget:</p><p></p><p></p><p></p><p>Actually, we know the results are NOT a snapshot of the entire nation. We know it is grounds for a GUESSTIMATE. You don't find it at all odd that their own analysis admits they couldn't poll significant demographics, yet they then draw conclusions that encompass those missing demographics?</p><p></p><p>Here's another big research no-no: 20,000 households yielded 65,000 results? So, more than 3 returns, on average, came from each household? Market Research 101: doubling up (never mind trebling) the individuals providing data from within the same household introduces purchasing trends that reflect household politics and economics rather than the market as a whole. For example, a household where three young kids send in the survey is likely to provide answers shaped by the fact that the same money is spread among all three, as opposed to a household spending money on just one kid. Considering the survey includes questions about how much the respondents spend on products in a month, all that data is definitely tainted by improper sample separation.</p><p></p><p>The methodology explanation then goes on to say that of the returns, 1,000 were CHOSEN to participate in the end screener. Not "qualified" but CHOSEN. That means that people who were qualified through prescreening were then bypassed through a selection process. Their assertion to the contrary, subjectively choosing your end sample from a presample is NOT an accepted methodology for accurate quantitative or qualitative work. I truly hope the wording is just a poor choice and that instead of "chosen", Ryan meant to say "qualified", but even then they are artificially winnowing the sample, which directly undercuts their claims about how it relates to the overall gaming market--they are actually gaining information solely on the part of the gaming market that fits whatever qualifications (if any) got people into the second survey.</p><p></p><p>Also, I shouldn't have to explain to you that 1,000 final screeners in a single "blast" survey (meaning it doesn't take place over a period of time wherein results continue to come in to track changes over time) is hardly an accurate way to assess a national market, regardless of the industry.</p><p></p><p>Now we come to Section 3 of their data presentation. There are an awful lot of "millions of people play this" and "millions of people play that" claims for a single survey of 1,000 people. If you're going to claim that each person in your sample represents several thousand people, yielding conclusions in the millions, you'd better be using a much larger sample than that and you'd better be doing a longitudinal study; those are some pretty big claims to be making without tracking data (control groups, if you will) to compare against.</p><p></p><p>Now, we'll bypass most of their "exciting" conclusions, because I've touched on most of the reasons why they are faulty, and move right on to Section 4. Here we see another error in the data. They make a lot of claims about computer trends amongst gamers. Sorry, but no go. If you want to gather the information properly you don't just approach gamers and ask "how many of you gamers do such and such on computers?"--you also have to approach people who play on computers and ask "how many of you video game players also play role-playing games, CCGs, tabletop games, etc.?" The way the data was gathered to reach these results is definitely skewed, because it approaches a two-directional question from a single direction. To make the data gathered in this section at all relevant, their 1,000-person sample should have been 500 of one and 500 of the other.</p><p></p><p>Hell, they don't even list an "other" option for the multiple choice question. They even admit that the responses given were the only options allowed. That is VERY bad brand testing methodology. This was also a problem with the question concerning where the product was purchased--both "other" and "gift" (if you don't want to lump the latter into the former) were left off the list; while that may seem minor, it is, in fact, important.</p><p></p><p>And, if this "post card survey" is what I seem to remember it was--post cards included in product--then that is a biased method if they are trying to develop a general-population survey. You've already limited your sample to people purchasing WotC product instead of, say, having retailers insert the card into every purchase, regardless of publisher, or mailing it out blindly. Again, if that is what they used (and, IIRC, people involved with the project have indeed stated that is what happened), all conclusions will be skewed. It's like saying the survey you can take when you register a computer game published by, say, EA Sports will give you an accurate account of video game players throughout America. No, it won't. At best it can give you information on people who buy games from EA Sports, because the data was not gathered through other product suppliers. If this wasn't how the cards were distributed, I'd like to hear how they were (most likely a blind mailing).</p></blockquote><p></p>
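Two of the statistical objections in the post above — the limits of a single 1,000-person sample, and multiple respondents per household — can be put in rough numbers. This is only a back-of-envelope sketch: the intra-household correlation (`rho`) is a hypothetical value chosen for illustration, not a figure from the survey.

```python
import math

# 1. Sampling error for a single-wave survey of n = 1,000.
#    Worst-case 95% margin of error (proportion p = 0.5):
n = 1000
moe = 1.96 * math.sqrt(0.5 * 0.5 / n)
print(f"95% margin of error for n={n}: +/-{moe:.1%}")  # about +/-3.1%
# Scaled to a national population, each percentage point of error
# represents hundreds of thousands of people, so point estimates
# "in the millions" carry wide absolute uncertainty.

# 2. Design effect from household clustering. With an average of
#    65,000 / 20,000 = 3.25 respondents per household and an assumed
#    intra-household correlation rho, the effective sample shrinks
#    (Kish design effect: deff = 1 + (m - 1) * rho):
m = 65000 / 20000           # average respondents per household
rho = 0.5                   # hypothetical: household members answer alike
deff = 1 + (m - 1) * rho
n_eff = 65000 / deff
print(f"design effect: {deff:.2f}, effective sample: {n_eff:.0f}")
```

Under these assumed numbers the 65,000 raw returns behave statistically like a considerably smaller independent sample, which is the "improper sample separation" point in quantitative form.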