Patryn of Elvenshae said:
Without having seen a single one or the data thereby produced?
Don't need to. You, as a researcher, should know that problems of methodology can be determined by looking at the results.
http://www.rpg.net/news+reviews/wotcdemo.html
For example, the information provided by WotC admits the survey didn't poll a significant existing demographic (35+) and instead speculated about what that demographic would yield by generalizing from the results it did have. And then we have this nugget:
Information from more than 65,000 people was gathered from a questionnaire
sent to more than 20,000 households via a post card survey. This survey was
used as a “screener” to create a general profile of the game playing
population in the target age range, for the purposes of extrapolating trends
to the general population.
This "screener" accurately represents the US population as a whole; it is a
snapshot of the entire nation and is used to extrapolate trends from more
focused surveys to the larger market.
Actually, we know the results are NOT a snapshot of the entire nation. We know they are grounds for a GUESSTIMATE. Don't you find it at all odd that even their own analysis includes the statement
We know for certain that there are lots of gamers older than 35, especially for
games like Dungeons & Dragons; however, we wanted to keep the study to a
manageable size and profile. Perhaps in a few years a more detailed study
will be done of the entire population.
admitting they couldn't poll significant demographics, yet they then draw conclusions that encompass those missing demographics?
Here's another big research no-no: 20,000 households yielded 65,000 results? So, more than 3 returns, on average, came from each household? Market Research 101: doubling up (never mind trebling) the individuals providing data from within the same household introduces purchasing trends tied to household politics and economics rather than trends representative of the market. For example, a household where three young kids send in the survey is likely to provide answers shaped by the fact that the money has to be spread among all three, as opposed to a household spending money on just one kid. Considering the survey includes questions about how much the respondents spend on products in a month, all that data is definitely tainted by improper sample separation.
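To put a rough number on that household problem, here's a toy Python sketch; every figure in it (the $30 household budget, the 3 kids per household) is invented purely for illustration, not taken from the WotC data. It just shows that when household members split one budget, the "average spend per gamer" you measure depends on how many of them mail in a card:

```python
import random

# Illustrative only: numbers are invented, not from the WotC survey.
# Each household has one monthly gaming budget; kids in the same household
# split it, so their individual answers are correlated with each other.
random.seed(1)

def sample_mean_spend(n_households, members_per_household):
    spends = []
    for _ in range(n_households):
        budget = random.gauss(30, 10)   # household gaming budget ($/month)
        for _ in range(members_per_household):
            # each member reports roughly their share of the same budget
            spends.append(max(budget / members_per_household + random.gauss(0, 2), 0))
    return sum(spends) / len(spends)

# 20,000 households, ~3 returns each (65,000 / 20,000 = 3.25)
clustered = sample_mean_spend(20_000, 3)
independent = sample_mean_spend(60_000, 1)
print(f"mean spend, 3 returns per household: ${clustered:.2f}")
print(f"mean spend, 1 return per household:  ${independent:.2f}")
```

Same market, same money, very different "per gamer" number depending on how the cards come back.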
The methodology explanation then goes on to explain that of the returns, 1,000 were CHOSEN to participate in the final screener. Not "qualified" but CHOSEN. That means people who were qualified through prescreening were then bypassed through a selection process. Their assertion to the contrary, subjectively choosing your final sample from a presample is NOT an accepted methodology for accurate quantitative or qualitative work. I truly hope the wording is just a poor choice and that instead of "chosen" Ryan meant to say "qualified", but even then they are artificially winnowing the sample, which directly undercuts their claims about how it relates to the overall gaming market--they are actually gaining information solely on the slice of the gaming market that fits whatever qualifications (if any) got people into the second survey.
Also, I shouldn't have to explain to you that 1,000 final screeners in a single "blast" survey (one that doesn't run over a period of time, with results continuing to come in so that changes can be tracked) is hardly an accurate way to assess a national market, regardless of the industry.
Now we come to Section 3 of their data presentation. There are an awful lot of "millions of people play this" and "millions of people play that" claims for a single survey of 1,000 people. If you're going to claim that each person in your sample represents several thousand people, yielding conclusions in the millions, you'd better be using a much larger sample than that and you'd better be doing a longitudinal study; those are some pretty big claims to be making without tracking data (control groups, if you will) to compare against.
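For a sense of scale, here's some back-of-the-envelope Python; the 30-million target-population figure is purely my own hypothetical, not a number from their writeup. Even before any of the methodological problems above, plain sampling error on a 1,000-person sample turns into swings of hundreds of thousands of people once you project it up:

```python
import math

# Back-of-the-envelope only; the target-population figure is a guess,
# not a number from the WotC writeup.
target_population = 30_000_000   # hypothetical size of the target-age market
n = 1_000                        # final screener sample

weight = target_population / n
print(f"each respondent stands in for ~{weight:,.0f} people")

# 95% margin of error on a proportion near 20% with n = 1,000
p = 0.20
moe = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"sampling error alone: +/-{moe:.1%}, i.e. +/-{moe * target_population:,.0f} people")
```

That fuzz is baked in even if every other part of the methodology were clean, which it isn't.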
Now, we'll bypass most of their "exciting" conclusions, because I've touched on most of the reasons they are faulty, and move right on to Section 4. Here we see another error in the data. They make a lot of claims about computer trends amongst gamers. Sorry, but no go. If you want to gather that information properly, you don't just approach gamers and ask "how many of you gamers do so-and-so on computers?"; you also have to approach people who play on computers and ask "how many of you video game players also play role-playing games, CCGs, table-top games, etc.?" The way the data was gathered for these results is most definitely skewed, because it approaches a two-direction question from only a single direction. To make the data gathered in this section at all relevant, their 1,000-person sample should have been 500 of one and 500 of the other.
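A toy example (completely made-up numbers) of why the direction you ask from matters; the share of tabletop gamers who also play on computers can be wildly different from the share of computer players who also play tabletop:

```python
# Made-up numbers: a toy population showing why surveying only tabletop
# gamers about computer play answers a different question than surveying
# computer players about tabletop play.
tabletop_only = 2_000_000
computer_only = 18_000_000
both          = 1_000_000

tabletop = tabletop_only + both
computer = computer_only + both

print(f"share of tabletop gamers who also play on computer: {both / tabletop:.0%}")
print(f"share of computer players who also play tabletop:   {both / computer:.0%}")
```

In this invented market that's 33% versus 5%; asking only one group tells you nothing about the other figure.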
Hell, they don't even list an "other" option for the multiple choice question. They even admit that the responses given were the only options allowed. That is VERY bad brand testing methodology. This was also a problem with the question concerning where the product was purchased--both "other" and "gift" (if you don't want to lump the latter into the former) were left off the list; while that may seem minor, it is, in fact, important.
And, if this "post card survey" is what I seem to remember it was--post cards included in product--then that is a biased method for trying to develop a general-population survey. You've already limited your sample to people purchasing WotC product instead of, say, having retailers insert the card into every purchase, regardless of publisher, or mailing it out blindly. Again, if that is what they used (and, IIRC, people involved with the project have indeed stated in the past that it is), all the conclusions will be skewed. It's like saying the survey you can take when you register a computer game published by, say, EA Sports will give you an accurate account of video game players throughout America. No, it won't. At best it can give you information on people who buy games from EA Sports, because the data was not gathered through other product suppliers. If this wasn't how the cards were distributed, I'd like to hear how they were (most likely a blind mailing).
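One last toy sketch (all the percentages are invented, just to illustrate the sampling-frame problem) of what an in-product postcard does to your sample; you end up measuring WotC's customer base, not the market:

```python
import random

# Toy illustration with invented numbers: if the postcard only rides inside
# WotC product, the sampling frame is WotC customers, not gamers in general.
random.seed(1)

population = []
for _ in range(100_000):
    buys_wotc = random.random() < 0.25           # 25% of gamers buy WotC product
    plays_dnd = random.random() < (0.8 if buys_wotc else 0.1)
    population.append((buys_wotc, plays_dnd))

whole_market = sum(p for _, p in population) / len(population)
reached = [p for w, p in population if w]        # only WotC buyers get the card
postcard_est = sum(reached) / len(reached)

print(f"actual share who play D&D:            {whole_market:.0%}")
print(f"share among postcard recipients only: {postcard_est:.0%}")
```

Under those invented numbers the postcard frame roughly triples the apparent D&D share, which is exactly the kind of skew a blind mailing avoids.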