Interesting Ryan Dancey comment on "lite" RPGs

Ok, Conan.

Conan said:
You have to keep in mind that how this information is gathered is sample polling rather than complete polling.

In your professional opinion, as a market researcher, how much marketing research is done on a "sample polling" vs. "complete polling" basis?

And be honest.
 


Patryn of Elvenshae said:
Which is exactly the kind of enforcement I'm arguing is going to be required if you want a 100% return rate on any sizeable demographic.
And I won't argue that point for a large demographic, which is why I most certainly did not specify that we had done so with a "sizeable" poll. I merely said that we had done it, and later specified that it was with groups of more than 100. It really depends on the person's emotional or social investment, though, rather than cash -- if it's the latter you risk "buying" their answers rather than giving them an incentive.

With gamers, for example, I doubt you'd find any such investment to give you such a high return. I wager, though, that you'd likely get a good response if you offered free copies of product or gave store credit rather than cash alone. Still not 100% returns, but definitely a tailored incentive that could work. If I were doing the quant that Ryan describes, however, I wouldn't give product as part of the incentive, because that could bias the results along the lines of "we want you to participate in a study to see what is more efficient, rules-heavy games like DnD or rules-lite games such as Buffy. In exchange for your answers you'll get $40 and a free DnD book of your choice."
 

Patryn of Elvenshae said:
Ok, Conan.

In your professional opinion, as a market researcher, how much marketing research is done on a "sample polling" vs. "complete polling" basis?

And be honest.
For general pop? None of the latter. In specialized fields, still not a lot, but returns can get very close if that's your goal. It really depends on what sort of resources you have available and what the client provides you, not to mention your purpose. For quant, sampling is still by far the more common approach; less so in qualitative work (though it's still the majority there). I've never disputed that.

My point with WotC is that their sampling and polling was too small and too infrequent (one poll) to accurately support many of their conclusions. My point wasn't that sample polling in general doesn't work.

If it were at all possible, I'd prefer to do complete polling all the time (as would any other researcher). It's much easier to draw conclusions when you know you've talked to everyone in your target demographic rather than having to extrapolate. But that is, of course, impossible for the industry to survive on.
 




Steve Conan Trustrum said:
Do I really have to reiterate my comments on the validity of this assertion?

Let me spell it out for you nice and slow, then. :)

You're taking me to task because I said, in obvious, bolded hyperbole:

Me said:
Egads - you mean they did it the same way every other poll in the history of statistics has ever been done? Say it ain't so!!!

I made this comment because, in your previous post, you started tossing around industry terms and anecdotal evidence to make it appear as if sample polling were some mysterious, voodoo process that's almost guaranteed to get sketchy results.

I pointed out, through hyperbole, that not only is sample polling a common and acceptable industry practice, it's also far and away the most common method of doing such research.

You agree with me about that, as is evidenced by your earlier post. So what's your problem with my statement?
 

Patryn of Elvenshae said:
I made this comment because, in your previous post, you started tossing around industry terms and anecdotal evidence to make it appear as if sample polling were some mysterious, voodoo process that's almost guaranteed to get sketchy results.
Actually, I stated VERY specifically why I thought the SPECIFIC studies WotC did were sketchy. When I used industry terms, I even went to the bother of explaining them, so I hardly see how you can claim I was trying to obfuscate.

So what's your problem with my statement?
Already covered in depth, I believe. I doubt repeating it again for you would add anything you've missed on all those other occasions.

Honestly, at this point either address the points themselves or just step back and stop playing the same tune. We've all seen you repeat the same things post after post. We're all still waiting for you to actually comment on the actual points made, though. Please do so or stop wading in the water while telling everyone how well you can swim.
 


Patryn of Elvenshae said:
Without having seen a single one or the data thereby produced?

Don't need to. You, as a researcher, should know that problems of methodology can be determined by looking at the results.

http://www.rpg.net/news+reviews/wotcdemo.html

For example, the information provided by WotC admits they didn't poll existing demographics (35+), speculating instead about what those demographics would yield by generalizing from the results they did have. And then we have this nugget:

Information from more than 65,000 people was gathered from a questionnaire sent to more than 20,000 households via a post card survey. This survey was used as a "screener" to create a general profile of the game playing population in the target age range, for the purposes of extrapolating trends to the general population.

This "screener" accurately represents the US population as a whole; it is a snapshot of the entire nation and is used to extrapolate trends from more focused surveys to the larger market.

Actually, we know the results are NOT a snapshot of the entire nation. We know it is grounds for a GUESSTIMATE. You don't find it at all odd that even their own analysis includes the statement
We know for certain that there are lots of gamers older than 35, especially for games like Dungeons & Dragons; however, we wanted to keep the study to a manageable size and profile. Perhaps in a few years a more detailed study will be done of the entire population.
admitting they couldn't poll significant demographics, yet they then make conclusions that encapsulate those missing demographics?

Here's another big research no-no: 20,000 households yielded 65,000 results? So, more than 3 returns, on average, came from each household? Market Research 101: doubling up (never mind trebling) the individuals providing data from within the same household is going to introduce purchasing trends that reflect household politics and economics rather than being representative of the market. For example, three young kids in one household who all send in the survey are likely to give answers shaped by having to split the same pool of money between them, as opposed to a household spending money on just one kid. Considering the survey includes questions about how much the respondents spend on products in a month, all that data is definitely tainted by improper sample separation.
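To put numbers on that objection: the figures quoted above work out to more than three responses per household, and responses clustered within a household are not statistically independent. The standard (Kish) design-effect formula shows roughly how much clustering shrinks the effective sample. A minimal sketch in Python; the intraclass correlation `rho` is an illustrative assumption, not a figure from the WotC study:

```python
# Back-of-envelope check on the screener figures quoted above.
households = 20_000   # post cards sent, per the WotC write-up
responses = 65_000    # individual responses reported

per_household = responses / households
print(f"average responses per household: {per_household:.2f}")  # → 3.25

# Kish design effect for clustered samples: deff = 1 + (m - 1) * rho,
# where m is the average cluster size and rho is the intraclass correlation.
rho = 0.5  # ASSUMED for illustration; siblings' answers likely correlate strongly
deff = 1 + (per_household - 1) * rho
effective_n = responses / deff
print(f"design effect: {deff:.3f}, effective sample size: {effective_n:.0f}")
```

With three-plus respondents per household and even moderate within-household correlation, those 65,000 responses behave statistically like a much smaller independent sample -- which is the "improper sample separation" complaint expressed in numbers.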

The methodology explanation then goes on to explain that of the returns, 1,000 were CHOSEN to participate in the end screener. Not "qualified" but CHOSEN. That means people who were qualified through prescreening were then bypassed through a selection process. Their assertion to the contrary, subjectively choosing your end sample from a presample is NOT an accepted methodology for accurate quant or qualitative work. I truly hope the wording is just a poor choice and that instead of "chosen", Ryan meant to say "qualified", but even then they are artificially winnowing the sample, which directly undercuts their claims about how it relates to the overall gaming market--they are actually gaining information solely on the slice of the gaming market that fits whatever qualifications (if any) got people into the second survey.

Also, I shouldn't have to explain to you that 1,000 final screeners in a single "blast" survey (meaning it doesn't take place over a period of time wherein results continue to come in, so you can track changes over time) is hardly an accurate way to assess a national market, regardless of the industry.

Now we come to Section 3 of their data presentation. There are an awful lot of "millions of people play this" and "millions of people play that" claims for a single survey of 1,000 people. If you're going to claim that each person in your sample represents several thousand people, resulting in conclusions ranging in the millions, you'd better be using a much larger sample than that, and you'd better be doing a longitudinal study; those are some pretty big claims to make without tracking data (control groups, if you will) to compare against.
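For a sense of scale: even under ideal simple-random-sampling assumptions, a sample of 1,000 carries a margin of error of about three percentage points, and projecting a proportion onto a national population multiplies that uncertainty into the millions. A quick sketch; the target-population figure here is an illustrative assumption, not a number from the study:

```python
import math

n = 1_000   # final screener sample size from the write-up
z = 1.96    # z-score for a 95% confidence level
p = 0.5     # worst-case proportion (maximizes the margin of error)

moe = z * math.sqrt(p * (1 - p) / n)
print(f"margin of error: +/-{moe * 100:.1f} percentage points")  # → +/-3.1

# Projected onto a hypothetical target population, that band is enormous.
population = 50_000_000  # ASSUMED size of the target age range, for illustration
print(f"uncertainty band: about +/-{moe * population / 1e6:.1f} million people")
```

So any "X million people play Y" headline drawn from an n=1,000 blast survey carries an uncertainty band of well over a million people, and that's before the clustering and self-selection problems already noted are counted.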

Now, we'll bypass most of their "exciting" conclusions, because I've touched on most of the reasons why they are faulty, and move right on to Section 4. Here we see another error in the data. They make a lot of claims about computer trends amongst gamers. Sorry, but no go. If you want to gather the information properly, you don't just approach gamers and ask "how many of you gamers do so and so on computers?"; you also have to approach people who play on computers and ask "how many of you video game players also play role-playing games, CCGs, table-top games, etc.?" The way the data was gathered for these results is most definitely skewed, because it approaches a two-way question from a single direction. To make the data gathered in this section at all relevant, their 1,000-person sample should have been 500 of one and 500 of the other.

Hell, they don't even list an "other" option for the multiple-choice question. They even admit that the responses given were the only options allowed. That is VERY bad brand testing methodology. This was also a problem with the question concerning where the product was purchased--both "other" and "gift" (if you don't want to lump the latter into the former) were left off the list; while that may seem minor, it is, in fact, important.

And, if this "post card survey" is what I seem to remember it was--post cards included in product--then that is a biased method for developing a general pop survey. You've already limited your sample to people purchasing WotC product, instead of, say, having retailers insert the card into every purchase regardless of publisher, or mailing it out blindly. Again, if that was what they used (and, IIRC, people involved with the project have indeed stated in the past that is what happened), all conclusions will be skewed. It's like saying the survey you can take when you register a computer game published by, say, EA Sports will give you an accurate account of video game players throughout America. No, it won't. At best it can give you information on people who buy games from EA Sports, because the data was not gathered through other product suppliers. If this wasn't how the cards were distributed, I'd like to hear how they were (most likely a blind mailing).
 
