From the wizards boards...
http://209.221.178.225/showthread.php?t=843014
Mike Lescault: Rather than reply to another thread (and likely yank it off topic), I'd like to start a discussion here about community feedback. I am looking for ideas on how you think we can best collect and organize the feedback into manageable lists that I can pass on to the R&D folks.
A couple of thoughts to start off the discussions:
Self-management - I like ideas where the community self-manages things as much as possible. For example, if we create a book wishlist, I think it's best if a member of the community owns and updates the list. This would allow me to grab an updated version of the wishlist whenever I think it's most likely to be of help to the R&D folks (likely with some of my own edits). Because this would be "peer-review" style, we'd avoid the problems that might come up if it was me or someone else from WotC who was editing the list. Are there problems here I am not thinking about?
Polls - I've seen a lot of people talk about polls. Polls really bother me for many reasons. It's extremely hard to create an unbiased poll that provides an accurate representation of...well, of anything beyond a measure of those inclined to take a poll. Even a very carefully worded online poll is going to be inherently skewed, because the only people responding would be those who are online (and inclined to answer online polls). Another problem is that a poll can easily generate needless ill will. If poll results happened to weigh heavily in one direction, while we ended up choosing the other direction, it can foster the idea that we don't care or don't listen. (e.g., "Should all products be mailed to your house for free?")
So take a moment to think this subject over if you could. I'd appreciate any feedback you can offer.
Scott Rouse: Polls can be easily taken over by bots to skew results in a particular direction.
Customer response cards, aka CRCs, that are inserted into books are a valid (but not great) form of market research. They work because we can take a random sample of those we receive to weed out various types of bias (regionality, etc.), but it is "self-reported" data, so we take it for what it is. We use it as one research indicator among many but don't rely heavily on it, because the cards only come from people who bought the book, which is another bias in and of itself. Inserting unique codes would add cost due to the logistics of generating a unique code per unit. CRCs are a pretty inexpensive form of market research.
BTW, polls are different from the online surveys you see pop up on wizards.com from time to time. The reason surveys are better is because they are delivered at random, so responses are more difficult to manipulate with a bot and have less bias (like the bias of purchasing behavior mentioned above). But they have other issues, e.g. they only target people online and are fairly expensive to execute (mostly on the back-end data tabulation).
Well, there are always companies that could do the research for you; http://www.pollingpoint.com/, for example. There are problems going that route as well, seeing as an outside company wouldn't understand the workings of the D&D brand name very well.
Scott: We do employ outside companies that facilitate research.
I think the best answer would be a cross between the CRC and the online surveys. Basically, create a campaign asking individuals if they want to be a part of a WotC feedback circle. If they accept, give them a small discount of some type as an incentive to agree to take all WotC online polls and mail polls (if the customer doesn't have internet). Again, that may be more work than you're willing to put forth...
Scott: We have a D&D online panel that does exactly this. Surveys are done quarterly.
A better low-maintenance option would be to lift the boards' ban on polls and give us a specific format that you would want us to follow. Of course, this would create more work for WizOs, and would not be as accurate as the above-mentioned idea.
Scott: As I said before, online polls are pretty useless because a simple script (bot) can be written to vote over and over to skew the results. As an example, this happened in Washington State when the public was asked to vote on one of three state quarter designs. Someone wrote a bot that voted tens of thousands of times for one design.
Depending on the subject of the poll, someone inclined enough to cheat on the issue could do so with little trouble.
Well-written posts in a feedback forum seem to be a pretty good way to go.
Up until recently I played an MMORPG called Dark Age of Camelot (DAOC). The company that published that game, Mythic, solicited from their players a group of "team leaders" who would gather information from players on certain topics. There was a "leader" for each of the player classes, housing, roleplaying, PvP combat, etc. These team leaders would gather the information from various message boards, emails, and conventions, and then present to Mythic a report on "Player Recommendations" a few times every year. Mythic's R&D department would then reply back to the player base as to which ideas they liked, which they didn't like, and what they were currently working on.
Scott: Our Community Manager Mike Lescault worked with Mythic on much of what you are talking about here.
Mike: Scott's correct, and it's funny that you brought it up. I relaunched the DAOC Team Lead program in 2003 and developed the policies and procedures you mentioned, serving as the TL coordinator for a couple of years before I moved into design full-time. I think it's definitely a great example of the level of community involvement I'd like to see.
I'm not sure that the exact same model would work in the case of D&D, but I want to try to explore similar types of feedback models.
####
Hope you find this interesting.
Cheers!