There is a certain methodological problem with running surveys so long as you keep getting mostly positive results. Let me illustrate.
================================
You have 1,000,000 players of DnD.
You publish the first survey. Half of the players participate (500,000).
50% (250,000) like it, and 50% (250,000) do not.
Out of those who don't like it, 100,000 go away to play 4e.
Next survey. Out of 400,000 participants, 60% (240,000) like the changes; 40% (160,000) do not.
Result: Hey, we are on the right track. 60% is more than 50%, right? Some drop in the number of participants is natural, right?
Another 100,000 go away to play Pathfinder.
Next survey. Out of 300,000 participants, an incredible 70% (210,000) like the changes; 30% (90,000) do not.
Result: Wow. The changes keep getting better and better. 70% is much better than the 50% in the first playtest. Yeah, some people are just tired of the polling.
Another 60,000 people go away to play 13th Age.
Next survey. Out of 240,000 participants we get to a 75% success rate (yeah, it is tough to get to the finish line, right... 180,000 people still like what they see, and yeah... 60,000 do not, no big deal).
Conclusion: We are getting really positive reactions to the game lately. We will publish. And possibly 18% of the original fan base (180,000 out of 1,000,000) will buy it.
================================
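If you want to see the mechanism laid bare, here is a minimal Python sketch of the same shrinking pool. All the numbers are the made-up ones from the example above, and the per-round drop-out counts are assumptions lifted straight from the story, not real survey data.

================================
ORIGINAL_FANBASE = 1_000_000

participants = 500_000                     # half of the fan base answers the first survey
dislike_shares = [0.50, 0.40, 0.30, 0.25]  # disapproval measured in rounds 1-4
dropouts = [100_000, 100_000, 60_000]      # dissatisfied players leaving after rounds 1-3

for round_no, dislike in enumerate(dislike_shares, start=1):
    likes = round(participants * (1 - dislike))
    print(f"Round {round_no}: {participants:,} respondents, "
          f"{likes / participants:.0%} approve "
          f"({likes:,} like, {participants - likes:,} dislike)")
    # Only dissatisfied players quit between rounds, so every departure
    # mechanically pushes the next round's approval percentage upward.
    if round_no <= len(dropouts):
        participants -= dropouts[round_no - 1]

# The honest metric: satisfied players measured against the people
# you started with, not against whoever is still answering.
print(f"Approval among the original fan base: {likes / ORIGINAL_FANBASE:.0%}")
================================

Run it and the approval percentage climbs from 50% to 75% while the absolute number of satisfied players falls from 250,000 to 180,000. That last line, 18%, is the number the headline hides.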
Statistics of this kind are a very dangerous thing. Unless the surveys are carefully controlled for the number of participants, there might be a very nasty surprise at the end. If you wish DnD a happy future (and I do), you should not take this lightly.
Disclaimer: I dropped out of the surveys after the third playtest, not liking what I saw. So I basically illustrate the problem myself.