I'm with mamba here. I don't KNOW yet whether their data-gathering methodology is going to result in things that I like or things that I don't like. That's not really the point. My point, when criticizing their survey style, is that I don't think that they're getting very useful data if their goal is actually to find out what people like and what they don't.
Heck, when I'm filling out a survey, I don't know how to use it to TELL THEM what I like and what I don't like. I can't imagine how they'll understand me. (And that's not even getting into the fact that I don't believe that everything I like should get "five stars" and everything I don't like should get "one star", which is so common in our modern world. I'm simply not that kind of extremist.) The write-in parts don't help much either, because while I could write a book about my opinion on the rules of D&D, I simply don't have the time to do that for every playtest survey. While I believe that they read them, I don't think that MY opinion is going to sway them enough to make it worth my while to type it out. (Beyond a few sentences, at least.)
You've actually identified how they are getting useful data about what people like and don't like - general data from the rankings, which take very little time to fill out, and specific feedback from the written comments. That's how you "TELL THEM what [you] like and what [you] don't like."
Your point about the written comments taking time that you couldn't spare is a feature, not a flaw, of this type of methodology. They don't really want written feedback from folks who aren't deeply invested in that particular issue. They want written feedback from folks who feel strongly enough to find the time. For example, on the last survey I whipped through most of the responses, just giving a ranking and no comment. However, on a few specific points (monk basic design, Moon druid subclass, etc.) I gave significant, detailed written feedback.
This is a very standard design for a survey intended to 1) gauge overall reactions at a broad scale, 2) identify specific pressure points, and 3) generate more specific feedback and suggestions on those pressure points. My workplace, for instance, runs a very similarly constructed employee survey every year and uses it to identify management priorities - this is a widespread methodology. WotC didn't just throw something together at the last minute; this is a meticulously constructed survey that is very much up to current industry standards, and they have clearly invested substantial resources into the process. It is obviously being conducted by industry professionals.
Also, WotC has masses of data that we lack, which allows them (or, more accurately, the professionals conducting the survey) to analyze the responses in aggregate and determine what the rankings mean in context. This is how they established that a proposal falling below the 70% satisfaction level is not currently worth pursuing for this project. That doesn't mean the idea gets thrown in the trash - Ardlings, for example, fell well below that threshold, yet WotC has stated that they intend to keep working with the basic idea.
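To make the "70% satisfaction level" idea concrete, here is a minimal sketch of how that kind of aggregate number might be computed. WotC hasn't published their actual method, so the 1-5 rating scale, the definition of "satisfied" as a 4 or 5 rating, the satisfaction_rate function, and the sample numbers are all assumptions for illustration only; the only figure taken from the discussion above is the 70% bar itself.

```python
# Hypothetical sketch of aggregating survey rankings into a satisfaction rate.
# Assumptions (NOT confirmed by WotC): ratings are 1-5, and "satisfied"
# means a respondent chose 4 or 5. Only the 70% threshold comes from the thread.

from collections import Counter

THRESHOLD = 0.70  # the 70% satisfaction bar discussed above

def satisfaction_rate(ratings):
    """Return the fraction of respondents who rated an item 4 or 5."""
    if not ratings:
        return 0.0
    counts = Counter(ratings)
    satisfied = counts[4] + counts[5]
    return satisfied / len(ratings)

# Made-up example data, not real survey results.
ardling_ratings = [5, 4, 3, 2, 3, 4, 1, 3, 5, 2]
rate = satisfaction_rate(ardling_ratings)
verdict = "keep iterating" if rate < THRESHOLD else "proceed"
print(f"Satisfaction: {rate:.0%} -> {verdict}")
```

Under these assumptions, an item like the Ardling that scores well under 70% would get flagged for rework rather than inclusion, which matches how WotC has described handling it.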
I think most of us lack the frame of reference or information to really make an informed criticism of how this survey is being conducted or analyzed. We tend to reach for what we know and understand, which is leading to a lot of false assumptions (e.g. that the satisfaction numbers are equivalent to letter grades in school, or that the onerous nature of written feedback is a flaw).