This was one of the major problems with the surveys. Early on, people would be thoughtful, critical, and honest in their feedback. Then WotC would scrap anything that didn't get at least a decent reception right off the bat, because the designers are on a deadline, and as
@mearls has shared, past experience showed that when an idea polled poorly, revising it almost never resulted in significantly improved scores. So they figured there was no point wasting time on things that didn't get at least mixed results from the jump. But then players saw that ideas they thought were promising but needed work got abandoned, and that eroded their trust in the process. So they started responding more conservatively, being less critical of ideas they thought were redeemable, in hopes
that those ideas would get iterated on instead of abandoned. But by then the deadlines were getting closer, so the designers were pressured to be more conservative as well, sticking closer to what they knew the audience already liked so they could get the approval they needed and move on. This further eroded trust in the process, and responses started getting more polarized. Everything you like enough to not want to see abandoned is a 9 or 10 out of 10; everything else is a 0 or 1 out of 10.
I think the new red/yellow/green method is an improvement over the system they used for the D&D Next and One D&D playtests, because it makes it clearer whether a low-to-middling approval means "I don't want this" or "I want this, but I want it to be better." The former isn't worth wasting time trying to fix, but throwing the latter out with the bathwater leads to overly conservative designs and players getting frustrated with the survey process.