The evidence is that it exceeded 70% and they threw it out despite this.
So no evidence. I thought so.
Here’s mine:
The whole video goes into detail about their process, but around the 10 minute mark is where Crawford literally tells the audience what I’ve been telling you in this thread.
At no point does he say that 70% means it necessarily gets iterated on, and he even gives an example where something didn’t, that being the Jump Action. Feedback was meh, they looked at it internally, and decided against trying to rework it further.
So, if this is news to you, that isn’t really their fault, and your outrage should have come back in January. At this point it’s just kinda silly.
Yes, because that is what iterating means. How can you get from 70% to 80% through iteration when you never show it in a playtest again?
Maybe it doesn’t. If the qualitative feedback is mid at best, and they sit at a table and hash it out and don’t see a good way forward for it, why should they feel obligated to waste time putting another version in a playtest doc? Just to satisfy absolutists who won’t be happy anyway? Nah.
Your 'they iterated internally' is an unfounded, nonsensical excuse, nothing more.
If I’d said that, maybe. I didn’t, though. I basically pointed out that you don’t know at all whether they did or not, you’re just making assumptions that support the angry conclusion you want to heap further justification on top of.
When did I ever say the playtest process was good, healthy, or effective? I’ve been critical of their approach since D&D Next. I just accept that it’s not going to change.
EDIT: If anything, the fact that this shows they’re willing to make decisions based on more than just the satisfaction numbers is a positive thing in my opinion. That means it’s not pure design-by-committee, and they are actually willing to make decisions based on their own design goals and sensibilities. Good for them.
Same. I think their overall approach is pretty terrible, but I'm not mad about it. I'm just not going to drop the money on the new books.
I do like the playtest process, but that’s partly because it has seemed pretty clear to me for years that this is how it works. If the written feedback and the feedback outside the surveys don’t back up the math, they examine that feedback, take a look at the thing in question, and sometimes just set it aside without a second iteration because they are making a judgement call.
IMO, that’s a good process.