Gaps in Playtest Timing
Over the course of the One D&D playtest, a pattern has emerged where multiple weeks pass between the closing of one playtest and the opening of the next. Naturally, as someone who's excited to see new playtest content, I find this a bit frustrating. The more I think about it, though, the more it also seems to waste time that could be spent on playtesting and discussion.

Long testing periods allow players to do more actual playtesting before responding to the surveys, but this benefit lasts only as long as the survey is actually open. Once the advertised survey period ends, every day that passes without the next packet being published is a day that no one can be constructively playtesting. This means that the developers are throwing away weeks of playtesting time every time they transition from one packet to the next. Had a few of these gaps been removed, we could easily be finishing up the Warrior classes playtest and looking forward to the Mages. I have to imagine this would have resulted in higher-quality feedback than trying to test six classes in a single packet.

What makes this particularly surprising is that the playtest is organized in a way that should facilitate rapid transitions. If two back-to-back packets focused on the same topic, then the developers would of course need a gap between them to respond to feedback. But because the packets alternate or cycle through topics, the developers can prepare a playtest of one subject while waiting on feedback about another. This is exactly the system one would design if the goal were to always have an ongoing playtest, yet the developers persist in leaving gaps that, so far as I can see, do nothing to benefit the development process.

I'm inclined to think that these aren't exactly "playtests" at all. They're really just weathervanes: checks to see how the community responds to what the designers are already planning.

To be fair, I think actually playtesting on this enormous scale would be a Herculean task.