4E playtesting or lack thereof

Status
Not open for further replies.
In a post by Windjammer in the now-closed thread "WotC Strategy of Planned Obsolescence?", the issue of how much playtesting 4E material received was raised.

http://www.enworld.org/forum/4842667-post17.html

I vaguely recall that the RPGA was used to playtest 4E before it was released. I also remember older Dragon magazine articles mentioning that earlier editions used the RPGA for playtesting, such as 2E AD&D.

Does anyone know how much playtesting was actually done on older splatbooks, campaign settings, etc., back in the 1E/2E AD&D and 3E D&D days?

Some of the links in Windjammer's post refer to articles by FrankTrollman suggesting that little to no playtesting was actually done on later 3.5E splatbooks, such as "The Book of Nine Swords".

If it turns out that the 4E splatbooks had little to no playtesting done on them before publication, it would be interesting to see how robust 4E is to overpowered/underpowered material, and whether balance is determined by a publicly unknown (i.e. proprietary WotC) mathematical formula run on all new crunch rules.

Hi.

Do you play 4e?
 


If WotC is using a secret mathematical formula to check new crunch for balance, I assume they could be running computer Monte Carlo simulations of some sort on the new crunch. If done correctly, the Monte Carlo simulations will run through thousands of cases and tell you whether a set of new rules is on average overpowered or underpowered, and how wide the variance is.
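To make the idea concrete, here's a toy sketch of what such a Monte Carlo balance check might look like. All of the numbers and power definitions below are illustrative assumptions of mine, not anything WotC has published:

```python
import random
import statistics

def simulate_power(attack_bonus, num_dice, die_size, damage_bonus, target_ac,
                   trials=10_000):
    """Monte Carlo estimate of a power's average damage and spread against a
    fixed defense. Parameters are hypothetical, not official values."""
    results = []
    for _ in range(trials):
        roll = random.randint(1, 20)
        if roll == 1:                      # natural 1 always misses
            results.append(0)
        elif roll == 20 or roll + attack_bonus >= target_ac:
            dmg = sum(random.randint(1, die_size) for _ in range(num_dice))
            results.append(dmg + damage_bonus)
        else:                              # miss
            results.append(0)
    return statistics.mean(results), statistics.stdev(results)

# Compare two hypothetical at-will powers against AC 15:
mean_a, sd_a = simulate_power(5, 1, 8, 3, 15)   # 1d8+3, +5 to hit
mean_b, sd_b = simulate_power(5, 2, 4, 2, 15)   # 2d4+2, +5 to hit
```

Run enough trials and the means tell you which power is stronger on average, while the standard deviations show how swingy each one is - exactly the "overpowered/underpowered plus variance" readout described above.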

I'm quite certain there is no secret, magical formula (beyond, perhaps, +1/2 level for players and +level for monsters, which is hardly a secret).

This is WOTC, not the Pentagon.
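For what it's worth, that non-secret scaling is easy to check yourself: if defenses rise by one per level while attack bonuses rise by a half-level plus roughly another half per level from enhancement, ability, and feat bumps, hit chances stay roughly flat across the game. A quick sketch (the base numbers here are my own illustrative assumptions, not official values):

```python
def hit_chance(attack_bonus, defense):
    # Chance that d20 + attack_bonus >= defense, clamped to the 5%-95%
    # band imposed by automatic miss on a 1 and automatic hit on a 20.
    needed = defense - attack_bonus
    return min(max((21 - needed) / 20, 0.05), 0.95)

# Illustrative progression: monster defense rises +1 per level; PC attack
# rises +1/2 level plus roughly +1/2 per level from other bonuses.
for level in (1, 11, 21, 30):
    attack = 5 + level // 2 + level // 2   # crude stand-in for all bonuses
    defense = 15 + level
    print(level, round(hit_chance(attack, defense), 2))
```

The printed hit chances hover around 50% at every tier, which is the whole point of the half-level math.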
 


Wizards of the Coast: Almost a year ago, we released a fully playable playtest version of the Artificer that was about half the size of the intended final version. Thousands of you read it, played it, and emailed us feedback or discussed it with our designers on our forums. Based on your comments and your experiences, we've made the following changes, and are pleased to announce the release of the full, final version of the Artificer.

This Thread: The final version isn't the same as the playtest version. There's no evidence at all that the released version of the Artificer was playtested in any way.
 

OK - then what did you mean by the "priceless" thing?
See above. I have absolutely no interest in attributing or declaring tacit intentions, be they other people's or my own.

Fifth element said:
What sort of playtesting do you consider "solid"?
I guess that type of question is easier to answer in the negative, and even then easier to answer by example than by principle. Suppose I were to pitch a new class or racial write-up to 4E Dragon: building a PC with that class/race once and seeing him through a couple of combats would probably not suffice to ensure that my class/race is compatible with the rest of the system. What matters, in the end, is not some absolute quantity of time and rigor that goes into playtesting, but whether that time and rigor is enough to ensure that the product being playtested doesn't cause widespread problems in the customer base upon release.

Does that help you understand my claim, or do you still think it's too hazy? (If so, let me know which part in particular you find elusive.)

Fifth element said:
The wording - "no single version" - also discounts the possibility of solid playtesting of the system as it evolved during playtesting. What do you consider a "single version"?
I said "single version released by WotC in June 2008 to June 2009". So "release" is the key word. Official WotC releases include: a physical product released by WotC, like a rule book, an offering on D&D Insider, or an errata download on the WotC webpage. (If I'm inadvertently overlooking possibilities of official releases, let me know!)
 

Hello,

I read this thread and the source thread referenced by the OP. It seems to me that 4e's progression of rules and level of balance is about on par with the previous edition's.

Yes, there are elements in both 4e and 3e that I think were hastily released, and perhaps did not receive as much oversight and testing as they deserved. But due to the modular nature of both systems, it is fairly easy to strike unbalanced material from the game.

As for the specific item of skill challenges in 4e, I would say that, no, they were not perfect. Back when my group was giving 4e a whirl, the thing that really disappointed us about them had nothing to do with any specific mechanical aspect of the system itself; it was that so much of the non-combat 3e material was grouped into this single mechanic. But aside from that complaint, I did not find the system unplayable. We actually had a fairly exciting skill challenge once in which the group tried to drive a sled down a mountain in pursuit of another sled.

Sure, I think it could have used some more work, but in my opinion the assertion that the mechanic was not playtested at all is false.

love,

malkav
 

Wizards of the Coast: Almost a year ago, we released a fully playable playtest version of the Artificer that was about half the size of the intended final version. Thousands of you read it, played it, and emailed us feedback or discussed it with our designers on our forums. Based on your comments and your experiences, we've made the following changes, and are pleased to announce the release of the full, final version of the Artificer.

This Thread: The final version isn't the same as the playtest version. There's no evidence at all that the released version of the Artificer was playtested in any way.

Wizards of the Coast: Almost a year ago, with the release of the 4E DMG, we released a playtest version of the skill challenges. Thousands of you read it, played it, and emailed us feedback or discussed it with our designers on our forums. Based on your comments and your experiences, we've made the following changes, and are pleased to announce the release of the full, final version of the skill challenge subsystem. Here's your DMG 2 (that will be $29.90, please) - OR - here's your free download at our website.*

This Thread: uuhhm, okay. I just wish you had written that on the blurb of the DMG 1 before I bought it.

-----

Cadfan, in all seriousness, there's a world of difference between material like the Artificer or the Barbarian (classes which become part of 4E as soon as their playtest versions are released on DDI) and stuff like the hybrid classes, which are banned from RPGA play for a reason: they're half-baked and haven't seen nearly enough playtesting for WotC to say "yep, we're fine with it, some tweaking left to do, but nothing that will cause trouble".

* I mean that "OR". I'm just as curious as anyone else here which format WotC will choose to re-release the skill challenge subsystem (should they choose to do so).
 

If WotC is using a secret mathematical formula to check new crunch for balance, I assume they could be running computer Monte Carlo simulations of some sort on the new crunch. If done correctly, the Monte Carlo simulations will run through thousands of cases and tell you whether a set of new rules is on average overpowered or underpowered, and how wide the variance is.
I wouldn't be surprised at all if they do run simulations, but I'd guess it's far from top-secret. The math is clearly all there, just waiting to be crunched. I know that many of the designers have really good heads for math, as demonstrated on some of the podcasts.

Regardless, though, like I said - I have no idea what particular ball got dropped for skill challenges. It could have been eleventh-hour changes, it could have been ivory tower design... I have no idea. I don't think you can attribute it to playtesting, though. It's not a hammer that can pound all nails.

-O
 

amethal said:
Thus either WotC didn’t playtest the skill challenges which appeared in the DMG, or they chose to publish something they knew was wrong, or the playtesters failed to spot something that a great many other people (apparently) spotted straight away.

Now, this is different than an assertion that #1 is the case. You've offered three options, which I think cover the possibilities pretty well.

To me, #3 seems the most likely. I personally don't have any issues with the system in the DMG, but I know many people do. Maybe the designers just saw it the way I do, who knows.

We don't really have any evidence to support any one of these three possibilities, so if you are going to assert that a particular one is correct, you need to provide the evidence when you do so.

That "which appeared in the DMG" is too strong a qualifier. Switching from "skill challenges" to "skill challenges which appeared in the DMG" is a big difference.

I can easily see that skill challenges were playtested but, in the mad rush to put out the DMG, the test results were not fully integrated into the rules. Or they found a lot of problems and had to make a quick decision to get the product out.

That, to me, looks like genuine problems in the playtest (in which I include the steps to integrate the results into the final rules). I think that is more defensible than positing that there was no playtesting at all.
 

See above. I have absolutely no interest in attributing or declaring tacit intentions, be they other people's or my own.
Eh? You were saying that people shouldn't attribute motives to you, so I wasn't, and honestly I don't care as far as the thread as a whole goes. But what's wrong with asking for clarification on something you wrote? It's just a weird comment. I mean, was it intended as an insult?

What matters, in the end, is not some absolute quantity of time and rigor that goes into playtesting but whether that time and rigor is enough to ensure that the product being playtested doesn't cause widespread problems in the customer base upon releasing it.
Was it your impression that the released skill challenge mechanics caused widespread problems?

I agree they were mathematically wacky, and that this wackiness should have been caught. They don't look broken at first glance, but a thorough breakdown shows that they don't do quite what you'd expect them to do. But do you believe people's home games were negatively affected by the rules as released, to an extent that a gaming group not currently engaged in statistical analysis would even notice?

My contention is that the rules are, indeed, broken, but not in such a way that a gaming group - including a playtest group - would necessarily notice.
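That "wacky but not obviously broken" math is easy to make concrete with a short calculation: a skill challenge is won by accumulating some number of successes before some number of failures, and the resulting win probability often isn't what the per-check number suggests. A sketch (the specific success/failure counts and 60% figure here are illustrative, not the DMG's exact values):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def challenge_win_prob(successes_needed, failures_allowed, p):
    """Probability of reaching `successes_needed` successes before
    `failures_allowed` failures, with per-check success chance `p`."""
    if successes_needed == 0:
        return 1.0
    if failures_allowed == 0:
        return 0.0
    return (p * challenge_win_prob(successes_needed - 1, failures_allowed, p)
            + (1 - p) * challenge_win_prob(successes_needed, failures_allowed - 1, p))

# Even with a 60% chance per check, "4 successes before 3 failures"
# is won noticeably less often than 60% of the time:
print(round(challenge_win_prob(4, 3, 0.6), 3))  # → 0.544
```

A table at the gaming table would never notice the gap between 60% and 54%, which is exactly why a play-through alone wouldn't catch it; only the breakdown does.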

-O
 
