Those are the two extremes, perhaps, and it's a question of which we want to tend toward and/or how accepting of failure we are.
I reject the dichotomy. It presumes that only one result (whether it be success or failure) is the "focus" or "point." Without
both being valuable, it's a non-starter.
Here I disagree. Often the very best of work comes from those who don't yet fully know the rules or in some cases even know any rules exist.
"Often" is a strong word, isn't it? How many first-year art students produce Mona Lisas? How many first-year philosophy students write a
Tractatus? It is absolutely true that some people don't need formal education to learn the rules--they already learned them, whether by accident or on purpose, such that formal schooling
might trip them up (in exactly the way that questions like "are you sure you know where your feet are" can screw up a dancer or the like). But your assertion is too bold; you are, essentially, saying that training and education are completely unimportant for producing any work of art or design, and I'm
pretty sure history isn't on your side on this one.
To return your argument to you: There have been times even in mathematics--perhaps
the most rules-based discipline around--where someone who "didn't know it wasn't possible" did something fantastic. Such events are exceedingly rare, not because The Man keeps fertile minds down, but because it is
extremely hard to have such brilliant insight when you don't know anything yet. And the one case I do know of, George Dantzig, wasn't some fresh-faced first-year mathematics student; Dantzig was a graduate student in a UC Berkeley statistics course taught by his own doctoral advisor. This is someone who absolutely
already knew the rules of the particular art he was practicing, and his work would have been outright impossible for someone who didn't have the formal training he had.
So yeah. It's true that formal training is neither a guarantee of success nor a requirement for it. But it's damn useful, and chasing the dream of the totally untrained rube who bests the Ph.D. is going to result in fewer good works, not more, in the long run. Recognizing that training isn't
the end-all, be-all is emphatically not the same as deciding, "Welp, guess nobody ever needed to
practice painting before they actually start doing portraiture!"
It's still the best homebrew module I've ever seen and would more than hold its own with any published modules then or since.
Then that is a great thing. But again, you present a singular case. There are two reasons you might do this. The first is to demonstrate the exceptionally weak claim that training (of whatever kind) isn't
required for producing good work. This is true...but it doesn't actually oppose what I said, that creative rules exist to help produce better work and, thus, mastering them
means learning when to break them, which is why it's worth bothering with practice and training. The second reason is to demonstrate the far stronger claim that these rules are
never necessary...but that's a universal claim, and you can't make a universal claim from a particular instance.
This is the
logical reason why an anecdote isn't data, by the way. You either get weak claims (that, in this case, don't actually affect my own claim), or you fail to reach strong ones.
A better example is music: look how many bands are at their creative best when they're just starting out, before they learn all the 'rules'--or maybe even fully learn how to play their instruments (cf. the Sex Pistols)--and then slowly get worse as they learn the rules and start conforming to them.
And how many bands struggle with their sophomore album, not because of any kind of lack of training or anything else, but because they had
their entire lives to prepare their first album and perhaps a few
years to prepare their second? Your evidence isn't strong enough to back up your assertion here. There are
far too many confounding variables. (To name a few others: success goes to their heads, so they make foolish choices; living the high life disconnects them from their sources of inspiration, or fills their schedule with activities that cut into their working time; the stress, anxiety, and constant attention of stardom negatively affect their ability to work; they lose interest in producing further work of the same kind; etc.)
It sounds like your DM (or you, if you were the DM) was running 4e on hard mode.
Nope. The DM was explicitly and intentionally running 4e precisely by-the-book, because he wanted to know exactly how by-the-book 4e worked out. He was, I admit, a DM who primarily used old-school stuff prior to running 4e. But he was running things so thoroughly "by the book" that we didn't even use updated/errata'd materials; he really wanted to know
exactly how things worked circa PHB2 (to include Druid and Bard and such).
I see them as more similar: the fighter making an attack and the rogue keeping watch are both trying to "do a thing," with the main difference being that the fighter has a known target and an obvious success-fail condition where the rogue does not.
But in every case, the Fighter is doing something with a defined benefit. The Rogue is only able to do things because the DM decided there would be benefit, and actively worked to make that benefit exist. It is entirely possible to write extensible framework rules (such as 4e's Page 42) and simple always-on options (like Aid Another) that ensure effectively every conceivable beneficial action can be represented by something definitively worthwhile.
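For concreteness, here's a toy sketch of what I mean by an extensible framework rule--all numbers invented for illustration, NOT 4e's actual Page 42 table: the DM picks a difficulty, and the framework hands back a level-appropriate check target and payoff, so any reasonable stunt has a defined benefit.

```python
# Toy sketch of an extensible improvised-action framework. The numbers are
# made up for illustration; they are not the actual Page 42 values.
def improvised_action(level: int, difficulty: str) -> dict:
    base_dc = {"easy": 8, "moderate": 12, "hard": 16}[difficulty]
    return {
        "dc": base_dc + level // 2,      # check target scales with level
        "damage_dice": 1 + level // 10,  # payoff scales with level too
    }

# The Rogue's "kick sand in its eyes" stunt now has a defined target and
# benefit at any level, without the DM inventing numbers from scratch:
print(improvised_action(level=7, difficulty="moderate"))  # {'dc': 15, 'damage_dice': 1}
```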
It
is the lack of a target (or target-like-thing--I want to be clear that "target" has a lot of baggage I'm not keen on, I'm just using your word) and the lack of an obvious success-fail condition (or, at least, one that can be found with reasonable ease) that is the difference that matters to me.
You're assuming all the anecdotes in that 47 are my own.
Unless you collected these systematically--which I sincerely doubt, since you're getting these from people you've gamed with, which
is not a representative sample--it's exactly the same problem. This is one major part (though far from the only one) of why surveys are
incredibly difficult to design, and why good social science is so difficult to do.
Not sure quite how this is supposed to read
Then I spoke unclearly.
What I am saying is that "you CAN have fun doing X"--as in, it is
possible for
at least one person to have fun doing X--is the least useful of all defenses for a game element. That is, let's look at the negation of the statement: "it is impossible for anyone at all to have fun doing X." I think we can agree that
any design element which you could truly label with this statement would be an objectively bad game element--something that should never appear in
any game, ever.
But what does that mean? That means that
absolutely all design elements that are ever worth considering--
literally every single one of the possible rules or components you could put into a game--must meet the common standard of, "At least one person
could enjoy this." Thing is? It's going to be
really hard to assert that a given element is objectively bad for all possible games (as you yourself have stated, more or less). So...that basically means we have a criterion--"element must have the potential for fun for at least one person"--which is
effectively always satisfied, regardless of the design element we look at. A criterion that every element passes can't tell us anything about which elements are actually worth using.
Now, if the criterion were, "A majority of players who want to play a game of type Y report having fun while doing X," that would be completely different. That WOULD be a matter of evaluating whether component X generates fun. But that is a
dramatically different claim from "it is possible for at least one person to have fun while doing X."
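To make the quantifier structure explicit (my formalization, not anything from the original exchange): the weak defense only rules out its own negation, while the second criterion actually measures something.

```latex
% Weak defense: at least one possible player could have fun doing X.
\exists\, p \,.\; \mathrm{Fun}(p, X)
% Its negation -- the only thing the weak defense excludes:
\forall\, p \,.\; \lnot\mathrm{Fun}(p, X)
% The genuinely discriminating criterion: among the players P_Y who want
% a game of type Y, a majority have fun doing X.
\bigl|\{\, p \in P_Y : \mathrm{Fun}(p, X) \,\}\bigr| \;>\; \tfrac{1}{2}\,\bigl|P_Y\bigr|
```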
You're talking about the expected results of rolling six times, I'm talking true average: taking the actual average of 4d6k3 (which is 12.24 or something close) and breaking that out into the closest set of six whole-integer numbers.
...and now we go back to my original argument. "True average" people should actually be
exceedingly rare. The odds of rolling exactly two 13s and exactly four 12s, in one specific order, are approximately 0.1327^2 * 0.1289^4 = 0.00000486, or about one in 200,000. (Allowing those six results to appear in any order multiplies this by the multinomial coefficient C(6,2) = 15, which still leaves odds of roughly one in 14,000.) The perfectly average person is, as I said initially, quite rare. Instead of that "true average," we should look at the
expected results. And that's what the AnyDice calculation does: it looks at what the most likely highest stat is, the most likely second-highest stat, etc. And, lo and behold, it is nearly identical to the Elite Array!
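For anyone who wants to check these numbers without AnyDice, here's a minimal Monte Carlo sketch (my own throwaway script, not the actual AnyDice program) that reproduces the mean, the single-value odds used above, and the expected sorted array of six rolls:

```python
# Monte Carlo check of 4d6-drop-lowest statistics.
import random
from collections import Counter

def roll_4d6_drop_lowest(rng):
    dice = sorted(rng.randint(1, 6) for _ in range(4))
    return sum(dice[1:])  # keep the highest three dice

rng = random.Random(2024)
N = 1_000_000
rolls = [roll_4d6_drop_lowest(rng) for _ in range(N)]
counts = Counter(rolls)

print(sum(rolls) / N)        # ~12.24, the "true average"
p13, p12 = counts[13] / N, counts[12] / N
print(p13, p12)              # ~0.1327 and ~0.1289
print(p13**2 * p12**4)       # ~4.86e-6, i.e. ~1 in 200,000 for one ordering
print(15 * p13**2 * p12**4)  # ~7.3e-5, i.e. ~1 in 14,000 for any ordering

# Expected value of each order statistic across six rolls, highest first:
arrays = [sorted((roll_4d6_drop_lowest(rng) for _ in range(6)), reverse=True)
          for _ in range(200_000)]
print([round(sum(col) / len(arrays), 1) for col in zip(*arrays)])
# ~[15.7, 14.2, 13.0, 11.8, 10.4, 8.5] -- right next to the Elite Array
# (15, 14, 13, 12, 10, 8)
```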
And now I forget why we were talking about either one.
Some posters (I
think you among them?) had said that it is unnatural or unrepresentative to have characters with such high stats. I have been pointing to the statistics of such things to show that no, it is this unnatural enforcement of the
exceedingly rare "true average" behavior that leads you to think these results are divergent; they are in fact
more natural,
more representative of the distribution used. (Admittedly, btw, 3d6-strict would generate lower overall numbers, but the fact is that 14-16
isn't nearly as unusual as you claim even with such methods.)
I think you might have misinterpreted me somehow, which isn't that difficult all in all.
We're also talking about two different definitions of 'lucky' here, which probably isn't helping. <snip> So yes, only the lucky survive; but here we mean 'lucky in play'. Being 'born lucky' matters little if at all.
Okay then. Two questions:
1. If being "born lucky" doesn't actually matter, why do you care? It seems you have argued that
your own position is irrelevant, because it's actually the underlying system math (being highly lethal, having save-or-die rolls, great uncertainty about results) that decides whether characters live or die, not their individual statistics. So why not
let players play those "born lucky"? It won't matter in the end, but they'll get their little bit of enjoyment from big numbers.
2. Why are these two forms of luck so different? I genuinely don't understand. The snipped parts didn't really illustrate why luck during character generation is of an entirely different kind from luck elsewhere in play.
What I've been trying to point out is that this is exactly what my numbers don't tell me.
Sorry man, gut feels aren't the same as statistical analysis. I get that things don't
look all that favorable to you. But crunching numbers (particularly on a much larger, unbiased data set) is what actually answers questions like this.
Rare indeed is an adventure where one player goes through seven characters!
Would 4e even be able to handle a truly long campaign without some serious slowdown in character advancement?
Um...yes? There are several 1-30 adventure paths written for 4e (including the excellent
Zeitgeist, which I'm still dying to play through...ah, someday). It is entirely possible to play a long-running game with a perfectly reasonable pace of advancement. Say you level up every 3-5 weekly sessions; that gets you roughly 13 levels per year, so accounting for breaks and needing at least a few sessions to wrap everything up once you hit max level, a two-and-a-half-year campaign would make perfect sense. I've only been a participant in one game that ever lasted nearly that long...and that's the game I currently DM.
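If you want to sanity-check that pacing arithmetic (assumed numbers on my part, not from any rulebook):

```python
# Weekly play, one level per 3-5 sessions, 30 levels total (all assumptions).
for sessions_per_level in (3, 4, 5):
    levels_per_year = 52 / sessions_per_level
    years_to_cap = 30 / levels_per_year
    print(f"{sessions_per_level} sessions/level: "
          f"~{levels_per_year:.0f} levels/year, ~{years_to_cap:.1f} years to 30")
# The middle case gives ~13 levels/year and ~2.3 years of play; add breaks
# and a wrap-up arc and two and a half years is about right.
```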
Because the rules-as-written are crap?
But they don't
have to be. That's why I keep talking about 4e. It's a game where the rules as written AREN'T crap. They sure as hell aren't
perfect, but they're quite effective at what they shot for. Dungeon World is another game where the rules as written emphatically are not crap. 13th Age is a third. It is
entirely possible to design rules that, as written, are ACTUALLY GOOD. That are actually WORTH using, so that you break them only when you know you need to. We're just caught on this idea that because rules will always need exceptions, you may as well not care about design quality and constantly force the DM to re-design the game on the fly. It's
incredibly frustrating to me how the "well, if they aren't perfect, I don't want them" attitude pervades the tabletop design community.
Recognizing errors is easier in hindsight than on the fly, to be sure.
Again: this assumes the ability to
see that there was an error in the first place. It is entirely possible to never realize what is wrong, and simply feel dissatisfied or continually work to "fix" your frustrations by going down blind alleys or adjusting unrelated elements. Hence why I bring up Dr. Howard Moskowitz and chunky spaghetti sauce all the time:
a full third of Americans had literally NO idea that they had been hankering for extra chunky spaghetti sauce their entire lives, because
having a preference or desire and
knowing what fulfills it are two completely different things.
I am NOT just saying, "Oh, well, these things can be hard to do on the fly." I'm saying these things may literally be
impossible for some people to figure out on their own, because the solution requires re-conceiving the problem with tools they don't know exist and asking questions they've never even considered.
For me, whacking save-or-die out of 1e would take about the same amount of work as introducing it to 5e.
Then I applaud your substantial design skill. I can emphatically say that ripping out all of 1e's save mechanics so that I felt confident I could have the experience I wanted, without running into nasty surprises, would be an absolutely
daunting task.
So the problem lies simply in their labelling of the first few levels as 'heroic', and you'll hear no disagreement from me on that.
'Heroic' shouldn't start until at least 5th level. But the marketing department has other ideas...and so low-level play gets rather badly mis-labelled.
Or, instead of saying "oh well that choice was bad," maybe we should recognize that there are (at least)
two different ideas of what low-level play is? Like, you are literally saying your idea of low-level play is the objective way low-level play SHOULD be, for everyone. I, as an alternative, am asserting that we should recognize that there's a sizable audience (particularly brand-new players) for whom "low-level play" SHOULD be somewhat "heroic" (while still being relatively simple, to introduce them to the game)...and yet ALSO recognize that there's another sizable audience (which includes you) for whom "low-level play" SHOULD NOT be even slightly "heroic" (while still potentially being a very rich, detailed experience
if desired). There is no way to uphold these two attitudes with a singular progression for absolutely everyone...and thus the "zero levels" idea comes into play. That way, it is
equally correct to say that "low-level play" "is heroic" and "is not heroic," because "low-level play" refers to two different things: 1st level characters (who are presumed to have demonstrated their heroism) and "apprentice" characters or whatever we want to call them, who explicitly have
not (fully) demonstrated their heroism yet.
By introducing this feature, you respect that there are two radically different styles of play, and design game rules that actually try to make each group happy, rather than forcing one to dance to the other's tune. That's why I argue so stridently for it. It actually says, "You know, BOTH of you want something that is D&D, so BOTH of you deserve to get what you want."
There's also not enough warning given in the PH to advise players that bad things will inevitably happen to their characters.
Oooooooooooor maybe "bad things will inevitably happen to [your] characters" isn't something objectively good, but is a really specific and fairly narrow interest among tabletop roleplayers, and thus generally isn't catered to directly? Further, maybe it's an interest that can be catered to purely through electing to (as you described earlier) run a game in "hard mode," with opt-in features that increase risk and reduce survivability?