
D&D 5E Assumptions about character creation

Lanefan

Victoria Rules
Honest question: Why should all "average" things equate to no bonus? It seems to me that the (as noted, purely hypothetical) "perfectly average" person is actually...decent at a lot of things? Treating "humans are average" as "humans have no bonus" leads to weird results on a d20 distribution, see the aforementioned "falls off ladders an average of 1 in 4 attempts" problem. Wouldn't it make more sense to say, well, average isn't +0, it's maybe +1 or +2?

(Obviously the other way to "fix" this is to alter the DCs of "easy" tasks so that they're actually very difficult to fail for someone with +0 modifier, but I'm not sure if that would work either.)
For something like falling off a ladder, most of the time climbing an ordinary ladder would be a no-roll auto-success in my eyes. If there's external stress involved e.g. a need to be extremely quiet or the ladder is unstable or the climber is being chased by ghouls then sure, a roll is warranted. I don't think this is controversial.
That...just seems like a really confining view of things. To me, "born lucky" should mean...well, someone who basically doesn't lose, or if they do lose it's incredibly unusual for them. As TVTropes defines the phrase, "A character that's so mind-bogglingly lucky, it defies all probability," up to and including having winning states seek the character out despite no personal effort to engage with them. E.g. a 95% win rate would be low for someone "born lucky." It...basically sounds like you define any likelihood of success higher than "clear majority" (2/3) as being "born lucky" which...I wasn't exactly dealt the best hand in life, and I'm not even that pessimistic about it.
The numbers I used were just for example. Replace them with 80% and 99% if you like; my point remains the same.
I honestly don't believe it's actually that bad, especially because I'm not talking about making something fancy. But substitute whatever simple skill you like. Would you feel you were "pretty good" at reading if 25% of the time you stared at the page and literally could not determine what the words said?
For a typical book, no. But there have been books I've read where I'd have been ecstatic with a comprehension rate as high as 75% - with that I might even have passed the courses!
Or "good at walking" if 25% of the time crossing a room caused you to fall down? (This last one cited because "unable to cross a room" was literally referenced as a desired character status upthread.)
A normal healthy person would ordinarily be no-roll auto-success here. That said, if someone's character concept is that they're very old or infirm or somehow physically disabled, then maybe walking across a room unassisted does become a roll-worthy challenge for them. A corner case, to be sure, but maybe (?) this is the sort of thing @MoonSong has in mind.
You're correct that roleplay is a wide-open field for contribution...except that whatever your stats are, that remains precisely identically true. There is literally zero difference in the potential roleplay contributions of a character with no stat above 6, compared to one with no stat below 18. Exactly zero difference. Which means that roleplay shouldn't be considered as part of the metric. It's already factored in--because it will be there no matter what, unless the player chooses not to (which, as noted, is on them--players should be free to choose not to engage, and even though that certainly has its problems, we as TTRPGers value the freedom to choose what to engage with). But having some strong statistic, some area of talent or expertise or whatever, actively enables additional stuff. It lets you put your money where your (roleplaying) mouth is, so to speak.

Perhaps a better way to say the above: Assume you have two players of effectively identical roleplaying ability. They're good players who work well with each other and the DM, who don't just do things for min-max potential but do take advantage of the tools provided to them, often in creative ways that surprise and delight the DM and fellow players. Alice has a character with high stats. Bob has a character with low stats. In any circumstance where roleplay alone is sufficient to address an issue, Alice and Bob are on reasonably equal footing--it is their choices and their interests which will be the key determinant of who contributes more in any given situation, and even a mediocre DM can handle that so both really do contribute to a similar degree. In any situation where base statistics set the terms of contribution (not just combat--anything depending on them), Alice clearly has the edge, and there's basically nothing Bob can do about that other than beg the DM for an extra advantage. So that covers two of the three possible situations: pure roleplay challenges, and pure statistic-based challenges. The remainder is, obviously, mixed challenges...but the problem is that no mix ends up providing a net benefit to Bob instead of Alice. Bob can only get back up to where Alice is...assuming Alice doesn't also do things to eke out extra mechanical advantages.

On pure roleplay, Alice and Bob are equal--no points scored. On pure statistics, Alice is ahead--one point to her. On mixes, there is never a situation where Bob can score a point, he can only (sometimes) avoid losing further points. That's the problem I have.
In our game last night a combat arose where the opponents could only be hit by magic weapons, and due to previous misfortune one non-caster PC didn't have one. So instead of just standing there (which he easily could have done) he found other ways to be useful - keeping watch, directing traffic (kind of like a field general), and once or twice acting as a blocker a.k.a. damage soak.

As for Alice and Bob in combat or other stat-dependent scenarios, instead of begging the DM for extra advantages maybe Bob might look to either finding ways to contribute around what Alice is doing or outright supporting Alice, while recognizing Alice is the key person in those moments. Similar to a quarterback and his o-line: the o-line are there purely as support to let the quarterback do his thing, but without those linemen the quarterback is toast.
The plural of anecdote is not data.
Every time I see this statement it irks me, as what is data but the plural of observations, and in any non-scientific realm what are observations other than anecdotes?
Technically, yes. That is, a true normal distribution has 68.2% of its probability density between -1 and +1 standard deviation, and 95.4% of its probability density between -2 and +2 standard deviations. Of course, IRL, many things are not normally distributed, but the normal distribution is generally a good prior when looking at human variability. (Often you end up with an asymmetrical distribution with a long upper tail--e.g. it's quite rare for a human to be 3 standard deviations below average height for their gender and geographic origin, but meaningfully more common to be 3 SD above, because of the physical and biological stressors involved.)
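(For anyone who'd rather verify those density figures than take them on faith, a couple of lines of Python will do it; the only tool needed is `math.erf`, which relates directly to the normal CDF:)

```python
from math import erf, sqrt

def prob_within(k_sd: float) -> float:
    """Probability a normal variable falls within +/- k_sd standard deviations."""
    return erf(k_sd / sqrt(2))

print(round(prob_within(1), 3))  # ~0.683, the 68.2% figure
print(round(prob_within(2), 3))  # ~0.954, the 95.4% figure
```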
Yes, and there's fairly simple ways to model that very thing (long-tail height distribution) in the game if one wants but it involves rolling a couple more dice during char-gen.
Okay. So, with that "funnel effect" thought in mind: What happens if you just skip over the funnel process, and focus on only those who survived a funnel? What happens to their distribution? How likely are they to be, compared to the un-funneled population, "special"? Should we expect the post-funnel population to meaningfully resemble the overall population in either average or spread of results?
I assume this is what's being done when DMs start their games at 5th level (or whatever other not-1st level), and fair enough for thems as wants it. Not me, though. I love low-level play, both as player and DM, and to skip it would rather butcher the fun for me.
So you don't want people "born lucky," but you want something "where only the lucky survive"? That sounds pretty clearly like you want most of your characters to fail...which is exactly the thing I'm talking about being asymmetrical toward player interests.
What I want is somewhat irrelevant. What I expect - and what I expect my players to expect - is that bad things (often but not always meaning death) can and will happen to their characters.

And 'born lucky' (as in having a significantly higher stat line than usual) does not necessarily translate into 'lucky to survive' once the puck drops, at least in my/our own games.

I have almost every character sheet I've ever DMed, as does the other main DM in our crew; and just for kicks I ran some numbers a while back. I took the character sheets of every character I could find that had lasted over a certain length of time (my cutoff was ten adventures, and at the time there were about 90 such characters), and a large random mittful of 100 or so of those who hadn't lasted as long, and compared their starting stats (after racial adjust) vs their career length. In both the 'high' and 'low' groups there were some characters whose careers were artificially cut short by the game ending; I didn't bother winnowing these out, as their spread was fairly consistent across the board, i.e. a wash.

The difference was surprisingly low. I'm not statistician enough to say whether it was even 'statistically significant' or not, but the eye test told me that to a very large extent starting stats are at best a very low determinant of a character's future career length.
A player that wants to find failure a lot can always up the difficulty, as it were. It is much harder to remove difficulty already baked into the game. Sorta like how it's very hard to whip an unreliable difficulty metric (such as 3e's CR system) into shape as a reliable one, but it's quite easy to either ignore or intentionally modify a reliable difficulty metric (such as 4e's XP budget system) such that you no longer have reliable difficulty estimates.
As I just said in another thread, encounter builders are for the birds. Any half-decent DM is quickly going to learn via trial and error what her party can handle, and it'll be different for every party and every player group. No encounter builder can possibly account for variables in party size, party composition, degree of optimization, optional rules in or not in use (relevant in 5e and a factor in old-school), accumulated magic items and possessions, internal level variance, or a bunch of other factors - so unless the intent is to tell people how to play (you must have x characters of a-b-c-d party composition always at the same level, which - sadly - 4e kinda leaned into) it's probably best for a DMG to largely steer clear of encounter-build formulae and just give some rough-edged advice.
I'll also be honest: it's a little ironic that you challenged the stuff I said earlier about selective pressure. In a world where "only the lucky survive" adventuring, those "born lucky" will become overrepresented among the population. Exactly how quickly depends on exactly how hard you mean that "only," but if I take you at the usual meaning of the phrase (as in, you're guaranteed to die unless luck factors in sooner or later), you're basically saying that those "born lucky" should predominate among adventurers, whether PCs or NPCs.
See above.
 



EzekielRaiden

Follower of the Way
They are less than a quarter of the population. Or should be. Again, my beef isn't with point buy. My beef is with rolling methods that make these numbers higher.
Are they? Where are you getting these statistics? Maybe in OSR D&D you might argue that, but not so much in modern D&D. Even if it were, again, I don't think this properly accounts for the selective pressures that apply to adventurers (whether PC or NPC). If, as Lanefan says, luck is the determining factor in winning (and your desire for sub-50% success rates would support that), then "born lucky" or "uberman" adventurers would be more common in relatively short order.

Someone with an IQ of 115 is at int 13. (Within a SD of the mean) Someone at IQ 125+ is in the 16+ range.
Er...no. Someone with an IQ of 115 is exactly at 1 standard deviation above the mean, not within one. I would say that's 14, possibly 15. Again: where are you getting these statistics? These don't even match the "standard deviation = modifier" approach, as you've made 1 SD too large and 2 SD much too small.
But failing one quarter of the time is significantly better than failing about half the time, and that in turn is better than failing more than half the time.
I mean, yes? I never said otherwise. Lower probability of failure is lower probability of failure.

As for where I'm getting the numbers 25% versus 60%: somewhere else you mentioned that achieving something 40% of the time wasn't the sign of an incompetent character, but that is still a character that fails something 60% of the time.
I don't recall saying that myself, but I have a memory like a sieve. So, allow me to correct the record (whatever it may be): I don't actually think 40% success rates with easy tasks (again, this is critical, I'm specifically speaking of easy tasks, which seems to be very quickly forgotten...) is anything like a reasonable success rate. I'd call that a reasonable success rate for fairly difficult tasks, like trying to persuade someone fairly skeptical or pulling off a gourmet multi-course meal on a tight budget. For an explicitly "easy" task, failing 60% of the time would be a sign of incompetence as far as I'm concerned.

To summarize:
"Easy" tasks (that can still be failed) should be very high success rate (90%+)...otherwise they are not "easy." Failure is a genuine surprise.
Moderate tasks should be in the (very roughly) 60%-75% success rate range. Reasonably achievable, but failure isn't a surprise per se.
Difficult tasks should be in the 40%-50% range--you're about as likely as not to fail them, so there's high tension for each such effort.
Formidable (for lack of a better term) tasks should be roughly 35% or less: success wouldn't be a surprise per se, but you expect to fail.
Nigh Impossible: 10% or less chance of success. Success is a genuine surprise.

All of the above labels are just descriptive tags to indicate that difficulty rises. And note all the "roughly"s in there--these are squishy categories, not absolute bright lines.
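For the curious, converting those target rates into concrete d20 DCs is mechanical. A quick Python sketch, assuming plain d20 + modifier vs. DC with no advantage and no natural-1/natural-20 special cases (the +3 modifier in the demo is just an illustrative pick, not anything from the discussion above):

```python
def success_chance(modifier: int, dc: int) -> float:
    """Chance that d20 + modifier meets or beats dc."""
    return max(0, min(20, 21 + modifier - dc)) / 20

def dc_for_rate(modifier: int, rate: float) -> int:
    """DC at which d20 + modifier succeeds at (roughly) the given rate."""
    return 21 + modifier - round(rate * 20)

# For a character with a +3 bonus, the bands land at roughly:
for label, rate in [("Easy", 0.90), ("Moderate", 0.65), ("Difficult", 0.45),
                    ("Formidable", 0.30), ("Nigh Impossible", 0.10)]:
    print(label, dc_for_rate(3, rate))  # Easy 6, Moderate 11, Difficult 15, ...
```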

Combat, for example, is mostly full of Moderate tasks (fighting opponents of very roughly analogous combat ability), with Difficult tasks not uncommon and some rare Easy tasks along the way. "Skill challenges" (things overcome through a sequence of skill checks, rather than combat per se) may run the gamut, but generally also sit in the Moderate range with a few Easy and a few Difficult tasks depending on the approaches players choose.

I don't remember how the character with six 18's came to be part of this discussion, but to me the ubermen I'm talking about are characters consistently above average on every stat and with more than one score above the normal range (16+).
Where is this average defined? (Essentially the same "where are you getting these statistics from" thing.) The typical stat is somewhere between 10 and 13 for most 5e characters--only their two best and their worst stats fall outside this range.
The failure rate of a 16+ is too low for me to notice, but once we are in an 8- I'm more likely to notice it. I'm not bothered by failing, but rather having so many failures and so many risks make things sweeter when I win at the end.
I find this a little hard to believe, frankly. Changing the rate of success by 20 percentage points (going from 8 to 16 in the stat) is meaningful, to be sure, but going from "wow I can't believe I never fail" to "ah, good, I do fail sometimes" doesn't jibe. You'd have to be going from something like 75% success to 95% success for that--and as you've already made expressly clear, you're expecting something close to the 40% success range.

And, again, please remember that I am chunking different kinds of actions into different groups. Failing 60% of the time on easy tasks means failing essentially all the time on what I called "Difficult" tasks above--forget Formidable or Nigh-Impossible. Failing 60% of the time on tasks that are supposed to be hard, on the other hand, is perfectly reasonable, even expected, unless you're a genuine expert in your field...and first-level characters generally shouldn't be experts in their field (yet).

I was talking about the value of low stats, and how low stats produce experiences that high stats don't. And how these weaknesses count in ways that just roleplaying a weakness while remaining mechanically optimal don't.
Alright, I'm...not really seeing how that disagrees with what I said, then. Choosing to not take Strength into high numbers in 4e is a valid choice, and because the game expects you to take on greater challenges with time, you not only start off weak with those things, you'll get (compared to "appropriate" challenges) even weaker with time. Fixed DCs do mean you slowly get better, but if you've gone from a 1st-level no-longer-green adventurer (which 4e explicitly says that's what 1st level characters are--they've been tested, but haven't made any kind of a name for themselves yet) to a literal demigod or living incarnation of magic (two different 4e Epic Destinies), I don't know if it should be all that surprising that you have a reasonable chance of busting down a door that you couldn't physically budge originally.

And...I'm not getting where this "roleplaying a weakness while remaining mechanically optimal" thing comes from. What does this refer to? I have given specific examples of how you can quite easily have real weaknesses, because the stats of D&D don't perfectly correspond to all human variations. I don't see how "cowardly" MEANS "low Charisma." A cowardly person who easily persuades others to protect them IS "charismatic" and making use of it, while still roleplaying a real and serious character fault, one that will deny them opportunities and put them in bad spots frequently. A character that has a single-minded devotion to her faith SHOULD have high Wisdom, as far as D&D is concerned--but that doesn't mean that she can't also have a deep acquisitive streak and a compulsion to steal shiny things. (This is, in fact, exactly the reason why we have the "tu quoque" fallacy IRL--someone can be quite Wise because they're flawed and have made lots of mistakes.)

No one is asking you to have Strength 16 and pretend that you can't actually lift boulders that you totally can. Instead, I'm suggesting that an 8 (which you never improve) is a perfectly reasonable "weakness" for adventurers, who must face great danger and thus any weakness can be a serious one. And that flaws/faults/whatever you want to call them are both much more interesting, and much more achievable regardless of system or rules or whatever else, for limiting the opportunities of a character and forcing them out of their comfort zone.

Like...do you really NEED the game to tell you, "You emphatically, unequivocally, consistently suck" in order to actually feel your character has any limits at all? Because that's really confusing, and...basically just factually incorrect?

I have had similar experiences. Except that instead of considering my rolls unfair relative to others, I consider them too good for what I want to play. You yourself have said it: there aren't many chances to play, so every character has to be something you want to play. I want characters with mechanical weaknesses, not characters without them that I pretend have them.
Okay but like...you can totally have mechanical weaknesses. That's why I gave the example I did, of the healer-focused Paladin. It's still completely possible to be a good Paladin who isn't strong (objectively; 8 is the lowest Strength score a standard 4e character can have, same as standard 5e), and in fact being a Paladin who is a good healer actually welcomes people with that particular weakness. It even has other mechanical weaknesses attached--most Cool Things Paladins can do that rely on Charisma are risky to do in melee combat, yet melee combat is exactly where Paladins need to be in order to take hits (the default role for Paladins). Meaning, by playing a low-Strength Paladin, you really are limiting yourself in a serious and meaningful way, while at the same time supporting the very story you spoke of (sucking at strength but wishing to SEEM or APPEAR strong, while actually being a very good healer). A low-Strength 4e Paladin doesn't have to pretend to be bad at typical Strength-based checks, they will be bad at them (failing more often than succeeding on anything but explicitly Easy checks, and even then only at the very earliest levels--even Easy checks may become a challenge as levels increase). So there's no "well I'm actually statistically optimal, I'm just faking being bad," hence my confusion.
A 14 doesn't necessarily mean special. A 16+ is; 16+ is beyond two s.d. And it isn't a straight multiply-by-six scenario; we solve this with a binomial calculation: 100% - 4.21% = 95.79%, and that to the sixth power means 77.25% of people won't have a single 16+ (the statistical outlier for high stats). Though I concede, about 19% of normal people will have one 16+.
Again, I disagree with these statistics, but even taking your numbers as they are, that's basically my point. If about 19% of all people have at least one 16+, what happens when we apply the "this is a population regularly threatened with death, which actually has many more deaths than the population overall" selection pressure? People who have no stat above 12 (for example) will appreciably die off more often than people who do, which will skew the distribution toward the upper end, inflating that 19%--perhaps by a lot. Like, if the death rate is 25% for people with at least one score of 16+, but 60% for people who don't have at least one 16+, then we're looking at (.19*.75)/(.19*.75+.81*.4) = .30547..., or about 30.5%, a substantial increase. And again, I don't buy this statistical distribution you've claimed--I'd like to see where you're getting the mean and standard deviation from for this.
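To spell out both calculations (the 4.21% per-stat figure is the one quoted above, and the 25%/60% death rates are my hypotheticals, not anything canonical):

```python
p_16plus = 0.0421                     # quoted per-stat chance of a 16+
p_none = (1 - p_16plus) ** 6          # chance that none of six stats is 16+
print(round(p_none, 4))               # ~0.7725, i.e. the 77.25% figure

# Selection pressure on the ~19% with a 16+ (survival = 1 - death rate):
surv_with, surv_without = 0.75, 0.40
share = (0.19 * surv_with) / (0.19 * surv_with + 0.81 * surv_without)
print(round(share, 4))                # ~0.3055, i.e. the ~30.5% figure
```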

This is a symptom of the system: you say a 75% success rate is too low for a good character doing something easy; I consider it too high for a character that is supposed to be bad at it. 40-45% failure is good enough for what I want. A bit under average, but mostly within the normal range.
I just...how is that an easy thing? How can an "easy" thing be something a (supposedly) average person fails at almost half the time? What does "easy" even mean in this context?

For something like falling off a ladder, most of the time climbing an ordinary ladder would be a no-roll auto-success in my eyes. If there's external stress involved e.g. a need to be extremely quiet or the ladder is unstable or the climber is being chased by ghouls then sure, a roll is warranted. I don't think this is controversial.
Given the "I don't want to pretend to have mechanical weaknesses when I don't," I think it's rather more controversial than you think. And even apart from that, the whole "roll Stealth every single round until you fail and get seen" problem would seem to indicate that there are yet further controversies to just giving people automatic successes.

The numbers I used were just for example. Replace them with 80% and 99% if you like; my point remains the same.
Well...no, it kind of doesn't. Because, for example, 85% success rates are only barely achievable for the all-18s person in 5e (and I don't even know if they can be achieved in 4e). Having Proficiency and a +4 modifier in 5e vs. a DC of 10 means 1d20+6. So you still fail on a roll of 1-3, or 15%. Even leaving aside that 99% success isn't a number you can really achieve in any d20 game, 95% success (the closest we can get without genuinely "doesn't fail ever") is literally impossible on such checks at 1st level...and this is for something a character IS supposed to be "good" at, because they have Proficiency. If we look at things a character is supposed to be untrained with ("bad" at, to a loose approximation), it's now d20+4, meaning you fail on a roll of 5 or less...which is below your 80% figure.

The numbers really do actually matter here. The numbers you gave are totally reachable...but don't come across as "born lucky." The numbers that do come across as "born lucky"...aren't reachable without effort (such as investing Proficiency). That's my point here.
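To make that ceiling concrete, assuming plain d20 + modifier vs. DC with no advantage: a 1st-level 5e character maxes out at +6 on a check (18 in the stat plus +2 proficiency), and 95% success vs. DC 10 needs more than that:

```python
def success_chance(modifier: int, dc: int) -> float:
    """Chance that d20 + modifier meets or beats dc."""
    return max(0, min(20, 21 + modifier - dc)) / 20

print(success_chance(6, 10))  # 0.85: proficient, 18 in the stat, vs DC 10
print(success_chance(4, 10))  # 0.75: same character, no proficiency

# Smallest bonus that reaches 95% vs DC 10 -- beyond any 1st-level +6:
min_mod_for_95 = next(m for m in range(20) if success_chance(m, 10) >= 0.95)
print(min_mod_for_95)         # 8
```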

For a typical book, no. But there have been books I've read where I'd have been ecstatic with a comprehension rate as high as 75% - with that I might even have passed the courses!
Again: conflating any check whatsoever (or, rather, specifically Hard checks) with specifically Easy checks, which I have explicitly stated several times. I 110% agree that for checks that are SUPPOSED to be hard, comprehension rates as high as 75% (or whatever) should be great. BUT I AM NOT TALKING ABOUT THAT. Please, please, PLEASE stop this incredibly annoying pivot from talking about what I actually said, to talking about a distinctly different thing, as if they were equivalent. They're not. Hard checks SHOULD have a different success rate!

A normal healthy person would ordinarily be no-roll auto-success here. That said, if someone's character concept is that they're very old or infirm or somehow physically disabled, then maybe walking across a room unassisted does become a roll-worthy challenge for them. A corner case, to be sure, but maybe (?) this is the sort of thing @MoonSong has in mind.
Given the examples explicitly described, yeah, that's certainly what I thought MoonSong was talking about.
As for Alice and Bob in combat or other stat-dependent scenarios, instead of begging the DM for extra advantages maybe Bob might look to either finding ways to contribute around what Alice is doing or outright supporting Alice, while recognizing Alice is the key person in those moments. Similar to a quarterback and his o-line: the o-line are there purely as support to let the quarterback do his thing, but without those linemen the quarterback is toast.
But those ARE begging the DM for other ways to contribute. Literally every single one of those things requires negotiating with the DM to even have the potential to do something useful. Because without the DM's active involvement in making those things useful, they don't contribute any more than giving flowery descriptions of the clothing he wears, or a Bard writing actual poetry to use when she casts a spell: all cool things, arguably vital to the best experience of roleplay, but not contributing to the party's success.

Every time I see this statement it irks me, as what is data but the plural of observations, and in any non-scientific realm what are observations other than anecdotes?
Because anecdotes aren't collected with any degree of rigor. If you treat them as data, they suffer the "sample size of 1" problem--meaning, their statistics become literally meaningless because we divide by (N-1)...and when N=1, what does that do?
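(Concretely: Python's statistics module flatly refuses the computation for a single observation, for exactly this reason:)

```python
import statistics

print(statistics.stdev([12, 14, 9]))  # fine: sample SD over N=3 observations

try:
    statistics.stdev([12])            # N = 1: the N - 1 divisor is zero
except statistics.StatisticsError as err:
    print("refused:", err)
```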

But to give you the full version of that statement, which gets abbreviated for concision: In literally every single thing where a spread of possible results happens, you are going to get individual cases that do X thing, pretty much guaranteed. Consider astrology. Astrology is pseudoscience, plain and simple, and for the vast majority of its predictions, it is either specific enough to actually be demonstrably wrong and then is demonstrably wrong, or vague enough to be "predicting" all possible results. But we absolutely expect that, a portion of the time, astrological predictions that actually ARE specific enough to be wrong turn out to be true.

Let's say Dave is criticizing the pseudoscientific nature of astrology. I then say to him, "Ah, but I once got a documented, official reading that formally predicted the day my now-wife would propose to me AND that I'd win the lottery on the same day, AND I DID!" Does my observation count as data? Most people interested in empirical rigor would say it absolutely does not, because I'm ignoring a vast data set of counter-examples and focusing only on my personal experience, which happened to be one of the rare cases where a specific prediction actually came true.

So: Does you having more fun with this one particular character, who happened to have lower stats, indicate that having fun is totally independent of (or even negatively correlated with) having high stats? No. It simply indicates that you, personally, on one occasion, had such a contrast. It doesn't illustrate any trends, it doesn't provide us a lick of meaningful evidence, because it is an isolated case without any consideration for the distribution from which that single case was drawn.

In other words: your personal story is great, and is certainly something you observed. But it doesn't actually tell us anything on its own. It is an anecdote...but it isn't data.

Yes, and there's fairly simple ways to model that very thing (long-tail height distribution) in the game if one wants but it involves rolling a couple more dice during char-gen.
Well...yeah. Like 4d6 drop lowest. Which literally gives a distribution where it is average to have a 12-13 and no high result is especially unusual (18 with 4d6 drop lowest is only about six hundredths of a point above being exactly 2 standard deviations up.)
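Brute-forcing all 6^4 = 1,296 outcomes of 4d6-drop-lowest bears this out (a quick sketch, exact enumeration rather than simulation):

```python
from itertools import product
from math import sqrt

# Every 4d6 outcome, keeping the best three dice
sums = [sum(sorted(dice)[1:]) for dice in product(range(1, 7), repeat=4)]

mean = sum(sums) / len(sums)
var = sum((s - mean) ** 2 for s in sums) / len(sums)
sd = sqrt(var)

print(round(mean, 2))           # ~12.24
print(round(sd, 2))             # ~2.85
print(round(mean + 2 * sd, 2))  # ~17.94 -- an 18 is barely past +2 SD
```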
I assume this is what's being done when DMs start their games at 5th level (or whatever other not-1st level), and fair enough for thems as wants it. Not me, though. I love low-level play, both as player and DM, and to skip it would rather butcher the fun for me.
Okay. So...how do you square this with the explicit and repeated advice that low level in 5e is meant to be a hard experience and most players should skip it? Like, this is again literally something the developers have told us. Why should EVERYONE have to slog through hard content you enjoy? Why should the default experience be Dark Souls?

What I want is somewhat irrelevant. What I expect
"Wanting" vs "expecting" sounds like a distinction without a difference. Care to elaborate?

And 'born lucky' (as in having a significantly higher stat line than usual) does not necessarily translate into 'lucky to survive' once the puck drops, at least in my/our own games. <snip>The difference was surprisingly low. I'm not statistician enough to say whether it was even 'statistically significant' or not, but the eye test told me that to a very large extent starting stats are at best a very low determinant of a character's future career length.
I mean...yes? I literally said that earlier. The difference should be (assuming a "high stat" character has just 1-2 stats of 16+ while a "low stat" one does not) at best a 10% or 20% higher overall survival rate, which might end up being (say) the difference between 50% and 60%. Which would mean if you've played, say, a thousand characters total and 60% of them were "low-stat" while 40% were "high-stat," you'd be looking at about .6*.5*1000 = 300 "low-stat" survivors vs .4*.6*1000 = 240 "high-stat" survivors--overall numbers not all that different. (Plus...old school D&D is explicitly super lethal, as I already said, which skews things toward most characters not surviving.) A more useful metric would be survival time rather than survival rate, because again AIUI and as I have experienced it, old-school play has a very high body count. Like, losing at least one character on average every session is normal.

As I just said in another thread, encounter builders are for the birds. Any half-decent DM is quickly going to learn via trial and error what her party can handle,
Hard disagree. I'd like to see your evidence for this claim.

See above.
Not sure which specific thing I'm supposed to see above, so some elaboration would be useful (though I presume you'll cover that in responses to things I've said above in this post!)

Moonsong said:
I'm not bothered by failing, but rather having so many failures and so many risks make things sweeter when I win at the end.
This!
Technically speaking, I'm not bothered by this either. But when the successes are rare and the failures frequent and dramatic (e.g. lots of character death, such that I go into each new character presuming it will die and will rarely be surprised otherwise--something you explicitly described yourself as doing), it doesn't make the successes feel sweet. It makes them feel like false hope dangled in front of me. And, again, it is much easier to ADD "you fail a lot, and probably die a lot" to systems that don't feature that, than to remove it from games that do; a system built without early-D&D "save or die" mechanics can always have a few added in, whereas a system full of them presents a minefield that must either be cleared, navigated...or run afoul of.

If you'd like an analogy, for me, failure (and especially death) is like vinegar: pungent, but removing it from my repertoire would be bad and would make my dishes taste worse. It must be used carefully or else it sours everything else about a dish, however, and that doesn't improve if you try to balance it out with some sweetness and let it sit for a while (the vinegar can, in fact, become stronger due to fermenting some of the sugar). Likewise failure and death. It isn't even "I'm not opposed to their presence"--I WANT failure and death as carefully-used tools, because without them, the cool and exciting stories I want to experience cannot happen. But when failure is the typical state of affairs, when even easy tasks are a great challenge, when I go into each character not wondering if they'll die, but wondering whether I'll get one session or four out of them before they do die...why invest? Why care? I'll just get disappointed again. Whatever I build, whatever I accomplish, it'll be taken away from me sooner or later. Might as well not bother. Might as well not even play. Then at least I can chill out doing nothing, rather than getting my hopes up one more time just to have them dashed against the rocks.
 

I don't consider 3d6 to represent the average person. There's been no reason to think so, since basic D&D was the last time this was suggested as a way of making characters.

If you prefer PCs to be unexceptional, it makes just as much sense to say that the average person is generated by 4d6 drop lowest.
 

Chaosmancer

Legend
Technically speaking, I'm not bothered by this either. But when the successes are rare and the failures frequent and dramatic (e.g. lots of character death, such that I go into each new character presuming it will die and will rarely be surprised otherwise--something you explicitly described yourself as doing), it doesn't make the successes feel sweet. It makes them feel like false hope dangled in front of me. And, again, it is much easier to ADD "you fail a lot, and probably die a lot" to systems that don't feature that, than to remove it from games that do; a system built without early-D&D "save or die" mechanics can always have a few added in, whereas a system full of them presents a minefield that must either be cleared, navigated...or run afoul of.

If you'd like an analogy, for me, failure (and especially death) is like vinegar: pungent, but removing it from my repertoire would be bad and would make my dishes taste worse. It must be used carefully or else it sours everything else about a dish, however, and that doesn't improve if you try to balance it out with some sweetness and let it sit for a while (the vinegar can, in fact, become stronger due to fermenting some of the sugar). Likewise failure and death. It isn't even "I'm not opposed to their presence"--I WANT failure and death as carefully-used tools, because without them, the cool and exciting stories I want to experience cannot happen. But when failure is the typical state of affairs, when even easy tasks are a great challenge, when I go into each character not wondering if they'll die, but wondering whether I'll get one session or four out of them before they do die...why invest? Why care? I'll just get disappointed again. Whatever I build, whatever I accomplish, it'll be taken away from me sooner or later. Might as well not bother. Might as well not even play. Then at least I can chill out doing nothing, rather than getting my hopes up one more time just to have them dashed against the rocks.

To throw my own anecdote into this, I've experienced within the past year a situation with a play by post game that for me encapsulates what Raiden here is talking about.

And I mention it is a play by post game, because the length of time between "turns" exacerbates the issue a lot I think, since even a quick scene can take two to three days to type out.

But, I want to stay active in the group, I don't want to just disappear into the background, so I am often putting my character forward. And, quite often, after a few paragraphs of what my character would say, my DM asks for a persuasion check. And I hate it. Literally, the last time he asked for one, I did not want to roll and almost asked him to reconsider. Because I have a +3 to Persuasion. I actually have proficiency in the skill.

But, success usually requires a 13 to 15 result. Which means at best I have a coin flip chance of succeeding. And I've failed repeatedly. Once in a situation where the DM realized that he did not want me to fail, because it would ruin the story, and they had to backpedal. Which made it even more obvious that I had failed to a degree that almost ruined 2 years' worth of gaming. And that can never be fun.


And sure, like I said, the format of this game, with its slower pace of rolls, is likely making it worse, since if we only roll once a week each roll is a big deal. But I've never found repeated failure particularly fun. It just drags me down. It makes me wish we weren't rolling the dice, because rolling the dice is just leading to me making things worse.
 

Bagpuss

Legend
Once, a friend and I were making characters for a one shot con game. He flipped out when I put a 13 in my prime stat. "I won't play with you if you do that!"

To be fair there is a certain logic to that.

When you are forming your adventuring party, are you going to pick a weedy-looking fighter or the next one in line who is noticeably stronger?

A dangerous profession like adventuring is going to weed out the less capable pretty quickly, and people aren't likely to want to work with people that are a liability.

Still it is fun to play against type every now and again.
 


Lanefan

Victoria Rules
Are they? Where are you getting these statistics? Maybe in OSR D&D you might argue that, but not so much in modern D&D. Even if it were, again, I don't think this properly accounts for the selective pressures that apply to adventurers (whether PC or NPC). If, as Lanefan says, luck is the determining factor in winning (and your desire for sub-50% success rates would support that), then "born lucky" or "uberman" adventurers would be more common in relatively short order.
More common, perhaps, but not exclusive - some "born unlucky" adventurers will survive despite their "unluckiness". And it's fun to see, for those who like cheering for the underdog.
To summarize:
"Easy" tasks (that can still be failed) should have a very high success rate (90%+)...otherwise they are not "easy." Failure is a genuine surprise.
Moderate tasks should be in the (very roughly) 60%-75% success rate range. Reasonably achievable, but failure isn't a surprise per se.
Difficult tasks should be in the 40%-50% range--you're about as likely as not to fail them, so there's high tension for each such effort.
Formidable (for lack of a better term) tasks should be roughly 35% or less: success wouldn't be a surprise per se, but you expect to fail.
Nigh Impossible: 10% or less chance of success. Success is a genuine surprise.

All of the above labels are just descriptives to indicate that difficulty rises. And note all the "roughly"s in there--these are squishy categories, not absolute bright lines.
I largely agree here, and I wonder if part of the problem is that the game's terminology and the words' common meanings are getting in each other's way.

Instead of "easy", for example, something with a 90+ success chance should be labelled as "trivial" or "simple".
"Easy" then covers those tasks with maybe a 65-85% success chance.
"Moderate" then hits those in the 40-60% range.
"Difficult" takes care of the 15-30% range.
"Formidable" is 10% or less.
"Nigh impossible" is just that: if you roll 01% you might succeed, or might not; and on anything higher you fail.

There's an argument to be made that if something has a 90+% success chance and failure carries no real danger then just let it happen and carry on. It's when failure carries a real danger that even the most trivial tasks need to be looked at e.g. you're climbing a ladder (trivial task) but if you get unlucky and fail those ghouls are gonna catch you... :)
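For anyone who wants to play with these bands, the success-chance arithmetic is easy to script. Here's a purely illustrative Python sketch (the function names are mine, not anything from any rulebook): one function gives the chance that d20 + modifier meets or beats a DC, and the other finds a DC that lands at or below a target success rate.

```python
def success_chance(modifier, dc):
    """Chance that d20 + modifier meets or beats dc.

    Pure uniform-d20 math; natural 1s and 20s are not special-cased.
    """
    needed = dc - modifier               # lowest die face that still succeeds
    made = 21 - max(1, min(21, needed))  # how many of the faces 1..20 succeed
    return made / 20

def dc_for_chance(modifier, target):
    """Smallest DC whose success chance drops to `target` or below (rough inverse)."""
    for dc in range(1, 41):
        if success_chance(modifier, dc) <= target:
            return dc
    return 40
```

So, for example, a character with a +3 bonus facing DC 14 sits at success_chance(3, 14) = 0.50, i.e. a literal coin flip, and dc_for_chance(3, 0.5) hands back that DC 14.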
Like...do you really NEED the game to tell you, "You emphatically, unequivocally, consistently suck" in order to actually feel your character has any limits at all? Because that's really confusing, and...basically just factually incorrect?
It'd be a refreshing change, now and then, from "You emphatically, unequivocally, consistently can't be stopped" which 4e-5e play tends toward at anything other than very low levels.
Given the "I don't want to pretend to have mechanical weaknesses when I don't," I think it rather is more controversial than you think. And even apart from that, the whole "roll stealth every single round until you fail and get seen" problem would seem to indicate that there are yet further controversies to just giving people automatic successes.
Stealth is its own hot mess, to be sure.

I'm in the camp that says one roll represents your best attempt until-unless something materially changes in the fiction. So, if you're trying to stealth your way across a large lawn or open field to the side of a castle, one roll says how well your stealth attempt goes for the whole distance. Even if it takes you 7 rounds to cover it, you only roll once.

Were you to then have to use a gravel path to get around the corner of the castle to the entrance, that'd be another stealth roll as something - in this case the type of surface you're walking on - has changed.
Well...no, it kind of doesn't. Because, for example, 85% success rates are only barely achievable for the all-18s person in 5e (and I don't even know if they can be achieved in 4e). Having Proficiency and a +4 modifier in 5e vs. a DC of 10 means 1d20+6. So you still fail on a roll of 1-3, or 15%. Even leaving aside that 99% success isn't a number you can really achieve in any d20 game, 95% success (the closest we can get without genuinely "doesn't fail ever") is literally impossible on such checks at 1st level...and this is for something a character IS supposed to be "good" at, because they have Proficiency. If we look at things a character is supposed to be untrained with ("bad" at, to a loose approximation), it's now d20+4, meaning you fail on a roll of 5 or less...which is below your 80% figure.

The numbers really do actually matter here. The numbers you gave are totally reachable...but don't come across as "born lucky." The numbers that do come across as "born lucky"...aren't reachable without effort (such as investing Proficiency). That's my point here.
You're not getting it. I don't care what the specific numbers are - 10% vs 30%, 65% vs 85%, whatever; or how they relate to the specific game system - my point is the mere presence of that amount of difference between them makes the higher "born lucky" and the lower not.
Again: conflating any check whatsoever (or, rather, specifically Hard checks) with specifically Easy checks, which I have explicitly stated several times. I 110% agree that for checks that are SUPPOSED to be hard, failure rates as high as 75% (or whatever) should be great. BUT I AM NOT TALKING ABOUT THAT. Please, please, PLEASE stop this incredibly annoying pivot from talking about what I actually said, to talking about a distinctly different thing, as if they were equivalent. They're not. Hard checks SHOULD have a different success rate!
::sigh:: well, so much for that attempt at humour...carry on...
Given the examples explicitly described, yeah, that's certainly what I thought MoonSong was talking about.

But those ARE begging the DM for other ways to contribute. Literally every single one of those things requires negotiating with the DM to even have the potential to do something useful. Because without the DM's active involvement in making those things useful, they don't contribute any more than giving flowery descriptions of the clothing he wears, or a Bard writing actual poetry to use when she casts a spell: all cool things, arguably vital to the best experience of roleplay, but not contributing to the party's success.
You're conflating success with contribution. They are not the same!

Contribution is in the attempt to do something. Does a Fighter who stands in melee and manages to miss on every single swing she takes still contribute? Hell yes. Or a Rogue who can't get into a combat due to lack of space but who instead keeps watch behind is still contributing, even if there's nothing back there to see.

Not contributing is to attempt nothing. The Rogue who, instead of keeping watch, just tunes out until the battle's over contributes nothing because he isn't even trying.
Because anecdotes aren't collected with any degree of rigor. If you treat them as data, they suffer the "sample size of 1" problem--meaning, their statistics become literally meaningless because we divide by (N-1)...and when N=1, what does that do?
I'm not a statistician so I've no idea why you'd divide by anything. If I have 47 anecdotes where 27 of them say one thing (more or less), 14 say a second thing (more or less), 3 say a third thing but they're all exactly the same, and the other three are outliers, all I need is an eye test to tell me that one of those is fairly common, one is uncommon, one is rare but probably can't be ignored, and there's some outliers. Whether that sample of 47 translates to anything bigger is open to question, but it still indicates a trend.

Look at the differing views and opinions people have of the various D&D editions we've had. Listen to one and it won't tell you much, but listen to enough of them and you'll get a good idea about how popular each one is/was in relation to the others.
So: Does you having more fun with this one particular character, who happened to have lower stats, indicate that having fun is totally independent of (or even negatively correlated with) having high stats? No. It simply indicates that you, personally, on one occasion, had such a contrast. It doesn't illustrate any trends, it doesn't provide us a lick of meaningful evidence, because it is an isolated case without any consideration for the distribution from which that single case was drawn.
But what this does indicate beyond question is that it can be done, because it has been done; and therefore can be done again. What I'm arguing against is a system that prevents it from being done again because that system is designed to not allow that situation to arise in the first place.
Well...yeah. Like 4d6 drop lowest. Which literally gives a distribution where it is average to have a 12-13 and no high result is especially unusual (18 with 4d6 drop lowest is like, four hundredths of a point above being exactly 2 standard deviations up.)
True average on 4d6k3 would be 12-12-12-12-13-13 or 12-12-12-12-12-13.
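For what it's worth, the exact 4d6-drop-lowest numbers are small enough to enumerate outright rather than argue over. A quick illustrative Python sketch (all 1296 equally likely outcomes; variable names are mine):

```python
from itertools import product
from statistics import fmean

# Every 4d6 outcome, keeping the best three dice
rolls = [sum(sorted(dice)[1:]) for dice in product(range(1, 7), repeat=4)]

mean = fmean(rolls)                                  # about 12.24
sd = fmean([(r - mean) ** 2 for r in rolls]) ** 0.5  # about 2.85
p18 = rolls.count(18) / len(rolls)                   # 21/1296, about 1.6%
print(round(mean, 2), round(sd, 2), round(p18, 4))
```

That puts an 18 at (18 - 12.24) / 2.85, roughly 2.0 standard deviations above the mean, which squares with the "just over 2 SD" claim, and the per-stat mean of about 12.24 matches the 12s-and-13s stat line.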
Okay. So...how do you square this with the explicit and repeated advice that low level in 5e is meant to be a hard experience and most players should skip it?
I don't even try to square it, because I see that advice as being horrible, and as something that I would never wish to promote or support.
"Wanting" vs "expecting" sounds like a distinction without a difference. Care to elaborate?
Hypothetical example: I as DM might for some reason want to see a high degree of character continuity through the next campaign. But, knowing how the game system* works and knowing that my players tend to be reckless**, I can easily come to expect (or predict, same thing in this case) that the degree of character continuity is very likely going to be somewhat lower than I want.

* - I can change this.
** - I can't change this.
I mean...yes? I literally said that earlier. The difference should be (assuming a "high stat" character has just 1-2 stats of 16+ while a "low stat" one does not) at best a 10% or 20% higher overall survival rate, which might end up being (say) the difference between 50% and 60%. Which would mean if you've played, say, a thousand characters total and 60% of them were "low-stat" while 40% were "high-stat," you'd be looking at about .6*.5*1000 = 300 "low-stat" survivors vs .6*.4*1000 = 240 "high-stat" survivors--overall numbers not all that different. (Plus...old school D&D is explicitly super lethal, as I already said, which skews things toward most characters not surviving.) A more useful metric would be survival time rather than survival rate, because again AIUI and as I have experienced it, old-school play has a very high body count. Like, losing at least one character on average every session is normal.
I was measuring survival time, in terms of adventures survived/appeared in. I could measure in terms of sessions played (I have those numbers too, or at least the data to generate such) but digging down and analysing to that degree of granularity is extremely tedious: these aren't in any database, I do it all by hand.

And at some point every character either perma-dies or perma-retires, with some of those perma-retirements being forced by the campaign ending; so there's always a hard start point (the character's introduction to play) and a hard end point; long-term characters often have several sub-stops and sub-starts in between.

It's also probably worth noting that my campaigns go on for many years, with some turnover of long-term characters occurring due to player choice: they want to play something new now, and maybe cycle the old character back in later (as player, I do this all the time!). An analysis like this in a much shorter campaign would be far less useful, as the campaign would end before any of that long-term turnover could occur.

I have no idea what all those numbers mean in what I quoted.
Hard disagree. I'd like to see your evidence for this claim.
My evidence is that encounter-builders simply make far too many assumptions about far too many things to be of much real use once the puck drops. 3e's CR system assumed 4 PCs all of the same level and wealth, containing a certain spread of classes. 4e's EL system assumed 4 or 5 PCs all of the same level and wealth, and with each 'role' represented.

How many DMs have those perfect parties? Not many! And so every DM who doesn't have one of those perfect parties is left in trial-and-error mode but with the added complication of having to either argue with or dispense with an encounter-building system that's just getting in the way.

Better, I say, that DMs learn by trial-and-error right from the start; as they're inevitably going to need that skill eventually anyway.
Not sure which specific thing I'm supposed to see above, so some elaboration would be useful (though I presume you'll cover that in responses to things I've said above in this post!)
I was referring to the bit where I noted my number-crunching re starting stats vs expected length of adventuring career.
Technically speaking, I'm not bothered by this either. But when the successes are rare and the failures frequent and dramatic (e.g. lots of character death, such that I go into each new character presuming it will die and will rarely be surprised otherwise--something you explicitly described yourself as doing), it doesn't make the successes feel sweet. It makes them feel like false hope dangled in front of me. And, again, it is much easier to ADD "you fail a lot, and probably die a lot" to systems that don't feature that, than to remove it from games that do; a system built without early-D&D "save or die" mechanics can always have a few added in, whereas a system full of them presents a minefield that must either be cleared, navigated...or run afoul of.
By the same token, a system full of save-or-dies can always have some of them toned down or stripped out entirely - it runs both ways.
If you'd like an analogy, for me, failure (and especially death) is like vinegar: pungent, but removing it from my repertoire would be bad and would make my dishes taste worse. It must be used carefully or else it sours everything else about a dish, however, and that doesn't improve if you try to balance it out with some sweetness and let it sit for a while (the vinegar can, in fact, become stronger due to fermenting some of the sugar). Likewise failure and death. It isn't even "I'm not opposed to their presence"--I WANT failure and death as carefully-used tools, because without them, the cool and exciting stories I want to experience cannot happen. But when failure is the typical state of affairs, when even easy tasks are a great challenge, when I go into each character not wondering if they'll die, but wondering whether I'll get one session or four out of them before they do die...why invest? Why care? I'll just get disappointed again. Whatever I build, whatever I accomplish, it'll be taken away from me sooner or later. Might as well not bother. Might as well not even play. Then at least I can chill out doing nothing, rather than getting my hopes up one more time just to have them dashed against the rocks.
I take it you're not a fan of 'rogue-like' computer games, then. :)
 

Zsong

Explorer
Bah. Unless you're rolling ability scores in order, take what you get, no rerolls, you're not really rolling anyway. You're just doing an awkward run around the dull bland predictability of point buy.

However, I do enjoy how people rolling dice completely destroys the optimisers' baselines for all their calculations.
I just don’t allow multiclassing and that weeds out the power gamers.
 

Hriston

Dungeon Master of Middle-earth (He/him)
Again, an orc can end up with a higher Intelligence than a different gnome character. But that has nothing whatsoever to do with being an orc. If the orc character’s player had chosen a gnome instead, they would have an even higher int score. Ergo, gnome would have been a better choice (assuming a higher intelligence is your priority when building a wizard, which I grant is not everyone’s priority.)

I see the disconnect. I am not saying that the player choosing to play the orc wizard is choosing to play a worse wizard. I am saying that player is making a worse choice of race, given that they are also choosing to play a wizard. They may end up with a better wizard than the player who chose to play a gnome wizard (for a given definition of “better wizard,” which again I concede is far from universal); however, the player who chose to play a gnome still made a better choice of race for their wizard, even if the wizard didn’t end up being better overall.


No, but the orc player did, by choosing an orc, guarantee themselves a lower intelligence than they would have had if they had chosen a gnome.


So?

I’m not really interested in how the orc wizard compares to other wizards. Yes, of course, if you roll for scores it is entirely possible to end up with a higher score in an ability that you did not receive a racial bonus to than another, separate character got in a score that they did receive a racial bonus to. But that doesn’t really mean anything other than “random rolls have random results.” I can’t do anything useful with that information. What I care about is how a character with one race compares to itself with a different race. That is valuable information when building a character that can influence the player’s decision of what race to play.

I think there's an interesting (to me) difference here in how we're each looking at ability score generation. It seems to me like your assumption is that there's a fixed set of scores that a player is somehow predetermined to roll, and that their choice of race is weighed in relation to this assumption, that there's just this one set of numbers. So the racial modifiers are always compared directly to one another because the assumption is that they'll be added to the same number. This seems entirely rational, and I tend to think of things this way myself, but I'm not sure if it's the only or even right way to look at it.

The alternative view is that when you choose a race (and class) first, before rolling, you're choosing a character whose scores are each a field of possibilities. Your highest score can reasonably be expected to be anything from 10 to 18, so for an orc wizard that's an Intelligence of 8 to 16, while for a gnome wizard, an Intelligence of 12 to 20. The two characters overlap in the 12 to 16 range, which the orc has a roughly 93% chance of coming away with, while the gnome's chance of having a score that low is a still sizable 20%. So around 19% of the time, the two characters are roughly equal in the Intelligence department, assuming they both put their high score there.

Now, obviously the gnome has better odds of having a high Intelligence, but I think it's a long way from being an "always better" choice. It's just better most of the time, and the thing is you never know when the dice are going to give your orc a high (enough) Intelligence or your gnome an Intelligence that's less so.
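Those percentages check out, for what it's worth. Here's an illustrative Python sketch (variable names mine; it assumes, as the post does, an orc at -2 Int and a gnome at +2) that enumerates the exact 4d6-drop-lowest distribution and then asks about the highest of six rolls, the score a wizard would presumably put in Intelligence:

```python
from itertools import product

# Exact distribution of a single 4d6-drop-lowest roll (1296 equally likely outcomes)
counts = {}
for dice in product(range(1, 7), repeat=4):
    s = sum(sorted(dice)[1:])
    counts[s] = counts.get(s, 0) + 1

def p_single_le(x):
    """P(one 4d6-drop-lowest roll is <= x)."""
    return sum(c for s, c in counts.items() if s <= x) / 1296

# The six rolls are independent, so P(max <= x) = P(single <= x) ** 6
p_orc_hits_12_to_16 = 1 - p_single_le(13) ** 6  # orc needs a raw 14+ for Int 12+
p_gnome_stuck_at_16 = p_single_le(14) ** 6      # gnome's raw max <= 14 caps Int at 16
print(round(p_orc_hits_12_to_16, 3), round(p_gnome_stuck_at_16, 3))
```

The orc lands in the 12-16 overlap about 92.8% of the time and the gnome stays at or below it about 20.6% of the time, so the "roughly 93%", "20%", and "around 19% of the time both are roughly equal" figures all hold up.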
 
