
D&D General [rant] The conservatism of D&D fans is exhausting.

And yet when I brought up the example of the Warlord enemy from 5e, what was the very first thing that was suggested?

To call it magic. No explanation of how. No explanation of what. No reference to the "fear" spell (which wouldn't work this way anyway!). Just "rewrite the existing monster as though it's magic, and there's no problem".

The very thing you just claimed never happens, happened in this thread, BEFORE you claimed it never happens!

Here's the quote, in case you were wanting a reference.

And yes, this specifically includes the Battle Master! Not only that, it does so in a way that exploits (a) the writing as being vague, and (b) "at least semi-mystical", meaning, magic as an excuse without anything more than "it's magical, move along."
Are you serious? You're going to conflate a preliminary suggestion for making a warlord with some supernatural abilities with "Well, since his persuasion check was successful, I'm saying 'MAGIC!' and now your PC has to obey it"?

Those aren't the same. One is an idea for how to make some of the warlord abilities work without non-magical mind control, one that would have more fleshed-out lore if actually designed; the other was a weak justification after the fact.

So if that's your example, then the thread hasn't seen what I'm talking about.
 


I'm not, since 1) your premise of 100% matching the real world is not one anyone uses or needs, and 2) you can in fact (and it is a fact) have a logical output from an input not based completely in logic (your premise of a completely illogical input doesn't exist at all).

Really this whole argument boils down to 1). Most of the thread has been people arguing against 1), but I don't think any of us have been advocating for 1) at all. I think as the argument gets deeper, people are being forced to stake out territory that probably isn't even connected to their original ideas.

But it is ultimately about whether one accepts that a GM can present a believable enough world, with enough of a life of its own, that as the characters go about their business, they not only feel like the NPCs and events exist around them, but those things have a degree of internal consistency that means player choices do matter (i.e. there is a difference between going to visit your old friend Iron Arm Lung on Tuesday to stay for the week instead of Friday, because on Wednesday a group of assassins is heading to his door). None of this has to be a perfect model. It just needs to be good enough for the group. I think in most cases that doesn't mean you are constantly charting all the moving parts like pinballs in a machine, but that you at least consider those things as they come up, and even make good use of tables to help facilitate the process (i.e. you set down what the assassins are doing, where they are, and where Iron Arm Lung is, once these parts start becoming relevant).
 

I think, to some degree, this would depend on framing. There have been 266 popes, and there've been a baker's dozen of years with at least three popes. 1276 had four popes, and all but 1978 happened between 827 and 1605. If we're talking about the early modern world or later (roughly 1500 on), I'd agree that it's implausible. In a medieval world, it seems plausible enough. Not common, but not shocking, both possible and reasonable.
I've been thinking about this and I don't think it is plausible for the old world, either. 12 of the 13 times occurred over a 778-year period, or once every 64.8 years on average. That's a less than 2% chance of it happening in any given year.

If someone in one of those years said in, say, February, after a new pope was selected, that he thought there would be three popes that year, it would be 1) unreasonable for him to think that, based on a less than 2% chance of it happening, and 2) improbable that he would be correct. Those fail to pass the reasonable/probable criteria for being plausible.
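For what it's worth, here is a quick back-of-the-envelope check of those figures (a minimal sketch, assuming the 827-1605 window and the 12 qualifying years from the posts above, and treating every year as equally likely):

```python
# Minimal sketch (not from the post itself) checking the arithmetic above,
# assuming the 827-1605 window and 12 qualifying three-pope years.
span_years = 1605 - 827          # 778-year span
three_pope_years = 12

avg_gap = span_years / three_pope_years      # ~64.8 years between occurrences
p_any_year = three_pope_years / span_years   # ~0.015, i.e. just under 2%

print(f"average gap: {avg_gap:.1f} years")
print(f"chance in any given year: {p_any_year:.1%}")
```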
 

@robertsconley

From how I read Rob, it sounds like his game runs like a solid, well-designed software system honed over decades of improvements and architectural refinements. For such a system, the majority of the time is spent just informing the user what the system is capable of and watching it work its magic. This is something very different from a one-shot or a standard 10-session campaign, which essentially works like a startup.

As it so happens, I was trained as a software engineer. While I did not formally minor in it, I took a fair number of courses in systems analysis. My intended minor was actually geography, with an eye toward entering the GIS field as it existed in the 1980s, but life went in a different direction.

Since then, I’ve worked as the head programmer for a company that manufactures metal-cutting machines and later moved into work involving automated production lines for metal processing. It’s a small, family-owned company with a tight software and hardware development team, so all of us wear multiple hats, including manning the support lines to this day.

The main applications of this to tabletop roleplaying for me are:

  • Applying the principles of systems analysis, which extend beyond software to any complex structure, including RPGs and campaigns.
  • Listening to users (or players), identifying their needs, and designing solutions.
  • Diagnosing complex issues from incomplete information and resolving them effectively.
  • Developing skills in technical writing and communication.
  • Handling chaotic systems: for instance, dynamically controlling the height of a cutting head over uneven metal to maintain precise tolerances at high speed.
  • Managing competing interests: I’m responsible for coordinating the needs of bosses, tech staff, sales, and customers to design coherent, usable systems. So yes, it’s as much about people as it is about software development.
From all of this, the main takeaway is that I have a deep appreciation for the implications of different creative goals, if I know what those goals are. And just like in my industry, even though every manufacturer is cutting metal, each has a different system and approach. These result in different implications, workflows, and even "feel" despite the same end product: a pile of cut metal parts.

Similarly, even though all RPG campaigns aim to deliver fun and adventure, the way they do it matters. Some approaches, like my Living World sandbox, are structurally distinct from others, and that leads to differences in how players experience the setting.

So I push back against claims that what I do is the same as any other referee or playstyle. I have done the analysis and the playtesting, and I have refined the process piece by piece over years, using player feedback to adjust it, sometimes drastically but often through fine-tuning. And I'm still not done; I have areas where I'm weaker than I'd like to be.

Last night I caught up with an old friend who’s a big fan of Burning Wheel and has run several campaigns. Since he’s played in my campaigns and knows me well, we had a great conversation about similarities and differences. This was the big takeaway:

[attached screenshot]


And by “roleplay-heavy,” he means players who create distinct personalities, motivations, and goals, with campaigns that revolve around those character traits.

I can see how Burning Wheel strongly supports that approach. It supports it at every level, from character creation to conflict resolution to campaign management. However, it expands on that to create its own distinct take, like minimizing prep.

[attached screenshot]



After our talk, I’d say the major difference between my Living World approach and Burning Wheel is focus.

  • In my Living World sandbox, the focus is on characters interacting with the setting.
  • In Burning Wheel, the focus is on challenging the characters’ beliefs and motivations.
And again, to be clear, it is not a zero-sum game. Burning Wheel supports interacting with the setting. My Living World sandbox creates challenges to characters' beliefs and motivations.

The campaign structure I use, especially the "World in Motion" principle, is built to encourage interaction with the setting. Most of that happens through first-person roleplaying with NPCs, rather than scene-framed tests of character conviction.

The implications are:

There is more prep in my system, because I cannot predict where the players will go or who they will interact with. So I prepare the world, not a narrative, which means more prep than BW referees typically have to do.

Because my focus is on interacting with the setting, having a character with rich beliefs or motivations is optional, not required. A player can roleplay a version of themselves with the character's abilities and still have a fulfilling experience.

Beliefs and motivations only come into play if the player chooses to engage in situations where they matter. I don't design around challenging them; I design around the world reacting to their choices. The downside compared to Burning Wheel is that if a player expects their character's beliefs and motivations to be challenged, there is no guarantee that will happen.

In the end, both systems can yield great campaigns. But they are not the same system with different window dressing. The difference in focus, structure, and procedure leads to different experiences. That's why I emphasize that my approach is not just "what any good referee does"; it's a deliberately structured method with different assumptions, consequences to play, and outcomes.

And in addition to commenting on @Enrahim's post, this should answer @hawkeyefan's question about why first-person roleplaying is such a big deal: because it is one of the primary ways that the characters interact with the setting.

Also, to clarify another point that @hawkeyefan brought up about my XP system being neutral in regard to what the player chooses to do as their character.
 

Or, to ask the question in simple terms: How can the players affect the DM's decision about the plausibility of what the DM already knows, but the players do not and cannot yet know?
I’ve said before that my players can, and do, challenge outcomes, ask questions, and analyze what happened based on what they know or can discover. Just because they don’t have omniscient access to the campaign notes doesn’t mean they’re flying blind. They test the world, interact with it, and reflect on it across sessions. That’s how plausibility functions as a limiter: it has to make sense when interrogated over time, not necessarily in the moment.

If you think a constraint is only real if it prevents a referee from acting in the moment, even with prep, then this is an irreconcilable difference.

However, where you are mistaken is that in my Living World sandbox campaigns, the constraint is procedural: what happens must follow from world logic and prior events. That is not the same as “anything goes,” even if it isn’t immediately visible to the players.

To be clear, that still doesn't satisfy your condition that the referee can be prevented from acting in the moment. That is an irreconcilable difference.
 

Again, you're conflating different things. You cannot have logical outputs from completely illogical inputs.
You are the one conflating things here. Logic, or, more accurately, logical reasoning, is a process of reaching a conclusion by drawing inferences and making connections. While it's true that starting from a flawed premise will result in a flawed outcome (garbage in, garbage out), the actual process can still be rationally sound. That's what's being stated by @Maxperson and others.
 

I've been thinking about this and I don't think it is plausible for the old world, either. 12 of the 13 times occurred over a 778-year period, or once every 64.8 years on average. That's a less than 2% chance of it happening in any given year.

If someone in one of those years said in, say, February, after a new pope was selected, that he thought there would be three popes that year, it would be 1) unreasonable for him to think that, based on a less than 2% chance of it happening, and 2) improbable that he would be correct. Those fail to pass the reasonable/probable criteria for being plausible.
Hmm. I agree this may not meet the threshold for plausibility in the real world. Two percent is probably not enough, especially if our definition of plausibility shades towards probability rather than reasonableness.

I do think reasonableness should be weighted somewhat more heavily, perhaps. Given that we were thinking about baseball -- no, just me? -- there have been 326 no-hitters thrown since 1876, about two a season. Pedro Martínez never threw a no-hitter. Statistically speaking, it was unlikely he would, despite his dominance at the peak of his career. Better chance of having three popes. But if you told me that it was implausible that Petey would throw one on any given night, I don't know that I'd agree, unless the Sox were facing the Yankees.

And I think that it's this reasonableness that is more important in fiction. Plausibility in fiction has always struck me as something of a post hoc justification for whether or not the story is reasonably credible or reasonably free from contrivance rather than an indication of probability. For gaming, if we were playing Ars Magica (or some other game with a similar setting) such that the Papacy was important to our game, and the GM were rejecting out of hand a Year of Three Popes when whatever mental models or other heuristics he were using indicated it should happen because it only happened 2% of the time in the real world during the medieval period and was thus implausible, I'd be annoyed if I found out about it. It's plausible enough for fiction.
 

Well, here, the issue with the vagueness isn't that it can capture a lot of different methods.

It's that almost anything goes.
Yes, anything goes within the limits of what is plausible. What is plausible is determined and defined primarily when the setting is initially created, and then modified as the campaign goes on and the understanding of the setting increases, for both DM and Players.

That is what works for us and I don't know any other way to do it myself. I don't DM from on high and say this is the way all things are and have to be.

Some people DM differently and may have a much more authoritarian approach: "This is my world and this is how it works." I have seen them on these forums, in fact. That is not how I DM or how my group plays. So I can't really speak to how others DM and what their motivation is.

For us it works to be collaborative. Now, to be clear, I (the DM) do about 95% of the world building in our games, primarily because that is what I like to do and my players not so much. However, the crucial aspect is explaining my vision, listening to their vision and desires, and then working together to reach a common vision for the setting and campaign. Once we all buy into what the general game physics are (real life + magic, or whatever), we develop a general shared understanding of what is plausible.

Once you have a shared understanding of what is plausible, the DM acts as a limiter on the Players and the Players act as limiters on the DM. If there is a disagreement, even during play, you discuss it and come to an agreed-upon understanding. That agreement then informs the parameters of what is plausible in the setting/game moving forward.

What is plausible is not a constant set in stone on day 1. It is impossible to know everything that is plausible from the get-go. So as the game goes on, the definition of the setting and what is plausible in it grows over time.

Or, to ask the question in simple terms: How can the players affect the DM's decision about the plausibility of what the DM already knows, but the players do not and cannot yet know?
I take this to mean: what happens when there is a disagreement on what is plausible? I guess I should step back and say that the foundation of all of this is trust. The DM trusts the Players and the Players trust the DM. It would be hard for me to play a game I care about without mutual trust. So, I am assuming that everyone trusts that decisions and discussions are made with the interest of the game in mind and not individual agendas.

Now, back to your question. If I understand your question correctly, this is how it would happen in our games. There are, off the top of my head, two scenarios where this commonly occurs.

Scenario #1:
The DM makes a decision a Player deems implausible.
  1. The player describes their issue with the decision.
  2. The DM describes their reasoning for the decision.
  3. The group discusses the two sides and tries to build consensus on a direction. If that fails, we move forward with the majority opinion. We give maybe 5 min. max to this.
  4. If one party (DM or Player) is still unsatisfied, we discuss it some more at the end of the session. This is most likely to only change things moving forward, but it can change outcomes from sessions that were just completed.

Scenario #2:
A Player makes a decision the DM deems implausible. This could proceed like Scenario #1, but it could also continue like this:
  1. A Player makes a decision that the DM thinks is implausible.
  2. The DM considers the decision, but makes no comment. The DM adjusts how they view the plausibility of the setting.
  3. At the end of the session, the DM describes the scenario to the group and confirms that they agree with the Player's (and potentially now the DM's) understanding of the setting's plausibility. Discussion at this point proceeds very much like Scenario #1.
 

But that's exactly what the whole "internal logic" line of argument is claiming: that the decisions are not coming from the DM but rather from the "internal logic" of the setting. The fact that every single element of the setting outside of the characters comes from the DM apparently doesn't matter; the setting itself somehow has an internal logic and consistency that the DM examines and then uses to determine what is plausible before making things happen in the setting.

But, for some reason, this keeps getting ignored.
By "ignored," do you mean that's what everyone is actually saying?

You have really got to stop strawmanning people and actually address them honestly. Seriously.
 

Hmm. I agree this may not meet the threshold for plausibility in the real world. Two percent is probably not enough, especially if our definition of plausibility shades towards probability rather than reasonableness.

I do think reasonableness should be weighted somewhat more heavily, perhaps. Given that we were thinking about baseball -- no, just me? -- there have been 326 no-hitters thrown since 1876, about two a season. Pedro Martínez never threw a no-hitter. Statistically speaking, it was unlikely he would, despite his dominance at the peak of his career. Better chance of having three popes. But if you told me that it was implausible that Petey would throw one on any given night, I don't know that I'd agree, unless the Sox were facing the Yankees.

And I think that it's this reasonableness that is more important in fiction. Plausibility in fiction has always struck me as something of a post hoc justification for whether or not the story is reasonably credible or reasonably free from contrivance rather than an indication of probability. For gaming, if we were playing Ars Magica (or some other game with a similar setting) such that the Papacy was important to our game, and the GM were rejecting out of hand a Year of Three Popes when whatever mental models or other heuristics he were using indicated it should happen because it only happened 2% of the time in the real world during the medieval period and was thus implausible, I'd be annoyed if I found out about it. It's plausible enough for fiction.
The line for where plausible becomes implausible is blurry for sure.

With no-hitters, that to me is still very implausible for any given night, even for Petey. While the numbers have changed over the last 150 years, there are currently 30 teams that each play 162 games a season. That's 4,860 chances for pitchers to get a no-hitter. Two a season would be 1 out of every 2,430 chances at it. While it's certainly possible to have a no-hitter on any given night, and some pitchers are more likely to achieve it than others, it still doesn't seem plausible (reasonable or probable) that it would happen on any given night.
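A quick sketch of the same arithmetic (assuming the 30-team, 162-game schedule and the roughly two-per-season figure from the posts above; a rough league-wide average, not a model of any particular pitcher's odds):

```python
# Back-of-the-envelope check of the no-hitter figures above, assuming
# the modern 30-team, 162-game schedule and ~2 no-hitters per season
# (numbers taken from the posts, not independently verified).
teams, games_per_team = 30, 162
starts = teams * games_per_team            # 4,860 starting-pitcher chances per season
no_hitters_per_season = 2

p_per_start = no_hitters_per_season / starts   # 1 in 2,430, about 0.04%
print(f"chance per start: 1 in {starts // no_hitters_per_season} ({p_per_start:.2%})")
```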

I agree with you that reasonability is more important than probability, but reasonability is connected very closely to probability. An extreme long shot is not reasonable, due to its improbability. In a game with magic, gods who take a hand in the world, etc., plausibility stretches away from the real world with regard to what is probable or reasonable, but not by gigantic margins.

I'm not sure what you mean by, "...rejecting out of hand a Year of Three Popes..." Implausible things can happen in RPGs. They just remain implausible. Would you clarify what you meant by that?
 
