D&D 5E About Morally Correct Outcomes in D&D Adventures [+]

Emoshin

So Long, and Thanks for All the Fish
So if you are wondering whether it is okay to steal a few dollars from the cash register at work, you have to imagine a world where everyone could do that any time they felt like it. If that seems like a bad idea, then the action is unethical, and you should not do it. Period. Even if you desperately need the money for medicine to save your sick child. No exceptions.
I forget, did Kant allow for qualifiers in a stated maxim in order to make it universally OK? i.e., is it okay to steal a few dollars from the cash register at work if I desperately need the money for medicine?
 



Guest 7034872

Guest
I thought deontology was rule-based ethics, not necessarily black-and-white? You could have conflicting duties (Hamlet comes to mind as one most Westerners would be familiar with).
That is correct; it has nothing to do with "black-and-white" scenarios.

Under careful constructions of them, both consequentialist/utilitarian ethics and deontological ethics admit of "shades of grey" regarding rightness/wrongness of action and/or degrees of duty. They diverge over what makes some action right or wrong, not over whether or not there can be degrees of rightness/wrongness. What deontology always does have, though, is a set of rules wherein at least some actions--irrespective of consequences--are always wrong no matter what. Deontologist philosophers, though, often energetically disagree with each other as to what the relevant rules are and which actions are unconditionally prohibited.
I forget, did Kant allow for qualifiers in a stated maxim in order to make it universally OK? i.e., is it okay to steal a few dollars from the cash register at work if I desperately need the money for medicine?
Kant himself was not warm on such ideas, but nearly all Kantians today intentionally construct their proposed rules to allow for things like this, though always only up to a point.
 

Emoshin

So Long, and Thanks for All the Fish
Kant himself was not warm on such ideas, but nearly all Kantians today intentionally construct their proposed rules to allow for things like this, though always only up to a point.
I also read that one issue with Kantian ethics is that in the real world, in real-time, it's going to take some time and thought to construct a fitting universal maxim and maybe this is an emergency and you need to make an ethical decision like right now!!

That's not necessarily a problem for the hypothetical author of an adventure who is tasked with coming up with at least one morally correct outcome.

Going with the + premise of this thread, is Kantian ethics a good moral framework to determine if, say, the Rebels should blow up the Death Star?
 


Guest 7034872

Guest
I also read that one issue with Kantian ethics is that in the real world, in real-time, it's going to take some time and thought to construct a fitting universal maxim and maybe this is an emergency and you need to make an ethical decision like right now!!
That concern is there, but I think it usually relies on a misreading of Kant's understanding of Reason. In classical philosophy, Reason was understood to be much more than the axiomatic-and-calculative sort, and I've long taken Kant to hold something similar, with his two alternative constructions of the Categorical Imperative used to help triangulate on that classical sense of "the light of Reason." It was never his view, for instance, that doing the right thing requires one always to sit down and calculate out what would happen in some possible world W where everyone always followed one's proposed maxim. The basic back-of-the-mind habit of upholding autonomy and avoiding heteronomy is often enough to do the trick.
 


Guest 7034872

Guest
Going with the + premise of this thread, is Kantian ethics a good moral framework to determine if, say, the Rebels should blow up the Death Star?
I always prefer to flip it around to this question: After figuring out, on independent grounds, which moral theory is correct, how do our results play out vis a vis the Rebels blowing up the Death Star?
 

Emoshin

So Long, and Thanks for All the Fish
I always prefer to flip it around to this question: After figuring out, on independent grounds, which moral theory is correct, how do our results play out vis a vis the Rebels blowing up the Death Star?
Oh. In the book (I hope that doesn't start sounding annoying), they run the same scenario through different ethical frameworks to compare the results. That was interesting to see each framework in action on the same benchmark.
 


Guest 7034872

Guest
Oh. In the book (I hope that doesn't start sounding annoying), they run the same scenario through different ethical frameworks to compare the results. That was interesting to see each framework in action on the same benchmark.
Makes sense. The issue I see there is that too many students (and professors, sometimes) will mistake intuitive or unintuitive results for acceptable evidence that the theory is sound or unsound, and that's false. Untutored moral sentiments are notoriously unreliable and often outright mutually incompatible, so going with whichever theory "fits my intuitions" stands in the way of me ever admitting that my intuitions about this could be wrong.

Contrariwise, if I start by working out to the best of my ability which moral theory is true, then I more easily can chase through all its resulting edicts about various actions' rightness or wrongness and start modifying my moral behavior to bring it in keeping with the theory I think is true.

I much prefer the latter on the grounds that my sentiments lie to me much more often than Reason does.
 

Blue Orange

Gone to Texas
Makes sense. The issue I see there is that too many students (and professors, sometimes) will mistake intuitive or unintuitive results for acceptable evidence that the theory is sound or unsound, and that's false. Untutored moral sentiments are notoriously unreliable and often outright mutually incompatible, so going with whichever theory "fits my intuitions" stands in the way of me ever admitting that my intuitions about this could be wrong.

Contrariwise, if I start by working out to the best of my ability which moral theory is true, then I more easily can chase through all its resulting edicts about various actions' rightness or wrongness and start modifying my moral behavior to bring it in keeping with the theory I think is true.

I much prefer the latter on the grounds that my sentiments lie to me much more often than Reason does.
I actually tried making a list of major decisions in my life and the number of times the gut was right and the number of times my head was right. I think my gut won out by a little.

Some of you might try doing this, actually--could be some people have better guts and some people have better formal reasoning. (I tended to rely more on reasoning, which may be why I'm successful but unhappy. But this is really one of the things that vary person to person!)
 

Emoshin

So Long, and Thanks for All the Fish
Some of you might try doing this, actually--could be some people have better guts and some people have better formal reasoning. (I tended to rely more on reasoning, which may be why I'm successful but unhappy. But this is really one of the things that vary person to person!)
For what it's worth, I think:
- for myself, my best decisions were a synthesis of intuition + reasoning. Intuition for where I didn't have enough information at hand. Reason for where I knew my intuition had blind spots from unconscious biases

- for other people and external circumstances, I try to lean more to reasoning, because I don't trust my intuition to make accurate inferences about externalities I don't fully understand
 

Clint_L

Hero
I forget, did Kant allow for qualifiers in a stated maxim in order to make it universally OK? i.e., is it okay to steal a few dollars from the cash register at work if I desperately need the money for medicine?
Kant hated consequentialist ethics so he was a hard-ass about rules.

Kant was a super hardcore rationalist, so really focused on the idea of moral certainty. He felt that certain moral truths were self-evident, exactly as mathematical postulates were (thought to be) self-evident, so you could use these elemental truths to build a universal system of ethics just as rigorous as mathematics.

And I forgot to bring rule utilitarianism into the conversation, but South by Southwest seems more up to speed so I will pass the buck.
 

The-Magic-Sword

Small Ball Archmage
Notably, a Kantian would probably lament how other violations of the moral rules create such situations in the first place -- e.g., part of the point of lying and cheating being firmly unethical is that you, as a person living in a world where people try to be good, aren't lied to or cheated.

Kant is much more compelling in a world that acknowledges systemic consequence and externality, in which his moral imperatives are accepted and imposed widely across a culture.

RPGs do tend toward deconstructing Kant by demonstrating the inability to ensure that others acknowledge the imperative, making it an unsatisfying solution.

In game theory terms, it primes you to be a sucker (in the formal, game-theory sense of the word). This makes it difficult to write a satisfying victory in a conflict framework that is also ethical.

I suppose one could frame punishing wrongdoing itself as an imperative.
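The "sucker" point above can be made concrete with a toy prisoner's-dilemma sketch. The payoff numbers below are the conventional textbook values (T > R > P > S), not anything stated in this thread; the strategies are hypothetical stand-ins for an unconditional Kantian cooperator and a defector:

```python
# Standard prisoner's-dilemma payoffs, keyed by (my move, their move).
# "C" = cooperate, "D" = defect.
PAYOFFS = {
    ("C", "C"): 3,  # R: reward for mutual cooperation
    ("C", "D"): 0,  # S: the "sucker's payoff"
    ("D", "C"): 5,  # T: temptation to defect
    ("D", "D"): 1,  # P: punishment for mutual defection
}

def total(my_moves, their_moves):
    """Sum my payoff across repeated rounds of play."""
    return sum(PAYOFFS[pair] for pair in zip(my_moves, their_moves))

# A rule-bound unconditional cooperator facing an unrepentant defector:
kantian = ["C"] * 5
defector = ["D"] * 5
print(total(kantian, defector))   # 0: suckered every single round
print(total(defector, kantian))   # 25: the defector cleans up
```

The gap between those two totals is exactly the problem for writing an adventure where the ethical strategy also wins: unless the setting enforces the imperative on everyone, unconditional compliance is the dominated strategy.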
 

Anon Adderlan

Explorer
Which is why the only moral framework should be the one provided by the players, in accordance with the characters they have created.
I'd argue it's the only framework which can be applied where endings are concerned.

Friendly reminder that this is a +++++ thread, which is experimental and means a lot of things, including positive contributions to the "What if?" hypothetical premise in the OP.
No amount of +++ will prevent criticism of a nonsensical premise though. And it cannot even be addressed until you explain how such endings can be enforced outside of player actions.

Holy sh*t

This is what the AI said:
In other words you need a well defined moral framework in order to present a moral ending. And even then it assumes no dilemmas and depends on player choices.
 

Shadowdweller00

Adventurer
The way I look at things is not from an ethical standpoint, but a character development and reward standpoint. On my own, I tend to favor gritty, grey-on-grey morality settings. But as DM, I consider it to be my duty to help facilitate personal goals for every player character.

That is to say, if a player is playing a con artist, I make sure there are occasional opportunities for bamboozling NPCs. If a character is a Big Damn Hero, I make sure to introduce opportunities for heroics. If I've got a cynical, noir-esque type, I offer some honor-fulfillment with a slice of bitter reality. Most of all, I try to show appropriate consequences for PCs' personal choices.

Heroic options are important for heroic characters. But so are less-than-heroic options. And they should be tailored to the PC's choices.
 

I'd argue it's the only framework which can be applied where endings are concerned.
Not entirely sure I understand this. It seems to mean "what is moral is whatever the PCs decide is moral" and that's...I mean technically it's a moral standard, in the sense that a refusal to choose at all is still a choice. But it's a null choice and I don't really see that as a "framework" in any meaningful sense.

No amount of +++ will prevent criticism of a nonsensical premise though. And it cannot even be addressed until you explain how such endings can be enforced outside of player actions.
Because a prewritten adventure is authored, just as a book or film is authored. A book can have a message or inherent moral compass even if individual characters do not act in accordance with that message or compass; generally, this will result in those characters becoming (or staying) unhappy or being punished (whether in a very practical way, e.g. legal consequences, or in a more symbolic way, e.g. enduring preventable suffering).

Vader turns evil, and suffers for it. His return to good requires a heroic sacrifice, which kills him, but the act allows him to obtain some measure of absolution, almost totally separate from Luke's own actions. (Heck, Luke briefly does embrace the dark side and then stops himself.)

In other words you need a well defined moral framework in order to present a moral ending. And even then it assumes no dilemmas and depends on player choices.
I dunno. I think it's quite possible to have "you need to resolve this dilemma in order to earn a happy ending" as a story element. It's a motive to induce people to Take A Third Option. Finding a way to save MJ and the busload of orphans.

And sometimes it will fail. Returning to Vader, you could argue that Luke "failed" to resolve the dilemma of stopping the Death Star and saving his father. He got the warm fuzzy consolation prize of his father redeeming himself, but not of actually saving his life. There's even an alternate-timeline comic where Leia went up to the Death Star with Luke, and things play out differently: they're able to save Anakin, but at the cost of failing to kill the Emperor, thus allowing the civil war to continue for longer. That pretty clearly paints this as some kind of dilemma, of having to choose which victories are worth seeking and where you're willing to accept an imperfect or symbolic victory.

Finally, D&D often includes actual deities, sometimes ones that are genuinely transcendental moral paragons. If Bahamut is a transcendental being literally made of pure Justice and Mercy and Goodness, then him telling you something is morally wrong is...kind of hard to argue with, within the premise of the story. Either you must reject that the story is what it claims to be, or you must somehow argue with (effectively) Goodness Itself embodied and conversant.
 

Lanefan

Victoria Rules
Because a prewritten adventure is authored, just as a book or film is authored. A book can have a message or inherent moral compass even if individual characters do not act in accordance with that message or compass; generally, this will result in those characters becoming (or staying) unhappy or being punished (whether in a very practical way e.g. legal consequences or in a more symbolic way e.g. enduring preventable suffering.)
A book can have these things because the author gets to present not only the moral compass but the characters' interactions with and-or reactions to it, and also controls the consequences.

An RPG module is a different matter. Here, while the adventure author can certainly (try to) write a message or inherent moral compass into the adventure, that author has no control over a) how the DM interprets and-or presents any of it and (more importantly) b) how the players will interact with it and-or react to it in character. The author also has little control over how - or if at all - the DM assesses consequences within the run of play.
Finally, D&D often includes actual deities, sometimes ones that are genuinely transcendental moral paragons. If Bahamut is a transcendental being literally made of pure Justice and Mercy and Goodness, then him telling you something is morally wrong is...kind of hard to argue with, within the premise of the story. Either you must reject that the story is what it claims to be, or you must somehow argue with (effectively) Goodness Itself embodied and conversant.
There's this, too: D&D's alignment-tied cosmology system strongly implies some sort of universal definitions of what comprises Good, Evil, Law and Chaos that the characters would likely know (if not necessarily adhere to); which while fine with me might not be fine for all.
 

