D&D 5E About Morally Correct Outcomes in D&D Adventures [+]

Emoshin

So Long, and Thanks for All the Fish
So if you are wondering whether it is okay to steal a few dollars from the cash register at work, you have to imagine a world where everyone could do that any time they felt like it. If that seems like a bad idea, then the action is unethical, and you should not do it. Period. Even if you desperately need the money for medicine to save your sick child. No exceptions.
I forget, did Kant allow for qualifiers in a stated maxim in order to make it universally OK? I.e., is it okay to steal a few dollars from the cash register at work if I desperately need the money for medicine?

Guest 7034872

Guest
I thought deontology was rule-based ethics, not necessarily black-and-white? You could have conflicting duties (Hamlet comes to mind as one most Westerners would be familiar with).
That is correct; it has nothing to do with "black-and-white" scenarios.

Under careful constructions, both consequentialist/utilitarian ethics and deontological ethics admit of "shades of grey" regarding the rightness or wrongness of actions and/or degrees of duty. They diverge over what makes an action right or wrong, not over whether there can be degrees of rightness or wrongness. What deontology always does have, though, is a set of rules wherein at least some actions, irrespective of consequences, are wrong no matter what. Deontologist philosophers often energetically disagree with each other, however, as to what the relevant rules are and which actions are unconditionally prohibited.
I forget, did Kant allow for qualifiers in a stated maxim in order to make it universally OK? I.e., is it okay to steal a few dollars from the cash register at work if I desperately need the money for medicine?
Kant himself was not warm to such ideas, but nearly all Kantians today intentionally construct their proposed rules to allow for things like this, though always only up to a point.

Emoshin

So Long, and Thanks for All the Fish
Kant himself was not warm to such ideas, but nearly all Kantians today intentionally construct their proposed rules to allow for things like this, though always only up to a point.
I also read that one issue with Kantian ethics is that in the real world, in real time, it's going to take some time and thought to construct a fitting universal maxim, and maybe this is an emergency and you need to make an ethical decision like right now!!

That's not necessarily a problem for the hypothetical author of an adventure who is tasked with coming up with at least one morally correct outcome.

Going with the + premise of this thread, is Kantian ethics a good moral framework to determine if, say, the Rebels should blow up the Death Star?

Guest 7034872

Guest
I also read that one issue with Kantian ethics is that in the real world, in real time, it's going to take some time and thought to construct a fitting universal maxim, and maybe this is an emergency and you need to make an ethical decision like right now!!
That concern is there, but I think it usually relies on a misreading of Kant's understanding of Reason. In classical philosophy, Reason was understood to be much more than the merely axiomatic and calculative sort, and I've long taken Kant to hold something similar, with his two alternative constructions of the Categorical Imperative used to help triangulate on that classical sense of "the light of Reason." It was never his view, for instance, that doing the right thing requires one always to sit down and calculate what would happen in some possible world W where everyone always followed one's proposed maxim. The basic back-of-the-mind habit of upholding autonomy and avoiding heteronomy is often enough to do the trick.

Guest 7034872

Guest
Going with the + premise of this thread, is Kantian ethics a good moral framework to determine if, say, the Rebels should blow up the Death Star?
I always prefer to flip it around to this question: After figuring out, on independent grounds, which moral theory is correct, how do our results play out vis-à-vis the Rebels blowing up the Death Star?

Emoshin

So Long, and Thanks for All the Fish
I always prefer to flip it around to this question: After figuring out, on independent grounds, which moral theory is correct, how do our results play out vis-à-vis the Rebels blowing up the Death Star?
Oh. In the book (I hope that doesn't start sounding annoying), they run the same scenario through different ethical frameworks to compare the results. It was interesting to see each framework in action on the same benchmark.

Guest 7034872

Guest
Oh. In the book (I hope that doesn't start sounding annoying), they run the same scenario through different ethical frameworks to compare the results. It was interesting to see each framework in action on the same benchmark.
Makes sense. The issue I see there is that too many students (and professors, sometimes) will mistake intuitive or unintuitive results for acceptable evidence that the theory is sound or unsound, and that's a mistake. Untutored moral sentiments are notoriously unreliable and often outright mutually incompatible, so going with whichever theory "fits my intuitions" stands in the way of my ever admitting that my intuitions could be wrong.

Contrariwise, if I start by working out to the best of my ability which moral theory is true, then I can more easily chase through all its resulting edicts about various actions' rightness or wrongness and start modifying my moral behavior to bring it in line with the theory I think is true.

I much prefer the latter on the grounds that my sentiments lie to me much more often than Reason does.

Blue Orange

Gone to Texas
Makes sense. The issue I see there is that too many students (and professors, sometimes) will mistake intuitive or unintuitive results for acceptable evidence that the theory is sound or unsound, and that's a mistake. Untutored moral sentiments are notoriously unreliable and often outright mutually incompatible, so going with whichever theory "fits my intuitions" stands in the way of my ever admitting that my intuitions could be wrong.

Contrariwise, if I start by working out to the best of my ability which moral theory is true, then I can more easily chase through all its resulting edicts about various actions' rightness or wrongness and start modifying my moral behavior to bring it in line with the theory I think is true.

I much prefer the latter on the grounds that my sentiments lie to me much more often than Reason does.
I actually tried making a list of major decisions in my life, counting the number of times my gut was right and the number of times my head was right. I think my gut won out by a little.

Some of you might try doing this, actually; it could be that some people have better guts and some people have better formal reasoning. (I tended to rely more on reasoning, which may be why I'm successful but unhappy. But this is really one of those things that vary from person to person!)

Emoshin

So Long, and Thanks for All the Fish
Some of you might try doing this, actually; it could be that some people have better guts and some people have better formal reasoning. (I tended to rely more on reasoning, which may be why I'm successful but unhappy. But this is really one of those things that vary from person to person!)
For what it's worth, I think:
- for myself, my best decisions were a synthesis of intuition + reasoning: intuition where I didn't have enough information at hand, reason where I knew my intuition had blind spots from unconscious biases.

- for other people and external circumstances, I try to lean more toward reasoning, because I don't trust my intuition to make accurate inferences about externalities I don't fully understand.

Clint_L

Legend
I forget, did Kant allow for qualifiers in a stated maxim in order to make it universally OK? I.e., is it okay to steal a few dollars from the cash register at work if I desperately need the money for medicine?
Kant hated consequentialist ethics, so he was a hard-ass about rules.

Kant was a super hardcore rationalist, so he was really focused on the idea of moral certainty. He felt that certain moral truths were self-evident, exactly as mathematical postulates were (thought to be) self-evident, so you could use these elemental truths to build a universal system of ethics just as rigorous as mathematics.

And I forgot to bring rule utilitarianism into the conversation, but South by Southwest seems more up to speed, so I will pass the buck.
