
How would you classify "Good by any means necessary"

delericho

Legend
Mardoc Redcloak said:
But either way you are sacrificing lives. If you kill one to save five, then you sacrifice one life for five; if you just let the five die, you sacrifice five for one.

Nope. If you act, you save five lives and take one. If you don't act... you do nothing. That five lives are lost is not your fault or your responsibility.

Mardoc Redcloak said:
Yes, you did. You are unavoidably part of the event, because if you had acted, they would not have died. Your inaction was a cause of their deaths - and you consciously, deliberately chose it knowing what the results would be.

Inaction, by its nature, has no consequences. Unless a force is applied, events will proceed along their pre-existing path.

Mardoc Redcloak said:
As well say that when a person shoots someone and she dies, he didn't bring about the event - after all, it isn't his fault that human beings die when shot in the head.

I fail to see how this is comparable - in that situation, you are explicitly taking action, by firing the gun.

Mardoc Redcloak said:
Being good is not about avoiding responsibility for things. It is about helping others - and that includes saving lives.

Agreed. But the Good person simply cannot sacrifice the lesser good for the greater. Otherwise, real problems ensue.

Since I'll be replying to Firelance's comments on the Laws of Robotics a bit further down, allow me to refer to them here:

In the initial version, the three Laws of Robotics are entirely benign, and lead to proper control of robots. However, as soon as you introduce the 0th Law, that robots cannot allow Humanity to come to harm, and allow that law to supersede all the others, you run into problems. Suddenly, it becomes entirely acceptable for the robots to enslave the human race, in the guise of keeping us safe and prosperous.
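The precedence problem can be sketched in a few lines of toy code. This is purely illustrative: the `permitted` function, the action flags, and the law definitions are all hypothetical stand-ins, not anything from Asimov, but they show how letting a higher-priority law answer first flips the verdict on the very same action.

```python
# Toy model of law precedence (illustrative assumptions throughout).
# Each "law" inspects an action and returns True (permit), False
# (forbid), or None (no opinion); the first law with an opinion wins.

def permitted(action, laws):
    """Evaluate laws in priority order; highest priority listed first."""
    for law in laws:
        verdict = law(action)
        if verdict is not None:
            return verdict
    return True  # no law objects

# A hypothetical action that harms a human "for humanity's sake".
action = {"harms_human": True, "protects_humanity": True}

zeroth = lambda a: True if a["protects_humanity"] else None
first  = lambda a: False if a["harms_human"] else None

# Under the First Law alone, harming a human is forbidden:
print(permitted(action, [first]))          # False
# Give the 0th Law precedence, and the same act becomes acceptable:
print(permitted(action, [zeroth, first]))  # True
```

The flip comes entirely from the ordering, which is the point of the objection: once "protect Humanity" outranks "do not harm a human", any harm can be laundered through the higher law.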

Mardoc Redcloak said:
This doesn't help your case, because the exact same logic could be used to justify the opposite action - what do you think the five people whose lives you saved would think? Once again, maximize the good, minimize the evil. If human beings (or, in the D&D case, sapient beings) are of equal moral worth, then five trumps one.

Only if the value is finite. Mathematics with infinite values is a bit wonky.

In any event, I believe that was my point entirely - ask the one whether the better consequence is him dying or not, and you'll get a very different answer from him than you would from the five.
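The "wonkiness" of infinite values is easy to demonstrate with IEEE floating-point infinities (a loose analogy, not a claim about moral arithmetic): if each life is worth infinitely much, five of them no longer outweigh one.

```python
import math

# With finite values per life, five outweigh one, as expected.
finite_value = 1.0
assert 5 * finite_value > 1 * finite_value

# With infinite values, the comparison collapses: five infinities
# and one infinity are indistinguishable.
infinite_value = math.inf
print(5 * infinite_value > 1 * infinite_value)   # False
print(5 * infinite_value == 1 * infinite_value)  # True
```

So the utilitarian "five trumps one" only goes through if lives are assigned finite, commensurable value in the first place.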

Mardoc Redcloak said:
The proper viewpoint is the objective one, measuring people's lives, preferences, and happiness equally instead of being partial to one or the other.

I would agree with that, but no human being has ever had an entirely objective viewpoint from which to judge these things.

Mardoc Redcloak said:
Certainly, but if the person making the decision did not know and had no reasonable way of knowing that this was going to happen, there is no basis upon which to hold him or her accountable for the failure.

At this point, though, I'm beyond the question of accountability. Since you cannot properly judge the consequences of an action, how can you possibly hope to make moral judgements as to the suitability of the action? Saving five lives at the cost of one sounds like a good trade-off, but there are so many permutations of possible outcomes that you're basically playing a lottery, and that strikes me as a lousy basis for making such important calls.

FireLance said:
It really depends on where you consider your moral responsibilities to start.

Indeed. And my answer is simple: your responsibility begins the moment you take action. Inaction doesn't count.

FireLance said:
Some would argue that inaction is equally a moral choice. After all, even though it applies to robots, Asimov's First Law of Robotics implies that allowing harm to come to another through inaction is just as bad as causing harm yourself.

The fun thing about that Law of Robotics, in the example of the runaway train, is that the robot finds itself trapped, unable to resolve the paradox caused by its programming. It cannot do nothing, or five people die, but it cannot act either, as that would cause one person to die.

However, the crucial thing about the Law of Robotics, as you relate it to morality, is that I fundamentally disagree: allowing harm to come about through inaction is Neutral. It's still a lousy thing to (not) do, since we're not called to Neutrality but to be Good, but it remains Neutral.

FireLance said:
In the simplest case, if you are standing on a river bank, there is a drowning man in the water, and there is a life preserver next to you, most people would agree that you have the moral responsibility to throw that man a life preserver and save him (even though he might eventually go into politics and become the Evil Leader - it's a risk we all take ;)).

I think most people would say that you should, and be fairly horrified at those who don't. However, to say that not acting would be Evil is to exclude the middle ground. Acting to save the man would be a Good action, despite the lack of cost to yourself, while not acting is Neutral. The Evil act would be to somehow make rescue harder, perhaps by walking away with the life preserver.

FireLance said:
Now, if obtaining the life preserver puts you in some danger - say, it is at the top of an old, rotted, life guard tower that could collapse at any moment, some people would argue that you no longer have a moral responsibility to try to save the man since you would be putting yourself in danger. It is good if you do, but it is not evil if you do not.

IMO, this merely heightens what was already there: acting is clearly Good in this case, while not acting remains Neutral.

FireLance said:
It gets even messier if you must hurt or kill someone else to save that man, for example, if the life preserver is guarded by an enemy of the drowning man. Still, if you can get the life preserver without killing the enemy, even if you had to punch him out or severely injure him, some people would still argue that it is a good act, although hurting someone would normally be evil.

This is trickier. However, here you are given a free hand by the fact that the enemy is specifically engaged in an Evil action, which you are permitted to oppose.

A much harder question occurs where the guardian of the life preserver is not an enemy of the man, but merely someone who wishes to charge for the use of his resource (the life preserver). Here, the guardian is not engaged in Evil, but is rather Neutral, which means you are not free to simply defeat him and move on. In this case, you are contemplating an Evil action (an assault) and a Good action (the rescue) together.

On balance, I would be inclined to side with the view that says you have to save the man. However, it is also a view that says the Evil action should then be punished accordingly, so that justice can be restored. (In the same way, I would steal food to feed my starving family, but that then leaves an Evil action which should be appropriately punished for the sake of moral justice.)

FireLance said:
I happen to agree with you :), but some people do find consequentialism to be an appealing moral philosophy.

It has the benefit that anything is acceptable, as long as it turns out well. Or if the consequences aren't too bad. This strikes me as very slippery ground on which to stand. Pathways to Hell, and all that.
 


Dannyalcatraz

Schmoderator
Staff member
Supporter
Even if the greater good somehow involves "cosmological evil," it's still good; you're still saving lives. The fact that you might go to Hell for it just makes your sacrifice more profound.

There was a short story in which the protagonist made the classic infernal pact- he sold his soul for a wish.

Satan, knowing this was a good man was quick to sign the deal, worried only that the man would make a truly altruistic wish...

His fears were realized when the man wished that everyone else would be forgiven and go to Heaven.
 

Kristivas

First Post
OK, just one simple question.

If you would...
1. Torture the man who poisoned the town to get the antidote..
2. Hit the switch so that the train kills 1 man instead of 5..
3. Throw the life preserver to the drowning man..
4. Kill "Baby Hitler" to prevent all the misery he caused..

Are you good? Neutral? Or evil?
 

Nifft

Penguin Herder
Kristivas said:
If you would...
1. Torture the man who poisoned the town to get the antidote..
2. Hit the switch so that the train kills 1 man instead of 5..
3. Throw the life preserver to the drowning man..
4. Kill "Baby Hitler" to prevent all the misery he caused..

Are you good? Neutral? Or evil?

Worse than evil: you're a META-GAMER.

Cheers, -- N
 

delericho

Legend
Kristivas said:
If you would...
1. Torture the man who poisoned the town to get the antidote..
2. Hit the switch so that the train kills 1 man instead of 5..
3. Throw the life preserver to the drowning man..
4. Kill "Baby Hitler" to prevent all the misery he caused..

Are you good? Neutral? Or evil?

Assuming that that is the pattern of your life: Evil, but very close to Neutrality. #1 and #4 are clear Evils, #3 is a clear Good (albeit one without risk or sacrifice), and #2 is an Evil mitigated heavily by five lives saved.

The character gets some points for obviously Good intentions, but alignment is almost entirely determined by actions, and here the Evil outweighs the Good.

However, it is worth noting that three of the four are extreme cases, and unlikely to all be met by the same person. As such, it is highly unlikely that this would be the pattern of your life. And, since alignment is determined by actions, merely ticking the 'Yes' box on our alignment survey doesn't shift you to Evil - you have to actually do these things. As was mentioned earlier in the thread, unless and until you're really in the situation, it's hard to be certain how you would actually respond. You might claim to be ready to kill Baby Hitler, but once you're standing over the crib, gun in hand, you might come to think differently. Or not.

Furthermore, even if a single person somehow did meet all four situations, and did act as indicated, there is still the other 99%+ of his life to consider, which would usually be far less eventful. So, I wouldn't be remotely surprised to find that the vast majority of people who fell into the 'Yes to all' category were actually Good, albeit a flawed and imperfect Good.
 

Mardoc Redcloak

First Post
delericho said:
Nope. If you act, you save five lives and take one. If you don't act... you do nothing. That five lives are lost is not your fault or your responsibility.

That's what we've been arguing about. ;)

Inaction, by its nature, has no consequences. Unless a force is applied, events will proceed along their pre-existing path.

Of course it has consequences. The consequence is that whatever is going on is not altered.

I fail to see how this is comparable - in that situation, you are explicitly taking action, by firing the gun.

But in order to argue that we are not morally responsible for the consequences of inaction, you must argue that if we are not the determining cause of something, it isn't our fault (at least, that's been the usual argument.) It follows from that line of reasoning that NOTHING but the action itself - the application of force to the weapon we are using - is our fault, since everything else is not determined by us, but rather by the circumstances and the laws of nature.

Agreed. But the Good person simply cannot sacrifice the lesser good for the greater. Otherwise, real problems ensue.

Since I'll be replying to Firelance's comments on the Laws of Robotics a bit further down, allow me to refer to them here:

In the initial version, the three Laws of Robotics are entirely benign, and lead to proper control of robots. However, as soon as you introduce the 0th Law, that robots cannot allow Humanity to come to harm, and allow that law to supersede all the others, you run into problems. Suddenly, it becomes entirely acceptable for the robots to enslave the human race, in the guise of keeping us safe and prosperous.

Why do you object to that? Probably because you hold that freedom is more important than safety and prosperity. I agree. But this can be incorporated into the standard by which we judge the consequences of our actions.

So, in this case, the action of enslaving humans has two relevant consequences:

1. Humanity is safe and prosperous.
2. Humanity is not free.

If we value freedom over safety and prosperity, even a consequentialist standard would lead to the conclusion that the action is wrong.
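That weighing can be made concrete with a minimal scoring sketch. The weights and consequence values here are invented for illustration; the only point is that a consequentialist standard which prices freedom above safety and prosperity does score the enslavement negatively.

```python
# Minimal consequentialist scoring sketch; all weights and values
# are illustrative assumptions, not a real moral calculus.
weights = {"safety": 1.0, "prosperity": 1.0, "freedom": 3.0}

def score(consequences):
    """Weighted sum over the consequences of an action."""
    return sum(weights[k] * v for k, v in consequences.items())

enslave    = {"safety": +1, "prosperity": +1, "freedom": -1}
leave_free = {"safety": 0, "prosperity": 0, "freedom": 0}

print(score(enslave))     # -1.0: the freedom loss outweighs the gains
print(score(leave_free))  # 0.0
```

With freedom weighted heavily enough, even a pure maximize-the-good standard rejects the robots' enslavement of humanity.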

Only if the value is finite. Mathematics with infinite values is a bit wonky.

Even if the value is infinite (and I don't think that makes much sense), we can only conclude that we have no OBLIGATION to sacrifice the one for the five. We still can do so if we wish to; we are merely exchanging things of equal moral value.

In any event, I believe that was my point entirely - ask the one whether the better consequence is him dying or not, and you'll get a very different answer from him than you would from the five.

Yes, and I agreed with you, and argued that that line of reasoning only helps my case.

I would agree with that, but no human being has ever had an entirely objective viewpoint from which to judge these things.

All we can do is try our best.

At this point, though, I'm beyond the question of accountability. Since you cannot properly judge the consequences of an action, how can you possibly hope to make moral judgements as to the suitability of the action? Saving five lives at the cost of one sounds like a good trade-off, but there are so many permutations of possible outcomes that you're basically playing a lottery, and that strikes me as a lousy basis for making such important calls.

But the alternative is no less subject to uncertainty. Perhaps the one life you spare will become the next Hitler or Stalin. We simply cannot avoid making a choice that involves uncertainty.

Morally, we make decisions that involve uncertainty all the time. To bring this back to D&D, sure, you might think the people your character kills are evil, but unless you have a paladin or cleric handy (and some guarantee against misleading spells), is it ever really certain? By this line of logic, the only characters that should be considered "good" are the ones who take Vow of Non-Violence.
 


Kristivas

First Post
Mardoc Redcloak said:
Morally, we make decisions that involve uncertainty all the time. To bring this back to D&D, sure, you might think the people your character kills are evil, but unless you have a paladin or cleric handy (and some guarantee against misleading spells), is it ever really certain? By this line of logic, the only characters that should be considered "good" are the ones who take Vow of Non-Violence.


I think that would be going a bit far. In terms of the gaming world, killing a sentient creature isn't inherently wrong. It's who you kill and why that makes it good or bad.
 

ruleslawyer

Registered User
delericho said:
Nope. If you act, you save five lives and take one. If you don't act... you do nothing. That five lives are lost is not your fault or your responsibility.

Inaction, by its nature, has no consequences. Unless a force is applied, events will proceed along their pre-existing path.
Just pointing out that these two statements are mutually contradictory. :)

To throw in my 2 cents, I'd probably say that *strictly* consequentialist ethics are fatally flawed; at some point, pursuing consequentialist actions takes you completely outside the realm of any deontological "good." However, it's a question of degree. No eminent ethicist I have ever heard or read views ethics as a binary issue; there are always degrees of action and degrees of good or evil.
 

delericho

Legend
ruleslawyer said:
Just pointing out that these two statements are mutually contradictory.

and

Mardoc Redcloak said:
Of course it has consequences. The consequence is that whatever is going on is not altered.

Okay, allow me to rephrase.

Consider the example of a runaway train that's on a track heading towards five workmen. There is a switch that will shift it onto another track where one man is working. Unfortunately, you're five miles away, so you can't take action.

Net result: Five men die, one lives.

Now, consider exactly the same situation, except that you are standing right next to the switch but choose not to change the tracks.

Net result: Five men die, one lives.

The result of inaction is exactly the same as if you had not been there at all. Hence, the net effect of your actions (or, in this case, inaction) is 0. That is what I meant by saying that inaction has no consequence.

Mardoc Redcloak said:
But in order to argue that we are not morally responsible for the consequences of inaction, you must argue that if we are not the determining cause of something, it isn't our fault (at least, that's been the usual argument.)

Yes...

It follows from that line of reasoning that NOTHING but the action itself - the application of force to the weapon we are using - is our fault, since everything else is not determined by us, but rather by the circumstances and the law of nature.

No. The determining cause of the death is that the person was shot in the head. The determining cause of the person being shot in the head is that you pulled the trigger. You are, of course, responsible for your actions and the consequences thereof.

By contrast, if you had not taken action, you would not have been responsible. However, I'm not aware of many people dying from not being shot in the head.

Mardoc Redcloak said:
(RE: Laws of Robotics)

Why do you object to that? Probably because you hold that freedom is more important than safety and prosperity. I agree. But this can be incorporated into the standard by which we judge the consequences of our actions.

Okay, then, another example: Applying the 0th Law, it becomes acceptable for the robots to engage in outright murder of those who would endanger humanity. Better that one die rather than the whole be harmed, right?

Except that out there we have scientists working on cold fusion. Once it is perfected, it is inevitable that someone will find a way to weaponise it. And, once it's turned into a weapon, it is inevitable that the weapon will be used by someone somewhere. Since we cannot un-invent something, the only way to stop this is to prevent the research, and the most logical way to do this is to eliminate the inventor.

Who, of course, is both innocent and acting with benign intentions. So, in the name of preventing harm to humanity as a whole, it becomes acceptable to murder.

Mardoc Redcloak said:
Morally, we make decisions that involve uncertainty all the time. To bring this back to D&D, sure, you might think the people your character kills are evil, but unless you have a paladin or cleric handy (and some guarantee against misleading spells), is it ever really certain? By this line of logic, the only characters that should be considered "good" are the ones who take Vow of Non-Violence.

True. And the 'classic' D&D adventurer - the guy who invades the lairs of humanoid races, puts them to the sword, and steals their treasure - should not be considered Good.

However, most of the D&D campaigns I've seen tend to a more quest-based structure, with the PCs taking up arms against some imminent threat or dire Evil. And, of course, there is also a huge number of campaigns that don't consider alignment particularly closely.
 
