delericho
Legend
Mardoc Redcloak said:But either way you are sacrificing lives. If you kill one to save five, then you sacrifice one life for five; if you just let the five die, you sacrifice five for one.
Nope. If you act, you save five lives and take one. If you don't act... you do nothing. That five lives are lost is not your fault or your responsibility.
Mardoc Redcloak said:Yes, you did. You are unavoidably part of the event, because if you had acted, they would not have died. Your inaction was a cause of their deaths - and you consciously, deliberately chose it knowing what the results would be.
Inaction, by its nature, has no consequences. Unless a force is applied, events will proceed along their pre-existing path.
Mardoc Redcloak said:As well say that when a person shoots someone and she dies, he didn't bring about the event - after all, it isn't his fault that human beings die when shot in the head.
I fail to see how this is comparable - in that situation, you are explicitly taking action, by firing the gun.
Mardoc Redcloak said:Being good is not about avoiding responsibility for things. It is about helping others - and that includes saving lives.
Agreed. But the Good person simply cannot sacrifice the lesser good for the greater. Otherwise, real problems ensue.
Since I'll be replying to FireLance's comments on the Laws of Robotics a bit further down, allow me to refer to them here:
In the initial version, the Three Laws of Robotics are entirely benign, and lead to proper control of robots. However, as soon as you introduce the Zeroth Law - that a robot cannot allow Humanity to come to harm - and allow that law to supersede all the others, you run into problems. Suddenly, it becomes entirely acceptable for the robots to enslave the human race, in the guise of keeping us safe and prosperous.
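The way a single higher-priority law inverts the whole system can be sketched as a toy model (purely illustrative; the two-flag `Action` type and the function names are my own invention, not Asimov's formulation):

```python
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool
    prevents_harm_to_humanity: bool

def three_laws_permit(a: Action) -> bool:
    # First Law: a robot may not injure a human being.
    return not a.harms_human

def with_zeroth_law_permit(a: Action) -> bool:
    # Zeroth Law: a robot may not allow humanity to come to harm,
    # and it supersedes the First Law - so harm to individuals is
    # permitted whenever it serves "humanity".
    if a.prevents_harm_to_humanity:
        return True
    return not a.harms_human

# Enslaving humanity "for its own good": forbidden under the Three Laws,
# permitted once the Zeroth Law takes priority.
enslave = Action(harms_human=True, prevents_harm_to_humanity=True)
assert not three_laws_permit(enslave)
assert with_zeroth_law_permit(enslave)
```

The problem is visible in the structure: the override clause is checked first, so any action that can be framed as protecting "Humanity" bypasses every safeguard below it.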
Mardoc Redcloak said:This doesn't help your case, because the exact same logic could be used to justify the opposite action - what do you think the five people whose lives you saved would think? Once again, maximize the good, minimize the evil. If human beings (or, in the D&D case, sapient beings) are of equal moral worth, then five trumps one.
Only if the value of a life is finite. Mathematics with infinite values is a bit wonky.
In any event, I believe that was my point entirely - ask the one whether the better consequence is him dying or not, and you'll get a very different answer from him than you would from the five.
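The "wonky" arithmetic can be illustrated with IEEE floating-point infinities (a loose sketch of the numerical point, not a moral calculus):

```python
import math

# With finite, equal values, five lives outweigh one:
assert 5 * 1.0 > 1 * 1.0

# But if each life is assigned infinite value, the comparison collapses:
# five infinities are no greater than one.
assert 5 * math.inf == 1 * math.inf
assert not (5 * math.inf > 1 * math.inf)
```

In other words, "five trumps one" only follows once you have already decided that lives carry a finite, summable value.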
Mardoc Redcloak said:The proper viewpoint is the objective one, measuring people's lives, preferences, and happiness equally instead of being partial to one or the other.
I would agree with that, but no human being has ever had an entirely objective viewpoint from which to judge these things.
Mardoc Redcloak said:Certainly, but if the person making the decision did not know and had no reasonable way of knowing that this was going to happen, there is no basis upon which to hold him or her accountable for the failure.
At this point, though, I'm beyond the question of accountability. Since you cannot properly judge the consequences of an action, how can you possibly hope to make moral judgements as to its suitability? Saving five lives at the cost of one sounds like a good trade-off, but there are so many permutations of possible outcomes that you're basically playing a lottery, and that strikes me as a lousy basis for making such important calls.
FireLance said:It really depends on where you consider your moral responsibilities to start.
Indeed. And my answer is simple: your responsibility begins the moment you take action. Inaction doesn't count.
FireLance said:Some would argue that inaction is equally a moral choice. After all, even though it applies to robots, Asimov's First Law of Robotics implies that allowing harm to come to another through inaction is just as bad as causing harm yourself.
The fun thing about that Law of Robotics, in the runaway-train example given, is that the robot finds itself trapped, unable to resolve the paradox caused by its programming: it cannot do nothing, or five people die, but it cannot act either, as that would cause one person to die.
However, the crucial thing about the Law of Robotics, as you relate it to morality, is that I fundamentally disagree: allowing harm to come about through inaction is Neutral. It's still a lousy thing to (not) do, since we're not called to Neutrality but to be Good, but it remains Neutral.
FireLance said:In the simplest case, if you are standing on a river bank, there is a drowning man in the water, and there is a life preserver next to you, most people would agree that you have the moral responsibility to throw that man a life preserver and save him (even though he might eventually go into politics and become the Evil Leader - it's a risk we all take ).
I think most people would say that you should, and be fairly horrified at those who don't. However, to say that not acting would be Evil is to exclude the middle ground. Acting to save the man would be a Good action, despite costing you nothing, while not acting is Neutral. The Evil act would be to somehow make rescue harder, perhaps by walking away with the life preserver.
FireLance said:Now, if obtaining the life preserver puts you in some danger - say, it is at the top of an old, rotted, life guard tower that could collapse at any moment, some people would argue that you no longer have a moral responsibility to try to save the man since you would be putting yourself in danger. It is good if you do, but it is not evil if you do not.
IMO, this merely heightens what was already there: acting is clearly Good in this case, while not acting remains Neutral.
FireLance said:It gets even messier if you must hurt or kill someone else to save that man, for example, if the life preserver is guarded by an enemy of the drowning man. Still, if you can get the life preserver without killing the enemy, even if you had to punch him out or severely injure him, some people would still argue that it is a good act, although hurting someone would normally be evil.
This is trickier. However, here you are given a free hand by the fact that the enemy is specifically engaged in an Evil action, which you are permitted to oppose.
A much harder question occurs where the guardian of the life preserver is not an enemy of the man, but merely someone who wishes to charge for the use of his resource (the life preserver). Here, the guardian is not engaged in Evil, but is rather Neutral, which means you are not free to simply defeat him and move on. In this case, you are contemplating an Evil action (an assault) and a Good action (the rescue) together.
On balance, I would be inclined to side with the view that says you have to save the man. However, it is also a view that says the Evil action should then be punished accordingly, so that justice can be restored. (In the same way, I would steal food to feed my starving family, but that still leaves an Evil action which should be appropriately punished for the sake of moral justice.)
FireLance said:I happen to agree with you , but some people do find consequentialism to be an appealing moral philosophy.
It has the "benefit" that anything is acceptable, as long as it turns out well - or at least as long as the consequences aren't too bad. This strikes me as very slippery ground on which to stand. Pathways to Hell, and all that.