Knoxgamer said:
As to the real world, I think it's silly to begin with to try to define an action as "evil" without choosing an ethical system with which to judge said action. Is it maximizing good on a global scale? Is it upholding the 10 Commandments? Does it allow for maximum freedom of decision for the population on the whole? Different ethical systems will result in somewhat different definitions of what is good and evil. A Utilitarian might find that a doctor who murdered her sick patients in order to harvest their organs, saving more lives than she otherwise could, is a good person, because she's maximizing good.
Actually, no. That's a ridiculous mischaracterization of utilitarianism. Read John Stuart Mill on the subject. He's particularly good at showing that objections to utilitarianism either misunderstand the system or miss the point entirely.
The basis for the ethical system is that we have, innately, a benevolent desire to see good be done. Utilitarianism claims that the amount of good in the world is best measured by the happiness of the people in it...in part because happiness gives us something to measure, while other ethical systems don't necessarily give us a way to check whether good is being done. So we want to maximize happiness. This is not a hard and fast rule. Mill points out that all ethical systems are merely codifications of the basic, unfocused desire to see good done. Hence all ethical systems are incomplete or lacking in some way, because this unfocused desire doesn't match up well with the realities, complexities, and contradictions inherent in real life.
The claim is that Utilitarianism does the best job of matching up with our innate benevolence, for a variety of reasons. One of these reasons is that instead of confining ourselves to a set of fixed laws, as in a deontological system, we can alter our approach if we find something that will make more people happier. Because we don't live only from moment to moment, but operate with a full understanding of history and common sense, we can prevent things like harvesting the organs of one to save many. This, as anyone can see, will end up being pretty darn bad in the long run, so we can see that happiness will in fact not be maximized by allowing such a thing. Utilitarianism is morality by pragmatism. What will result in the best outcome for all? What would we all agree is probably a good course of action? What do we do if the conditions change? The system provides tools for approximating the benevolent impulse that all good people have, and its supporters claim it does a better job of this than competing systems do, since the whole point of the system is to deliberately attempt to accomplish it.
</off topic>
Anyway, in my own campaign, the characters live in a world in which dragons have taken over completely and enslaved most people. The city the adventure began in was run by a red dragon of particular cruelty. The society is pretty much chaotic evil, although the citizens aren't necessarily. The dragon's guard, who carry out his evil will, are selected from people whom the dragon knows are compassionate and who will hate having to carry out that will. He won't employ anyone who might enjoy cruelty, since that pleasure is reserved for himself alone, and he enjoys seeing his servants perform actions that they hate. This also gives him the opportunity to spy on his servants and root out anyone who either enjoys the work or might try to subvert his will by secretly showing compassion.