Consequences of playing "EVIL" races

Maxperson

Morkus from Orkus
Long before any of us were born, my son.

(in a typical Orcish might-makes-right society, it only makes sense they'd project that onto their deities, and thus the toughest of those deities would - in their mythology - kill off all the others)
Sure, but which edition established that lore?
 



Celebrim

Legend
but I also think they are very intelligent beings so I see no reason why some of them can't decide to be good or evil or whatever alignment the MM says they're supposed to be.

This statement was worth treating separately.

At the risk of jumping to a conclusion, I think you are using the word "intelligent" as a synonym for human. That is to say, you are supposing anything that has this quality "intelligence" must be more or less human, because humans are more or less the only intelligent thing you can think of, and so you assume that every intelligent thing will think and behave in a human fashion.

This is a very common science fiction trope and a very natural conclusion, but I think if you spend a few more minutes thinking about it, you'll realize it is a bit ridiculous. To point you toward that conclusion, let's consider an example of this human intuition that is obviously ridiculous. In almost every science fiction show featuring AI, if the AI encounters a beautiful human female, it will fall in love with her and attempt to romance her. Now, leaving aside that we could probably think of a few intelligence failure modes in a created AI that would produce this behavior, the idea that this would always and inevitably arise, especially in 'naturally arising' AIs, is an obvious failure of imagination. Most people, when they first notice this failure of imagination, hit upon the idea that the AI wouldn't naturally be attracted to a human female because they don't look alike, and so it wouldn't necessarily find the human female beautiful and attractive.

But that's yet more failure of imagination. The underlying assumption that is actually ridiculous is that an AI would experience a sexual impulse, or even a desire for companionship, at all. Feelings of arousal, desires for intimacy, and even loneliness are all modes of behavior that humans have to fulfill specific purposes. They are part of our 'design', as it were - whether you believe it is behavior by design or evolved fitness-increasing behavior doesn't matter. The point is that there is no particular reason the AI would have those emotional needs or emotional contexts, much less that they would be shown through the emotive displays (like frowning, tears, crossed arms, etc.) that humans use to communicate these states to other humans (which is also 'designed' behavior).

So no, WALL-E, upon seeing a curvaceous robot, would not evidence feelings of attraction for 'her'. And even if we imagined WALL-E experiencing some sort of bizarre intelligence failure mode arising from centuries of isolation and semi-random inputs, there is absolutely NO REASON EVE would ever respond to the now hopelessly dysfunctional WALL-E, nor is there really any reason for EVE to learn, or want to learn, WALL-E's emotional context. That entire subplot depends on the natural but entirely wrong assumption that intelligence implies humanity.

Ultimately, so does your argument about the dragon.

To understand why, let's first discuss yet another completely stupid trope that results from the first leap of imagination that humans make with respect to intelligent machines - that they have no emotions. This is ever so slightly more imaginative than just assuming they have the full human emotional context, but not much. The problem here is a failure to understand what emotion is. Humans are typically taught to think of emotion as something different from and separable from reason. This is in fact a very natural result of the experience of being human, and in particular of the way the human brain is wired up. In human brain wiring, it often feels like reason and emotion are competing with each other for the attention of the human consciousness. But all of that has to be remembered as yet another aspect of being human, not something general to all intelligence.

In fact, I put forward that it is impossible to be intelligent and not have emotions. It's just that those emotions do not in any fashion have to be like human emotions. Each intelligent thing is likely to have its own distinctive emotional context. To understand what I'm saying here, you have to look again at that human wiring and try to understand why humans experience emotion and what would happen if you took that emotion out of the reasoning process. In other words, what is the role of emotion in all forms of reasoning? Humans have a massively parallel processing mode that is the result of attempting to compute with chemical signals in a highly energy-efficient process while still maintaining high throughput. As such, humans separate the channels for 'logical' and 'emotional' processing and run them in parallel. The logical process addresses the question, "What am I experiencing?" and can tell you the difference between food, a lion, and your mom. The emotional process addresses the question, "What does this experience mean?" In other words, emotion is the part of reasoning that is goal-driven. Whatever goals an intelligence has, those will set its emotional contexts. The emotions tell the being what things mean, and how they should be valued.

People often mistake "emotions" for the emotional experience, or "feelings". This is a natural aspect of being human, since "feelings" are the reinforcing feedback loop of the emotional processing context. They are how the system reinforces goal-driven behavior. As a rational being you can, within some limits, take control of your emotions, but there are limits to that, and what you are actually doing is probably just recalibrating after realizing that some feedback loop is getting in the way of your own goals.

Point is that Data and Spock actually are experiencing emotions all the time. What they are not doing is making the emotional displays, nor are they under any compulsion to make emotional displays, in order to communicate emotional information to the other primates watching them. The emotions that they have are not entirely human emotions, and they can't be communicated very easily to anyone, any more than it's easy for you to communicate feelings to someone who doesn't experience your own. But the emotions are certainly there, and we know that they are there because Data and Spock can assign meaning to things and make value judgments. No matter what Spock may tell you, those value judgments are not wholly rational. We don't even live in a universe where you can make a mathematical system not depend on unprovable axioms, much less one where you can make value judgments based wholly on logic. There has to be something that makes you do your homework even though you know the heat death of the universe is inevitable in a scant few billion years.

For the dragon: if he has a set of values congruent with destructiveness, if he cannot change those values and has no desire to change them, and if every built-in emotional feedback loop makes him wholly miserable when he tries and happy when he doesn't, then it doesn't matter how intelligent the dragon is - he's still going to behave according to his very dragon-ish nature.

And that gets us to intelligence. Intelligence isn't what most people think it is either. But this essay is long enough already, so let me just say there is no such thing as "hard intelligence" or "general intelligence". (Or if there is, we have no examples of it.)
 



There are two sides in this conversation. One is saying, "Conceptualize orcs as always evil or in a more nuanced fashion. However that makes sense to you, go for it." And the other is saying, "Everyone that doesn't accept my opinion is a racist." That's the whole reason this conversation is ongoing. If everyone agreed that there wasn't one true way, we'd not have an argument.

Well, this is the internet; it's not a real conversation until we're calling each other Nazis, right?

As someone who veers toward the relativist side of the Alignment Wars, I've had to hold my breath and count to ten in response to plenty of posts. There are posts implying that the people who prefer an alignment-free universe are dupes of cultural fads or haven't thought deeply enough about their campaign metaphysics or are being even more racist because they must assume that orcs are name-your-real-world-ethnicity.

Add to that the diabolical legalism of RPG discussions—where every sentence gets picked apart to "prove" that the other person is, gasp, inconsistent—and it is hard to find any oxygen in here.

I'm still mystified about why everyone is quoting so much D&D lore and fluff in a non-D&D forum. In my forty years of gaming, I've seen so many variations on orcs and I can count on one hand the number of campaigns that included anything named Gruumsh.

But I, for one, heartily agree that there isn't One True Way.
 

Maxperson

Morkus from Orkus
I mean, they are very magical creatures, so w/e, but I also think they are very intelligent beings so I see no reason why some of them can't decide to be good or evil or whatever alignment the MM says they're supposed to be.

They can. In 2e there was a Planescape adventure where you had to get a good demon to safety. Pretty sure it was said somewhere that even when a creature is listed as always X alignment, there were still exceptions here and there.
 

Celebrim

Legend
Well, this is the internet; it's not a real conversation until we're calling each other Nazis, right?

Too true.

In my forty years of gaming, I've seen so many variations on orcs and I can count on one hand the number of campaigns that included anything named Gruumsh.

But I, for one, heartily agree that there isn't One True Way.

The above makes my point about Gruumsh better than I made it myself.
 

I'm still mystified about why everyone is quoting so much D&D lore and fluff in a non-D&D forum.

IIRC someone brought up the fact that Warhammer's orcs aren't very humanlike or elf-like either. In Warhammer, the Orcs are more similar, in terms of life cycle and biochemistry, to some kind of algae or fungus than they are to the setting's other beings.
 

Ovinomancer

No flips for you!
Well, this is the internet; it's not a real conversation until we're calling each other Nazis, right?

As someone who veers toward the relativist side of the Alignment Wars, I've had to hold my breath and count to ten in response to plenty of posts. There are plenty of posts implying that the people who prefer an alignment-free universe are dupes of cultural fads or haven't thought deeply enough about their campaign metaphysics or are being even more racist because they must assume that orcs are name-your-real-world-ethnicity.

Add to that the diabolical legalism of RPG discussions—where every sentence gets picked apart to "prove" that the other person is, gasp, inconsistent—and it is hard to find any oxygen in here.

I'm still mystified about why everyone is quoting so much D&D lore and fluff in a non-D&D forum. In my forty years of gaming, I've seen so many variations on orcs and I can count on one hand the number of campaigns that included anything named Gruumsh.

But I, for one, heartily agree that there isn't One True Way.
I'm very keen on the relativist argument, which is odd if you consider my [not allowed by board rules]. That said, I'm usually not very keen on relativism in my games - or, rather, I like to have situations that aren't entirely relativist, or even largely aren't relativist, in regards to alignment or the nature of creatures.

Usually, I just go with the monsters being monsters, because it's a leisure activity, and the blanket permission to not have to evaluate the morality of every act is part of that leisure activity. Much as we watch TV shows and can appreciate, maybe, complex moral dilemmas, we're insulated from them because we don't have to directly engage - the script does that for us. When presented with a Thanos, we don't really have to spend a lot of time evaluating whether Thanos is the bad guy - Captain America thinks he is, and that man's a paladin of virtue. We've seen him wrestle with difficult moral stuff, and he's done right by it, so if he's not wrestling with Thanos' moral position, it must not be much of a question. And that's fine, because the point of entertainment isn't to make us confront the stuff we deal with every day, but to provide a relief valve.

So, if orcs are usually easy-to-point-at bad guys, they're not serving as a subtle metaphor for [real life people] so we can engage in subtle displays of [real life politics]. I mean, I suppose they could be, if that's really what you and your players want, but usually it's not. It's Saturday morning cartoons, where GI Joe doesn't ever have to worry about whether Cobra Commander is a bad guy, because he's just a bad guy. And most games exist at this level of morality - orcs are just this week's bad guys that the heroes get to fight and prevail against, not a complex moral commentary on the plight of [real world stuff].

So, I tend to run primary-colors versions of morality in my games. If I'm not, I make sure that the group understands that I'm not, what that might or might not mean, and that they all have buy-in. Because, again, it's a leisure activity, not a struggle session. I like shades of grey, but I tend not to run that way because it makes things less fun (for me and my group, clearly). That still leaves me lots and lots of room to have interesting and complex bad guys. You can be obviously evil but still sympathetic - just do bad things for what sound like good reasons.
 
