
Is any one alignment intellectually superior?

Which alignment is intellectually superior?

  • Any Good: 14 votes (4.3%)
  • Any Evil: 1 vote (0.3%)
  • Any Neutral: 8 votes (2.4%)
  • Any Lawful: 15 votes (4.6%)
  • Any Chaotic: 2 votes (0.6%)
  • Lawful Good: 12 votes (3.6%)
  • Lawful Neutral: 24 votes (7.3%)
  • Lawful Evil: 21 votes (6.4%)
  • Neutral Good: 35 votes (10.6%)
  • (True) Neutral: 35 votes (10.6%)
  • Neutral Evil: 6 votes (1.8%)
  • Chaotic Good: 9 votes (2.7%)
  • Chaotic Neutral: 6 votes (1.8%)
  • Chaotic Evil: 2 votes (0.6%)
  • None: 132 votes (40.1%)
  • Other: 7 votes (2.1%)

  • Poll closed.


Celebrim, I was afraid you would say that. The problem is that you are taking the term 'intellect' as a functioning of the brain in reference to 'instinct' or 'evolutionary memory', while most people, I believe, consider intellect a metaphysical condition resultant from an obvious biological birth. Every action we take has a biological purpose or motivation, simply because we are biological entities. Therefore, the biological argument is largely discarded as being constant (not always; John liked his patchwork-brain example, which I cringed at). We cannot escape the obvious connection: our brain is the command centre and really cannot be discounted. But I believe this is a logical trap, as the same can be said of physical conditioning, survival... all things. So how relevant is an argument that relates the same source over and over? Not very, unless it is a new thought for the basis of a new assumption.
I think most people here would agree with you in a strictly biological sense. But, as I said, many people discount that as obvious or irrelevant to a topic that has been a metaphysical argument for centuries. Intellect as a separate experience devoid of biological motivation (like a lab experiment, if you will), coupled with alignment (for lack of a better word), may be examined. Perhaps you don't want to do this, or feel it is counter-intuitive or just plain short-sighted, but that is why I feel your arguments aren't mixing well with the others. At first I thought it was a matter of execution of action deriving a moral set, as many of your examples relate to after-the-fact demonstrations of morality, and that the conflict was a matter of perspective concerning where intellect and alignment meet. But that is why I asked my question. I don't think I could ever give you a convincing argument if you are going to structure your idea of intellect and alignment through biology, as that filter will continue to couple them through any argument.

Storyteller01, yeah, I know...but I was looking for a good discussion that didn't contain a conflict of assumptions. And it seemed you were happy to play devil's advocate. Your argument was clear.
 

Celebrim said:
But, for the life of me, I can't see how the article and your pet theory on alignment have anything to do with anything. I'm glad you found the article stimulating, but if anything, all it proves is that one of the biggest problems with the law/chaos axis is that it's not well defined and has been inconsistently defined, so almost everyone has their own theory about what it means. I don't buy your definition, and I don't think that you can really support that definition of the law/chaos axis if that's what you are trying to push, but have fun with it.

No, I don't think it clearly fits the alignment system as written. But I do think it suggests that what differentiates many moral responses is not an emotionless intellectual evaluation of the facts but the emotional response related to empathy. I think it is no mistake that one of the defining characteristics of a sociopath is a lack of empathy, not a low IQ.

Celebrim said:
It doesn't matter. The average person, I think, will still emotionally attach greater value to the one he chooses as intellectually superior, including 'none'. It won't always happen. You may be an outlying point, in the same way that your researchers were only able to predict the subjects' actions 70% of the time.

Unless you have any data to support that, your argument seems to be simply begging the question.

Celebrim said:
That isn't what I said is it? You continue this misunderstanding all through the response. What I said is that people played characters that were either very like themselves in some way, or else that they played characters that were very unlike themselves in some way.

I forgot to put the opposite in my example. But, yes, I got that.

Celebrim said:
They didn't tend to play tangential characters, and it usually didn't even occur to them that such characters existed.

And my anecdotal experience includes a lot of people who do play tangential characters. At least half the people in my regular group do this.

Celebrim said:
I've played characters I disliked, characters with a moral outlook radically different from my own, and just about every character I ever played has done things which were not in their own best interests. But looking back over my own characters, even before I realized it, the characters I invested in emotionally essentially had two archetypes. And that's been true of every player I've ever met, even though their archetypes were often radically different from mine. Either they are playing a surrogate self, or they are playing a disposable self with which they can safely do things they would not otherwise do, so that they can explore what it might be like, or they played both.

Yes, I've role-played with people like that. There are certainly role-players with one or two archetypes that they almost always play. And I'll even agree that some were likely doing exactly what you describe. But I've also played with plenty of role-players who don't always play characters like that. What's normal? Unless you've played with tens of thousands of people, I'm not sure that either of us has role-played with a large enough sample size.

Celebrim said:
They either primarily identified with and created big, exaggerated versions of themselves, or they primarily fantasized and did escapist stuff which they'd never do in real life. A complex role player probably can and does mix and match a little in the same character, but I've yet to meet one who plays the not-self who is not informed by their inner self.

So you've never met anyone who can play characters outside of a fairly narrow range of mindsets? I know of at least two approaches to playing a character that can produce those results.

I also know of at least one reason why some people who can probably don't. Many people have the same expectation that you do -- that a character provides some sort of window into the player's psyche. It's kind of difficult to play a coldly ruthless sadist when you aren't one if the person sitting across the table from you is going to assume that your character shows you are really a coldly ruthless sadist inside. In fact, I tend to play characters that "get along" when I play with new groups, simply to avoid player/character confusion, which is perhaps why your assumption that character choices are predictive of a player's psyche bothers me so much.

Celebrim said:
And I answered the abstract question of who would do the better job making trains run on time. Ultimately, it would not be the Nazis. In 1945, the trains did not run on time. In fact, they'd had problems running on time for a while. In just 10 years or so, the Nazis completely flubbed up maybe the best rail system in the world. That took serious ineptitude. No, that took more than ineptitude. That took evil. So ultimately, you can't separate the two. You take Nazis for train schedulers, and pretty soon your rolling stock is wasting time shipping out tanks and human chattel.

When they needed to pick someone to help put an American on the moon, did they pick someone who used slave labor to build rockets for the Nazis or a Nun? Or are you going to tell me about how inept and inefficient their rocket and jet engine programs were, too? Yes, there were plenty of problems there, too. But that didn't stop the Nazis from doing some things quite well, from making beautiful propaganda movies that are still considered influential today to building the first Volkswagen.

Celebrim said:
I think that it is self-evident that they are the same thing. If a theory is intellectually superior to another, it is intellectually superior because it is correct. If it is correct, then it is also moral, and hence morally superior. To imagine the opposite, that a position could be intellectually superior but incorrect, implies a contradiction.

What do you mean by "correct"?

Celebrim said:
If the position was the incorrect one, the intellectually superior position would then be to recognize its incorrectness, and then to adopt the correct position.

You seem to be assuming that the morally correct position can be intellectually derived. I do not. And I think the evidence presented in that article, as well as the behavior of sociopaths and the shortcomings of utilitarianism and moral relativism, suggests that a position derived entirely from intellectual reasoning is often quite morally wrong, because the factors that make one moral or immoral (in a Good or Evil sense) are not entirely rational.

Celebrim said:
And if the position was immoral, it could not also be correct since that would imply that it is right to do what is wrong.

Depending on the objective, the correct solution can be the immoral solution. You seem to want to argue that either the fact that the solution is correct automatically makes it moral or the fact that it is immoral automatically makes it incorrect. I disagree, because the factor that makes a course of action moral or immoral may have nothing to do with the reason why an action is correct or incorrect. If this were not the case, we would not have any moral dilemmas because the correct solution would always agree with the moral solution.

Celebrim said:
And if no position was intellectually superior, then that would imply that there was no such thing as right or wrong - but then that wouldn't save you either, because then the intellectually superior and correct position would be that there is no right and wrong (itself a moral position). Whichever way you go, the intellectually superior position is the morally superior position.

Claiming that no position is intellectually superior does not imply that there is no such thing as right and wrong. It may simply mean that right and wrong cannot be intellectually derived. There are plenty of examples of people who justify acts on sound utilitarian grounds who are still considered immoral. Why? Because an intellectual utilitarian assessment of the situation is not the only basis upon which people make moral decisions, nor is it even the primary basis.

Celebrim said:
Don't assume I can't imagine people thinking differently. That's what role players do. What I can't do (and maybe you are exceptional and can) is invest an emotional stake in something like that. Or to put another way, I could DM the character, but I'd never choose to PC such a character because it would bore me in the long term.

I know other people who can or do. I'll agree that I personally find it difficult to climb into the head of someone that I really don't like, but (A) I've done it for long games to understand the mindset and (B) not everyone plays characters over a long term in this hobby. Long campaigns are common but that's not the only way people play.

Celebrim said:
First, I think its reasonable to say that the Japanese are a very moral people.

I think that's a very simplistic analysis of the Japanese. That's about all I can say without going into details that are sure to offend someone.

Celebrim said:
Which isn't unexpected either, since I would expect the single most popular alignment to be 'alignment doesn't really have any meaning', or what I've referred to elsewhere as 'neutral apathetic'.

You are assuming that the answer "None" means "alignment doesn't really have any meaning" or implies apathy. I don't think that several of the detailed responses say anything like that. Of course you can simply dismiss them as outliers or claim they are hiding their true feelings, but at what point are you really looking at the data and at what point are you simply forcing the data to fit your theory?

Celebrim said:
I didn't say anything about automatic. I am speaking in tendencies and trends, and I still claim that analogies to moral ideologies are inadequate. Maybe an admiration for Aztec art doesn't mean anything because your average Aztec art piece wouldn't tread on anyone's sensibilities, but eventually I could find some pieces that would tread on that revulsion center you place so much value in - and if you don't believe that, you haven't seen enough Aztec art.

And that some people can appreciate Aztec art despite the fact that it treads on their emotional sensibilities and makes them feel revulsion is a triumph of intellect over emotion. Intellectually, the art is harmless and what it depicts is simply a representation. It's only through an emotional investment in what is being depicted and empathy for the victims being depicted that we feel any revulsion at all. Revulsion is not an intellectual response. It's an emotional response. Divested of an emotional response, Aztec art is technically superior to plenty of other art. To me, this is what seems self-evident.

Celebrim said:
What the response was then would probably say something about people's moral valuations, but I doubt that they would be particularly authentic and interesting observations unless the person wasn't aware that they were being observed.

Given that the same response can have many causes, I don't think that the response alone predicts anything.


Celebrim said:
I think that's interesting, but I don't think it necessarily follows. I think you've got an interesting theory there, but I think you are failing to recognize the limitations of basing mythic first causes on derived things.

Given how much of your argument is based on anecdotal evidence, simple assertion, and claims of self-evidence, I'm not really sure what you are expecting here.

Celebrim said:
Proved, no. That requires a more sophisticated understanding than is possible for someone that lives only in the present. That good is superior on utilitarian grounds is something that I take as a reasonable assumption from the available evidence, even if it doesn't pass the 'beyond a reasonable doubt' test required for proof.

And if someone doesn't believe that Good is superior on utilitarian grounds but still believes that Good is superior, how does that fit into your theory?

Celebrim said:
Sure, I've run evil characters as a GM. But that's different. The pattern you give is pretty typical. You probably strongly self-identify as good, so you don't want to play evil. But you do want to play characters that can struggle with the challenge of being good, and who walk the line closer than you'd ever dare in real life.

Is your claim that it's "typical" for people to run their own alignment or its opposite, or that it's "typical" for people to explore the boundaries of their own alignment? The insight that Good people often don't enjoy playing Evil characters isn't all that profound.

Also note that I've played characters to explore other people's psyches and have had to psychoanalyze my own characters to understand what they were doing and feeling. It's not a matter of exploring boundaries so much as doing something to see how it plays out -- to see what sort of decisions a particular mindset or set of priorities produces. That I avoid Evil characters has little to do with my conscious self-identification and everything to do with that gut feeling of revulsion at what the characters are doing. I don't want to spend too much time with a character I don't like any more than I'd want to spend too much time with a real person that I don't like. But that doesn't mean that I can't climb inside of an Evil mind and do an effective job of playing it.

Celebrim said:
I had a friend who self-identified as good who felt so uncomfortable with that that he never played non-neutral characters (though he could DM bad guys just fine). I had another friend who played only lawful goods and chaotic neutrals. I've seen that axis A LOT, and I tend that way myself.

And I don't think I've ever seen that axis, which is why I don't place a lot of weight on anecdotal evidence like this. Yes, I agree with the basic premise of your argument, which is that you can learn something about people from the characters they play and the alignments they pick, but I don't think it falls into any single pattern. And once there are multiple reasons why people might make the same choices about characters or alignment, the ability to draw conclusions about a person from their choice is limited, if not non-existent. Your insights are certainly interesting, but I think it's a stretch to make too many assumptions based on limited data like the responses to this question.

Celebrim said:
(That's one of the reasons I don't trust my self-identification. I think I'm probably flattering myself when I self-identify as NG, and that I'm probably really LG.) On the other hand, have you ever played with groups in which the players self-identify as evil and who never play good characters? I've been in three. Their LNs aren't flirting with the idea of evil, but with the idea of good.

I've never played with a group that self-identified as Evil. And I'm not sure that I'd want to.
 

fusangite said:
This is sophistry here. You could substitute the word "freedom" with any word people see as an end in itself: beauty, love, etc. and say the same thing.

Fair enough. I was thinking specifically in terms of the Good and Evil axis. From that perspective, everything else is means toward an end, including freedom. But it's fair to point out that people on the Law and Chaos axis would view order and freedom as ends. From that perspective, Good and Evil become means. And that's ultimately why I think the corner alignments are unstable.

fusangite said:
Furthermore, with this kind of reductionist logic, the only "end" that could be sustained in this reasoning is "happiness"; you could argue that everything -- freedom, love, beauty, etc. is just a means to happiness.

Actually, even that's not sustainable.

fusangite said:
And I know you don't intend to say that because you're pretty consistently opposed to utilitarian theories of morality.

Well, there is "utilitarian" and "Utilitarian". There is the specific sense that happiness is the desired end and the broader sense of doing "moral math" toward a more selfish end. But I agree with your point and thought I said as much in my reply.

fusangite said:
Some evil does this. Some evil causes this pain because it is indifferent to the pain it causes. The rules seem, from my reading, to acknowledge both a sadistic evil and an indifferent evil. The rules don't seem to support the idea that evil always equals sadism; it sometimes equals sadism.

If it's the indifference of a cat playing with a half-dead mouse to teach its kittens to hunt, then I think it should be Neutral, because that's what a cat is. A cat simply doesn't understand the concept of having empathy for the mouse. Many people interpret this sort of play as cruel because they project human empathy into the situation, but the cat does not understand what it is doing to be cruel. I'm not sure why sentience should change that, and the American justice system certainly uses conscious knowledge of right and wrong as a criterion for treating people as insane rather than as criminals.

I personally think that this is one of those areas where the alignment system falls down trying to be everything to everyone. So I'm projecting a bit of what I think it should be rather than what it is here. I don't think the problem is with the idea of an alignment system or even the specific alignments but how vaguely the RAW defines them.

fusangite said:
Would you make that argument in today's world? I'm an annoying Canadian socialist and even I wouldn't go that far.

What argument is that? I can't tell exactly what you are asking from the edit. Would I call a person who tortures for pleasure Evil and a person who tortures out of perceived necessity Neutral? Depending on the nature of the torture, possibly.

fusangite said:
But the rules as written contradict this statement. The DMG specifically says that a DM should not take notice of stated intentions when determining alignment and must only take notice of PCs' actions.

While I think there is a practical reason for that, do you know any DM that actually pulls this off?

fusangite said:
(a) intelligence appears to be a criterion for alignments that permit means-ends distinctions; the more sophisticated the means-ends distinctions, the narrower the range of alignments available.

I don't think I agree with that. I think that huge alignment differences can be produced simply by adjusting some basic priorities or assumptions. Ultimately, I don't think that the ends and means distinction matters nearly as much as what you settle on for desired ends. In many instances, I think intellect is used primarily to justify the means to a desired end, but it's the end that's desired that determines alignment. You'll find the same thing in real-world politics. I don't think that any one political perspective corners the market in intellect, even though they all claim that they do.

fusangite said:
(b) I brought up Coventry because the D&D alignment system poorly aggregates the data in any situation where a group is sacrificed for a greater good and produces essentially incoherent results

I don't think that it has to, though it might by the RAW.

fusangite said:
This term "strange bedfellows" I assume is what you mean by the fact that anyone with a complex relationship between their goal and strategy is returned as "neutral" under the system.

No, not at all. I mean that you could have a Lawful Good paladin who smites Evil without pause and a Lawful Good pacifist who would never kill another living creature. You'll see the same thing in coalition politics. Whenever you force a complicated situation into a few categories, you wind up grouping together things that don't necessarily agree with each other or like each other, yet both fit the category description.

fusangite said:
This, returning to Coventry, is my point: I think there is something wrong with the system when it places radically dissimilar individuals in the "neutral" category simply because their goals and strategies exist in a sophisticated or complex relationship to one another.

I think it was a Neutral decision which does not necessarily make the person making the decision Neutral.

fusangite said:
While true, I would argue that D&D is predisposed to place geniuses in the "neutral" category, which is especially annoying because that's the same category it uses for creatures too stupid to have a morality.

Possibly. I think the more annoying aspect is that they use Neutral for both apathy and balance.
 

Celebrim said:
I think you can't really separate 'intellectual choice' from 'cultural conditioning' all that much. The two things shape each other.

I think you can separate the two, even if they are often mixed or combined to produce a result.

Celebrim said:
The fact that these values are something which were held before we held them and which are generally held reflexively doesn't mean intellectual choice hasn't gone on.

But it can mean that, and it generally does mean that intellectual choices are normally no longer part of the decision-making process. Reflexes are reflexes because they don't pass through an intellectual decision-making process.

Celebrim said:
We don't ever stop intellectually responding to our surroundings, and the more intelligent you are the more this is true.

In my experience, this is not true. In fact, you seem to say the same thing describing skeptics below and it's my experience that more intelligent people are often drawn to the conclusion that they already know it all and can dismiss new data.

Celebrim said:
Even John's forebrain-hindbrain reactions don't seem to me nearly as distinct as John feels they are. Maybe this is because I work in an evolution lab and see the hindbrain merely as 'evolutionary conditioning' and 'legacy intellect' which you are carrying with you as part of your whole intellectual process.

The anterior insula and prefrontal cortex are two different parts of the brain. They produce independent and often conflicting moral assessments of the same situation. If that's not "distinct", I'm not sure what would fit your criteria. Is this a part of your "whole intellectual process"? Sure. Is that what the question meant by "intellectually superior"?

Celebrim said:
John claims that the hindbrain's decisions aren't 'rational'. I don't necessarily agree.

They are not rational in the sense that they are not consciously reasoned decisions. And as the article points out, even though one can often produce a rationalization for an emotional decision, the decision may have ultimately been emotional and not reasoned.

Celebrim said:
Nor do I find in my own mental experience that higher intellectual experience can't be and doesn't summon forth powerful emotions.

Nothing in the article claims that. But the powerful emotions are distinct from the reasoned intellectual process. Given the context of this discussion (identifying the "intellectually superior" alignment), I think that "intellectual" means "reasoned", not "whatever your brain tosses out". And if you were to free the reasoned component of your brain from such powerful emotional responses, you might make very different moral decisions as a person. In fact, I've experienced just that in character, as have other people I have talked to.

In one particularly interesting case, a woman was with her husband in a theater, and he asked her if she was thinking in character. She was. Why did he ask? The character she was thinking like was a street urchin who had a very different sense of filth than the player did, and she had wrapped herself around a filthy railing like her character would. The player, upon realizing what she was doing, was suitably disgusted by what she had done.

Celebrim said:
The most powerful way that cultural conditioning works, in my opinion, is in the way humans instinctively react to unknown information. Whenever someone tells you something that you have never encountered, or most especially when you see a group discussing something you don't know, your first instinct is usually to believe the data, and even go so far as to pretend that you'd heard or thought something like that before, because you don't want to appear ignorant. (Small children do this a lot.) Thereafter, your tendency is to become a strong defender of that initial data because you do not want to appear to have been ignorant. You become emotionally invested in your commitment to defending that data, because each time you defend it, it increases the (emotional) consequences of having been wrong all those times. So, when we encounter contradicting data, we almost never immediately give up our convictions and jump to the new position. This results in our first exposure to any opinion being more shaping than any other.

One could argue that we are all providing great examples of this at work...

You might find this interesting:

http://www.sciencenews.org/pages/sn_arch/6_29_96/bob1.htm

Celebrim said:
But I would never go so far as to suggest that we stop being intellectually engaged in the doubts and defences of the opinion, or that we are all doomed to be clones of our parents, peers, or society. Intellectual choice and cultural conditioning are universally present.

Yes, but much of what we are is neither cultural nor intellectual. Look at what autism does to a person's decisions and behavior, or at the people who are born without an innate sense of fear. Yes, the intellectual reasoning and cultural conditioning are there, but so are a lot of other things that make us what we are. And you'll get the same predictable responses from certain games whether you play them with people in the United States, Nigeria, India, or Peru, and even if you play them with chimpanzees.

Celebrim said:
As for specific empathy, I think it comes in part from that stored legacy of rationality which teaches us that empathy improves fitness, but it too is obviously shaped by intellect and culture in some sort of complex exchange.

Have you ever investigated autism or spent time with someone with any degree of autism?
 

Umbran said:
There is no objective measure of "rational" weight - there's no one reason that universally will be "greater" than another. Any such weighting is subjective to the individual doing the weighting, and includes that person's own emotive insights. Ergo, there's no such thing as intellectual superiority.

Heh, I've already mentioned this earlier, but I used my emotion to make my decision on which alignment had a greater rational mass. This struck me and made me smile.

As an aside, there is the possibility that one alignment is the most representative of an absolute truth that man, in our present mental state, cannot quite reach. If this is the case, all we can do is stab at an answer that "feels" most correct. I betrayed my gut instinct by defaulting to the "None" answer; I did this because I would have a hard time articulating Lawful Good as the most intellectually superior alignment.
 

Whoa man, pass me the pipe.

John Morrow said:
Nothing in the article claims that. But the powerful emotions are distinct from the reasoned intellectual process. Given the context of this discussion (identifying the "intellectually superior" alignment), I think that "intellectual" means "reasoned", not "whatever your brain tosses out". And if you were to free the reasoned component of your brain from such powerful emotional responses, you might make very different moral decisions as a person. In fact, I've experienced just that in character, as have other people I have talked to.

Is there such a thing as a decision that is not moral? The fact that your particular religious or political system defines a certain set of issues as moral does not, IMO, put a person on firm enough ground to be throwing around terms like "moral decision" without definition. (Although you can; I'm not calling you evil for doing so.) And identifying a certain part of the brain as being stimulated at a certain time doesn't answer the question for me as to whether the end result of an action was influenced more by "reasoning" or "emotion" - even assuming the circular definitions given for each.

In fact, saying things like "intellectually superior" assumes that you can judge such things universally for all people. The Neutral Evil people who responded to this thread naturally assumed that "intellectually superior" meant survival of the fittest - which is more an indication of their evil and depravity (and I assume that such people are flattered by these terms) than of any sort of "intellectual superiority". Their decisions have produced results that their "intellect" has judged "superior" because their goals and tolerances are different from other people's.

Was my immoral decision to participate in an alignment thread an intellectual or emotional one?
 

gizmo33 said:
Is there such a thing as a decision that is not moral? The fact that your particular religious or political system defines a certain set of issues as moral does not, IMO, put a person on firm enough ground to be throwing around terms like "moral decision" without definition. (Although you can; I'm not calling you evil for doing so.)

Personally, I don't consider aesthetic decisions to be moral decisions. As for throwing terms around without definition, I suspect a good hunk of the disagreement here hinges on differences over definitions.

gizmo33 said:
And identifying a certain part of the brain as being stimulated at a certain time doesn't answer the question for me as to whether the end result of an action was influenced more by "reasoning" or "emotion" - even assuming the circular definitions given for each.

They are not just identifying the parts of the brain but how much they are being stimulated. In fact, the researchers claim that they can generally predict, by the relative degrees by which both parts of the brain are stimulated, which course of action people will choose for various moral problems. Did you read the article?

gizmo33 said:
Their decisions have produced results that their "intellect" has judged "superior" because their goals and tolerances are different than other people's.

As with several other responses, this entirely ignores the possibility that some people do not associate "intellectually superior" with "morally superior", a possibility explicitly recognized in the original question.

gizmo33 said:
Was my immoral decision to participate in an alignment thread an intellectual or emotional one?

Stick your head in the appropriate sort of MRI machine and I'm sure they could make an educated guess.
 

John Morrow said:
They are not just identifying the parts of the brain but how much they are being stimulated. In fact, the researchers claim that they can generally predict, by the relative degrees by which both parts of the brain are stimulated, which course of action people will choose for various moral problems. Did you read the article?

I sort of read it.

I got about this far: "How We Think
Brain Researchers Are Using MRIs to Predict Our Decisions Before They Are Made. The Results Are Intriguing, and a Little Disturbing"

before I decided (with my monkey brain) that what I was reading was journalism, and not science (as it is defined by the community that self-applies this label). "A scientific journal article", I told myself, "would have proposed a carefully worded hypothesis and then shown results that only had a bearing on that hypothesis." And so I am extremely skeptical about the results as interpreted by a couple of journalists and filtered through some casual comments. At the least, I'd like to know what machines they are using to measure the level of "disturbingness" of the results. I didn't see anything about morality in the article. I did see a quote about what an economist thinks about a scientific paper he read.


John Morrow said:
As with several other responses, this entirely ignores the possibility that some people do not associate "intellectually superior" with "morally superior", a possibility explicitly recognized in the original question.

I guess I don't know what you're talking about here. My comment stands whether or not you equate these two. Take either, or both, and define what is "superior" by either criteria. What I'm saying is that you can't define, universally, what is "intellectually superior" or "morally superior" without talking about what you expect as a result of either intellectual or morally based decisions. Even something as simple as an IQ Test is burdened, I think legitimately, with questions as to whether or not the thing it seeks to measure even makes sense as a universal. Even if you know what YOUR particular criteria is, casual use of these terms will confuse other people who do not hold your definition.

John Morrow said:
Stick your head in the appropriate sort of MRI machine and I'm sure they could make an educated guess.

They would only know which region (IF ANY) of my brain was active at the time I made this decision. I would not expect anyone with scientific training to interpret the results using IMO vague terms like "morality" and "reason". I'd leave that to journalists (whose job, admittedly, is to make things interesting).
 
Last edited:

FreeTheSlaves said:
As an aside there is the possibility that one alignment is the most representative of an absolute truth that man, at this point in our present mental status, cannot quite reach.

To be logician-picky, we cannot say there is a possibility. We cannot say there is not a possibility, but that isn't the same thing. If the Universe has an objective existence separate from human observers, then the existence of the possibility is not determined by our knowledge or lack thereof.

So, I cannot say the possibility does not exist. But if we are not yet wise enough to know, we shan't find the answer flailing around in the dark looking for what "feels right". Such behavior would not be a rational approach to the problem, because if we are not yet wise enough to know, we won't be wise enough to recognize the true answer, should we stumble upon it.

So, I would think that the rational approach would be to set about increasing wisdom. If our mental status cannot reach the truth, and we want the truth, then improving our mental status would be the rational course.
 
