Muscular Neutrality (thought experiment)
<blockquote data-quote="EzekielRaiden" data-source="post: 9529273" data-attributes="member: 6790260"><p>That's fine. My only point in bringing it up was that it isn't some weird bizarro thing for a cosmological force for Good to have lines it absolutely will not cross, even if crossing them would provably lead to the world being better, because there are some things that are simply Not Acceptable, no matter how much good might come of them down the line.</p><p></p><p>And this works at all levels. It might be the case that if I were to murder a specific set of individuals today, right now, people who have committed no wrongs worthy of commentary, then in a thousand years we would live in an absolute perfect utopia, completely free of all suffering and without any coercion or exploitation.</p><p></p><p>I still would adamantly refuse to murder those people. "Utopia justifies the means" is an extremely, overwhelmingly <em>dangerous</em> argument to make. As soon as you start justifying heinous acts because <em>eventually</em> they'll pay off, you have just invited every possible question of "well what if you do just a <em>little</em> bit more evil now, to get a better world <em>sooner</em>, or to make that better world <em>even better</em>, or to share it with <em>more</em> people, or..." You no longer have the ability to just reject those questions as flatly unacceptable behavior; you have to give a reasonable answer as to why <em>this</em> evil act, at <em>this</em> time, is justified, while <em>that</em> evil act at <em>that</em> time is unjustified.</p><p></p><p></p><p>Okay but if you're using "local" to mean two different things (mathematical optimization <em>and</em> regional variation), you're going to make swiss cheese of what I said--which is why I balked. I was exclusively using it in the mathematical optimization sense. If one is currently at a (mathematical) local maximum of the perfection-of-the-world function, then by definition you must make the world worse before you can make it better. There are plenty of takes on Good--both cosmological and personal--that refuse to be party to making the world worse. Especially if making the world worse actually results in going negative, making the world actually <em>evil</em>, before you can make it more good than it was before.</p><p></p><p>Again: "utopia justifies the means" is an <em>incredibly dangerous</em> position. It invites many of the worst impulses a sapient being can have, all while sincerely believing that following those impulses is <em>good</em> for the <s>victims</s> <em>beneficiaries</em> of that "compassion."</p><p></p><p></p><p>I disagree, about as strongly as it is possible to disagree, with your dismissal of moral agency as the critical differentiator (but more on this in a moment). In the absence of agency, choice is irrelevant. Hence, to choose to do good in the absence of agency means nothing. A robot (for example) catching a person before they fall off of a building has saved a life, but it has done so purely because it is following the programming inserted into it. We do not say that that robot is <em>morally upstanding</em> because it did the one and only thing its programming permits. Likewise, while we might praise a dog that helps rescue people who are stuck in the snow, their extremely minimal individual agency limits their ability to actually be good or evil. 
Again: "utopia justifies the means" is an *incredibly dangerous* position. It invites many of the worst impulses a sapient being can have, all while letting that being sincerely believe that indulging those impulses is *good* for the ~~victims~~ *beneficiaries* of such "compassion."

I disagree, about as strongly as it is possible to disagree, with your dismissal of moral agency as the critical differentiator (but more on this in a moment). Without agency, choice is meaningless, so "choosing" to do good in the absence of agency means nothing. A robot (for example) that catches a person before they fall off a building has saved a life, but it has done so purely because it is following the programming inserted into it. We do not say that robot is *morally upstanding* for doing the one and only thing its programming permits. Likewise, while we might praise a dog that helps rescue people stuck in the snow, its extremely minimal individual agency limits its ability to actually be good or evil. It's not just the absence of agency in general; it's the absence of *sufficient* agency.

However, rereading what you've said here, it looks like you're stating that "elevation of agency" *defines* Good. That is not the case. That would be like saying that being liquid *defines*, say, Coke. Being liquid is certainly a necessary condition for a substance to be Coca-Cola, but it is definitely not a sufficient one. Likewise, anything worthy of the label "Good" must prioritize agency, because in the absence of agency a person is identical to the robot in my example above: an automaton carrying out programming, without moral merit. What actually *defines* Good is which actions the entity/force/etc. actually encourages (or discourages).

And I would argue that any setting which has done that is a setting where "Good" has been watered down into either merely "Lawful" or some insipid caricature, usually by making its members incapable of moral choice (they're preprogrammed robots) or too stupid to understand that what they think is beneficial is actually very, very detrimental.

The difference between the two--merely Lawful vs. insipid caricature--is often whether the so-called "Good" beings/entities/forces/etc. are *aware* that their actions will cause the harm the "muscular" Neutrals wish to avoid. If they know and understand it and pursue their goals anyway, they were never Good in the first place; they were just Lawful in a funny hat. If they don't know and cannot be made to know, then either they refuse to learn, and are thus idiots, or they are genuinely *incapable* of learning, and are thus automata. The automaton isn't stupid, but it lacks agency. The idiot has agency, but is too stupid to actually use it.

Essentially, for the "muscular" Neutrals to be truly, genuinely reasonable, they have to actually be *right* about the "balance" they protect. If their balance is illusory or ineffable--something they pursue as an article of faith because it is functionally beyond proof--then the "muscular" Neutrals lack any actual moral argument; they do crazy things for crazy reasons. But as soon as you admit that the "muscular" Neutrals are actually *right* about existence, Good (and many forms of Evil) must become either too rigid or too stupid to understand that their actions will cause harm to the very beings they wish to aid and protect.
[QUOTE="EzekielRaiden, post: 9529273, member: 6790260"] That's fine. My only point in bringing it up was that it isn't some weird bizarro thing for a cosmological force for Good to have lines it absolutely will not cross, even if crossing them would provably lead to the world being better, because there are some things that are simply Not Acceptable, no matter how much good might come of them down the line. And this works at all levels. It might be the case that if I were to murder a specific set of individuals today, right now, people who have committed no wrongs worthy of commentary, then in a thousand years we would live in an absolute perfect utopia, completely free of all suffering and without any coercion or exploitation. I still would adamantly refuse to murder those people. "Utopia justifies the means" is an extremely, overwhelmingly [I]dangerous[/I] argument to make. As soon as you start justifying heinous acts because [I]eventually[/I] they'll pay off, you have just invited every possible question of "well what if you do just a [I]little[/I] bit more evil now, to get a better world [I]sooner[/I], or to make that better world [I]even better[/I], or to share it with [I]more[/I] people, or..." You no longer have the ability to just reject those questions as flatly unacceptable behavior; you have to give a reasonable answer as to why [I]this[/I] evil act, at [I]this[/I] time, is justified, while [I]that[/I] evil act at [I]that[/I] time is unjustified. Okay but if you're using "local" to mean two different things (mathematical optimization [I]and[/I] regional variation), you're going to make swiss cheese of what I said--which is why I balked. I was exclusively using it in the mathematical optimization sense. If one is currently at a (mathematical) local maximum of the perfection-of-the-world function, then by definition you must make the world worse before you can make it better. There are plenty of takes on Good--both cosmological and personal--that refuse to be party to making the world worse. Especially if making the world worse actually results in going negative, making the world actually [I]evil[/I], before you can make it more good than it was before. Again: "utopia justifies the means" is an [I]incredibly dangerous[/I] position. It invites many of the worst impulses a sapient being can have, all while sincerely believing that following those impulses is [I]good[/I] for the [S]victims[/S] [I]beneficiaries[/I] of that "compassion." I disagree, about as strongly as it is possible to disagree, with your dismissal of moral agency as the critical differentiator (but more on this in a moment). In the absence of agency, choice is irrelevant. Hence, to choose to do good in the absence of agency means nothing. A robot (for example) catching a person before they fall off of a building has saved a life, but it has done so purely because it is following the programming inserted into it. We do not say that that robot is [I]morally upstanding[/I] because it did the one and only thing its programming permits. Likewise, while we might praise a dog that helps rescue people who are stuck in the snow, their extremely minimal individual agency limits their ability to actually be good or evil. It's not just the absence of agency in general, it's the absence of [I]sufficient[/I] agency. However, rereading what you've said here, it looks like you're stating that "elevation of agency" [I]defines[/I] Good. That is not the case. That would be like saying that being liquid [I]defines[/I], say, Coke. 
Being liquid is certainly a necessary condition for a substance to be Coca-Cola, but it is definitely not a sufficient condition. Likewise, it is necessary for anything worthy of the label of "Good" to prioritize agency, because in the absence of agency, a person is identical to the robot example I gave above, an automaton carrying out programming without moral merit. What actually [I]defines[/I] Good is what actions the entity/force/etc. actually encourages (or discourages). And I would argue that any setting which has done that is a setting where "Good" has been watered down into either merely "Lawful" or into some insipid caricature, usually by making its members incapable of moral choice (they're preprogrammed robots) or too stupid to understand that what they think is beneficial is actually very, very detrimental. The difference between the two--merely Lawful vs insipid caricature--is often whether the so-called "Good" beings/entities/forces/etc. are [I]aware[/I] that their actions will cause the harm that the "muscular' Neutrals wish to avoid. If they know and understand it and pursue their goals anyway, they were never Good in the first place, they were just Lawful in a funny hat. If they don't know and cannot be made to know, then either they refuse to learn, and are thus idiots, or are genuinely[I] incapable[/I] of learning, and are thus automata. The automaton isn't stupid, but it lacks agency. The idiot has agency, but is too stupid to actually use it. Essentially, in order to have the "muscular" Neutrals be truly, genuinely reasonable, they have to actually be [I]right[/I] about the "balance" they protect. If their balance is illusory or ineffable, something they pursue as an article of faith because it is functionally beyond proof, then the "muscular" Neutral lacks any actual moral argument; they do crazy things for crazy reasons. But as soon as you admit that the "muscular' Neutral is actually [I]right[/I] about existence, Good (and many forms of Evil) must become either too rigid or too stupid to understand that their actions will cause harm to the very beings they wish to aid and protect. [/QUOTE]