"Narrativist" 9-point alignment
[QUOTE="Celebrim, post: 6619333, member: 4937"]
I think your recognition that CG and LG are in opposition regarding the means to an end is correct, but you are giving short shrift to the debate between good and evil because you are improperly assuming your conclusion.

A decent intro to the debate that, say, LG and LE are having can be seen at the end of Asimov's 'Foundation and Earth', where the question becomes: in the name of serving the greatest interest of humanity, can the self-appointed guardians of humanity (the highly advanced AIs) take steps which individual humans, and perhaps even the majority of humans, would find abhorrent and contrary to their wishes? Ultimately, the AIs are given permission to embark upon their plan to subjugate all human free will and manipulate humanity into becoming components of a single super-organism led by the AIs. The plan is deemed necessary to protect humanity from possible extinction at the hands of any extra-galactic intelligence that has made the same choice, because a superorganism in which each individual's will is subordinate to the greater whole is judged more survivable and potent than any competing organism. In other words, so long as the entire universe is not assimilated, so long as the possibility of conflict between The Group and The Other exists, the weal and happiness of the group matter less than its survival, since by definition the extinction of the group, or an equivalent catastrophe, would be the greatest unhappiness that could befall it. Evil sees good as naively choosing short-term happiness over actual health - that is, the strength and capacity to enforce your will on others and to resist theirs.

While the robots in the discussion don't see themselves as advocating for evil, and indeed are not discounting happiness and health in their calculations, it's easy to see that their argument would ultimately trade any amount of happiness for any amount of strength. Lawful evil societies simply take this argument to its logical conclusion, not as an emergency response (by which point it might be too late), but as the desirable pervading state of existence.

And there are many other points of contention, not the least of which is the ultimate one between good and evil - should existence, intelligence, community, and life be allowed to continue in the first place? It's nice to imagine that everyone takes as their original position the notion that they should, but it's not at all clear that everyone is 'rational' in that sense, and it is certainly clear that not everyone perceives everyone else as holding that position.
[/QUOTE]