How would you take over the modern world if you had magic?
[QUOTE="Celebrim, post: 2385571, member: 4937"] I'm glad you brought that up. 'I, Robot' is based on several premises, and those premises would have to be true before something like 'I, Robot' could happen. 1) A superintelligence would have to exist capable of micro and macro managing the world in a fashion that human dictators are unable to do (as per my above argument). 2) That superintelligence would have to be centralized rather than distributed in nature. In other words, that intelligence would have to constitute a single recognizable entity working as if it had a single will. 3) That superintelligence would have to do a good enough job managing the world, that the general public would be content with the current state of affairs, and anyone who discovered that the superintelligence was controlling the world would have to be rationally convinced that this was the best situation for the world to be in. As a computer scientist, I'll concede point #1 as possible or at least that I would like to think that it is - though of course not every one would agree with me. Briefly, an argument against point #1 would note that many of the problems of running the world are classifiable as 'wicked problems' (look it up) and its entirely possible that 'wicked problems' would prove unsolvable regardless of the intelligence of the entity. Moreover, its not at all clear that humans would be capable of programming a machine with techniques for solving 'wicked problems'. However, lets concede for now #1 because it really doesn't matter. On point #2, Asimov was writing at a time when it was reasonable to believe in 'Deep Thought'. That is to say, he was writing at a time when the current technology seemed to indicate that 'super computers' would be centralized massive entities. That is no longer reasonable to believe. In fact, computing technology seems to indicate that future computers may follow thier human counterparts in distributing tasks. An artificial super-entity of the future may in fact look more like a democracy of intelligent machines than it would look like a centralized decision making apparatus. As such, I don't really expect that it would be easier for one 'node' (or a few nodes) of the artificial super-entity to 'take over the world' than it would be for one person to take over the human super-entity. And in any event, its likely that the human partners of the AI's would view this as a failure of 'friendliness' on the part of the AI, and act to shut down any node that showed excessive ambition or any personally owned node that adopted a philosophy that the node owner found 'unfriendly'. Imagine for example what would happen if a node owned by Al Franken suddenly adopted 'conservative outlook' or if Ann Coulter's node suddenly adopted 'liberal outlook'. Both parties would see such action as a failure of friendliness on the part of the node, and neither would desire to keep using such a node as thier personal agent. On point #3, assuming that the super-entities could solve wicked problems and run the world, its not at all clear to me that in fact the general public would be ok with this. Generally speaking, if the general public found that the supernodes had subverted thier assigned tasks and were now running the world, the general public would likely consider this to be a failure of 'friendliness' on the part of the AI's - [i]even if the AI's where running the world in a benificient and altruistic fashion[/i]. 
It's therefore to me likely then that the only way that a Technocracy could be created is if it was done with the will of the governed (and that certainly true if present social structures don't collapse). And the stickler is of course, that if the Technocracy is ruling by the consent of the governed, then they haven't really 'taken over the world' in the usual since because they are constrained to only lead the public in the direction that the public would be happy with - else the public would remove its consent and the Technocracy would then face a populist revolt from (at the least) much of its human partners and probably at least some of the independent AI nodes whose friendliness constrained them to remain loyal to thier human owners or partners. [/QUOTE]
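To make point #2 concrete: here's a minimal toy sketch of a 'democracy of intelligent machines' whose owners each apply their own 'friendliness' test and shut down any node that drifts too far from their outlook. To be clear, everything in it (the Node class, the drift model, the tolerance, the median vote) is invented for illustration, not anything from the post or a real AI architecture.

[CODE]
# Toy sketch of point #2: distributed AI nodes, each answerable to an
# owner who shuts it down if it stops agreeing with them. All of the
# modeling choices here are hypothetical, for illustration only.
import random

class Node:
    def __init__(self, owner_outlook: float):
        self.outlook = owner_outlook       # starts aligned with its owner
        self.owner_outlook = owner_outlook
        self.active = True

    def drift(self):
        # Each round the node's outlook wanders a little (or, rarely, a lot).
        self.outlook += random.gauss(0.0, 0.2)

    def friendly(self, tolerance: float = 0.5) -> bool:
        # The owner's 'friendliness' test: does the node still agree with me?
        return abs(self.outlook - self.owner_outlook) <= tolerance

def run(rounds: int = 50, n_nodes: int = 100) -> None:
    nodes = [Node(random.uniform(-1.0, 1.0)) for _ in range(n_nodes)]
    for r in range(rounds):
        for node in nodes:
            if not node.active:
                continue
            node.drift()
            if not node.friendly():
                node.active = False  # owner pulls the plug on an 'unfriendly' node
        active = [n for n in nodes if n.active]
        if not active:
            print(f"round {r}: every node has been shut down")
            return
        # Shared policy is the median of the surviving nodes' outlooks.
        policy = sorted(n.outlook for n in active)[len(active) // 2]
        print(f"round {r}: {len(active)} active nodes, policy = {policy:+.2f}")

if __name__ == "__main__":
    random.seed(1)
    run()
[/CODE]

Because the policy is the median of the surviving nodes, a single ambitious node can barely move it, and a drifting node gets culled by its owner long before it matters - which is the heart of the argument above.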