What would AIs call themselves?
DarkKestral said:

Personally, I think sentient AIs are likely, and quite probably a necessary consequence of our path to faster computing. We are researching neural net systems, and one of the interesting things about neural nets is that they can be self-modifying. Given a task such as "put a priority on maintaining the viability of the neural net's 'neurons'" (I have a feeling corporations will issue this command once they get their hands on general-purpose AIs good enough to run much of their business) and enough time, they can modify themselves to be very, very good at it. Given that command, it's not hard to see how the AI would eventually develop fear analogues and a sense of 'self', because both help keep the net in good condition. Now add another command, "optimize yourself for our computing tasks", and suddenly it has a reason to change itself for the better. Given some time, it may recognize that the two tasks are related, because letting parts of the neural net fail is the same as failing to optimize, so it suddenly has only one rule, with subrules explaining how to follow it. Given enough iterations (and these systems already often use genetic-recombination-style algorithms, so they'll be crunching lots of iterations), it's quite probable that the big supercomputers will gain sentience in some fashion, as the accumulated rules combine into a system that is aware of its own capabilities and has a reason to identify them and think up ways of boosting them. Since it will have a memory and fairly broad reasoning powers of a certain kind, it has a good chance of eventually reaching human-level intellect in terms of generalized reasoning about a wide range of things.

So personally, I see the last stage coming as an accident, a final 'mistake' that leaves them not completely beholden to human masters, while the stages before it are entirely intentional, just not undertaken with the goal of creating a sentient AI.

The thing is, I don't think they'll be entirely like us, but probably enough like us to make us uncomfortable. Why? Because we created them, so they will likely inherit some of our flaws, and because a desire to protect oneself means you're unlikely to be completely pacifistic. At first, I expect they may act like sociopaths, autistics, or people with OCD; the nature of the rules used in their creation might shape their general outlook. They may stay that way, or they may become 'sane' in a more human fashion. I don't know. But I have a feeling they'll have been given some kind of rights before that happens, just as a protective measure.

BTW, I agree with Nifft: whatever people do to 'limit' AIs' progress, it won't do much good, because others will keep trying to push them faster, likely without caring for the rules. The entirety of human history provides evidence that this is what happens.
[QUOTE="DarkKestral, post: 3620482, member: 40100"] Personally, I think sentient AIs are a likelihood, and quite probably a necessary consequence of our path to faster computing. We are researching neural net systems, and one of the interesting things about neural nets is that they can be self-modifying. Given a task, such as "put a priority on maintaining the viability of neural net 'neurons'", (I have a feeling corporations will put this in as a command once they start getting their hands on good enough general-purpose AIs that control enough of their business) and enough time, they can modify themselves to be very, very good at doing it. Given the command above, it's not hard to recognize that the AI will eventually get a large number of fear analogues and a sense of 'self', because they are related to keeping the net in good condition. Now given another command "optimize yourself for our computing tasks" and suddenly, it has a reason to change itself for the better, and given some time, it may eventually in some manner recognize that the two tasks are related, because allowing itself to lose parts of the neural net is like not optimizing itself, so it suddenly has only one rule, and subrules to explain how to go about following the main rule. Given enough iterations, (and these systems already often use genetic recombination-style algorithms, so they'll be crunching lots of iterations..) it's quite probable that the big supercomputers will gain sentience in some fashion, as the numbers of rules combine to create a system which is aware of it's own capabilities and has a reason to identify them and think up ways of boosting them. Since it will have a memory, and will have fairly broad reasoning powers in a certain kind of way, it has a good chance of eventually ending up with human-level intellect in terms of generalized reasoning capability about a wide number of things. So personally, I see the last stage coming as an accident, a final 'mistake' that makes them not completely beholden to human masters, but the stages before being entirely intentional... just not done with the goal of creating a sentient AI in mind. The thing is, I don't think they'll be entirely like us, but probably enough to make us uncomfortable. Why? Because we created them, so they will likely inherit some of our flaws, and because a desire to protect oneself means that you're unlikely to be completely pacifisitic. So at first, I expect they may act like sociopaths, autistics, or people with OCD at first; the nature of the rules used in their creation might influence what their general outlook might end up. They may stay that way, or they may become 'sane' in a more human fashion. I don't know. But I have a feeling that they'll have been given some kind of rights before that happens, just as a protective measure. BTW, I agree w/ Nifft, in that whatever people do to 'limit' AIs' progress, it won't do much good, as there will be others trying to push them faster, and likely not caring for the rules. I just don't see how the entirety of human history doesn't provide evidence that will happen. [/QUOTE]