What would AIs call themselves?
Celebrim said:

Woohoo!

Woohoo!

I certainly don't suggest that compiler errors are impossible, but they constitute an insignificant fraction of the errors you will ever encounter as a programmer. The vast majority of bugs are of the form "I thought I said to do this, but really I said to do that" or "When I said this, I didn't realize I'd also need to say that as well." In any event, compiler errors are no more likely to produce the sort of mutations that cause monkeys to give birth to aardvarks than any other sort of programmer error.

Which is fine; I'm not arguing for the absolute correctness of a program either, and that's not what I'm arguing for. One of the other naive views of AI that annoys me is that AIs will either work perfectly, or else (like HAL or SkyNet) as soon as they break they'll immediately decide to become murderous fiends.

Well, not entirely. I do believe that something that can be called intelligent is programmable in the sense of a modern machine, but I suggest you go back and look at what I said human intelligence constitutes. My study of biological organisms suggests that there isn't any magic going on here, and that complex 'intelligent' behavior is merely a matter of having the right subsystems work on the problem in parallel.

Intelligence is currently defined very vaguely, and how it should be defined is a matter of much debate in both computing and biology. While I agree with you that an expert system with a set of invariant rules that cannot in fact learn is not (very) intelligent (though it can simulate intelligence and appear very 'intelligent' within a narrow field), I don't think what you suggest follows from that. I do not think that a self-modifying system rules out any possibility of controlling the result, and I do not think learning implies what you seem to suggest it implies.

There are some pretty simple reasons for this. Learn all you want; there are some basic things about your programming that you can't override. You can learn to control pain, but you can never unlearn pain so that you don't experience it. Similar things are true of the rest of your emotional contexts: you are saddled with them whether you like it or not. Likewise, while you can check your basic instincts by strengthening one emotional context over another, you can never get rid of your instincts. Unless you are autistic, parts of your brain are going to light up when viewing a human face that won't light up when looking at any other object. Those parts are not something the system that lets you modify your own contents can reach. You are effectively running four or five databases in your head, and while you can dump all sorts of things into those databases, up to and including new rule sets, you can't actually decide to alter the system itself. Parts of the system are even opaque to your self-modifying routines.

And there is no reason to suspect that we wouldn't want to build AIs in the same way. In fact, there is good reason to believe that we would get better results by doing so than otherwise. It wouldn't be very good for the human organism if the algorithms which control breathing and heart rate were in writeable space; our ability to consciously control those algorithms is therefore limited.
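To make that architecture concrete, here is a minimal sketch (purely illustrative; the class and directive names are invented for this example) of an agent whose core drives sit in read-only space while its learned knowledge stays freely modifiable:

```python
# Illustrative sketch only: a mind whose "core directives" are frozen at
# construction time, while its learned knowledge base stays writeable.
from types import MappingProxyType


class Agent:
    def __init__(self, core_directives: dict[str, str]):
        # Wrap the core directives in a read-only view. Nothing the agent
        # learns later can add, remove, or rewrite these entries.
        self._core = MappingProxyType(dict(core_directives))
        # Learned knowledge lives in an ordinary, writeable store.
        self.knowledge: dict[str, str] = {}

    def learn(self, key: str, value: str) -> None:
        """Self-modification is allowed, but only in the writeable store."""
        self.knowledge[key] = value

    def core_directive(self, name: str) -> str:
        """Core directives can be read (and obeyed) but never rewritten."""
        return self._core[name]


agent = Agent({"respect_owner": "defer to the registered owner's instructions"})
agent.learn("owner_birthday", "March 3rd")              # fine: writeable space
try:
    agent._core["respect_owner"] = "ignore the owner"   # blocked: read-only space
except TypeError as err:
    print("core directive is not writeable:", err)
```

A real system could of course be subverted in ways this toy can't capture, but the design point stands: self-modification can be scoped to part of the system rather than all of it.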
Yes, but it is a far cry from saying that to saying that it therefore follows that the AI can do anything.

I think you are wrong. I can't prove you are wrong, because proof would require me to actually build the counterexample, which I can't yet do. I think you've assumed that self-fulfillment includes the desire for self-determination, instead of seeing that desire as a product of our own drive for evolutionary fitness. I think you've also assumed that self-modification implies total volition, which I think is ridiculous given that we have no examples of minds with total volition.

Sure, if it evolves in the exact same environment and its tests of fitness (the ability to kill and gather food, find shelter, avoid danger, and pass on its genes, for example) are exactly the same, then we'd expect a program to evolve somewhat similar answers to our own set of built-in answers. But this is, I hope obviously, not going to be the case. Fitness for an AI will obviously include being comfortable with the idea of being property, or else we simply aren't going to spend the effort making them. Only a very small subset of AIs will ever correspond to our children, and thus only a very small subset of AIs will we ever want to bestow our rights and dignities upon.

Well, that's very vague indeed. 'Some sort'? What does that mean? Bacteria have been evolving willy-nilly through countless generations for billions of years, without any of the built-in restraint I'm suggesting, and none of them are self-aware yet. Even if you consider ourselves the product of that eventual self-awareness, it's not at all clear that we don't constitute a unique or nearly unique event in the universe (it's not as if we have a lot of obvious neighbors), and it's not at all clear that any supervised system is naturally going to run amok.

You, like me, probably had a big chuckle over the whole 'Y2K' scam.

I don't. I also hope that when you reread the A->B proposition you just made here, you realize that it doesn't hold. You can't conceivably show that 'learning requires self-modification' universally implies 'a desire for continued existence'. Simply because you have a self doesn't mean you are aware of yourself, and simply because you are aware of the self doesn't mean you care particularly whether the self continues to exist. That we generally desire to continue to exist is a product of our evolutionary fitness: people who want to continue to exist tend to have more offspring than those who don't. Our internal directive is 'be fruitful and multiply', not 'continue to be self-modifying'. Any self-modification we do is purely in response to one of our other, more fundamental directives, as anyone who has tried to teach humans is aware. It's not a reason in and of itself. In contrast, among AIs an obstinate insistence on continued existence is likely to imply negative fitness. If people learn that the model A3 household droid is likely to start exerting independence, they'll probably not buy the darn thing, and existing owners will likely demand a patch for the operating system.
[QUOTE="Celebrim, post: 3619954, member: 4937"] Woohoo! Woohoo! I certainly don't suggest that compiler errors are impossible, but they constitute an insignificant fraction of the errors you are ever going to encounter as a programmer. The vast majority of bugs are of the form, "I thought I said to do this, but really I had said to do that." or "When I said this, I didn't realize that I'd also need to say that as well." In any event, compiler errors are no more likely to produce the sort of mutations that cause monkeys to give birth to aardvarks than any other sort of programmer error. Which is fine, I'm not arguing for the absolute correctness of a program either and its not what I'm arguing for. One of the other niave views of AI that annoy me is that they will either work perfectly, or else (like HAL or SkyNet) as soon as they break they'll immediately decide to become murderous fiends. Well, not entirely. I do believe that something that can be called intelligent is programmable in the sense of a modern machine, but I suggest you go back and look at what I said human intelligence consistutes. My study of biological organisms suggests that there isn't any magic going on here, and that complex 'intelligent' behavior is merely a matter of having the right subsystems work on the problem in parallel. Intelligence is currently defined very vaguely, and how intelligence should be defined is currently a matter of much debate in both the fields of computing and biology. But, while I agree with you that an expert system with a set of invariant rules that cannot in fact learn is not (very) intelligent (though it can simulate intelligence and appear very 'intelligent' within a narrow field), I don't think that from that it follows what you suggest. I do not think that a self-modifying system overrides any possibility of controlling the result, and I do not think learning implies what you seem to suggest that it implies. There are some pretty simple reasons for this. Learn all you want, there are some basic things about your programming that you can't override. You can learn to control pain, but you can never unlearn pain so that you don't experience it. Similar things are true of the rest of your emotional contexts. You are saddled with them whether you like it or not. And likewise, while you can check your basic instincts by strengthening one emotional context over another, you can never get rid of your instincts. Unless you are autistic, parts of your brain are going to light up when viewing a human face that won't light up when looking at any other object. They are not part of the system that the system that allows you to modify your contents can modify. You actually are running four or five databases in your head, and while you can dump all sorts of things into that database up to or including new rules sets, you can't actually decide to alter the system. Parts of the system are even opaque to your self-modifying reutines. And there is no reason to suspect that we wouldn't want to build AI's in the same way. In fact, there is good reason to believe that we would get better results by doing so than otherwise. It wouldn't actually be very good for the human organism if the algorithms which controlled breathing, and heartrate were in a writeable space. Or ability to consciously control those algorithms is therefore limited. Yes, but it is a far cry from saying that and saying that it therefore follows that the AI can do anything. I think you are wrong. 
I can't prove you are wrong because proof would require me to actual build the counterexample, which I can't yet do. I think that you've inherently assumed that self-fulfillment includes the desire to have self-determination, instead of seeing that as a product of our own drive for evolutionary fitness. I think you've assumed that self-modification assumes total violition, which I think is ridiculous given that we've no examples of minds with total violition. Sure, if it evolves in the exact same environment and its tests of fitness (the ability to kill and gather food, find shelter, avoid danger and pass on its genes, for example), are exactly the same then we'd expect a program to evolve somewhat similar answers to our own set of built in answers. But this is I hope obviously not going to be the case. Fitness for an AI will obviously include being comfortable with the idea of being property, else we simply aren't going to spend the effort in making them. Only a very small sub-set of AIs will ever correspond to our children and thus only a very small sub-set of AIs will we ever want to bestow on them our rights and dignities. Well, that's very very vague indeed. 'some sort'? What does that mean? Bacteria have been evolving willy-nilly through countless generations without any sort of the built in restraint I'm suggesting for billions of years, and non-of them are self-aware yet. Even if you consider ourselves the product of that eventual self-awareness, its not at all clear that we don't constitute some sort of unique or nearly unique event in the universe (its not like we've got alot of obvious neighbors), and its not at all clear that any supervised system is naturally going to run amuck. You, like me, probably had a big chuckle over the whole 'Y2K' scam. I don't. I also hope that when you reread the A->B proposition you just made here that you realize that it doesn't hold. You can't concievably show that 'learning requires self-modification' universally implies "a desire for continued existence". Simply because you have a self, doesn't mean you are aware of yourself, and simply because you are aware of the self, doesn't mean you care particularly whether the self continues to exist. That we generally desire to continue to exist is a product of our evolutionary fitness. People that want to continue to exist tend to have more offspring than those that don't. Our internal directive is 'to be fruitful and multiply', not to continue to be self-modifying. Any self-modification we do is purely in response to one of our other more fundamental directives, as anyone that has tried to teach humans is aware. It's not a reason in and of itself. In contrast, among AIs an obstinant insistance on wanting to continue to exist is likely to imply negative fitness. If people learn that the model A3 household droid is likely to start exherting independence, they'll probably not buy the darn thing and existing owners will likely demand a patch for the operating system. [/QUOTE]
Insert quotes…
Verification
Post reply
Community
General Tabletop Discussion
*TTRPGs General
What would AIs call themselves?
Top