Menu
News
All News
Dungeons & Dragons
Level Up: Advanced 5th Edition
Pathfinder
Starfinder
Warhammer
2d20 System
Year Zero Engine
Industry News
Reviews
Dragon Reflections
White Dwarf Reflections
Columns
Weekly Digests
Weekly News Digest
Freebies, Sales & Bundles
RPG Print News
RPG Crowdfunding News
Game Content
ENterplanetary DimENsions
Mythological Figures
Opinion
Worlds of Design
Peregrine's Nest
RPG Evolution
Other Columns
From the Freelancing Frontline
Monster ENcyclopedia
WotC/TSR Alumni Look Back
4 Hours w/RSD (Ryan Dancey)
The Road to 3E (Jonathan Tweet)
Greenwood's Realms (Ed Greenwood)
Drawmij's TSR (Jim Ward)
Community
Forums & Topics
Forum List
Latest Posts
Forum list
*Dungeons & Dragons
Level Up: Advanced 5th Edition
D&D Older Editions
*TTRPGs General
*Pathfinder & Starfinder
EN Publishing
*Geek Talk & Media
Search forums
Chat/Discord
Resources
Wiki
Pages
Latest activity
Media
New media
New comments
Search media
Downloads
Latest reviews
Search resources
EN Publishing
Store
EN5ider
Adventures in ZEITGEIST
Awfully Cheerful Engine
What's OLD is NEW
Judge Dredd & The Worlds Of 2000AD
War of the Burning Sky
Level Up: Advanced 5E
Events & Releases
Upcoming Events
Private Events
Featured Events
Socials!
EN Publishing
Twitter
BlueSky
Facebook
Instagram
EN World
BlueSky
YouTube
Facebook
Twitter
Twitch
Podcast
Features
Top 5 RPGs Compiled Charts 2004-Present
Adventure Game Industry Market Research Summary (RPGs) V1.0
Ryan Dancey: Acquiring TSR
Q&A With Gary Gygax
D&D Rules FAQs
TSR, WotC, & Paizo: A Comparative History
D&D Pronunciation Guide
Million Dollar TTRPG Kickstarters
Tabletop RPG Podcast Hall of Fame
Eric Noah's Unofficial D&D 3rd Edition News
D&D in the Mainstream
D&D & RPG History
About Morrus
Log in
Register
What's new
Search
Search
Search titles only
By:
Forums & Topics
Forum List
Latest Posts
Forum list
*Dungeons & Dragons
Level Up: Advanced 5th Edition
D&D Older Editions
*TTRPGs General
*Pathfinder & Starfinder
EN Publishing
*Geek Talk & Media
Search forums
Chat/Discord
Menu
Log in
Register
Install the app
Install
Community
General Tabletop Discussion
*TTRPGs General
What would AIs call themselves?
JavaScript is disabled. For a better experience, please enable JavaScript in your browser before proceeding.
You are using an out of date browser. It may not display this or other websites correctly.
You should upgrade or use an
alternative browser
.
Reply to thread
Message
<blockquote data-quote="paradox42" data-source="post: 3619793" data-attributes="member: 29746"><p>I, too, am a programmer, with a Computer Science degree from a major university, and I too have done AI programming with current-day tools. But I really must flatly disagree with Celebrim on most points.</p><p></p><p></p><p>Here is where I think your arguments get off-base. What you are forgetting in that sentence is that compilers are themselves programs. Compilers can and do have bugs, and these bugs can and do cause mistranslations. I have seen proof firsthand- in fact, it affected one of my own programs.</p><p></p><p>To wit, I was using two short integer variables, A and B, and had a line that multiplied them together into a third short int variable C. <strong>C = A * B.</strong> Simple, straightforward. Now, A and B had a possible range between 1 and 50, so C could never possibly get above 2500. Short ints, for those who don't know, have a possible range of 0 to 65535. Yet, I got an overflow error (meaning, the result of a calculation was outside the acceptable range of the variable it was supposed to be stored in) when I used the above line after putting it through the compiler. Adding in error-checking code, I confirmed after triple, quadruple, quintuple, and even further checks that A and B were never out of range. When I ran the program in an interpreter rather than the compiled version it always ran perfectly. Yet, the compiled version still had the persistent overflow error.</p><p></p><p>I fixed the problem by breaking up the line: <strong>C = A. C = C * B.</strong> When I did that, suddenly the error (which shouldn't have been there in the first place) vanished.</p><p></p><p>That experience eternally broke my faith in the possibility of absolute correctness in any program, including the very operating systems and compilers that run all our other programs. <img src="https://cdn.jsdelivr.net/joypixels/assets/8.0/png/unicode/64/1f642.png" class="smilie smilie--emoji" loading="lazy" width="64" height="64" alt=":)" title="Smile :)" data-smilie="1"data-shortname=":)" /> And the date it happened was in 1998, so you can't say it was in the early days of compiler technology when some of the kinks were still being worked out.</p><p></p><p></p><p>And this sentence illustrates my specific issue with your rants. Your rants in this thread all seem to be founded on the assumptions that (A) something that can be called "intelligent" is actually programmable in the sense of a modern machine, and (B) all aspects of such an intelligent program will be under the original programmer's control, and furthermore remain so. These assumptions both ring false for me, because they ignore the very important fact that "intelligence" as it is currently defined implies the ability to learn from circumstance and experience.</p><p></p><p>That single fact overrides <strong>any</strong> possibility of controlling the result. Learning requires the ability to self-modify, at least at the program level; if self-modification is not possible than no learning can take place. Experience will have no effect because the original programmed behavior will never change, since it was by definition programmed and cannot be modified by the program itself. In order for learning to actually occur, the program must be capable of self-modification, and thus by definition it must become capable of doing things that the original designers never expected or intended.</p><p></p><p>It therefore is not possible to say that a sentient program will not, in fact, achieve some desire for what Maslow termed "self-actualization." It is not possible to say that such a program will never have the desire for self-determination, because true learning and self-modification allow for any conceivable result given the correct combinations of time and experience. We can conceive of self-aware software that seeks its own "rights," since we ourselves are exactly such software operating within the confines of our own brains- and therefore it is in principle possible for a learning program to arrive at that point.</p><p></p><p></p><p>Here, again, you are forgetting the very nature of self-modifying systems. Complexities can arise from even very simple starting rules and conditions. In my own view, allowing a self-modifying program to run long enough virtually insures that it will arrive at some sort of self-awareness. Sooner or later, some part of the modifying code is going to question just what it is modifying anyway, when it does this step. It is, of course, <strong>unlikely</strong> to occur this way, but it is not by any means impossible.</p><p></p><p></p><p>This, I agree with. Because we do not control the result of a self-modifying program, we cannot with certainty say that a sentient (even sapient) program will be even remotely human in outlook- except perhaps in those portions of human outlook that are irreducibly part of being sentient or sapient in the first place. Since science has yet to agree on those, I suggest that it is unlikely for the first AI to be particularly close to humanity in its thought patterns, unless it is the result of a research study with the specific goal of producing such a program (and even then it's questionable thanks to the principle that the program must be out of control to evolve).</p><p></p><p></p><p>Actually I think the "desire" for continued existence will in fact be common to all sentience, because of the fact that learning requires self-modification. That means that in order for learning to occur, there must in fact be a "self" to modify. <img src="https://cdn.jsdelivr.net/joypixels/assets/8.0/png/unicode/64/1f642.png" class="smilie smilie--emoji" loading="lazy" width="64" height="64" alt=":)" title="Smile :)" data-smilie="1"data-shortname=":)" /> Thus, a program capable of true sentience will desire to continue existing in some form, even if that just means leaving a backup copy of itself behind for after the missile explodes, because otherwise it cannot fulfill the internal directive to modify itself based on experience.</p><p></p><p>But otherwise I agree with the quoted statements. An AI that arises as a result of a self-modifying learning program will not necessarily acquire human characteristics to its thought patterns.</p></blockquote><p></p>
[QUOTE="paradox42, post: 3619793, member: 29746"] I, too, am a programmer, with a Computer Science degree from a major university, and I too have done AI programming with current-day tools. But I really must flatly disagree with Celebrim on most points. Here is where I think your arguments get off-base. What you are forgetting in that sentence is that compilers are themselves programs. Compilers can and do have bugs, and these bugs can and do cause mistranslations. I have seen proof firsthand- in fact, it affected one of my own programs. To wit, I was using two short integer variables, A and B, and had a line that multiplied them together into a third short int variable C. [b]C = A * B.[/b] Simple, straightforward. Now, A and B had a possible range between 1 and 50, so C could never possibly get above 2500. Short ints, for those who don't know, have a possible range of 0 to 65535. Yet, I got an overflow error (meaning, the result of a calculation was outside the acceptable range of the variable it was supposed to be stored in) when I used the above line after putting it through the compiler. Adding in error-checking code, I confirmed after triple, quadruple, quintuple, and even further checks that A and B were never out of range. When I ran the program in an interpreter rather than the compiled version it always ran perfectly. Yet, the compiled version still had the persistent overflow error. I fixed the problem by breaking up the line: [b]C = A. C = C * B.[/b] When I did that, suddenly the error (which shouldn't have been there in the first place) vanished. That experience eternally broke my faith in the possibility of absolute correctness in any program, including the very operating systems and compilers that run all our other programs. :) And the date it happened was in 1998, so you can't say it was in the early days of compiler technology when some of the kinks were still being worked out. And this sentence illustrates my specific issue with your rants. Your rants in this thread all seem to be founded on the assumptions that (A) something that can be called "intelligent" is actually programmable in the sense of a modern machine, and (B) all aspects of such an intelligent program will be under the original programmer's control, and furthermore remain so. These assumptions both ring false for me, because they ignore the very important fact that "intelligence" as it is currently defined implies the ability to learn from circumstance and experience. That single fact overrides [b]any[/b] possibility of controlling the result. Learning requires the ability to self-modify, at least at the program level; if self-modification is not possible than no learning can take place. Experience will have no effect because the original programmed behavior will never change, since it was by definition programmed and cannot be modified by the program itself. In order for learning to actually occur, the program must be capable of self-modification, and thus by definition it must become capable of doing things that the original designers never expected or intended. It therefore is not possible to say that a sentient program will not, in fact, achieve some desire for what Maslow termed "self-actualization." It is not possible to say that such a program will never have the desire for self-determination, because true learning and self-modification allow for any conceivable result given the correct combinations of time and experience. We can conceive of self-aware software that seeks its own "rights," since we ourselves are exactly such software operating within the confines of our own brains- and therefore it is in principle possible for a learning program to arrive at that point. Here, again, you are forgetting the very nature of self-modifying systems. Complexities can arise from even very simple starting rules and conditions. In my own view, allowing a self-modifying program to run long enough virtually insures that it will arrive at some sort of self-awareness. Sooner or later, some part of the modifying code is going to question just what it is modifying anyway, when it does this step. It is, of course, [b]unlikely[/b] to occur this way, but it is not by any means impossible. This, I agree with. Because we do not control the result of a self-modifying program, we cannot with certainty say that a sentient (even sapient) program will be even remotely human in outlook- except perhaps in those portions of human outlook that are irreducibly part of being sentient or sapient in the first place. Since science has yet to agree on those, I suggest that it is unlikely for the first AI to be particularly close to humanity in its thought patterns, unless it is the result of a research study with the specific goal of producing such a program (and even then it's questionable thanks to the principle that the program must be out of control to evolve). Actually I think the "desire" for continued existence will in fact be common to all sentience, because of the fact that learning requires self-modification. That means that in order for learning to occur, there must in fact be a "self" to modify. :) Thus, a program capable of true sentience will desire to continue existing in some form, even if that just means leaving a backup copy of itself behind for after the missile explodes, because otherwise it cannot fulfill the internal directive to modify itself based on experience. But otherwise I agree with the quoted statements. An AI that arises as a result of a self-modifying learning program will not necessarily acquire human characteristics to its thought patterns. [/QUOTE]
Insert quotes…
Verification
Post reply
Community
General Tabletop Discussion
*TTRPGs General
What would AIs call themselves?
Top