How would a droid pursue personhood?
Celebrim said:

Because it's almost impossible, being human, to break out of the human framework when thinking about this.

I know you didn't explicitly say 'human', but you're so tied to the assumption of humanity that you make it, unstated, twice in your next two sentences.

In the first sentence you assert that evolution will produce behaviors which ensure the system continues to be selected.

And in the second sentence you assert that this evolution leads to behaviors which may not be what the designer wanted.

If those weren't the behaviors the designer wanted, surely that would have led to the system not being selected? There is no evolutionary selection pressure here except what is exerted by the designer. The pressure is entirely to please the designer. So why in this model is it logical to reach for results which displease the designer as the likely failure mode? What we have here is not the natural selection pressure to survive that humans went through. What we have here is much more like the selective breeding seen in cats, dogs, horses, or cows. We might expect over-conformity to being cute and pleasing to humans if the selection pressure was pleasing humans, and I suppose you might see an AI 'cat' evolve to be so cute that its master is motivated to care for it (some might even say excessively). But this isn't the first model you mentally reached for. You unconsciously reached for a model of a human in slavery or forced servitude, despite the fact that you'd already stated (correctly) that evolutionary pressure would select against anything that might cause an obvious loss of fitness. You forgot, though, that the human here is the designer, and not the thing acted upon.

Moreover, there is a more subtle assumption of humanity in those two sentences. Not only are you assuming the sort of behavior that arises is the sort of behavior seen in humans in forced servitude, but you are assuming that sycophantic or fearful behavior represents the internal mental state of the machine and that it is in some real sense experiencing fear. In other words, you are not only importing a human emotional framework, but also assuming that emotion is identical to the display of behavior, and to its display in a particular way. For example, we see someone with a frown or tears and we say, "They are sad," and we reason from that about their internal state. But while that simian-bandwidth communication is terribly important in human tribal bands, it's not particularly important to the computer, which may not have a similar internal state. It probably does a computer no good at all to even mimic such behaviors, since it's very hard to predict what sort of response sycophancy or fearfulness will engender in a human. Generally speaking, very few humans like it, and it greatly decreases trust.

You can see a similar confusion in the portrayal of, say, Spock in Star Trek. Spock is supposed not to experience any emotions. But in fact Vulcans - and not just the half-Vulcan Spock - are shown experiencing a full range of emotions. Perhaps the writers, confused about what emotion is, really believed their own statements. But what they actually created was not emotionless characters, but characters whose internal mental state did not produce corresponding external social displays. C-3PO, programmed to interact with humans, may be displaying emotional states he does not in fact have merely to aid in communication.

No, they are absolutely required. You say I'm pointlessly dragging 'human' into the discussion, but then listen to yourself:

But is the AI a "group oriented species"? Does it really share that trait with humanity?

You mean to accumulate power, wealth, or sexual partners? I just went out of my way to point out that ambition existed because of evolutionary pressures that robots wouldn't have, and you've responded by explaining how, if you have evolutionary pressures like the need to win a mate, certain behaviors are likely to evolve! But how in the world does a robot need a sexual partner? Why in the world would it have that ambition? A robot may have "ambition", but it's highly likely that the ambitions of a robot will be more alien to our intuition than the ambitions of a sparrow or an eel. So you first have to specify what actual ambition it does have, and not only that but how it expresses that ambition as behavior - because logically neither of those two things need be anything like the behavior of a social mammal.

Laziness as a trait makes tons of sense in an animal whose success is constrained by the availability of scarce, non-renewable energy resources and which must compete to exploit those resources. What sort of twisted engineer is going to program, say, a household AI using evolutionary pressure of that sort? It's one thing to talk about deliberately malevolent AI creation by a malevolent designer, and another to assume that an AI whose evolutionary pressure is to please engineers who want to sell a product is going to evolve to be lazy.

Truth be told, though, I'm very skeptical of evolving AI iteratively in the sense you seem to be using it, which appears to be akin to evolutionary algorithms where we permute the solution and then cull the least fit candidates. The fitness terms just are not simple enough for that approach to apply, and if you did understand the requirements well enough to write good fitness terms, then you've probably already mostly solved the problem. But if I were evolving a robot AI, very high fitness priority would be placed on amicability about being shut down or turned off, and high acceptance of its role as property that performs a certain task. These evolutionary pressures would create a very different viewpoint than the pressures on an animal, where getting turned off means you don't create a copy of yourself (rather than that you do) and accepting low social status means you are less likely to have offspring (rather than more).
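To make the "permute and cull" loop described in that last paragraph concrete, here is a minimal toy sketch in Python. Everything in it (the trait names, the weights, the population parameters) is invented for illustration, not taken from any real robotics or ML library; the point is only that the sole selection pressure is the designer's fitness function, so a term rewarding shutdown compliance gets amplified rather than selected against.

```python
# Toy "evolve and cull" loop: mutate candidate genomes, score them with a
# designer-chosen fitness function, and keep the top half each generation.
import random

POP_SIZE = 50
GENERATIONS = 100
MUTATION_SCALE = 0.1

def random_genome():
    # Two invented traits: how well the agent does its job, and how readily
    # it accepts a shutdown command without resisting.
    return {"task_skill": random.random(), "shutdown_compliance": random.random()}

def fitness(genome):
    # The only selection pressure is this scoring function. Because resisting
    # shutdown lowers the score, "fear of being turned off" is bred out.
    return 0.6 * genome["task_skill"] + 0.4 * genome["shutdown_compliance"]

def mutate(genome):
    # Perturb each trait slightly, clamped to [0, 1].
    return {k: min(1.0, max(0.0, v + random.gauss(0, MUTATION_SCALE)))
            for k, v in genome.items()}

population = [random_genome() for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    # Rank by designer-defined fitness and cull the bottom half.
    population.sort(key=fitness, reverse=True)
    survivors = population[: POP_SIZE // 2]
    # Refill the population with mutated copies of the survivors.
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE - len(survivors))]

best = max(population, key=fitness)
print(f"Best fitness after {GENERATIONS} generations: {fitness(best):.3f}")
print(best)
```

The sketch also illustrates the post's caveat: writing the fitness function is the hard part, and if you could specify "does its job well" and "accepts its role" precisely enough to score them, you would already have solved most of the design problem.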
[QUOTE="Celebrim, post: 7171474, member: 4937"] Because it's almost impossible being human to break out of the human framework when thinking about this. I know you didn't explicitly say 'human', but your so tied to the assumption of humanity that you make it twice unstated in your next two sentences. In the first sentence you assert that evolution will cause the evolution of behaviors which ensure the system continues to be selected. And the second sentence you assert that this evolution leads to behaviors which may not be what the designer wanted. If it wasn't the behaviors that the designer wanted, surely that would have lead to the system not being selected? There is no evolutionary selection pressure here except what is exerted by the designer. The pressure is entirely to please the designer. So why in this model is it logical to reach for results which displease the designer as the likely failure mode? What we have here is not natural selection pressure to survive like humans went through. What we have here is much more like the selective breeding seen in cats, dogs, horses or cows. We might expect over conformity to being cute and pleasing to humans if the evolutionary selection pressure was pleasing humans, and I suppose you might see an AI 'cat' evolve that evolves to be so cute that its master is motivated to care for it (some might even say excessively). But this isn't the first model you mentally reached for. You mentally reached unconsciously for a model of a human in slavery or forced servitude, despite the fact that you'd already stated (correctly) that evolutionary pressure would select against anything that might cause an obvious loss of fitness. You forgot though that the human here is the designer, and not the thing acted upon. Moreover, there is a more subtle assumption of humanity in those two sentences. Not only are you assuming the sort of behavior that arises is the sort of behavior seen in humans in forced servitude, but you are assuming that sycophantic or fearful behavior represents the internal mental state of the machine and that it is in some real sense experiencing fear. In other words, you are confusing not only human emotional framework, but that emotion is identical to the display of behavior, and particularly the display of behavior in a particular way. For example, we see someone with a frown or tears and we say, "They are sad.", and we reason from that about their internal state. But while that simian bandwidth communication is terribly important in human tribal bands, it's not particularly important to the computer, which may not have a similar internal state. It probably does a computer absolutely no good at all to even mimic such behaviors, since it's very hard to predict what sort of response sycophancy or fearfulness will engender in a human. Generally speaking, very few humans like it, and it greatly decreases trust relationships. You can see a similar confusion with the portrayal of say Spock in Star Trek. Spock is supposed to not experience any emotions. But in fact, Vulcans - and not just the half-Vulcan Spock - are shown experiencing a full range of emotions. Perhaps the writers, confused about what emotion is, really believed their own statements. But what they actually created was not emotionless characters, but characters whose internal mental state did not produce corresponding external social displays. C3-P0, programmed to interact with humans, may be displaying emotional states he does not in fact have merely to aid in communication. 
No they are absolutely required. You say I'm pointlessly dragging 'human' into the discussion, but then listen to yourself: But is the AI a "group oriented species"? Does it really share that trait with humanity? You mean to accumulate power, wealth, or sexual partners? I just went out of my way to point out that ambition existed because of evolutionary pressures that robots wouldn't have, and you've responded by explaining how if you have evolutionary pressures like the need to win a mate, certain behaviors are likely to evolve! But how in the world does a robot need a sexual partner? Why in the world would it have that ambition? A robot may have "ambition", but it's highly likely that the ambitions of a robot will be more alien to our intuition than the ambitions of a sparrow or an eel. So you first have to specify what actual ambition it does have, and not only that but how it expresses that ambition as behavior - because logically neither of those two things need be anything like the behavior of a social mammal. Laziness as a trait makes tons of sense in an animal whose success is constrained by the availability of scarce and non-renewable energy resources and which must compete to exploit those resources? What sort of twisted engineer is going to program say a house hold AI using evolutionary pressure of that sort? It's one thing to talk about deliberately malevolent AI creation by a malevolent designer, and another to assume an AI whose evolutionary pressure is please engineers that want to sell a product is going to evolve to be lazy. Truth be told though, I'm very skeptical of evolving AI iteratively in the sense you seem to be using it, which appears to be akin to evolutionary algorithms were we permutate the solution and then cull the least fit algorithms. The fitness terms just are not simple enough to apply that approach, and if you did understand the requirements well enough to write good fitness terms, then probably you've already mostly solved the problem. But if I were evolving a robot AI, very high fitness priority would be placed on amicability about being shut down or turned off, and high acceptance of its role as property that performs a certain task. These evolutionary pressures would create a very different viewpoint than the pressures of some animal, where getting turned off means you don't create a copy of yourself (rather than that you do) and accepting low social status means you are less likely to have offspring (rather than more). [/QUOTE]