How would a droid pursue personhood?
<blockquote data-quote="Celebrim" data-source="post: 7154801" data-attributes="member: 4937"><p>I would be very surprised if starfish do not, and in fact I think I can definitively say that they do. Indeed, starfish, primitive though they are, are probably conscious to some degree. Basically, anything that can sense its environment and make appropriate choices about how to respond is intelligent. Trees are, surprisingly, intelligent under this definition. They can even communicate with neighboring trees. It's not anything like an experience we as humans have, and I don't think (though obviously don't know) that trees are conscious, but they are intelligent - or at least, considerably more intelligent than a rock. </p><p></p><p>Starfish, I think, are probably self-conscious because they appear to have multiple internal 'critics' and the ability to choose between those critics to engage in goal-oriented behavior. They may even have a meta-critic that continually reviews the inputs of their critics (in starfish, one for each leg) and decides which critic has priority. Their consciousness isn't nearly as sophisticated as human consciousness, or even mouse consciousness, but they probably have one. We can guess that they probably have some sort of simple emotional framework, so that the starfish knows when it is a content starfish.</p><p></p><p></p><p></p><p>We can be certain that regardless of whether dolphins are persons, they don't have the full rights of humans, because they cannot take on the responsibilities implied by those rights. For example, we'd never presume to rely on a dolphin's moral judgment, especially with regard to anything but itself. But we can be pretty sure that though dolphins are somewhat intelligent, self-aware beings, they are basically like dogs or elephants and most apes - owed and having more rights than starfish, but not as many as, say, people. 
Again, we wouldn't rely on a dog's ability to plan for the future, and so any right that was owed to a being's ability to plan for its own future would be necessarily limited in a dog.</p><p></p><p></p><p></p><p>On the contrary, most characters know that droids are self-conscious, intelligent beings and so presumably agree that droids are persons. But unlike you, they are perfectly happy to think of there being different sorts of persons, each owed different treatment - 2nd-class or 6th-class persons. Some probably think of this as a hierarchy, though in fact that is, for reasons I tried to explain earlier, wrong. </p><p></p><p></p><p></p><p>Presumably the creator knows. But really, I find your position incoherent. How do I know you aren't anything more than a good simulation of intelligence? </p><p></p><p></p><p></p><p>This is a very important point, and one that was missed earlier when someone tried to claim that a chat bot was intelligent. It's not. It may employ various techniques that are employed in artificial intelligence, but Microsoft's failed attempts at a chat bot are really no more than Eliza was. At no point did Microsoft's chat bot have a goal or a means of filtering what it did in any directed way. It never was more intelligent than a rock. By contrast, a self-driving car - even if it doesn't understand the symbols it receives as fully as a human does and is also engaged in apparently rote behavior - is intelligent in the same way the starfish is. Indeed, it might even be as intelligent as a starfish. I think it is obvious, though, that it doesn't have as many rights as the starfish, which itself doesn't have many.</p><p></p><p></p><p></p><p>That's a very good question. But I don't think you have the correct answer to it. I grant that it is not obvious why you are wrong, and developing why you are wrong is going to take a lengthy conversation.</p><p></p><p></p><p></p><p>This is the first sign you are wrong. Your criterion for 'personhood' is vague. 
It's so vague that even you admit you can't define it. More to the point, it's humanocentric. You assume personhood if the thing has qualities that are similar to yours, and you are unconsciously trying to turn this into a binary question (just as I predicted you would): either you are a person or you are not, and there is a line somewhere that hard-divides the two. In fact, there is no such line. It's all a giant fuzzy continuum. Nor is there even one criterion, such that there would be a single scale or axis of personhood. In fact, there are many - some of which, in our ignorance of the possible diversity of beings, we aren't even aware of.</p><p></p><p></p><p></p><p>What would that even mean? </p><p></p><p></p><p></p><p>Perhaps, but how do you know that the thing you are communicating with isn't merely a simulation of personhood? You are here applying a Turing-test standard to organic life and not to inorganic life? Is it because the inorganic life is artificial? You are already in violation of your own standards. Your standards are incoherent. And if you talked to someone with Down's Syndrome (or an equivalent), would you then decide they are non-persons because they couldn't follow your discussion of mathematics or ethics? Would you decide that only the 'geniuses' of a species were across the line? And why are you picking standards of intelligence that match exactly the problems that humans consider hard? You might find the thing completely conversant in math, only to discover later it was an AI version of Wolfram's website designed to aid mathematicians. Why not choose the ability to throw a ball accurately, which is at least as computationally expensive as most math problems, as your proof of intelligence? 
Or why not choose the ability to make statistical inferences accurately as your proof - other than the fact that, by this test, humans would fail an intelligence test given by a species that could?</p><p></p><p></p><p></p><p>What if it is just programmed to say it is a person? Or what if it is not a person, but has decided you would prefer that it be a person, and so is saying that it is a person as part of some goal-driven behavior to make you happy or to get what it wants? How would you know? </p><p></p><p></p><p></p><p>Yes, but neither has 'human rights' by definition. There might be some joint 'person rights' they all share, but there might be differences in the rights inherent to each. What's really important here is that you've chosen persons that are almost identical to humans. They are medium-sized, bipedal, organic creatures with similar IQs and roughly equivalent capabilities. They all manipulate tools. They are all species that produce individuals. They all seem to breathe similar air at similar air pressures and similar temperatures. They all form family units and all seem to have similar ideas of ethical behavior. In short, you've chosen aliens that are really just humans with bumps on their heads. It's not surprising at all that 'Wookiee Rights' would turn out to be almost perfectly congruent with 'Human Rights'.</p><p></p><p></p><p></p><p>Among other things, yes. Emphatically yes. C-3PO is vastly more alien a person than Chewbacca is. There are vastly more things different about C-3PO compared to a human than about Chewbacca. So we should not at all be surprised if 'Protocol Droid Rights' are much more different from 'human rights' than 'Wookiee rights' are. The more you actually address the question "What is it about C-3PO that makes him different?" in a non-rhetorical way, the better your answer is going to be.</p><p></p><p></p><p></p><p>I think that's emphatically nonsense, and dangerous nonsense at that. 
For one thing, if I can have that created creature assert that it is property because I programmed it to do so, then surely I can have it emphatically assert to you that it is not a person. Why would you believe it when it says it isn't a person, when you are prepared not to believe it about being property? And why does it matter what humans would think if they found themselves in the same situation, given that humans are not property? And remember, you have set yourself up to be easily tricked by my devious programming, because you have asserted that there is a difference between simulating intelligence and being intelligent! Conversely, what if my simulation of intelligence includes asserting that I'm a person? Would you not believe it then?</p><p></p><p></p><p></p><p>Why? That's a direct contradiction of your basis for believing something is a 'person', which you claimed was based on consciousness and intelligence. </p><p></p><p></p><p></p><p>Again, this is incoherent on the basis of what you said established personhood.</p><p></p><p></p><p></p><p>Why is slavery wrong? More importantly, how would this standard - that it is a person if it says it is a person and can demonstrate it by passing a Turing test in mathematical theory and philosophy (a test you are wishy-washy on, by your own admission) - actually work in practice when you tried to apply it to AI?</p></blockquote><p></p>