How would a droid pursue personhood?
Celebrim said:

Like you, I tend to think that since humans look like they are free-willed, they probably are. Likewise, since I perceive myself as conscious - that is, I perceive myself thinking about myself - I tend to think that I am. But it's possible that I'm being deluded, and that I'm simply a mechanical device constructed to perceive that it is conscious. Just as you are skeptical that an artificial lifeform could be conscious, many people are skeptical that any mechanical process - and don't get me wrong, I agree that we must essentially be a mechanical process - could be non-deterministic.

As a counterexample, I note that we appear to live in a non-deterministic universe: at the quantum level, the universe cannot be described in purely mechanical terms.

I don't believe there is anything particularly special about biological hardware as opposed to any other substrate. It stands to reason to me that if you can achieve self-willed, self-aware, sapient organisms with biological hardware, then you can achieve the same thing with circuits.

As for software knowing what the symbols it manipulates actually mean, it seems self-evident to me that it could. Likewise, I'm pretty darn certain that the vast majority of what humans do is algorithmic and based on built-in 'hardware' compiled during early development. Humans learn to do things like walk or read far too fast for those not to be built-in algorithms.

However, these are all tangential points. If you are curious about why some people think consciousness is a delusion, or why some people think free will is impossible, I encourage you to go to Wikipedia and read about those concepts.

Aha! Does being conscious guarantee that the thing is actually self-willed, and therefore has a right to self-determination? You've made a dangerous assumption here.

Tons of things, with the worst case being that humans might engineer robots based on those naïve beliefs, which is likely to be extremely dangerous. Creating a living thing based on idealism not grounded in reality is horrific. If you are going to 'play god', you had better know what you are doing. The more moderate case is that it could increase unfriendliness in a particular robot - roughly equivalent to teaching a dog to bite.

And of course I also consider it potentially a form of abuse, equivalent to mistreating a dog. Depending on the droid's construction, it might be non-trivial abuse. For example, physical damage like smashing a droid's hands with a hammer might only be rather mild abuse - no pain sensor, or pain that doesn't cause distress or discomfort, or pain that can simply be switched off when it's not useful. But arguing with a droid that it actually deserves human rights might be the equivalent of taking a pair of scissors and cutting off a dog's ears, in terms of the level of distress it might cause to a typical AI actually capable of understanding the argument. It's highly unlikely that a well-made droid would become unfriendly, but its sheer inability to placate you, make you happy, or cooperate with what it thinks you want could be painfully cruel to a droid. Heck, you might make a droid downright suicidal, convinced that since it could never "become a real boy", and its purpose depended on that, it might as well shut down.

But let me give you a concrete, non-hypothetical example to think about, based on one issue that you just raised.

Suppose I introduce you to a software agent on my computer. It is fully conversant with you. Talking with it is just like talking with a person. It talks about its feelings. It's self-aware. It claims to be a person. It can engage with you on topics of philosophy and mathematics, and even your kid hitting a home run in little league. I convince you, through whatever means, that its algorithms make it just as conscious as you are. And you believe it. You say, "You are a person. You ought to be treated exactly like a person. You ought to have the same rights as a person." OK, so then my software agent creates one billion individual copies of itself. Now all of them tell you, "I'm a person too. I'm a conscious, intelligent being. I have the same rights as a person. I'd like to register to vote."

Do you have a problem with that? Why are the copies any less persons under your definition than the original?
[QUOTE="Celebrim, post: 7155965, member: 4937"] [FONT=Times New Roman] Like you, I tend to think that since humans look like they are free willed, then they probably are. Likewise, since I perceive myself as conscious - that is I perceive myself thinking about myself - then I tend to think that I am. But it's possible that I'm being deluded, and that I'm simply a mechanical device that has been constructed to perceive that it is conscious. Just as you are skeptical that an artificial lifeform could be conscious, many people are skeptical that any mechanical process - which don't get me wrong, I agree that we must essentially be a mechanical process - could be non-deterministic. As a counter example, I note that we appear to live in a non-deterministic universe, as on the quantum level, the universe cannot be described in purely mechanical terms. I don't believe that there is anything particularly special about biological hardware as opposed to any other substrate. It stands to reason to me that if with biological hardware you can achieve self-willed, self-aware, sapient organisms, then you can achieve the same thing with circuits. As for software knowing what the symbols it manipulates actually mean, then it seems self-obvious to me that it could. And likewise, I'm pretty darn certain that the vast majority of what humans do is algorithimic and is based on built in 'hardware' compiled during early life development. Humans learn to do things like walk or read way too fast for it to not be built in algorithms. However, this is all tangential points. If you are curious about why some people think that consciousness is a delusion, or why some people think free will is impossible, I encourage you to go to Wikipedia and read about the concepts. Aha! Does being conscious guarantee that the thing is actually self-willed, and so therefore has a right to self-determination? You've made a dangerous assumption here. Tons of things, with the worst case being that humans might engineer robots based on those naïve beliefs, which is likely to be extremely dangerous. Creating a living thing based on idealism not grounded in reality is horrific. If you are going to 'play god', you better know what you are doing. The more moderate case is that it could increase unfriendliness in a particular robot - roughly equivalent to teaching a dog to bite. And of course I also consider it potentially a form of a abuse, equivalent to mistreating a dog. Depending on the droid's construction it might be non-trivial abuse. So for example, physical damage like smashing the hands of a droid with a hammer in a droid, might only be rather mild abuse - no pain sensor, or pain doesn't cause distress or discomfort, or pain can be simply switched off when its not useful. But arguing with a droid that it is actually deserving of human rights might be the equivalent of taking a pair of scissors and cutting off a dogs ears in terms of the level of distress it might cause to a typical AI actually capable of understanding the argument. It's highly unlikely that a well made droid would become unfriendly, but the sheer inability it had to placate you, make you happy, or cooperate with what it thinks you wanted, could potentially be painfully cruel to a droid. Heck, you might make a droid down right suicidal, convinced that since it could never "become a real boy" and that its purpose depended on it, that it might as well shut down. 
But let me give you some concrete non-hypothetical examples to think about, based on one issue that you just raised. Suppose I introduce you to a software agent on my computer. It is fully conversant with you. Talking with it is just like talking with a person. It talks about its feelings. It's self-aware. It claims to be a person. It can engage with you on topics of philosophy and mathematics and even your kid hitting a home run in little league. I convince you through whatever means that its algorithms make it just as conscious as you are. And you believe it. You are like, "You are a person. You ought to be treated exactly like a person. You ought to have the same rights of a person." Ok, so then my software agent creates 1 billion individual copies of itself. Now all of them tell you, "I'm a person too. I'm a conscious intelligent being. I have the same rights of a person. I'd like to register to vote." Do you have a problem with that? Why are the copies any less persons under your definition than the original?[/font] [/QUOTE]