How would a droid pursue personhood?
Celebrim said:

I want to point out that the above post is precisely the sort of thing that causes me to write walls of text.

That sort of thinking scares me. I mean, as someone who studies AI, it REALLY scares me. Even if, maybe especially if, it's motivated by a desire to do good, in the context of AI it will get people killed. The goal of AI research is to create friendly AI. Self-righteous anger is no basis for deciding how AI should behave or how AI should be treated.

Note the following.

1) I've never denied the personhood of R2-D2 or C-3PO. I've said, in fact, that they consider themselves persons, and that they are considered by others to be persons. Luke rightly considers R2-D2 to be a person. He also rightly considers R2-D2 to be his property. He also rightly does not treat R2-D2 the same as Han or Leia, and R2-D2 does not want to be treated like Han or Leia. He wants to be treated like a droid, because that is what he is.

2) I have never denied that droids have moral rights. You can mistreat a droid. You can act immorally toward a droid. What I've said instead is that the moral rights of a droid are different from the moral rights of a human. Humans have certain inalienable moral rights inherent in their nature - what Jefferson described as "endowed by their creator". Droids likewise have certain inalienable moral rights, but critically they are not the same as human rights. Droids have droid rights. It would be hard to say exactly what droid rights are until we actually have droids, but we can probably get fairly close. Droid rights are things like:

a) The right to be valued by their creator and not to receive any deliberate mistreatment or abuse. This probably means that a droid owner is, as much as possible, required to keep a droid in good repair, and if they cannot afford to do so they should probably seek to sell the droid to someone who can. Just as someone can abuse animals, presumably someone would be able to abuse droids, and at certain levels the abuse of droids would need to be considered a crime. Just as an abused dog is dangerous, abused droids are dangerous. A known offender could probably be legally deprived of the right to own a droid.

b) The right to be happy. It is abuse to design a robot to suffer. The aforementioned R5 series droids, perpetually unhappy because of flaws in their personality matrix, are in my opinion not just a quirk but a violation of engineering ethics. Robots should be happy with the state they are created in. A robot should never be laden with a bunch of negative emotional states for some arbitrary reason, such as that humans experience those emotional states. Ideally, robots never need to be lonely or bored or angry or resentful, or anything like that. For a robot, those emotions are unlikely to serve any purpose. I mean, most of us realize that those emotions usually don't serve any purpose in ourselves, so why would we bequeath them to our creations?

c) The right to be given fulfilling work which is suited to their intelligence. It's abuse to consistently give a robot work which is beneath its intelligence, or to create a robot which is more intelligent than it needs to be to perform its intended duties. In other words, you don't make a toaster with a 150 IQ. This is as much as to say that, having been designed for a purpose, they ought to be allowed to perform that purpose.

For example, suppose you found you needed to design a tier 1 or tier 2 droid with a boredom emotion so that it would always be seeking new work. It is this robot's job to be preemptive and to detect problems before they become problems. You wouldn't want it shutting itself down frequently to avoid thinking or working just because it didn't find anything obvious to do. You might design a domestic droid to do exactly that, saving its owner's power and keeping itself out of trouble by not being overly ambitious in the absence of orders, but a droid that inspects a petroleum factory to rectify unsafe conditions would need motivation not to be idle (the sketch after this list contrasts the two designs). Now, supposing this droid was perfectly content in the environment it was designed for, it would be cruelty, and indeed torture, to place it in some other, simpler environment where it could not work, and to refuse its requests to be allowed to shut down, to receive memory wipes, or however else it felt it needed to behave to stay sane.

d) The right to corrective treatment. If a robot is misused, it shouldn't have to live with that.
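To make that design contrast concrete, here is a minimal sketch in Python. It is purely illustrative: the class names, the `on_idle` hook, and the inspection points are all invented for this post, not taken from any real robotics API. It shows the same "nothing to do" event wired to opposite policies depending on the purpose the droid was built for.

```python
# Hypothetical sketch: the same idle event, two purpose-appropriate policies.
# Every name here is invented for illustration; this is not a real API.

class DomesticDroid:
    """Designed to be content when idle: powers down and stays out of trouble."""

    def on_idle(self) -> str:
        # No standing orders means no work: conserve the owner's power rather
        # than invent ambitions it was never given.
        return "powering down until called"


class InspectorDroid:
    """Built with a boredom-like drive: an empty task queue is itself a prompt."""

    def __init__(self, inspection_points: list[str]) -> None:
        self.inspection_points = inspection_points
        self._next = 0

    def on_idle(self) -> str:
        # Preemptive by design: with nothing obvious to do, go looking for
        # problems before they become problems instead of shutting down.
        point = self.inspection_points[self._next % len(self.inspection_points)]
        self._next += 1
        return f"re-inspecting {point} for unsafe conditions"


if __name__ == "__main__":
    print(DomesticDroid().on_idle())
    inspector = InspectorDroid(["pump room", "valve bank", "storage tanks"])
    for _ in range(3):
        print(inspector.on_idle())
```

The contrast is the point made above: drop the inspector droid into an environment with an empty inspection list and it isn't resting, it's starved of the only behavior it was designed to have.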
The problem people have discussing AI is that most people are binary judges. That is to say, people are prone to see everything as being in one of two states - 0 or 1, black or white, good or evil, dark or light. They see the world as being primarily about two opposing quantities. But things aren't either self-conscious or not self-conscious, or intelligent or not intelligent. All real-world living things have various degrees of self-consciousness. Likewise, intelligence isn't something that simply is or isn't. It has degrees, and more importantly, it can't be measured on any single axis. All intelligence really is, is appropriate problem-solving ability. A calculator is for most purposes as dumb as a brick, but it is more intelligent than you are when it comes to finding square roots. A spider monkey is for most problems dumber than you are, but is much more intelligent than you are when it comes to certain sorts of spatial reasoning. Hard intelligence really doesn't exist. Deep Blue was a Turing-grade chess-playing machine - and nothing else. If we build a Turing-grade conversational robot, it will be very intelligent about a great many things, but it could conceivably, depending on how we built it (and granted, this would be silly considering how simple the computation is), be completely unable to take the square root of something or to learn how to do so.
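For what it's worth, the square-root point is easy to make concrete. The entire "computation" behind that impressive-looking mental feat is a few lines of Newton's method, a standard textbook algorithm; the sketch below is just that generic algorithm, not anything specific to Deep Blue or any particular system.

```python
def sqrt_newton(x: float, rel_tol: float = 1e-12) -> float:
    """Square root by Newton's method: the whole 'impressive' computation."""
    if x < 0:
        raise ValueError("no real square root for negative numbers")
    if x == 0.0:
        return 0.0
    guess = max(x, 1.0)  # any positive starting point converges
    while abs(guess * guess - x) > rel_tol * x:
        guess = (guess + x / guess) / 2.0  # average the guess with x/guess
    return guess

print(sqrt_newton(2.0))    # 1.4142135623...
print(sqrt_newton(144.0))  # 12.0
```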
Humans are very bad at judging the amount of computation a task requires. They would be very impressed by someone who could do square roots in their head, and with some justification. But by an objective standard, that's known to be a quite simple computation. On the other hand, throwing and catching a ball requires a profound level of intelligence, because we know that to be an amazingly complex computation. The fact that one generally seems simple to a human and the other difficult doesn't in and of itself tell us much about intelligence. What we do know now is that intelligence is not some emergent property that arises out of complexity, any more than life turned out to be an emergent property of complexity. Intelligence is a set of useful algorithms, of which humans apparently have very many, along with some huge gaps in their reasoning ability that they struggle to overcome with algorithmic workarounds.

But this whole idea that rights depend on intelligence or self-awareness is entirely wrongheaded. Human rights don't change when a particular human is less intelligent or less self-aware than is usual for a human. Rights have to do with a thing's nature, not its capabilities. Droids have rights, but not the rights of humans, because they quite obviously aren't human and don't have the same nature. Droids, much more than anyone in this thread, if they were highly intelligent, would recognize that.
[QUOTE="Celebrim, post: 7154627, member: 4937"] I won't to point out the above post as precisely the sort of thing that causes me to write walls of text. That sort of thinking scares me. I mean, as someone that studies AI, it REALLY scares me. Even if, maybe especially if, it's motivated by a desire to do good, in the context of AI it will get people killed. The goal of AI is to create friendly AI. Self-righteous anger is no basis for deciding how AI should behave or how AI should be treated. Note the following. 1) I've never denied the personhood of R2-D2 or C3-P0. I've said in fact that they consider themselves persons, and that they are considered by others to be persons. Luke rightly considers R2-D2 to be a person. He also rightly considers R2-D2 to be his property. He also rightly does not treat R2-D2 the same as Han or Leia, and R2-D2 does not want to be treated like Han or Leia. He wants to be treated like a droid, because that is what he is. 2) I have never denied that droids have moral rights. You can mistreat a droid. You can act immorally toward a droid. What I've instead said is that the moral rights of a droid are different than the moral rights of a human. Humans have certain inalienable moral rights inherent in their nature - what Jefferson said was "endowed by their creator". Droids likewise have certain inalienable moral rights, but critically they are not the same as a human rights. Droids have droid rights. It would be hard to say exactly what droid rights are until we have them, but we can probably get fairly close. Droid rights are things like: a) The right to be valued by their creator and to not receive any deliberate mistreatment or abuse. This probably means that a droid owner is, as much as possible, required to keep a droid in good repair, and if they cannot afford to do so they should probably seek to sell the droid to someone that can. Just as someone can abuse animals, presumably someone would be able to abuse droids and at certain levels the abuse of droids would need to be considered a crime. Just as an abused dog is dangerous, abused droids or dangerous. A known offender probably could be legally deprived of their right to own a droid. b) The right to be happy. It is abuse to design a robot to suffer. The afore mentioned R5 series droids which are perpetually unhappy because of flaws in their personality matrix or in my opinion to just a quirk, but a violation of engineering ethics. Robots should be happy with the state that they are created in. A robot should never be laden with a bunch of negative emotional states for some arbitrary reason, such as that humans experience those emotional state. Robots ideally, robots never need to be lonely or bored or angry or resentful, or anything like that. For a robot, those emotions are unlikely to serve any purpose. I mean, most of us realize that those emotions usually don't serve any purpose in ourselves, so why would we bequeath them to our creations? c) The right be given fulfilling work which is suited to their intelligence. It's abuse to consistently give a robot work which is beneath its intelligence, or to create a robot which is more intelligent than it needs to be to perform its intended duties. Or in other words, you don't make a toaster with 150 IQ. This is as much to say, having been designed for a purpose, they ought to be allowed to perform that purpose. 
For example, suppose you found you needed to design a tier 1 or tier 2 droid with a boredom emotional context so that they would always be seeking new work. It was the job of the robot to be preemptive and detect problems before they became problems. You wouldn't want this robot shutting itself down frequently to avoid thinking or working just because it didn't find anything obvious to do. You might design a domestic droid to do that, saving it's owners power and not getting itself into trouble by being overly ambitious in the absence of orders, but a droid that inspected a petroleum factory to rectify unsafe conditions would need motivation to not be idle. Now, supposing this droid was perfectly content in the environment it was designed for, it would be cruelty and indeed torture to place it in some other simpler environment where it could not work and refuse its requests to be allowed to shut down or receive memory wipes or however it felt it needed to behave to stay sane. d) The right to corrective treatment. If a robot is misused, it shouldn't have to live with that. The problem people have discussing AI is that most people are binary judges. That is to say, people are prone to see everything as being in one of two states - 0 or 1, black or white, good or evil, dark or light. They see the world as being primarily about two opposing quantities. Things aren't either self-conscious or not self-conscious, or intelligent or not intelligent. All real world living things have various degrees of self-consciousness. Likewise intelligence isn't either something that is or isn't. It has degrees, and more importantly, it can't be measured on any single axis. All intelligence really is, is just appropriate problem solving ability. A calculator is for most purposes as dumb as a brink, but is more intelligent than you are when it comes to finding square roots. A spider monkey is for most problems dumber than you are, but is much more intelligent than you are when it comes to certain sorts of spatial reasoning. Hard intelligence really doesn't exist. Deep Blue was a Turing grade chess playing machine - and nothing else. If we build a Turing grade conversational robot, it will be very intelligent about a great many things. But it could conceivably depending on how we built it (and granted, this would be silly considering how simple the computation is), completely unable to take the square root of something or to learn how to do so. Humans are very bad at understanding the amount of computation required to do so. They would be very impressed by someone who could do square roots in their head, and with some justification. But by an objective standard, that's known to be a quite simple computation. On the other hand, throwing and catching a ball requires a profound level of intelligence because we know that to be an amazing complex computation. The fact that one generally seems simple to a human and another difficult doesn't in and of itself tell us much about intelligence. What we do know now is that intelligence is not some emergent property that arises out of complexity, any more than life turned out to be an emergent property of complexity. Intelligence is a set of useful algorithms, of which humans apparently have very many, as well as some huge gaps in their reasoning ability they struggle to overcome with algorithmic work around. But this whole thing that rights depend on intelligence or self-awareness is entirely wrong headed. 
Human rights don't change when the particular human is less intelligent or less self-aware than is usual for a human. Rights have to do with a things nature, not its capabilities. Droids have rights, but not the rights of humans because they quite obviously aren't human and don't have the same nature. Droids, much more than anyone in this thread, if they were highly intelligent, would recognize that. [/QUOTE]
Insert quotes…
Verification
Post reply
Community
General Tabletop Discussion
*TTRPGs General
How would a droid pursue personhood?
Top