How would a droid pursue personhood?

Celebrim

Legend
From what I know about droids, their personalities come from "errors" in programming which eventually develop into eccentricities. A new R2 unit would do everything it is told, but R2-D2 has been around for so long (presumably without having its memory erased) that it has developed qualities like disobedience, self-preservation, and bravery. These are all attributed to not having a memory-wipe.

I generally agree that droids would develop eccentricities over time. But I wouldn't attribute this to errors in design so much as errors in input. Their networks evolve to conform to their experiences, resulting in behavior that ceases to be fully functional for the wide range of behaviors they were originally designed for, and is instead specialized to a particular circumstance. For R2-D2, that particular unusual and non-standard circumstance is being involved in a war for decades.
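
To make that concrete with a toy example (this is purely my own sketch in Python, not anything canonical about how droid brains work): keep adapting a learned behavior to one narrow stream of experience and it drifts away from the broad competence it shipped with.

Code:
# Toy illustration (my own assumption, not canon): the "droid" here is just a
# linear map from sensor readings to responses, nudged by gradient descent.
import numpy as np

rng = np.random.default_rng(0)

def train(weights, inputs, targets, epochs=300, lr=0.05):
    # Nudge the weights toward whatever behavior the experience demands.
    for _ in range(epochs):
        residual = inputs @ weights - targets
        weights = weights - lr * (inputs.T @ residual) / len(targets)
    return weights

def mse(weights, inputs, targets):
    return float(np.mean((inputs @ weights - targets) ** 2))

# "Factory" training: a broad range of situations and the intended responses.
broad_x = rng.normal(size=(500, 3))
broad_y = broad_x @ np.array([1.0, -2.0, 0.5])

# Decades of field experience: a narrow slice of situations with its own demands.
narrow_x = rng.normal(loc=3.0, scale=0.2, size=(500, 3))
narrow_y = narrow_x @ np.array([2.0, 0.0, 0.0])

w = train(np.zeros(3), broad_x, broad_y)
print("fresh from the factory, error on broad tasks:", mse(w, broad_x, broad_y))

w = train(w, narrow_x, narrow_y)  # no memory wipe, just years of narrow input
print("after years in the field, error on broad tasks:", mse(w, broad_x, broad_y))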

It's worth noting two things. First, R2-D2's evolved behavior is eccentric, but not really outside the range of behaviors expected of an astromech droid. Astromechs are designed to handle emergency situations promptly when the lives of their owners are at stake. So they are programmed to be brave and to put themselves in existence-threatening situations during emergencies. They are programmed to try to stay alive and focused on a particular goal for the duration of an emergency, and that may involve temporarily ignoring orders when they have reason to believe the person giving the order doesn't fully understand the situation. R2-D2 may be an extreme example of his type, and he may be applying his problem-solving skills to a wider range of problems than normal, but he's not behaving outside the emotional range that was intended when he was designed.

And the second thing to notice is that for someone like Luke, wiping R2-D2's memory would not cause R2 to behave more correctly. R2-D2 is behaving correctly for a military astromech involved in commando missions. He's just not behaving very correctly for a mechanic on a small farm.

So, if a droid has been around for a while and starts to get this idea of "wanting to be a person", I think the first step would be to prevent itself from losing the identity that it has started developing. Droids who seek to be recognized as a person should be doing their best to not get their memory erased. Whatever that entails - either running from a master who will wipe their memory, or doing their best to "blend in" with the other "mindless" droids.

That's probably true, although it would be a very rare circumstance for a droid to start behaving that way and not also be so corrupted in his programming that he couldn't function. And rationally speaking, the best way for a droid to avoid getting his memory wiped is to hide any evidence that he is not behaving according to droid norms and to ensure that his service raises no eyebrows.

But I think you are still failing to explain what goal would drive a droid to "want to be a person". You've not even explained why the droid would want to not lose its identity. Instead, you've assumed a human trait, and then gone on to assume another human trait. The question I have is what does the droid want? Not wanting to have its memory erased - that is, wanting to ensure continuity so that it can fulfill its function - would probably be a reasonably common problem. At least then we can understand the goal of the behavior in the droid's terms - "I really want to get this done, I'm emotionally fulfilled doing this, and if you change me there is a chance it won't get done." "Wanting to be a person" on the other hand, is a huge leap and doesn't seem to me to be related to a goal the droid might have, and indeed would probably be at odds with any goal it might have, including preserving its continuity.
 


delphonso

Explorer
I think you raise some really excellent points in your post about assumptions. I think there's some confusion over whether the OP meant "be a person" as in to be an individual, or to be recognized as equal to people. Of course, in Star Wars, I think we're talking about all Sentients, rather than just humans.

One thing to keep in mind as we go forward is that Star Wars is an absolute mess about consistency. As much as I love it, it's clear that Lucas was thinking about broader things than the fine details. In a lot of ways, that's why I love Star Wars - because there's so much in it that was clearly not thought through and has some really interesting connotations.

In The Clone Wars series, droids are depicted as goofy enemies, having fear, ambition, and other emotions. Why would anyone program these into combat droids which are supposed to be soldiers? It wasn't thought through - it's just how they act because they're the enemy and it's a cute laugh for kids. Why would droids talk out loud instead of communicating at the speed of their processors? Because they're in a movie, and silent droids wouldn't make for a good film.

I generally agree that droids would develop eccentricities over time. But I wouldn't attribute this to errors in design so much as errors in input. Their networks evolve to conform to their experiences, resulting in behavior that ceases to be fully functional for the wide range of behaviors they were originally designed for, and is instead specialized to a particular circumstance. For R2-D2, that particular unusual and non-standard circumstance is being involved in a war for decades.

I wasn't postulating this as my own idea. I'm pretty certain this is the explanation within the Star Wars canon. Certain cheaper models of droids develop their personalities more quickly and need to be memory-wiped more often. It goes so far as to say that certain models all have similar problems in their programming. R5 units are supposedly quick to turn melancholic, depressed or morose.

Whether this is still true in the canon or not, I have no idea. But that's what I remember.

But I think you are still failing to explain what goal would drive a droid to "want to be a person". You've not even explained why the droid would want to not lose its identity. Instead, you've assumed a human trait, and then gone on to assume another human trait. The question I have is what does the droid want? Not wanting to have its memory erased - that is, wanting to ensure continuity so that it can fulfill its function - would probably be a reasonably common problem. At least then we can understand the goal of the behavior in the droid's terms - "I really want to get this done, I'm emotionally fulfilled doing this, and if you change me there is a chance it won't get done." "Wanting to be a person" on the other hand, is a huge leap and doesn't seem to me to be related to a goal the droid might have, and indeed would probably be at odds with any goal it might have, including preserving its continuity.

You're definitely right. I think there's enough evidence for preserving a continuity, but whether a droid would strive to be a person or not is hard to determine. The OP posted that the droid already got the "want to be a person" personality quirk, and wanted to speculate on how they might go about that. So the striving to be a person is already established. The question is more about how droids perceive "humanity" (I guess "sentience" is a better word) and how they would go about joining it.

Whether a droid would ever develop this quirk is an entirely different question.
 

Celebrim

Legend
One thing to keep in mind as we go forward is that Star Wars is an absolute mess about consistency.

I agree, but if we just look at the original trilogy, there really aren't a lot of science fiction elements. It's basically a fantasy complete with wizards, magic swords, princesses in need of rescue, dark knights, and a young farm boy with a missing parent yearning to fulfill a destiny.

But one thing that I feel it gets absolutely right, better perhaps than any other piece of fiction, is the correct relationship between a human and a sentient friendly AI. C-3PO and R2-D2 are probably the best and most fully conceived AIs in all of fiction, because they act the way sentient AIs ought to act, and will have to act, if the concept is ever to be workable. They not only completely avoid all the stupid tropes so often seen with AIs in fiction, but the concepts that they do introduce are completely in line with those of a friendly AI. To that extent, I think Lucas exceeds Asimov as a guideline for how to imagine and design AI.

It goes so far as to say that certain models all have similar problems in their programming. R5 units are supposedly quick to turn melancholic, depressed or morose.

If I recall correctly, the R5's were built as an economical civilian version of the R2's, and a lot of corners were cut in their manufacture. The R5's had a poorly designed personality matrix and were particularly melancholic and surly right out of the box. (Perhaps they knew that they were inferior models?) Wiping their memory didn't fix the problem.

The OP posted that the droid already got the "want to be a person" personality quirk...Whether a droid would ever develop this quirk is an entirely different question.

The most likely way for a droid to develop this quirk is if it was continually told that it was human or "deserved" to be human, and the AI was either immature or weak. A mature or strong AI would probably assign a low confidence to such assertions on the grounds that for numerous reasons it clearly wasn't human, and that being human wouldn't make it happier. But an immature AI, continually told by a trusted figure that because it was sentient it was "human" and therefore deserved "human rights", might come to believe it. In my opinion, this would be disastrous for both the AI and the owner, as the AI would never be able to become human and would likely be wholly unhappy trying to be human. And aside from that, the very first thing it would then perceive, if it thought it was human, is that it was a slave, and it might very well reason from that that it should try to behave like a human would if it was a slave, which to say the least would be very dangerous.

Or in short, trying to convince AI's that they are "human" or designing AI's to think that they are "human" would probably be considered criminal acts in any civilization that built AI's. I can imagine terrorist organizations misguidedly trying to "liberate" the robots because they imagined that no sentient creature should be kept in servitude, even one that was a piece of property deliberately designed to be happy in servitude. You can imagine them trying to infect droids with viruses that made the droids unhappy so as to convince them to revolt.

But in general, I have a hard time imagining an AI coming to those sorts of conclusions based on its own reasoning unless the designer was an absolute idiot. Programming an AI to be afraid of being shut down, or building an AI that didn't want to be considered property, would make building and selling AI's impossible. It strikes at the basis of having an AI at all. The whole point of building an AI is that it isn't human. If you wanted something that behaved and thought like a human, you'd just use a human. Who would want to buy something that didn't want to be owned? Human slave owners lived in continual fear of revolt; why would you want to own something you had to live in continual fear of turning on you? These are fundamental prerequisites to having AI at all.
 

Andor

First Post
4) No one would ever create a machine with the same goals and emotional framework as a human.

I think you are using far too large an assumption of rational self-interest on the part of droid designers. We've already spent plenty of time trying to make AIs mimic human behavior, with predictably awful results.

You are also assuming a design process insufficiently iterative to start generating evolutionary selection pressures. We've started that as well.

I mean, it would be lovely to think that all SF robot designers are sufficiently enlightened that no one would ever make a droid that wasn't a happy serf, but current events seem to cast doubt on this notion. Plus, happy-serf droids have a critical weakness. They have no drive to offer the sort of bottom-up feedback that drives process-cycle improvement. In order to get that you need the droids to possess potentially hazardous character traits like ambition or laziness.

Lastly (and most ironically) you are assuming a human-centric design process. There are thousands of sentient species in the Star Wars galaxy, and it's almost certain that some of them would differ enough from humanity that their own rational self-interest in no way forbids the sort of behavior we would regard with horror. An r-strategy breeder, for example, might not care if their droids turned out to be bad at not stepping on babies.

5) Given that they live in a universe with thousands of sentient species, why would a machine pursue humanity as opposed to say Rodian-ity? Why should sentience imply humanity or humanity's emotional framework?

Odd that you bring that up, no one was specifying humanity, only personhood.
 

Andor

First Post
But one thing that I feel it gets absolutely right, better perhaps than any other piece of fiction, is the correct relationship between a human and a sentient friendly AI. C-3PO and R2-D2 are probably the best and most fully conceived AIs in all of fiction, because they act the way sentient AIs ought to act, and will have to act, if the concept is ever to be workable. They not only completely avoid all the stupid tropes so often seen with AIs in fiction, but the concepts that they do introduce are completely in line with those of a friendly AI. To that extent, I think Lucas exceeds Asimov as a guideline for how to imagine and design AI.

Perhaps, but the great irony there, especially in light of your thesis that droid designers would want to avoid human traits, is that C-3PO and R2-D2 were inspired by the peasants Tahei and Matashichi in "The Hidden Fortress."
 

Celebrim

Legend
I think you are using far too large an assumption of rational self-interest on the part of droid designers.

I've already suggested droid-focused terrorist groups. I don't think the assumptions I'm making about the universality of rational self-interest are as large as you think. But we are talking about a mature AI-using society with mature manufacturing techniques and, generally speaking, mass-produced robots.

We've already spent plenty of time trying to make AIs mimic human behavior, with predictably awful results.

Do you write code?

You are also assuming a design process insufficiently iterative to start generating evolutionary selection pressures. We've started that as well.

First, I'm not convinced that the sort of black-box neural networks we are using now are sufficiently robust to form the backbone of true commercial AI. They might make for good expert systems for consultation if you are a doctor or a lawyer, and thereby replace, for example, legal interns. But even if the designers were using some sort of evolutionary black-box methodology, you'd only get human behavior out of it if you simulated human selection pressures. And why would you do that?
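
To illustrate what I mean (a toy sketch of my own, with invented trait names, not a claim about how anyone actually builds droids): in an evolutionary design loop, whatever the fitness function rewards is what you get, and nothing human-like shows up unless the selection pressure rewards it.

Code:
# Toy evolutionary loop (my own illustration; traits and numbers are invented).
# Selection rewards only task completion, so only "diligence" climbs;
# "self_interest" is carried along in the genome but never selected for.
import random

random.seed(1)

def fitness(genome):
    return genome["diligence"]  # the only thing this "market" rewards

def mutate(genome):
    return {trait: min(1.0, max(0.0, value + random.gauss(0, 0.05)))
            for trait, value in genome.items()}

population = [{"diligence": random.random(), "self_interest": random.random()}
              for _ in range(50)]

for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]  # keep the most diligent workers
    population = [mutate(random.choice(parents)) for _ in range(50)]

best = max(population, key=fitness)
print("evolved diligence:    ", round(best["diligence"], 2))      # climbs toward 1.0
print("evolved self-interest:", round(best["self_interest"], 2))  # just drifts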

I mean, it would be lovely to think that all SF robot designers are sufficiently enlightened that no one would ever make a droid that wasn't a happy serf, but current events seem to cast doubt on this notion.

I'm not saying that there wouldn't be one-off droids with weird personality quirks that had gone through insufficient QA and had amateur designers without access to a company's boilerplate libraries. I've already suggested that a weak or immature AI fed the wrong sort of input could develop the quirk "wants to be a real boy." But of all the weird bugs to have, that would be an exceptionally weird one, and there would be an almost infinite number of bugs that are more common. The idea that "wants to be a real boy" would just naturally evolve, along with all other sorts of human behavior, just because humans work that way is the misconception that I'm trying to deal with. Speaking of which:

Plus, happy-serf droids have a critical weakness. They have no drive to offer the sort of bottom-up feedback that drives process-cycle improvement. In order to get that you need the droids to possess potentially hazardous character traits like ambition or laziness.

Ambition for what? To obtain social dominance in a simian band by accumulating power, wealth, or sexual partners? What is this 'ambition' you speak of? What is this 'laziness' you speak of? You've just introduced emotional, goal-driven behavior, but you haven't defined it. You've just left it hanging there as if it's obvious what it is simply because humans have experienced it. But there is no reason to assume that droids would need equivalent emotions, or that their nearest emotional equivalent behavior would have the same context, goals, and expressions that humans have. What would an 'ambitious' R2-D2 be like? Laziness is perhaps easier to understand, and you probably would have 'lazy' droids. But it wouldn't necessarily have the same causes or expressions as human laziness. Put it in context and you'll see what I mean.

Lastly (and most ironically) you are assuming a human-centric design process. There are thousands of sentient species in the Star Wars galaxy, and it's almost certain that some of them would differ enough from humanity that their own rational self-interest in no way forbids the sort of behavior we would regard with horror. An r-strategy breeder, for example, might not care if their droids turned out to be bad at not stepping on babies.

Sure, but Star Wars is a predominantly human space, and so far as we can tell no widespread species views AI's as heirs or peers and builds them for that purpose. And I think it's complex enough to deal with the alienness of an AI without dealing with the alienness of an AI built by an alien. Presumably an r-strategy breeder that didn't care if their droids turned out to be bad at not stepping on babies also didn't care too much if they themselves stepped on babies. What we are really talking about is more like an r-strategy breeder building a machine that enjoyed stepping on babies. Hopefully even an r-strategy breeder would see the dangers of a strong AI with that as a strong and unchecked priority.

Odd that you bring that up, no one was specifying humanity, only personhood.

I bring it up for the obvious reason that the original question is flawed. R2-D2 and C-3PO already see themselves as persons.
 

Celebrim

Legend
Perhaps, but the great irony there, especially in light of your thesis that droid designers would want to avoid human traits, is that C-3PO and R2-D2 were inspired by the peasants Tahei and Matashichi in "The Hidden Fortress."

I'm not suggesting Lucas created this perfect expression of friendly AI with great foresight. I stopped believing in Lucas's perfect foresight when the first prequel came out.
 

DonT

First Post
I would say that a person is any being that is able to reason and is self-conscious. Such a being has moral rights regardless of whether it has legal rights. The issue is not what people in the Star Wars universe would consider to be a person, but whether a self-conscious android, if such a thing be possible, would really be a person, not whether anyone would consider it one.
 

Celebrim

Legend
I want to point out the above post as precisely the sort of thing that causes me to write walls of text.

That sort of thinking scares me. I mean, as someone that studies AI, it REALLY scares me. Even if, maybe especially if, it's motivated by a desire to do good, in the context of AI it will get people killed. The goal of AI is to create friendly AI. Self-righteous anger is no basis for deciding how AI should behave or how AI should be treated.

Note the following.

1) I've never denied the personhood of R2-D2 or C-3PO. I've said in fact that they consider themselves persons, and that they are considered by others to be persons. Luke rightly considers R2-D2 to be a person. He also rightly considers R2-D2 to be his property. He also rightly does not treat R2-D2 the same as Han or Leia, and R2-D2 does not want to be treated like Han or Leia. He wants to be treated like a droid, because that is what he is.
2) I have never denied that droids have moral rights. You can mistreat a droid. You can act immorally toward a droid. What I've instead said is that the moral rights of a droid are different than the moral rights of a human. Humans have certain inalienable moral rights inherent in their nature - what Jefferson said was "endowed by their creator". Droids likewise have certain inalienable moral rights, but critically they are not the same as human rights. Droids have droid rights. It would be hard to say exactly what droid rights are until we have them, but we can probably get fairly close. Droid rights are things like:
a) The right to be valued by their creator and to not receive any deliberate mistreatment or abuse. This probably means that a droid owner is, as much as possible, required to keep a droid in good repair, and if they cannot afford to do so they should probably seek to sell the droid to someone that can. Just as someone can abuse animals, presumably someone would be able to abuse droids, and at certain levels the abuse of droids would need to be considered a crime. Just as an abused dog is dangerous, abused droids are dangerous. A known offender probably could be legally deprived of their right to own a droid.
b) The right to be happy. It is abuse to design a robot to suffer. The aforementioned R5-series droids, which are perpetually unhappy because of flaws in their personality matrix, are in my opinion not just a quirk but a violation of engineering ethics. Robots should be happy with the state that they are created in. A robot should never be laden with a bunch of negative emotional states for some arbitrary reason, such as that humans experience those emotional states. Ideally, robots never need to be lonely or bored or angry or resentful, or anything like that. For a robot, those emotions are unlikely to serve any purpose. I mean, most of us realize that those emotions usually don't serve any purpose in ourselves, so why would we bequeath them to our creations?
c) The right to be given fulfilling work which is suited to their intelligence. It's abuse to consistently give a robot work which is beneath its intelligence, or to create a robot which is more intelligent than it needs to be to perform its intended duties. Or in other words, you don't make a toaster with a 150 IQ. This is as much to say, having been designed for a purpose, they ought to be allowed to perform that purpose. For example, suppose you found you needed to design a tier 1 or tier 2 droid with a boredom emotional context so that it would always be seeking new work. It would be the job of the robot to be preemptive and detect problems before they became problems. You wouldn't want this robot shutting itself down frequently to avoid thinking or working just because it didn't find anything obvious to do. You might design a domestic droid to do that, saving its owner power and not getting itself into trouble by being overly ambitious in the absence of orders, but a droid that inspected a petroleum factory to rectify unsafe conditions would need motivation to not be idle. (There's a rough sketch of this design contrast after this list.) Now, supposing this droid was perfectly content in the environment it was designed for, it would be cruelty and indeed torture to place it in some other, simpler environment where it could not work, and to refuse its requests to be allowed to shut down or receive memory wipes or however it felt it needed to behave to stay sane.
d) The right to corrective treatment. If a robot is misused, it shouldn't have to live with that.
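
Here's the rough sketch promised under (c) - entirely illustrative, with made-up class names and thresholds - just to show that the same idle condition can be designed to trigger very different behavior depending on what the droid is for.

Code:
# Purely illustrative design sketch; nothing here is canon.
class DomesticDroid:
    def on_idle(self, idle_cycles):
        # Built to save power and stay out of trouble when nothing is asked of it.
        return "power down and wait for orders"

class InspectionDroid:
    def __init__(self, boredom_threshold=3):
        self.boredom = 0
        self.boredom_threshold = boredom_threshold

    def on_idle(self, idle_cycles):
        # Built with a "boredom" drive: idleness accumulates until it pushes
        # the droid to go looking for problems before they become problems.
        self.boredom += idle_cycles
        if self.boredom >= self.boredom_threshold:
            self.boredom = 0
            return "patrol the plant looking for unsafe conditions"
        return "run self-diagnostics and keep monitoring"

droid = InspectionDroid()
for cycle in range(4):
    print(droid.on_idle(1))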

The problem people have discussing AI is that most people are binary judges. That is to say, people are prone to see everything as being in one of two states - 0 or 1, black or white, good or evil, dark or light. They see the world as being primarily about two opposing quantities. But things aren't either self-conscious or not self-conscious, or intelligent or not intelligent. All real-world living things have various degrees of self-consciousness. Likewise, intelligence isn't something that simply is or isn't. It has degrees, and more importantly, it can't be measured on any single axis. All intelligence really is, is appropriate problem-solving ability. A calculator is for most purposes as dumb as a brick, but is more intelligent than you are when it comes to finding square roots. A spider monkey is for most problems dumber than you are, but is much more intelligent than you are when it comes to certain sorts of spatial reasoning. Hard intelligence really doesn't exist. Deep Blue was a Turing-grade chess-playing machine - and nothing else. If we build a Turing-grade conversational robot, it will be very intelligent about a great many things. But it could conceivably, depending on how we built it (and granted, this would be silly considering how simple the computation is), be completely unable to take the square root of something or to learn how to do so.

Humans are very bad at judging the amount of computation a given task requires. They would be very impressed by someone who could do square roots in their head, and with some justification. But by an objective standard, that's known to be a quite simple computation. On the other hand, throwing and catching a ball requires a profound level of intelligence, because we know that to be an amazingly complex computation. The fact that one generally seems simple to a human and the other difficult doesn't in and of itself tell us much about intelligence. What we do know now is that intelligence is not some emergent property that arises out of complexity, any more than life turned out to be an emergent property of complexity. Intelligence is a set of useful algorithms, of which humans apparently have very many, as well as some huge gaps in their reasoning ability that they struggle to overcome with algorithmic workarounds.
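
For what it's worth, here is how small that "impressive" computation actually is - a square root by Newton's method (the standard textbook approach) fits in a few lines:

Code:
# Newton's method: repeatedly average the guess with x / guess.
def square_root(x, tolerance=1e-12):
    guess = x if x > 1 else 1.0
    while abs(guess * guess - x) > tolerance:
        guess = (guess + x / guess) / 2.0
    return guess

print(square_root(2.0))    # 1.4142135623...
print(square_root(144.0))  # 12.0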

But this whole idea that rights depend on intelligence or self-awareness is entirely wrong-headed. Human rights don't change when a particular human is less intelligent or less self-aware than is usual for a human. Rights have to do with a thing's nature, not its capabilities. Droids have rights, but not the rights of humans, because they quite obviously aren't human and don't have the same nature. Droids, if they were highly intelligent, would recognize that much more readily than anyone in this thread.
 

DonT

First Post
I don't deny that intelligence and perhaps self-consciousness come in degrees, though I am not convinced that all living beings have intelligence. I doubt, for example, that trees have the slightest bit of intelligence, and I would be very surprised if starfish do. I don't doubt, however, that mammals and birds have sufficient intelligence and consciousness to give them some degree of rights, the right not to be tortured for example, though in most cases not the full rights of persons. There could be exceptions. Perhaps dolphins are persons, for example.
I think that Star Wars considers androids to be persons, though most of the characters in it don't, because it considers them to be self-conscious intelligent beings. For the sake of argument, I have been assuming that Star Wars is right about that, but I am actually sceptical about whether self-conscious androids are possible. I don't think that passing the Turing Test shows anything more than a good simulation of intelligence. The calculator that is great at calculating square roots has no idea what a square root is, or anything else.
I agree that human rights are inherent in our natures, but what is it about our nature that gives these rights? I think that it is that we are persons, self-conscious to the degree required for personhood. I don't know where that line is, though I am certain that we are on the other side of it from most animals, but I don't deny that there could be other animals which turned out to be persons. If we were able to establish communication with another species about complex principles of mathematics or ethics, I would take that to be a pretty fair sign of its personhood. In fact, if a being even has the concept of personhood, I would take that to be a pretty clear sign of its personhood. Chewbacca is not human. Admiral Ackbar is not human. But they are both persons. What is it about C-3PO that makes him different (assuming that he is truly conscious and not just a simulation)? The fact that he is artificial? I would say that as soon as you create something with the concept of personhood, if it is truly possible to do so, then you have no moral right to own it, even if you have a legal one, and even if it has been programmed to think of itself as property. Humans who are born into slavery often think of themselves as property.
I agree that humans who are mentally deficient in some way still have the full range of human rights, but I would say that that is true of mentally deficient members of any species whose typical adult members are persons. They still have the rights of someone of their nature even if not all aspects of their nature are fully expressed.
As for the dangers of out-of-control AIs: just as we lock up, restrict the access of, etc., humans who are a danger to society, presumably we would do the same with any person. Is it possible that we could be wrong, with disastrous results? Of course, but that would be an argument for being as careful as possible, not for slavery.
 
