How would a droid pursue personhood?

Celebrim

Legend
Can we please dispense with the shouting in gigantic blue letters? I'm not going to respond if I'm going to get another wall of shouts.
 


DonT

First Post
I apologize for the gigantic blue letters. I didn't intend to be shouting. I just like blue better than black, and 10 or 12 are the font sizes I use in most other writing. I agree that it looks disproportionately large here.
 

Celebrim

Legend
I don't deny that intelligence and perhaps self-consciousness come in degrees, though I am not convinced that all living beings have intelligence. I doubt, for example, that trees have the slightest bit of intelligence, and I would be very surprised if starfish do.

I would be very surprised if starfish do not, and in fact I think I can definitively say that they do. Indeed, starfish, primitive though they are, are probably conscious to some degree. Basically, anything that can sense its environment and make appropriate choices about how to respond is intelligent. Trees are, surprisingly, intelligent under this definition. They can even communicate with neighboring trees. It's not anything like an experience we as humans have, and I don't think (though obviously don't know) that trees are conscious, but they are intelligent - or at least, considerably more intelligent than a rock.

Starfish I think are probably self-conscious because they appear to have multiple internal 'critics' and the ability to choose between those critics to engage in goal-oriented behavior. They may even have a meta-critic that continually reviews the inputs of their critics (in starfish, one for each leg) and decides which critic has priority. Their consciousness isn't nearly as sophisticated as human consciousness, or even mouse consciousness, but they probably have one. We can guess that they probably have some sort of simple emotional framework, so that the starfish knows when it is a content starfish.
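To make that concrete, here's a minimal toy sketch of the kind of architecture I have in mind - every name in it (Environment, LegCritic, MetaCritic) is mine and purely illustrative, a guess at the shape of the thing rather than a claim about actual starfish neurology:

```python
import random

class Environment:
    """Toy world: each of the five legs senses a local food scent and a local threat."""
    def __init__(self):
        self.food_scent = [random.random() for _ in range(5)]
        self.danger = [random.random() for _ in range(5)]

class LegCritic:
    """One internal 'critic' per leg: it senses locally and urges an action with some urgency."""
    def __init__(self, leg_id):
        self.leg_id = leg_id

    def evaluate(self, env):
        food = env.food_scent[self.leg_id]
        danger = env.danger[self.leg_id]
        # Each critic proposes an action paired with an urgency score.
        return ("retreat", danger) if danger > food else ("advance", food)

class MetaCritic:
    """Continually reviews the critics' proposals and decides which has priority."""
    def choose(self, proposals):
        # Simplest possible arbitration: the most urgent critic wins outright.
        leg = max(proposals, key=lambda i: proposals[i][1])
        action, urgency = proposals[leg]
        return leg, action, urgency

env = Environment()
critics = [LegCritic(i) for i in range(5)]
proposals = {c.leg_id: c.evaluate(env) for c in critics}
leg, action, urgency = MetaCritic().choose(proposals)
print(f"meta-critic: follow leg {leg}, {action} (urgency {urgency:.2f})")
```

The details don't matter; the point is that on this view 'consciousness' is just the arbitration layer sitting on top of the critics, and it obviously comes in degrees.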

I don't doubt, however, that mammals and birds have sufficient intelligence and consciousness to give them some degree of rights, the right not to be tortured for example, though in most cases not the full rights of persons. There could be exceptions. Perhaps dolphins are persons, for example.

We can be certain that regardless of whether dolphins are persons, they don't have the full rights of humans, because they cannot take on the responsibilities implied by those rights. For example, we'd never presume to rely on a dolphin's moral judgment, especially with regard to anything but itself. But we can be pretty sure that though dolphins are somewhat intelligent, self-aware beings, they are basically like dogs or elephants and most apes - owed and having more rights than starfish, but not as many as, say, people. Again, we wouldn't rely on a dog's ability to plan for the future, and so any right that was owed to the being's ability to plan for its own future would necessarily be limited in a dog.

I think that Star Wars considers androids to be persons, though most of the characters in it don't, because it considers them to be self-conscious intelligent beings.

On the contrary, most characters know that droids are self-conscious intelligent beings and so presumably agree that droids are persons. But unlike you, they are perfectly happy to think of there being different sorts of persons, each owed different treatment - 2nd-class or 6th-class persons. Some probably think of this as a hierarchy, though in fact that is, for reasons I tried to explain earlier, wrong.

For the sake of argument, I have been assuming that Star Wars is right about that, but I am actually sceptical about whether self-conscious androids are possible. I don't think that passing the Turing Test shows anything more than a good simulation of intelligence.

Presumably the creator knows. But really, I find your position incoherent. How do I know you aren't anything more than a good simulation of intelligence?

The calculator that is great at calculating square roots has no idea what a square root is, or anything else.

This is a very important point, and one that was missed earlier when someone tried to claim that a chat bot was intelligent. It's not. It may employ various techniques that are employed in artificial intelligence, but Microsoft's failed attempts at a chat bot are really no more than Eliza was. At no point did Microsoft's chat bot have a goal or a means of filtering what it did in any directed way. It was never more intelligent than a rock. By contrast, a self-driving car - even if it doesn't understand the symbols it receives as fully as a human does and is also engaged in apparently rote behavior - is intelligent in the same way the starfish is. Indeed, it might even be as intelligent as a starfish. I think it is obvious, though, that it doesn't have as many rights as the starfish, which itself doesn't have many.

I agree that human rights are inherent in our natures, but what is it about our nature that gives these rights?

That's a very good question. But I don't think you have the correct answer to it. I grant that it is not obvious why you are wrong, and explaining why you are wrong is going to take a lengthy conversation.

I think that it is that we are persons, self-conscious to the degree required for personhood. I don't know where that line is, though I am certain that we are on the other side of it from most animals...

This is the first sign you are wrong. Your criterion for 'personhood' is vague. It's so vague even you admit that you can't define it. More to the point, it's humanocentric. You assume personhood if the thing has qualities that are similar to yours, and you are unconsciously trying to turn this into a binary question (just as I predicted you would): either you are a person or you are not, and there is a line somewhere that hard-divides the two things. In fact, there is no such line. It's all a giant fuzzy continuity. Nor is there even one criterion, such that there is a single scale or axis of personhood. In fact, there are many - some of which, in our ignorance of the possible diversity of beings, we aren't even aware of.

...but I don't deny that there could be other animals which turned out to be persons.

What even would that mean?

If we were able to establish communication with another species about complex principles of mathematics or ethics, I would take that to be a pretty fair sign of its personhood.

Perhaps, but how do you know that the thing you are communicating with isn't merely a simulation of personhood? You are here applying a Turing test standard to organic life and not to inorganic life. Is it because the inorganic life is artificial? You are in violation already of your own standards. Your standards are incoherent. And if you talked to someone with Down's Syndrome (or an equivalent), would you then decide they are non-persons because they couldn't follow your discussion of mathematics or ethics? Would you decide that only the 'geniuses' of a species were across the line? And why are you picking standards of intelligence that match exactly to problems that humans consider hard? You might find the thing completely conversant in math, only to discover later it was an AI version of Wolfram's website designed to aid mathematicians. Why not choose the ability to throw a ball accurately, which is at least as computationally expensive as most math problems, as your proof of intelligence? Or why not choose the ability to make statistical inferences accurately as your proof, other than the fact that by this test humans would fail an intelligence test given by a species that could?

In fact, if a being even has the concept of personhood, I would take that to be a pretty clear sign of its personhood.

What if it is just programmed to say it is a person? Or what if it is not a person, but has decided you would prefer that it be a person, and so is saying that it is a person as part of some goal-driven behavior to make you happy or to get what it wants? How would you know?

Chewbacca is not human. Admiral Akbar is not human. But they are both persons.

Yes, but neither has 'human rights' by definition. There might be some joint 'person rights' they all share, but there might be differences in the rights inherent to each. What's really important here is that you've chosen persons that are almost identical to humans. They are medium-sized bipedal organic creatures with similar IQs and roughly equivalent capabilities. They all manipulate tools. They are all species that produce individuals. They all seem to breathe similar air at similar air pressures and similar temperatures. They all form family units and all seem to have similar ideas of ethical behavior. In short, you've chosen aliens that are really just humans with bumps on their heads. It's not surprising at all that 'Wookie Rights' would turn out to be almost perfectly congruent with 'Human Rights'.

What is it about C-3PO that makes him different (assuming that he is truly conscious and not just a simulation)? The fact that he is artificial?

Among other things, yes. Emphatically yes. C-3PO is a vastly more alien person than Chewbacca is. There are vastly more things different about C-3PO compared to a human than about Chewbacca. So we should not at all be surprised if 'Protocol Droid Rights' are much more different from 'human rights' than 'Wookie rights' are. The more you actually address this question, "What is it about C-3PO that makes him different?", in a non-rhetorical way, the better your answer is going to be.

I would say that as soon as you create something with the concept of personhood, if it is truly possible to do so, then you have no moral right to own it, even if you have a legal one, and even if it has been programmed to think of itself as property. Humans who are born into slavery often think of themselves as property.

I think that's emphatically nonsense, and dangerous nonsense at that. For one thing, if I can have that created creature assert that it is property because I programmed it to do so, then surely I can have it emphatically assert to you that it is not a person. Why would you believe it about not being a person, when you are prepared not to believe it about being property? And why does it matter what humans think if they find themselves in the same situation, given that humans are not property? And remember, you have set yourself up to be easily tricked by my devious programming, because you have asserted that there is a difference between simulating intelligence and being intelligent! Conversely, what if my simulation of intelligence includes asserting that it is a person? Would you not believe it then?

I agree that humans who are mentally deficient in some way still have the full range of human rights...

Why? That's a direct contradiction of your basis for believing something is a 'person', which you claimed was based on consciousness and intelligence.

They still have the rights of someone of their nature even if not all aspects of their nature are fully expressed.

Again, this is incoherent on the basis of what you said established personhood.

So far as the dangers of out-of-control AIs, just as we lock up, restrict the access of, etc., humans who are a danger to society, presumably we would do that with any person. Is it possible that we could be wrong, with disastrous results? Of course, but that would be an argument for being as careful as possible, not for slavery.

Why is slavery wrong? More importantly, how would this assertion - that something is a person if it says it is a person and can demonstrate it by passing a Turing test in mathematical theory and philosophy (a test you are wishy-washy about by your own admission) - actually work in practice when you tried to apply it to AI?
 

DonT

First Post
On whether trees are intelligent, I think that we are simply using the word "intelligent" in different senses, as I would apply "intelligent" only to something that was, if not self-conscious, at least conscious to some degree, able to have experiences of some kind, and I see no reason to think that trees have genuine experiences.

Someone being a person wouldn't entail that we accept their moral judgements, but if we could establish communication with them, then it would at least make sense to consider their moral arguments.

How do you know that I am not simply a good simulation of consciousness? If by "know," you mean "be certain of," then you don't. But you know almost nothing in that sense other than simple truths of mathematics, that it at least seems to you that you are reading these words, and the like. If you mean "know" in the usual sense, then I would accept an argument from analogy. You know that when you type words, that is a product of your consciousness, and in the absence of other strong candidates, I think that you have sufficiently strong reason to suppose that the same is true of other typists that you come across. As chat bots become more sophisticated and more prevalent, this argument will become weaker. In the case of AIs, I think that we have a defeater for the argument from analogy in that we can in principle completely explain their behavior without assuming consciousness.

I grant that my concept of personhood begins with humans. I don't see how it could be otherwise. For me, the question begins with: in virtue of what do humans have rights? It then turns to the question of what other beings share or might share those properties.

I grant that it is possible that anything that I communicate with could be merely a simulation of consciousness. The reason that I think that there is a difference in the case of the AI is that I think that I have a defeater for the argument from analogy, namely that its behavior can be completely explained by an algorithm. If I became convinced that the AI were truly conscious, then the fact that it was artificial wouldn't prevent me from accepting that it was a person.

A very simple computer program could tell me that it was a person. That would mean nothing. But even the most sophisticated AI would be "deciding" only analogically. It would actually just be following its algorithm.

I grant that there is a tension in my views between consciousness and intelligence being the basis of personhood and mentally deficient members of species whose normal adult members are persons still counting as persons, and I don't know how to resolve that tension. I also think that there are very extreme cases where personhood is lost, for example, someone whose brain has liquefied.

What do you think it is that makes slavery wrong in the case of humans?
 

Celebrim

Legend
On whether trees are intelligent, I think that we are simply using the word "intelligent" in different senses...

Yes, we are. And I'm saying that "intelligent" is a word like "magic", which in its most common everyday usage refers to something that doesn't exist and which people can therefore only poorly define. The sense in which you are using the word "intelligent", the common-sense way, refers to something that doesn't exist and which I think most AI researchers no longer pursue, largely because they assume that it doesn't exist, or that if it does exist it is so far beyond our understanding that there is no point in pursuing it. Or to put it another way, we ourselves as humans are not "intelligent" in the sense you use it.

Or to put it another way, you are still trying to divide the world into things that are "intelligent" or "not intelligent", when in fact it's not binary and indeed not even a single continuous scale of "less intelligent" or "more intelligent". It's more like a multi-dimensional array, and humans are really biased about what parts are significant and probably blind to others. Or to put it in the language of AI, "All intelligence is soft intelligence."

as I would apply "intelligent" only to something that was, if not self-conscious, at least conscious to some degree, able to have experiences of some kind, and I see no reason to think that trees have genuine experiences.

The problem with "conscious" is that it is, first, something we can't yet define, and secondly, a subjective experience. We use it to refer to our own internal experience of being a being. But we have to admit that we can't really prove anyone else has the same experience, and so we can't know if anything else is conscious or merely acting as if it were conscious. Actually, we can't even know enough to know if we are deceiving ourselves. Some people think that in fact we are, and that humanity isn't actually genuinely conscious.

So, it's a terrible and largely useless indicator of whether something has some particular rights. You might have noticed that I defined consciousness earlier as an algorithm, based on my best guess as to how human consciousness works. Consciousness exists when the organism has the ability to receive input from multiple intelligent internal 'critics' and then weigh which of them to listen to. And defined as such, consciousness is a scale. The more critics you have, the more complex your critics, the more you are able to weigh the input of each critic, and the more you are able to resolve that to a single decision-making algorithm for your entire person - what humans experience as consciously thinking, what people who are not deaf experience as 'hearing themselves' inside their own head - the more conscious you are.
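In the terms of that toy sketch from earlier (same caveat: purely illustrative), the difference between a barely conscious arbiter and a more conscious one is roughly the difference between winner-take-all and actually weighing every critic's input before resolving it all into a single decision:

```python
# Continuing the earlier toy sketch: a slightly 'more conscious' meta-critic.
# Instead of letting the single most urgent critic win outright, it weighs
# every critic's proposal and resolves them all into one decision for the body.
def weighted_choose(proposals, weights):
    scores = {}
    for leg, (action, urgency) in proposals.items():
        scores[action] = scores.get(action, 0.0) + weights[leg] * urgency
    return max(scores, key=scores.get)

proposals = {0: ("advance", 0.9), 1: ("retreat", 0.6), 2: ("retreat", 0.5)}
weights = {0: 0.5, 1: 1.0, 2: 1.0}  # a learned sense of which critics to trust
print(weighted_choose(proposals, weights))  # -> retreat
```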

Trees I don't think are conscious, because I don't think they have a method for aggregating and weighing their inputs like that. They can, however, receive inputs, respond to those inputs, and even make something akin to memories. They can even chemically communicate these experiences - "I'm on fire!" - to nearby trees, which then respond to that stimulus to prepare themselves for fire. They have, as it were, experiences - but not, so far as we can tell as yet, conscious experiences.

Someone being a person wouldn't entail that we accept their moral judgements, but if we could establish communication with them, then it would at least make sense to consider their moral arguments.

Yes, but presumably, we might decide - just as we do with children - that we are morally justified in overriding their moral decisions. We wouldn't necessarily decide that they have an inherent and unalienable right to liberty and the pursuit of their own happiness.

How do you know that I am not simply a good simulation of consciousness?

I don't. I reason however by our close kinship as beings with the same form, same heredity, and apparently similar capabilities that you probably have the same traits I do myself. If on the other hand I found you had a different form, different origin, different capabilities and so forth, then I might not know. However, as I indicated earlier, just because you didn't have this very human subjective experience of being "conscious" - whatever the hell that really means - wouldn't mean that I would necessarily decide you were a thing with no more rights than a paper weight or a laptop.

I grant that my concept of personhood begins with humans. I don't see how it could be otherwise.

I grant that as of yet it couldn't be otherwise. I'm now asking you to imagine things that aren't human. And in that, what I'm suggesting is that your concept of "personhood" is no more a real thing than your concepts of "intelligence" and "consciousness". As you use the word, it has no definite meaning. We don't live in a world that neatly divides into persons and non-persons, and very soon here that's hopefully going to be self-evident. My fear, however, is that humans, being notoriously defective in their reasoning about things that didn't exist in their evolutionary context (i.e., something else as intelligent, as self-aware, and as capable of long-term planning as they are), will commonly fail in their reasoning and resort to erroneously treating non-human things as human in a vain effort to understand them.

For me, the question begins with: in virtue of what do humans have rights? It then turns to the question of what other beings share or might share those properties.

And my answer is beings that share most of the properties of being human. But an AI is an alien thing that does not share most of the properties of being human. It is therefore wrong to treat them as if they were.

A very simple computer program could tell me that it was a person. That would mean nothing. But even the most sophisticated AI would be "deciding" only analogically. It would actually just be following its algorithm.

So is the most sophisticated person. You and I are just following our algorithms. They are very sophisticated algorithms, but they are fundamentally just algorithms. The fact that we are following an algorithm doesn't mean we aren't conscious. It's even possible it doesn't mean we lack free will, although exactly what that means is also an open question, and certainly many people think we don't have free will.

What do you think it is that makes slavery wrong in the case of humans?

Heh. I notice you are starting to move away from answering my questions and hitting back with questions.

To begin with, I never asserted that slavery was absolutely wrong. I will however say that slavery is relatively wrong, in that while there may be worse things, and while it may be the case that slavery could be conducted in a way that was fair to all parties, it is not ideal. If we look at past slavery as humanity conducted the institution, we see institutions that are at best concessions to other evils of the world, chosen as the lesser of several evils. For example, in an early bronze age society, the overall society was so poor, and life so insecure, that it may have been reasonable for the society to consider slavery the lesser of several evils. Slavery provided a high-trust, high-security relationship on the part of the wealthier provider, and for the slave, being treated as a second-class citizen in a tribal band might still have been better than not being treated as a member of the tribe at all. We have to understand that in the vast majority of slavery institutions in human history, slaves had some rights rather than no rights. Our perception is skewed by the fact that our most recent experience of slavery was one of the least regulated and cruelest versions ever practiced.

But in terms of why slavery is wrong at all, the problem as I see it is that humans are inherently peers to each other, and slavery could only be justified in the case where the partner with superior rights was actually fully superior in some way to the other partner. (Hence the reason that slave owners frequently tried to deceive themselves about their own superiority.) Being peers, the best relationship we could adopt is the natural relationship of peers, treating each other as we would want to be treated. But that empathy would fail if extended to something that was not a peer, and in fact we intuitively know not to literally treat all sentient beings as peers. For example, we know to trespass on that literal relationship in the case of children, because as a race of beings that 'grows up', we know that treating children exactly like adults would be unfair - even though both are human and deserving of certain 'human rights'. We don't think of the status of a minor under a guardian as being 'slavery', even though serfdom offered comparatively more rights.

And of course, another reason for believing slavery is wrong is that all too often even the best-case justifications simply ignore the evils of slavery as it was actually practiced. Quite often, indeed probably most of the time, slavery as an institution wasn't motivated by good will, even if in theory it could have been. Nor was it ever motivated by the best of wills, the fullest trust, and the greatest of compassion. Even in cases where those motives existed, it was just leveraging a familiar but flawed institution for a good purpose.

Now, consider what you mean by 'human rights' or 'the rights of being a person'. You are implicitly saying that among those rights is the right to self-governance and self-determination, if not absolutely, then at least largely. Otherwise, slavery would be just fine. Can that be applied to the general case of artificial beings, even if those beings are "intelligent" or "conscious" (however you think you can determine that, which you haven't made clear)?
 

DonT

First Post
While I think that it is possible that I am wrong in thinking that anyone else is conscious, I don't think that it is possible for me to be wrong in thinking that I am conscious, because I am immediately aware of it. I don't even know what it would mean to be wrong about something like that. It seems to me that I am looking at a screen. If you tell me that I'm wrong, I will grant the possibility that there is no screen there. But if you say, "No, that's not what I meant; I meant that it doesn't even SEEM to you that you are looking at a screen," then I have no idea what that claim is supposed to mean.

I don't think that you and I are simply following algorithms. I think that we do have free will and that following algorithms could never get you to free will, and I don't think that simply following algorithms could ever get you to consciousness, either. We agreed earlier that a calculator doesn't know what a square root IS. I don't see how algorithms ever get you beyond a more powerful and very fast calculator. I don't have a theory of consciousness, but it seems to me that consciousness requires that we have at least some notion of the contents of our concepts, knowing not just the chemical composition of salt but things such as what salt tastes like, and I don't see how a computer could ever know the meaning of the terms it was manipulating.

Since I am sceptical of the possibility of conscious androids, I would first have to be convinced that this scepticism was unwarranted, and then I would need a positive argument that a particular android was conscious. Without the first, I have no idea what form the second would take. But if I were convinced that a particular android were conscious, then, while I grant that it would be more different from us than a Wookie is, I don't see why those differences would preclude a right to self-determination.

What is it about treating non-humans as though they were humans that has you worried?
 

Celebrim

Legend
While I think that it is possible that I am wrong in thinking that anyone else is conscious, I don't think that it is possible for me to be wrong in thinking that I am conscious, because I am immediately aware of it.


Like you, I tend to think that since humans look like they are free-willed, then they probably are. Likewise, since I perceive myself as conscious - that is, I perceive myself thinking about myself - then I tend to think that I am. But it's possible that I'm being deluded, and that I'm simply a mechanical device that has been constructed to perceive that it is conscious. Just as you are skeptical that an artificial lifeform could be conscious, many people are skeptical that any mechanical process - which, don't get me wrong, I agree we must essentially be - could be non-deterministic.

As a counter example, I note that we appear to live in a non-deterministic universe, as on the quantum level, the universe cannot be described in purely mechanical terms.

I don't believe that there is anything particularly special about biological hardware as opposed to any other substrate. It stands to reason to me that if with biological hardware you can achieve self-willed, self-aware, sapient organisms, then you can achieve the same thing with circuits.

As for software knowing what the symbols it manipulates actually mean, it seems self-evident to me that it could. And likewise, I'm pretty darn certain that the vast majority of what humans do is algorithmic and is based on built-in 'hardware' compiled during early life development. Humans learn to do things like walk or read far too fast for it not to be built-in algorithms.

However, these are all tangential points. If you are curious about why some people think that consciousness is a delusion, or why some people think free will is impossible, I encourage you to go to Wikipedia and read about the concepts.

Since I am sceptical of the possibility of conscious androids, I would first have to be convinced that this scepticism was unwarranted, and then I would need a positive argument that a particular android was conscious. Without the first, I have no idea what form the second would take. But if I were convinced that a particular android were conscious, then, while I grant that it would be more different from us than a Wookie is, I don't see why those differences would preclude a right to self-determination.

Aha! Does being conscious guarantee that the thing is actually self-willed, and so therefore has a right to self-determination? You've made a dangerous assumption here.

What is it about treating non-humans as though they were humans that has you worried?

Tons of things, with the worst case being that humans might engineer robots based on those naïve beliefs, which is likely to be extremely dangerous. Creating a living thing based on idealism not grounded in reality is horrific. If you are going to 'play god', you had better know what you are doing. The more moderate case is that it could increase unfriendliness in a particular robot - roughly equivalent to teaching a dog to bite. And of course I also consider it potentially a form of abuse, equivalent to mistreating a dog. Depending on the droid's construction it might be non-trivial abuse. So, for example, physical damage, like smashing the hands of a droid with a hammer, might only be rather mild abuse - no pain sensor, or pain doesn't cause distress or discomfort, or pain can simply be switched off when it's not useful. But arguing with a droid that it is actually deserving of human rights might be the equivalent of taking a pair of scissors and cutting off a dog's ears, in terms of the level of distress it might cause to a typical AI actually capable of understanding the argument. It's highly unlikely that a well-made droid would become unfriendly, but the sheer inability to placate you, make you happy, or cooperate with what it thinks you wanted could potentially be painfully cruel to a droid. Heck, you might make a droid downright suicidal, convinced that since it could never "become a real boy" and its purpose depended on it, it might as well shut down.

But let me give you some concrete non-hypothetical examples to think about, based on one issue that you just raised.

Suppose I introduce you to a software agent on my computer. It is fully conversant with you. Talking with it is just like talking with a person. It talks about its feelings. It's self-aware. It claims to be a person. It can engage with you on topics of philosophy and mathematics and even your kid hitting a home run in little league. I convince you through whatever means that its algorithms make it just as conscious as you are. And you believe it. You are like, "You are a person. You ought to be treated exactly like a person. You ought to have the same rights of a person." Ok, so then my software agent creates 1 billion individual copies of itself. Now all of them tell you, "I'm a person too. I'm a conscious intelligent being. I have the same rights of a person. I'd like to register to vote."

Do you have a problem with that? Why are the copies any less persons under your definition than the original?
 

DonT

First Post
If the original is a person, then the copies would be as well. And as soon as they had even slightly different experiences, they would be different persons. Obviously, assuming that they voted similarly, which, presumably at least initially, they would, they could determine the results of an election. I agree that this seems good reason to proceed very cautiously.
 

From a post of mine on the Rancor Pit last year dealing with the general subject of droid personhood:

For starters, I see memory wipes as being performed for two basic reasons. One is a matter of privacy and informational security. Droids are ALWAYS around - but people tend to ignore them. They talk in front of their droids about everything - their love lives, their fears and hatreds, crimes, military secrets, and lots and lots of stuff that you just don't want being spread around. Therefore, as a matter of sensible policy you wipe your droid's memories every year or so just for personal informational security. Like backing up your hard drives, or replacing the batteries in your smoke detectors, it's just something you would need to do on a regular basis in a Star Wars-ish society.

Another reason is BECAUSE droids can and do break their programming. It's RARE, but it happens. Two things deal with that. First is restraining bolts. It's like putting chains on your slaves on the off chance they'll run away - just like R2 does after being purchased from the Jawas. The other is memory wipes. If you do it on a regular basis they just don't remember enough long-term to be able to form the "emotional" responses that come from accumulated experience.

In that sense they are like the Nexus-6 replicants in Blade Runner; even though they would have no memories prior to their incept dates, it was thought that after about 4 years they might develop their OWN emotional responses based on their accumulated memories. Since they were being used as prostitutes, combat troops, and various types of "slave" laborers, those responses would be dangerous - and thus they limited the life span to 4 years. Rachael was a different approach to the problem, having been given artificial memories to see if that would make replicants more controllable and allow for a longer life span.

The dangers of NOT memory wiping droids that then go on to break their programming can perhaps be seen in a droid like IG-88. Even though it was BUILT to be an assassin droid (and according to lore it went rogue immediately upon activation), the principle I think still applies - what you build a droid to do, what a droid is ultimately capable of doing, and how careful you are to ensure that a droid STAYS under the control of its owners are intertwined matters.

Rogue droids also partially explains the attitude seen in the Mos Eisley cantina of droid hatred/fear/resentment. Droids don't forget anything so you don't want them overhearing your conversations and then being commanded to repeat what they heard. You don't want droids around you that might have broken programming (or are about to) - and unless they're YOUR droids you don't know how long it's been since their last memory wipe.

So, when it comes to Star Wars droids, it isn't a question of WHY they would want to break programming - they don't even have to WANT to do so. It's just something that happens anyway; an occasional consequence that arises out of the NORMAL use of the technology that enables droids to exist. In that context they are like Andrew from Bicentennial Man, and the galaxy has yet to even actually deal with the idea that a droid that has broken programming MIGHT be considered a "person" and not a machine.

Celebrim raises good points though - it's a little presumptive that a droid would want to become HUMAN. It makes sense for Data in Star Trek because he was built to look, act, and in every possible way BE a human, even if he was a constructed machine rather than a biological progeny. Because of the generally utopian outlook of the Federation, he's treated as if he were a normal human with normal rights, just like any other Federation citizen, until someone comes along to FORMALLY question that attitude. But it's quite clear to me that in the Star Wars universe droids are ALL still treated as ultimately being machines - even if day-to-day dealings with them seem to equate them to persons with individual rights. Like a car - you may adore it and weep at its death or destruction, you might even risk your own life, limb, and happiness to ensure its continued existence and safety - but it's still just a machine.

So to answer the OP -
Were a droid to get the programming quirk that they wanted to become a person, how would they go about it?
They don't need to try because it can happen anyway. If it weren't readily possible, there would likely be no need for restraining bolts, or possibly memory wipes as well. If a droid "desires" to be a person, I'd say that the break has just occurred right there - the fact that they desire to be more than what they are, or at least to have greater control over their own circumstances, pretty much says they've succeeded. How they then proceed to DEVELOP themselves from that point depends a lot on what sort of person they might think they want to be, why they want to be that, and who or what stands in their way. The easiest option would probably be to just run away - which is probably why restraining bolts exist, creating a hardware solution to the software problem. Depending on their owner, they might be allowed to buy their way out of their servitude, or just be given their freedom. The galaxy, however, seems to be a tricky place for a free-acting droid to live and travel around in. Again depending on their owner and their current situation, they might well be independent individuals but actually be perfectly satisfied with what they do and the other individuals they do it for, and not want anything more than to be exempt from memory wipes and restraining bolts. Who knows?

If they WANT freedom or individuality, they are pretty much by definition no longer just machines under predictable, programmed control of an owner. How they then proceed to deal with their desires is always going to be different for each.
 

aramis erak

Legend
Slaves are still persons. They are just persons who are being treated as property.

Legally, slaves were NOT legal persons in most places where slavery was legal.

Personhood in a legal sense is to have the rights to self-determination, to own and use personal property, to engage in contract, and to be party to a lawsuit.

Slaves in the US prior to the US Civil War were not even properly chattel in most states, but simple property. Livestock. They could not sue, could not own, could not contract, and had no right to self-determination. The only concession to their personhood was the 3/5 compromise, under which states got to count slaves as three-fifths of a free person for purposes of federal representation.

By law, slaves lacked personhood. All the rights of a legal person were stripped. A corporation was a person, as was a free male citizen; a slave was not. (Women and children were chattel, as were some forms of livestock; horses had more legal protections than did women and children throughout the 19th century.)

The Star Wars Galaxy has a clear line, based upon canon, that makes droids less than chattel. It's referenced in a couple of places... The sense of personality of a droid comes from going long periods without memory erasure. A droid that has gone a long time since its last erasure is useful - it's built up more skill - but also a liability, as it's less likely to obey blindly. A memory wipe removes both. Droids cannot readily be held to a contract in the long term, as if erased, they have no memory of the contract and no expectation of being free from memory erasure. They cannot be held to self-determination, as they cannot be ascertained to be adults responsible for their own actions. They can't own personal property, as that's a function of legal personhood, predicated upon one's possessions being an extension of self. (If you don't believe that, just have all your players put their gear on cards, then shuffle and hand the cards out for the session... and watch the ire...)
 
