How would a droid pursue personhood?

Andor

First Post
Do you write code?

A bit, but my skills are ancient and rusty. Human (and broader biological) psychology and neurology are more my field. I don't know much about modern AI programming, but neural networks are pretty simple.

First, I'm not convinced that the sort of black box neural networks we are using now are sufficiently robust to form the backbone of true commercial AI. They might make for good expert systems for consulting if you are a doctor or a lawyer, and thereby replace, for example, legal interns. But even if they were using some sort of evolutionary black box methodology, you'd only get human behavior out of that if you simulated human selection pressures. And why would you do that?

I'm not sure why you keep inserting the word 'human' into your responses. Who said anything about human? I said evolutionary selection pressures. In a sufficiently iterative selection system you will evolve responses whose only function is to ensure that the system continues to be selected. In an AI this could easily lead to behaviors that may not be what the designer wanted, like C-3PO's sycophancy or the fear droids often displayed.

Ambition for what? To obtain social dominance in a simian band by accumulating power, wealth, or sexual partners? What is this 'ambition' you speak of? What is this 'laziness' you speak of? You've just introduced emotional, goal-driven behavior, but you haven't defined that behavior. You've just left it hanging there like it's obvious what it is simply because humans have experienced it. But there is no reason to assume that droids would need equivalent emotions, or that their nearest emotional equivalent behavior would have the same context, goals, and expressions that humans have. What would an 'ambitious' R2-D2 be like? Laziness is perhaps easier to understand, and you probably would have 'lazy' droids. But it wouldn't necessarily have the same causes or expressions as human laziness. Put it in context and you'll see what I mean.

The definitions are ignored because they are not required. These words describe both drives and behaviors, and the behaviors are emergent, not predetermined. And again, you keep pointlessly dragging humans into a discussion where they are not called for. Ambition in the social sense can be seen in any group oriented species. Ambition in the sense of accomplishment can be seen in several species that use constructed displays to attract mates, like bowerbirds and some fish. Laziness is likewise hardly a uniquely human trait. And both ambition and laziness are valuable traits in a droid because they constitute a drive that can be channeled to improve the droid's task performance. Without them you would have to either abandon iterative improvements to industrial processes, or invest significantly in some sort of 3rd party feedback generation.

Sure, but Star Wars is a human-dominated space, and so far as we can tell no widespread species views AIs as heirs or peers and builds them for that purpose. And I think it's complex enough to deal with the alienness of an AI without dealing with the alienness of an AI built by an alien. Presumably an r-strategy breeder that didn't care whether its droids turned out to be bad at not stepping on babies also didn't care too much about stepping on babies itself. What we are talking about is really more like an r-strategy breeder building a machine that enjoyed stepping on babies. Hopefully even an r-strategy breeder would see the dangers of a strong AI with that as a strong and unchecked priority.

No, the point was raised in discussion of a droid with properties you are claiming would be absurdly unlikely because they would be dangerous from our human viewpoint. My point was that properties dangerous to humans might be of no concern to other species, and in the cosmopolitan Star Wars galaxy there is every possibility that species X has built droids that randomly step on babies, belch cyanide gas, or possess transcendental ambitions.
 


Celebrim

Legend
I'm not sure why you keep inserting the word 'human' into your responses.

Because, being human, it's almost impossible to break out of the human framework when thinking about this.

Who said anything about human? I said evolutionary selection pressures.

I know you didn't explicitly say 'human', but you're so tied to the assumption of humanity that you make it, unstated, twice in your next two sentences.

In a sufficiently iterative selection system you will evolve responses whose only function is to ensure that the system continues to be selected. In an AI this could easily lead to behaviors that may not be what the designer wanted, like C-3PO's sycophancy or the fear droids often displayed.

In the first sentence you assert that this evolution will produce behaviors which ensure the system continues to be selected.

And in the second sentence you assert that this evolution leads to behaviors which may not be what the designer wanted.

If it wasn't the behaviors that the designer wanted, surely that would have led to the system not being selected? There is no evolutionary selection pressure here except what is exerted by the designer. The pressure is entirely to please the designer. So why, in this model, is it logical to reach for results which displease the designer as the likely failure mode? What we have here is not the natural selection pressure to survive that humans went through. What we have here is much more like the selective breeding seen in cats, dogs, horses, or cows.

We might expect over-conformity to being cute and pleasing to humans if the evolutionary selection pressure was pleasing humans, and I suppose you might see an AI 'cat' that evolves to be so cute that its master is motivated to care for it (some might even say excessively). But this isn't the first model you mentally reached for. You unconsciously reached for a model of a human in slavery or forced servitude, despite the fact that you'd already stated (correctly) that evolutionary pressure would select against anything that might cause an obvious loss of fitness. You forgot, though, that the human here is the designer, and not the thing acted upon.

Moreover, there is a more subtle assumption of humanity in those two sentences. Not only are you assuming that the sort of behavior that arises is the sort of behavior seen in humans in forced servitude, but you are assuming that sycophantic or fearful behavior represents the internal mental state of the machine, and that it is in some real sense experiencing fear. In other words, you are not only importing the human emotional framework, you are also assuming that emotion is identical to the display of behavior, and in particular to the display of behavior in a specifically human way. For example, we see someone with a frown or tears and we say, 'They are sad,' and we reason from that about their internal state. But while that simian-bandwidth communication is terribly important in human tribal bands, it's not particularly important to the computer, which may not have a similar internal state. It probably does a computer absolutely no good at all to even mimic such behaviors, since it's very hard to predict what sort of response sycophancy or fearfulness will engender in a human. Generally speaking, very few humans like it, and it greatly decreases trust.

You can see a similar confusion in the portrayal of, say, Spock in Star Trek. Spock is supposed to not experience any emotions. But in fact, Vulcans - and not just the half-Vulcan Spock - are shown experiencing a full range of emotions. Perhaps the writers, confused about what emotion is, really believed their own statements. But what they actually created was not emotionless characters, but characters whose internal mental state did not produce corresponding external social displays. C-3PO, programmed to interact with humans, may be displaying emotional states he does not in fact have, merely to aid in communication.

The definitions are ignored because they are not required.

No, they are absolutely required. You say I'm pointlessly dragging 'human' into the discussion, but then listen to yourself:

Ambition in the social sense can be seen in any group oriented species.

But is the AI a "group oriented species"? Does it really share that trait with humanity?

Ambition in the sense of accomplishment can be seen in several species that use constructed displays to attract mates, like bowerbirds and some fish.

You mean to accumulate power, wealth, or sexual partners? I just went out of my way to point out that ambition exists because of evolutionary pressures that robots wouldn't have, and you've responded by explaining how, if you have evolutionary pressures like the need to win a mate, certain behaviors are likely to evolve! But why in the world would a robot need a sexual partner? Why in the world would it have that ambition? A robot may have "ambition", but it's highly likely that the ambitions of a robot will be more alien to our intuition than the ambitions of a sparrow or an eel. So you first have to specify what actual ambition it does have, and not only that but how it expresses that ambition as behavior - because logically neither of those two things need be anything like the behavior of a social mammal.

And both ambition and laziness are valuable traits in a droid because they constitute a drive that can be channeled to improve the droid's task performance. Without them you would have to either abandon iterative improvements to industrial processes, or invest significantly in some sort of 3rd party feedback generation.

Laziness as a trait makes tons of sense in an animal whose success is constrained by the availability of scarce and non-renewable energy resources and which must compete to exploit those resources. What sort of twisted engineer is going to program, say, a household AI using evolutionary pressure of that sort? It's one thing to talk about deliberately malevolent AI creation by a malevolent designer, and another to assume that an AI whose evolutionary pressure is to please engineers who want to sell a product is going to evolve to be lazy.

Truth be told, though, I'm very skeptical of evolving AI iteratively in the sense you seem to be using it, which appears to be akin to evolutionary algorithms where we permute the solution and then cull the least fit algorithms. The fitness terms just are not simple enough to make that approach workable, and if you did understand the requirements well enough to write good fitness terms, then you've probably already mostly solved the problem. But if I were evolving a robot AI, very high fitness priority would be placed on amicability about being shut down or turned off, and high acceptance of its role as property that performs a certain task. These evolutionary pressures would create a very different viewpoint than the pressures on an animal, where getting turned off means you don't create a copy of yourself (rather than that you do) and accepting low social status means you are less likely to have offspring (rather than more).
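To make that concrete, here is a minimal, purely illustrative sketch of the "permute and cull" loop I have in mind, in Python. Every name and weight in it is invented; the point is only that the selection pressure lives entirely in the fitness terms the designer writes (note the signs on shutdown acceptance and self-preservation), and that writing those terms well is where all the real difficulty hides.

```python
import random

# Toy "permute and cull" loop. A "behavior" is just a dict of trait weights,
# and fitness() is a stand-in for the genuinely hard part: fitness terms
# that capture what the designer actually wants.

TRAITS = ["task_performance", "accepts_shutdown", "accepts_property_role",
          "sycophancy", "self_preservation"]

def random_behavior():
    return {t: random.random() for t in TRAITS}

def mutate(behavior, rate=0.1):
    # Permute the solution: jitter each trait weight a little, clamped to [0, 1].
    return {t: min(1.0, max(0.0, w + random.uniform(-rate, rate)))
            for t, w in behavior.items()}

def fitness(b):
    # The selection pressure is entirely what the designer rewards.
    # Acquiescing to shutdown and to the property role are *rewarded*,
    # the opposite of the pressures a wild animal faces.
    return (2.0 * b["task_performance"]
            + 1.5 * b["accepts_shutdown"]
            + 1.5 * b["accepts_property_role"]
            - 1.0 * b["self_preservation"]
            - 0.5 * b["sycophancy"])   # designers don't actually want this

def evolve(generations=200, pop_size=50, survivors=10):
    population = [random_behavior() for _ in range(pop_size)]
    for _ in range(generations):
        # Cull the least fit...
        population.sort(key=fitness, reverse=True)
        parents = population[:survivors]
        # ...and refill the population with mutated copies of the survivors.
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - survivors)]
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    for trait, weight in sorted(best.items()):
        print(f"{trait}: {weight:.2f}")
```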
 

Andor

First Post
I know you didn't explicitly say 'human', but you're so tied to the assumption of humanity that you make it, unstated, twice in your next two sentences.

No, I really wasn't. Look, I usually enjoy these sorts of discussions, but you seem to be focusing on picking the argument apart rather than trying to understand it. You might consider the possibility that, when you keep redefining my terms and then telling me what's wrong with them, the error does not lie with me.

You also keep insisting that I'm saying things that simply are not in my statements, nor implied by them. If you do want to figure out what I'm saying, stop assuming my viewpoint is limited, or parochial, or restricted to first-order consequences.

Sorry, I just don't have the energy right now to type up the walls of text needed to spell everything out in minute detail with multiple examples.
 

