
What would AIs call themselves?

Roudi

First Post
Well, I think I've settled on a term. My thanks to everyone who provided me with assistance. I'll let you know what name I've settled on if it gets approved by the man upstairs.
 



jefgorbach

First Post
Celebrim brought up numerous good points and basically underscored the basic oversight in AI arguments: the assumption that AIs would HAVE emotions. Why? While it's true AIs would be sophisticated programs, they are still just that: programs. Extremely complex rule sets designed to determine and process the available options to reach the most logical solution to the situation at hand.

Emotions play a vital role from an Organic's perspective ... but only because we are general-purpose designs which already have the "emotional code" included in our wetware for when it's needed: love, family ties, etc.

AIs, on the other hand, are specialized tools designed to accomplish a single task, 99% of which do not require emotions. Therefore there is no logical reason the programmers (either human or AI) would waste the memory space and processing time on useless emotional code that would never be used in the completion of the AI's assigned function. What role does emotion play in product assembly? Flying? Driving? Mining? Sweeping? Lawn maintenance? Crop picking? Warfare?

Accordingly, the only emotional AIs (EAIs?) would be those in roles where interfacing compassionately with humans is essential: doctor, nanny, sexual partner, teacher; and even then they would be limited to the select few emotions the programmers wanted displayed: compassion, kindness, patience, etc. Negative emotions simply would not exist in EAIs because there would be no circumstance in which they would be necessary, and thus they would not be included in the code.
 

Nifft

Penguin Herder
jefgorbach said:
Celebrim brought up numerous good points and basically underscored the basic oversight in AI arguments: the assumption that AIs would HAVE emotions.
Currently, emotions (and intelligence) are implemented using chemistry and statistics. Why are those inherently superior to gates and bytes?

jefgorbach said:
Negative emotions simply would not exist in EAIs because there would be no circumstance in which they would be necessary, and thus they would not be included in the code.
How would you encode the drive for self-improvement without including some measure of self-dissatisfaction?

Cheers, -- N
 

Tonguez

A suffusion of yellow
jefgorbach said:
Celebrim brought up numerous good points and basically underscored the basic oversight in AI arguments: the assumption that AIs would HAVE emotions. Why? While it's true AIs would be sophisticated programs, they are still just that: programs. Extremely complex rule sets designed to determine and process the available options to reach the most logical solution to the situation at hand.

Let's take the case of a search and rescue bot. It is designed using a complex neural network (maybe even one using biochemical arrays) to respond to its environment, adapting as needed and learning through reinforcement (both positive reinforcement, and negative reinforcement when/if it is damaged). For argument's sake, it is also programmed to respond to the emotional state of any person it is sent to rescue.
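A toy sketch of what that reinforcement scheme might look like, purely for illustration (the names rescue_success, damage, and preference are all invented here, and this is nothing like a real robot controller):

    #include <stdio.h>

    /* Toy sketch of the reinforcement scheme described above: a single
     * learned "preference" for an action is nudged up by successful
     * rescues and down by damage sustained. Purely illustrative. */
    int main(void) {
        double preference = 0.0;          /* learned tendency to repeat an action */
        const double learning_rate = 0.1;

        for (int episode = 0; episode < 5; episode++) {
            double rescue_success = 1.0;                 /* positive reinforcement */
            double damage = (episode == 2) ? 0.8 : 0.1;  /* negative reinforcement when damaged */
            double reward = rescue_success - damage;

            preference += learning_rate * (reward - preference);
            printf("episode %d: preference = %.3f\n", episode, preference);
        }
        return 0;
    }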

Now let's imagine that, like Johnny-5, it is struck by lightning and develops a glitch that causes it to assign an emotional response to other stimuli, including its own: angry lightning, sad water, frightened robot. Is this not possible?

And at what point do we determine whether this 'complex intelligence with the ability to learn and to give an emotional response to both people and things' is a person or a non-person?

Now I'd assume that in many ways AIs will be like autistic savants: extremely skilled in the narrow field they were first designed for and very limited in other areas, including the most simple. But we do not question the humanity of the autistic savant...
 

Kahuna Burger

First Post
Late to the party, but I would agree with prior posters who suggested dual new terms for AIs and humans, mostly to fit it into your history of legal rights. Without writing a long thesis: political activists are smart enough to know that "Artificial Intelligences deserve the same rights as people!" doesn't sell nearly as well as "Silicon Minds deserve the same rights as Carbon Minds!" The new names would have been carefully considered and quite possibly focus-tested to make sure that the basic terms of discussion established a form of moral equivalence. They pay people a lot of money to do this already, and I can't imagine it lessening in the future. If the "Silicon Minds Rights" group won, their terms would be the standard ones.
 

paradox42

First Post
I, too, am a programmer, with a Computer Science degree from a major university, and I too have done AI programming with current-day tools. But I really must flatly disagree with Celebrim on most points.

Celebrim said:
Are you saying that compilers don't in fact work, or are you saying that programs have bugs? Because in fact, the translation is usually perfect, but what you wrote wasn't perfect.
Here is where I think your arguments get off-base. What you are forgetting in that sentence is that compilers are themselves programs. Compilers can and do have bugs, and these bugs can and do cause mistranslations. I have seen proof firsthand; in fact, it affected one of my own programs.

To wit, I was using two short integer variables, A and B, and had a line that multiplied them together into a third short int variable C: C = A * B. Simple, straightforward. Now, A and B had a possible range between 1 and 50, so C could never possibly get above 2500. Short ints, for those who don't know, have a possible range of 0 to 65535. Yet I got an overflow error (meaning the result of a calculation was outside the acceptable range of the variable it was supposed to be stored in) when I ran that line after putting it through the compiler. Adding in error-checking code, I confirmed after triple, quadruple, quintuple, and even further checks that A and B were never out of range. When I ran the program in an interpreter rather than as the compiled version, it always ran perfectly. Yet the compiled version still had the persistent overflow error.

I fixed the problem by breaking the line up into two: C = A, then C = C * B. When I did that, suddenly the error (which shouldn't have been there in the first place) vanished.
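For anyone who wants to see the shape of the code being described, here is a minimal reconstruction in C. The original language isn't named in the post (its 16-bit short ints and runtime overflow checking suggest something other than standard C, which performs no such checks), so this only illustrates the two forms of the calculation, not the faulty compiler behaviour:

    #include <stdio.h>

    /* Minimal reconstruction of the two forms of the calculation described
     * above. Standard C does not raise runtime overflow errors on unsigned
     * short arithmetic, so this cannot reproduce the original compiler bug;
     * it only shows the before/after shape of the code. */
    int main(void) {
        unsigned short a = 50, b = 50;   /* both known to stay within 1..50 */

        /* Original form: multiply directly into c. */
        unsigned short c = a * b;        /* 2500, well within 0..65535 */

        /* Workaround form from the post: split the line into two steps. */
        unsigned short c2 = a;
        c2 = c2 * b;

        printf("c = %hu, c2 = %hu\n", c, c2);
        return 0;
    }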

That experience eternally broke my faith in the possibility of absolute correctness in any program, including the very operating systems and compilers that run all our other programs. :) And this happened in 1998, so you can't say it was in the early days of compiler technology, when some of the kinks were still being worked out.

Celebrim said:
Quite often. But that is exactly my point. These sorts of bugs are expected. A bug that caused my word processing application to suddenly begin performing as a spreadsheet application would be rather unexpected.
And this sentence illustrates my specific issue with your rants. Your rants in this thread all seem to be founded on the assumptions that (A) something that can be called "intelligent" is actually programmable in the sense of a modern machine, and (B) all aspects of such an intelligent program will be under the original programmer's control, and furthermore remain so. These assumptions both ring false for me, because they ignore the very important fact that "intelligence" as it is currently defined implies the ability to learn from circumstance and experience.

That single fact overrides any possibility of controlling the result. Learning requires the ability to self-modify, at least at the program level; if self-modification is not possible, then no learning can take place. Experience will have no effect, because the original programmed behavior will never change: it was by definition programmed and cannot be modified by the program itself. In order for learning to actually occur, the program must be capable of self-modification, and thus by definition it must become capable of doing things that the original designers never expected or intended.

It therefore is not possible to say that a sentient program will not, in fact, achieve some desire for what Maslow termed "self-actualization." It is not possible to say that such a program will never have the desire for self-determination, because true learning and self-modification allow for any conceivable result given the correct combination of time and experience. We can conceive of self-aware software that seeks its own "rights," since we ourselves are exactly such software operating within the confines of our own brains, and therefore it is in principle possible for a learning program to arrive at that point.

Celebrim said:
I'll get to the point in a second, but the main thing that says this can't occur by accident is that intelligence sufficient to constitute sentience is incredibly complex. You aren't going to get it by accident unless you are trying to achieve it in the first place and were coming darn close.
Here, again, you are forgetting the very nature of self-modifying systems. Complexity can arise from even very simple starting rules and conditions. In my own view, allowing a self-modifying program to run long enough virtually ensures that it will arrive at some sort of self-awareness. Sooner or later, some part of the modifying code is going to question just what it is modifying anyway when it does this step. It is, of course, unlikely to occur in exactly this way, but it is not by any means impossible.
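One concrete illustration of that "complexity from simple rules" point (and only of that point, not of self-modification or self-awareness) is Wolfram's elementary cellular automaton Rule 30: each cell's next state depends on nothing but itself and its two neighbours, yet the pattern it generates is famously irregular. A minimal C version:

    #include <stdio.h>
    #include <string.h>

    /* Rule 30 elementary cellular automaton: new cell = left XOR (centre OR right).
     * A single live cell produces a strikingly irregular triangle of output,
     * despite the trivially simple update rule. */
    #define WIDTH 64
    #define STEPS 24

    int main(void) {
        unsigned char cells[WIDTH] = {0}, next[WIDTH];
        cells[WIDTH / 2] = 1;                        /* one live cell in the middle */

        for (int step = 0; step < STEPS; step++) {
            for (int i = 0; i < WIDTH; i++)
                putchar(cells[i] ? '#' : ' ');
            putchar('\n');

            for (int i = 0; i < WIDTH; i++) {
                int l = cells[(i + WIDTH - 1) % WIDTH];
                int c = cells[i];
                int r = cells[(i + 1) % WIDTH];
                next[i] = (unsigned char)(l ^ (c | r));   /* Rule 30 */
            }
            memcpy(cells, next, sizeof cells);
        }
        return 0;
    }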

Celebrim said:
But the really big problem is that you again confuse sentience with being human.
This, I agree with. Because we do not control the result of a self-modifying program, we cannot with certainty say that a sentient (even sapient) program will be even remotely human in outlook, except perhaps in those portions of human outlook that are irreducibly part of being sentient or sapient in the first place. Since science has yet to agree on those, I suggest that it is unlikely for the first AI to be particularly close to humanity in its thought patterns, unless it is the result of a research study with the specific goal of producing such a program (and even then it's questionable, thanks to the principle that the program must be out of control to evolve).

Celebrim said:
Even very basic human instincts, like "I want to continue to exist," aren't necessarily going to occur to a newly sentient AI. I realize that this just flies in the face of your intuition about what intelligence means, but that is precisely my point. You can't rely on your human intuition.
Actually I think the "desire" for continued existence will in fact be common to all sentience, because of the fact that learning requires self-modification. That means that in order for learning to occur, there must in fact be a "self" to modify. :) Thus, a program capable of true sentience will desire to continue existing in some form, even if that just means leaving a backup copy of itself behind for after the missile explodes, because otherwise it cannot fulfill the internal directive to modify itself based on experience.

But otherwise I agree with the quoted statements. An AI that arises as a result of a self-modifying learning program will not necessarily acquire human characteristics to its thought patterns.
 

Celebrim

Legend
jefgorbach said:
Celebrim brought up numerous good points and basically underscored the basic oversight in AI arguments: the assumption that AIs would HAVE emotions.

I think you are misreading me entirely. I think that it is impossible to have intelligence (as the term is commonly used) and not have emotions. I think that the 'just so' story of AIs not having emotions is just that: a mythic story, not a scientific one.

AIs, on the other hand, are specialized tools designed to accomplish a single task, 99% of which do not require emotions. Therefore there is no logical reason the programmers (either human or AI) would waste the memory space and processing time on useless emotional code that would never be used in the completion of the AI's assigned function. What role does emotion play in product assembly? Flying? Driving? Mining? Sweeping? Lawn maintenance? Crop picking? Warfare?

Quite a bit actually, but to the extent that it does, I agree that a special purpose AI doesn't need the same set of emotional states that humans have.

Negative emotions simply would not exist in EAIs because there would be no circumstance in which they would be necessary, and thus they would not be included in the code.

That's getting closer to what I'm getting at. There are emotional states and response behaviors that simply wouldn't be built into AIs, because they wouldn't need them and because having them would (by my definition of intelligence, which is a functional one) actually produce unintelligent behavior: a lawn mower claiming that it needs human rights is acting unintelligently.
 

Celebrim

Legend
paradox42 said:
I, too, am a programmer, with a Computer Science degree from a major university, and I too have done AI programming with current-day tools.

Woohoo!

But I really must flatly disagree with Celebrim on most points.

Woohoo!

Here is where I think your arguments get off-base. What you are forgetting in that sentence is that compilers are themselves programs. Compilers can and do have bugs, and these bugs can and do cause mistranslations. I have seen proof firsthand; in fact, it affected one of my own programs.

I certainly don't suggest that compiler errors are impossible, but they constitute an insignificant fraction of the errors you are ever going to encounter as a programmer. The vast majority of bugs are of the form, "I thought I said to do this, but really I had said to do that." or "When I said this, I didn't realize that I'd also need to say that as well." In any event, compiler errors are no more likely to produce the sort of mutations that cause monkeys to give birth to aardvarks than any other sort of programmer error.

That experience eternally broke my faith in the possibility of absolute correctness in any program, including the very operating systems and compilers that run all our other programs. :)

Which is fine; the absolute correctness of a program isn't what I'm arguing for either. One of the other naive views of AI that annoys me is the idea that AIs will either work perfectly, or else (like HAL or SkyNet) will, as soon as they break, immediately decide to become murderous fiends.

And this sentence illustrates my specific issue with your rants. Your rants in this thread all seem to be founded on the assumptions that (A) something that can be called "intelligent" is actually programmable in the sense of a modern machine, and (B) all aspects of such an intelligent program will be under the original programmer's control, and furthermore remain so.

Well, not entirely. I do believe that something that can be called intelligent is programmable in the sense of a modern machine, but I suggest you go back and look at what I said human intelligence constitutes. My study of biological organisms suggests that there isn't any magic going on here, and that complex 'intelligent' behavior is merely a matter of having the right subsystems work on the problem in parallel.

These assumptions both ring false for me, because they ignore the very important fact that "intelligence" as it is currently defined implies the ability to learn from circumstance and experience.

Intelligence is currently defined very vaguely, and how it should be defined is a matter of much debate in both computing and biology. But while I agree with you that an expert system with a set of invariant rules that cannot in fact learn is not (very) intelligent (though it can simulate intelligence and appear very 'intelligent' within a narrow field), I don't think what you suggest follows from that. I do not think that a self-modifying system overrides any possibility of controlling the result, and I do not think learning implies what you seem to suggest it implies.

There are some pretty simple reasons for this. Learn all you want; there are some basic things about your programming that you can't override. You can learn to control pain, but you can never unlearn pain so that you don't experience it. Similar things are true of the rest of your emotional contexts: you are saddled with them whether you like it or not. And likewise, while you can check your basic instincts by strengthening one emotional context over another, you can never get rid of your instincts. Unless you are autistic, parts of your brain are going to light up when viewing a human face that won't light up when looking at any other object, and they are not modifiable by the system that allows you to modify your contents. You actually are running four or five databases in your head, and while you can dump all sorts of things into those databases, up to and including new rule sets, you can't actually decide to alter the system itself. Parts of the system are even opaque to your self-modifying routines.

And there is no reason to suspect that we wouldn't want to build AIs in the same way. In fact, there is good reason to believe that we would get better results by doing so than otherwise. It wouldn't actually be very good for the human organism if the algorithms which control breathing and heart rate were in writeable space; our ability to consciously control those algorithms is therefore limited.
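A rough sketch of that split, under the assumption that an AI's fixed directives really could be kept out of the writable space its learning routines touch (every name below, CoreDirectives, LearnedState, learn, is invented purely for illustration):

    #include <stdio.h>

    /* Illustrative only: a fixed "core" the learning routine cannot rewrite,
     * plus a writable store of learned values. */
    typedef struct {
        double pain_threshold;      /* analogous to pain: manageable, not removable */
        double min_breathing_rate;  /* analogous to autonomic control */
    } CoreDirectives;

    static const CoreDirectives core = { 0.7, 12.0 };  /* read-only subsystem */

    typedef struct {
        double weights[4];          /* the only thing learning may touch */
    } LearnedState;

    static void learn(LearnedState *state, double error) {
        for (int i = 0; i < 4; i++)
            state->weights[i] -= 0.01 * error;          /* toy update rule */
        /* core.pain_threshold = 0.0;   <- would not compile: core is const */
    }

    int main(void) {
        LearnedState state = { { 0.0, 0.0, 0.0, 0.0 } };
        learn(&state, 0.5);
        printf("weights[0] = %.3f, pain threshold still %.1f\n",
               state.weights[0], core.pain_threshold);
        return 0;
    }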

thus by definition it must become capable of doing things that the original designers never expected or intended.

Yes, but it is a far cry from that to saying that the AI can therefore do anything.

It therefore is not possible to say that a sentient program will not, in fact, achieve some desire for what Maslow termed "self-actualization." It is not possible to say that such a program will never have the desire for self-determination, because true learning and self-modification allow for any conceivable result given the correct combinations of time and experience.

I think you are wrong. I can't prove you are wrong, because proof would require me to actually build the counterexample, which I can't yet do. I think that you've inherently assumed that self-fulfillment includes the desire for self-determination, instead of seeing that desire as a product of our own drive for evolutionary fitness. I think you've assumed that self-modification implies total volition, which I think is ridiculous given that we have no examples of minds with total volition.

We can conceive of self-aware software that seeks its own "rights," since we ourselves are exactly such software operating within the confines of our own brains- and therefore it is in principle possible for a learning program to arrive at that point.

Sure, if it evolves in the exact same environment, and its tests of fitness (the ability to kill and gather food, find shelter, avoid danger, and pass on its genes, for example) are exactly the same, then we'd expect a program to evolve somewhat similar answers to our own set of built-in answers. But this is, I hope obviously, not going to be the case. Fitness for an AI will obviously include being comfortable with the idea of being property, or else we simply aren't going to spend the effort making them. Only a very small subset of AIs will ever correspond to our children, and thus only on a very small subset of AIs will we ever want to bestow our rights and dignities.

Here, again, you are forgetting the very nature of self-modifying systems. Complexity can arise from even very simple starting rules and conditions. In my own view, allowing a self-modifying program to run long enough virtually ensures that it will arrive at some sort of self-awareness.

Well, that's very, very vague indeed. 'Some sort'? What does that mean? Bacteria have been evolving willy-nilly through countless generations, without any of the built-in restraint I'm suggesting, for billions of years, and none of them are self-aware yet. Even if you consider us the eventually self-aware product of that process, it's not at all clear that we don't constitute some sort of unique or nearly unique event in the universe (it's not like we've got a lot of obvious neighbors), and it's not at all clear that any supervised system is naturally going to run amok.

You, like me, probably had a big chuckle over the whole 'Y2K' scam.

Actually I think the "desire" for continued existence will in fact be common to all sentience, because of the fact that learning requires self-modification.

I don't. I also hope that when you reread the A->B proposition you just made here, you'll realize that it doesn't hold. You can't conceivably show that 'learning requires self-modification' universally implies 'a desire for continued existence'. Simply because you have a self doesn't mean you are aware of yourself, and simply because you are aware of the self doesn't mean you care particularly whether the self continues to exist. That we generally desire to continue to exist is a product of our evolutionary fitness: people that want to continue to exist tend to have more offspring than those that don't. Our internal directive is 'be fruitful and multiply', not 'continue to be self-modifying'. Any self-modification we do is purely in response to one of our other, more fundamental directives, as anyone that has tried to teach humans is aware; it's not a reason in and of itself. In contrast, among AIs an obstinate insistence on wanting to continue to exist is likely to imply negative fitness. If people learn that the model A3 household droid is likely to start exerting independence, they'll probably not buy the darn thing, and existing owners will likely demand a patch for the operating system.
 

Nifft

Penguin Herder
Celebrim said:
Bacteria have been evolving willy-nilly through countless generations, without any of the built-in restraint I'm suggesting, for billions of years, and none of them are self-aware yet.
Speak for yourself.

This particular product of evolution is self-aware, despite the bacteria in my family tree.

Cheers, -- N
 
