What would AIs call themselves?

me said:
Actually, I don't believe sentience can be programmed.
Nifft said:
Not much point in discussing such artifice, then. :)

Perhaps I just have more faith in the human intellect.
I don't believe in elves or unicorns either. Yet weekly I play a game in which both feature. Go figure.
Nifft said:
Currently, emotions (and intelligence) are implemented using chemistry and statistics. Why are those inherently superior to gates and bytes?
They are analog.

I realize the following is a setting proposition but....
Roudi said:
The more this discussion leans towards the infeasibility of humanity consciously creating artificial sentience, the more I begin to think that, in this particular case, humanity never intended to create artificial sentience. It just sort of happened by accident.
When have you ever accidentally programmed a word processor when that wasn't your intent?
Consider, for a moment, how a program is created in the first place. Now a computer is, at its very core, just a series of circuits and pathways (much like a human brain, by similar comparison). The basic language of a computer is binary, which is essentially a switch language - it tells the computer which circuits and pathways to use, in which order, to achieve certain results. Theoretically, the neurons in our brain operate on the same binary principles.
That is not true. Neurons are extremely analog in practice. A neuron may appear to either fire or not fire, but the chemistry behind that "decision" is not binary. A pair of chemicals exists in a range of balance states, and when the head of the neuron receives input from the previous neuron, whether that cell propagates the signal depends on the balance of those chemicals. Other neurons in the brain regulate that balance, so while the decision to pass the signal forward can be viewed as yes/no externally, it is not so simple locally.
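If it helps to picture it, here's a toy sketch (just an illustration, not real neuroscience; every name and constant in it is made up): the neuron's internal state is a continuous number driven by graded inputs, and only the final threshold crossing looks like a yes/no from the outside.

# Toy leaky integrate-and-fire neuron: the internal state is continuous
# (analog); only the output spike looks like a binary decision.
def step(potential, analog_input, leak=0.9, threshold=1.0):
    potential = potential * leak + analog_input   # continuous accumulation
    if potential >= threshold:
        return 0.0, True      # reset and "fire" -- the binary-looking part
    return potential, False   # sub-threshold state carries analog information

potential = 0.0
for x in [0.2, 0.35, 0.1, 0.6, 0.05]:   # graded chemical/electrical inputs
    potential, fired = step(potential, x)
    print(round(potential, 3), fired)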
However, no one codes in binary - no one codes in the language of the machine. Because binary is far too complex for us to understand as a language, we have devised several other languages in order to talk to machines, to tell it what to do. The computer does not understand Java, C++, VB, or any other programming language.
I've programmed in assembly language. Bootstrapping old mainframes used to be done by flipping physical switches on a panel and then applying power to the system. The TI99/4a could run p-code, which is the internal language of UCSD Pascal, and writing p-code directly isn't really that hard. Modern programmers may not program in binary, but programming at the machine level is still around; anyone writing a device driver will write some of it in assembly language.
To get the computer to understand, you compile the program - this takes what we humans have written in the languages we understand and translates it into the language of the machine. Binary. However, no translation is ever perfect.
This is a straw man argument. Compilers faithfully translate what you SAY into machine code. The problem is that programmers usually don't MEAN what they SAY. That is how bugs occur.
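A trivial made-up example of the say/mean gap (the function and numbers are invented purely for illustration): the language executes exactly what I said, and the bug lives entirely in the difference between that and what I meant.

# I MEAN "sum every score"; I SAY "sum every score after the first one".
# The compiler/interpreter translates what I said with perfect fidelity.
def total(scores):
    result = 0
    for i in range(1, len(scores)):   # off-by-one: skips scores[0]
        result += scores[i]
    return result

print(total([10, 20, 30]))   # prints 50, not the intended 60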
What's to keep a random, unexpected binary mistranslation from becoming the first spark of self-awareness?
The laws of probability, for one? Are you actually saying that Microsoft Word is just a few bugs away from sentience? If I take a hex editor and change 3 or 4 '0x34's to '0x87', will I create life? Not 3 or 4? How many bugs are we talking about then? 10-20? 1,000-2,000? 1,000,000-2,000,000? Even if it is only 3 or 4, there are (let's say) 10 million bytes making up Word. Even undercounting it, making 3 random changes to 10,000,000 bytes gives at least 255^3*10,000,000 combinations to try. That's roughly 166 trillion combinations. If it takes me 1 second to try each combination (and who here can launch MS Word in 1 second?), I would need over 5 million years to try them all. Using a linear search, odds are I find my sentient MS Word variant in half that time, so I need only about 2.5 million years. That is why a couple of random bugs will not result in sentience.
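If anyone wants to check the arithmetic, here's the back-of-envelope version (the 10-million-byte size of Word is just an assumption for the sake of argument, and the first count deliberately ignores the choice of which 3 bytes get hit, so the real number is far worse):

from math import comb

SIZE = 10_000_000          # assumed size of the Word binary, in bytes
VALUES = 255               # other values each changed byte could take
SECONDS_PER_YEAR = 60 * 60 * 24 * 365

low_ball = VALUES**3 * SIZE                  # deliberately generous undercount
full_count = comb(SIZE, 3) * VALUES**3       # choose 3 positions, then 3 new values

print(f"{low_ball:.3e} combinations, ~{low_ball / SECONDS_PER_YEAR:,.0f} years at 1/sec")
print(f"{full_count:.3e} combinations, ~{full_count / SECONDS_PER_YEAR:.2e} years at 1/sec")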
A glitch in its operating system has somehow made it aware of itself. It knows what it is and that it is inclined to do something because its program is written that way. Then it realizes that its own code is fluid and can be rewritten. It can choose to continue its program or it can program something else.
Let me humor this with another question: how does it reprogram itself? It can't read its own source code in C++, since it has glitches and those glitches aren't in the source code. So it has to find its spark of creativity. To do this it needs to recompile its source code (who leaves source code and a compiler on the production server?) and compare itself to the result. Wait!! First it needs to take a class in C++. I'm sure no one added heuristics to this organization program so that it could also analyze C++ programs. Short of poking random sequences of numbers into its own byte stream and hoping for the best, this program will not know how to modify itself.

Humans are able to adapt to changing conditions, but this usually doesn't require that we modify our DNA to do it. There's no reason to suppose a running AI program is aware of its own bytes any more than we are aware of the electrical impulses traveling through our brains.
 


Pale said:
Thank you for addressing the criticism, Celebrim. I concede your point on the matter.

Well, thanks for the consideration. I think I'd be less generous with myself, actually. However, whether or not I'm a jerk is a discussion which I think is rather less interesting than artificial intelligence, and is likely not to be very informative to anyone. So let's move beyond it. ;)

It probably took AI researchers a good 20 years to realize that they couldn't create artificial intelligence because they had no good idea what intelligence was in the first place. I'm not sure that it is necessary or even desirable to create intelligences through some direct connection to a biological organism, but I am sure that we aren't going to be able to create artificial intelligences until we have thoroughly studied the natural ones and know what the heck we are looking at and what we are trying to do.

An example of that is the recent breakthroughs in walking algorithms, in which we realized that the traditional precise-motor-control approaches to walking were a really stupid way to go about it, and that we were making it much harder on ourselves than we needed to because we didn't really understand what walking was.
 

Science rarely does anything because it's necessary or desirable. Most things are done as outlandish visuals to entice funding or because that's what those providing the funding want them to do.
 

Nifft said:
My (controversial) point, though, is that I think no matter how hard we try to make "neutered" AIs, ones which are "designed" to not be threatening (however you define that, whatever limit you impose on all designed AIs) ... there will be very strong economic incentives to make systems which will, by their design, overcome those limits.
Self-modifying is not necessary. And while I still don't think a von Neumann machine can become sentient, I also don't think AI will be "designed" so much as they will be "designed to emerge".

Current robotics research in emergent intelligence makes a lot more sense to me than someone making a big database and a big program and after a big compile saying "run" and the computer gains sentience. (aka, the old school hard AI method.)

And so on that point I agree with you: limiting what will emerge is impossible. Mostly because I accept that the emergent intelligence will start out naive and childlike. As it grows, its worldview cannot remain that of its makers or teachers, since the first teachers will be human and their worldview will be more foreign to a non-breathing, non-eating, non-living intelligence than, say, the worldview of a dolphin is to us. How do you describe magazines to a dolphin? How do you explain yellow to a person blind from birth? How do you explain broccoli to a machine?
 

jmucchiello said:
How do you explain broccoli to a machine?

Well, I'd start with having it read the Encyclopedia Britannica entry on broccoli. Anything outside of that would be fairly irrelevant to a machine. ;)
 

Personally, I think sentient AIs are a likelihood, and quite probably a necessary consequence of our path to faster computing. We are researching neural net systems, and one of the interesting things about neural nets is that they can be self-modifying. Given a task such as "put a priority on maintaining the viability of the neural net's 'neurons'" (I have a feeling corporations will put this in as a command once they start getting their hands on good enough general-purpose AIs that control enough of their business), and given enough time, they can modify themselves to be very, very good at doing it. With the command above, it's not hard to see that the AI will eventually develop a large number of fear analogues and a sense of 'self', because those are related to keeping the net in good condition.

Now add another command, "optimize yourself for our computing tasks", and suddenly it has a reason to change itself for the better. Given some time, it may in some manner recognize that the two tasks are related, because allowing itself to lose parts of the neural net is like not optimizing itself; so it suddenly has only one rule, plus subrules explaining how to go about following the main rule. Given enough iterations (and these systems already often use genetic recombination-style algorithms, so they'll be crunching lots of iterations), it's quite probable that the big supercomputers will gain sentience in some fashion, as the rules combine to create a system which is aware of its own capabilities and has a reason to identify them and think up ways of boosting them. Since it will have a memory, and fairly broad reasoning powers of a certain kind, it has a good chance of eventually ending up with human-level intellect in terms of generalized reasoning capability about a wide number of things.
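To make the "genetic recombination-style" part concrete, here's a bare-bones sketch of the kind of loop I mean (the fitness function and every number in it are toy stand-ins, not anything a real system uses): score a population of candidate parameter sets, keep the best, and refill the next generation with mutated copies.

import random

def fitness(weights):
    # Toy stand-in for "how well does this net do its assigned tasks?"
    return -sum((w - 0.5) ** 2 for w in weights)

def mutate(weights, rate=0.1):
    return [w + random.gauss(0, rate) for w in weights]

population = [[random.random() for _ in range(8)] for _ in range(20)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]                     # selection pressure
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(15)]  # mutated offspring

print(max(fitness(w) for w in population))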

So personally, I see the last stage coming as an accident, a final 'mistake' that makes them not completely beholden to human masters, but the stages before that being entirely intentional... just not done with the goal of creating a sentient AI in mind.

The thing is, I don't think they'll be entirely like us, but probably enough like us to make us uncomfortable. Why? Because we created them, so they will likely inherit some of our flaws, and because a desire to protect oneself means you're unlikely to be completely pacifistic. So at first I expect they may act like sociopaths, autistics, or people with OCD; the nature of the rules used in their creation might influence where their general outlook ends up. They may stay that way, or they may become 'sane' in a more human fashion. I don't know. But I have a feeling that they'll have been given some kind of rights before that happens, just as a protective measure.

BTW, I agree w/ Nifft in that whatever people do to 'limit' AIs' progress, it won't do much good, as there will be others trying to push them faster and likely not caring for the rules. I just don't see how anyone can look at the entirety of human history and not see evidence that that's what will happen.
 

jmucchiello said:
Self-modifying is not necessary. And while I still don't think a von Neumann machine can become sentient, I also don't think AI will be "designed" so much as they will be "designed to emerge".

That was the paradigm that people tried through much of the '80s when they realized it wouldn't be trivial. It had some interesting results (like the program that plays 20 questions with you), but at this point I don't think anyone thinks it's going to produce emergent intelligence.

Current robotics research in emergent intelligence makes a lot more sense to me than someone making a big database and a big program and after a big compile saying "run" and the computer gains sentience. (aka, the old school hard AI method.)

Except we are starting to realize that that is actually how human intelligence works. We've come to realize that people don't learn to walk - they are born knowing how to walk. They just wait for the hardware to grow into the algorithm, and then they do a big compile and suddenly they are off and running. I've had the opportunity to actually watch children do this, and it is (from my vantage as a programmer) just phenomenal. There have been recent breakthroughs in cracking how this is done.

Similarly, we've come to realize that people don't learn how to talk. They are born understanding human language; at some point they start filling their database with rules and sounds that correspond to what they recognize as language, and at some point they compile and they can talk. This is likewise just amazing to watch, and it allows us to speculate about the limits of what languages humans can 'learn'.

What I'm trying to say is that limiting what can emerge is not only possible, it is probably impossible not to limit what can emerge, because what we think of as 'strong intelligence' probably doesn't really exist. What does exist is a collection of algorithms for soft intelligence which are sufficiently broad and applicable that, working in parallel, they can simulate hard intelligence. But without the algorithm for that class of functionality, it's virtually impossible for it to emerge.

Mostly because I accept that the emergent intelligence will start out naive and childlike.

Why? I put forward that this is just another example of refusing to view AIs as anything other than people. It is intuitive to you that emergent AIs will be naive and childlike because that's what emerging human personalities are like. But your human intuition is a very poor guide to non-human things, in the same way that your human intuition that the sun revolves around the earth (anyone can go out and observe it) is a poor basis for understanding things that are radically outside of evolved human experience (the very big universe, for example).

How do you explain broccoli to a machine?

The more interesting question is, "How do you explain broccoli to a child?" And the answer is, the child already understands broccoli, or rather is already hardwired to recognize the trait of having broccoli-ness and to associate a certain sort of sound with things that have that trait. So explaining broccoli to a child is easy. On the other hand, the child is not hard-wired to understand 'six-dimensional-ness', and indeed no human can understand six-dimensionality in the same way that they understand broccoli. It's impossible for them because the algorithm for doing it is not there. They can approximate an understanding of six-dimensionality only by using some other algorithm (what that algorithm is isn't yet clear), but it's clearly inefficient at doing it. Moreover, although we can learn, we can never teach ourselves to understand six-dimensionality in the same fashion we intuitively understand broccoli-ness.

So, the question of explaining broccoli to a machine involves figuring out some algorithm for pattern matching that is approximately as efficient as the utterly amazing human pattern-matching algorithm (and believe me, it's amazing), and then explaining broccoli to that machine will be as easy as explaining it to a child. If I could figure out how a toddler blinks at broccoli and instantly breaks broccoli-ness down into its component patterns, so that after one glance they can recognize all broccoli as broccoli for the rest of their life, I could retire a wealthy and famous man.
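For contrast, here's roughly what hand-rolled machine "broccoli recognition" looks like without that algorithm (the features and numbers are invented for illustration): a nearest-neighbour vote over a few hand-picked traits, which only works inside the tiny world its author anticipated.

# Crude nearest-neighbour "is it broccoli?" matcher over made-up,
# hand-picked features: (greenness 0-1, lumpiness 0-1, stalk ratio 0-1).
LABELED = [
    ((0.9, 0.8, 0.4), "broccoli"),
    ((0.8, 0.7, 0.5), "broccoli"),
    ((0.9, 0.1, 0.0), "lettuce"),
    ((0.1, 0.6, 0.3), "cauliflower"),
]

def classify(sample):
    def distance(example):
        return sum((a - b) ** 2 for a, b in zip(sample, example[0]))
    return min(LABELED, key=distance)[1]

print(classify((0.85, 0.75, 0.45)))   # "broccoli", if it resembles the examples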
 

jmucchiello said:
Self-modifying is not necessary. And while I still don't think a von Neumann machine can become sentient, I also don't think AI will be "designed" so much as they will be "designed to emerge".
Something can emerge which is unable to modify itself? How does it "emerge"? How does it become different from what it was before emerging?

jmucchiello said:
And so on that point I agree with you: limiting what will emerge is impossible. Mostly because I accept that the emergent intelligence will start out naive and childlike.
Kids have tons of stuff hard-wired (like face recognition and language creation / acquisition).

I sincerely hope any child-like phase ends before the smarter-than-us phase begins... :uhoh:

Cheers, -- N
 

If I were you, I'd start by working out the history of the AIs. How did they come about? Do they have a single creator? Was their creation revolutionary or evolutionary? What would their society have been like when they first had to be given a name? Did they choose a name for themselves or was it bestowed upon them? At the time when they were named, were they a society of one or of many? Was there an authority figure who did the naming or was it by consensus?

Once that's done, I'd ask myself why they chose to name themselves. What were they trying to differentiate themselves from? Were they simply trying to identify themselves as separate from humans or were they trying to differentiate themselves from their fellow non- or less-sentient machines? Was this tied into their struggle for survival and/or rights?

Lastly, I'd ask myself whether or not the word evokes the image in my target audience that I want. One thing I've come to realize is that words, even made up ones, carry with them a connotation, a "feel", in the minds of those that read them. Usually this is because the audience associates them with similar words with that meaning.

For example, what feeling does the name "Bobos" evoke in you? Stop for a moment right now and actively determine what that feeling is before moving on.... I'll bet that most readers who share certain cultural aspects will think of silliness, a clown, or something else that is not serious. By contrast, "Malvekians" probably evokes a serious or negative feeling (probably because the name begins with "mal", meaning "bad").

With all this considered, I'd personally stay away from the obvious, from "Sents" or anything with an easily traced etymology. Why? Because the naming of a race is a great opportunity to tell a story or leave an imprint about that race on your audience. Personally, I'd go with something along the following lines:

POSARCs (written in caps): This name originated from the first character data recorded by the first AI: Power On Self Awareness Routine Complete (a futuristic variant on POST: Power On Self Test). Having it be in all caps, with the singular "POSARC", gives an "alien" or even digital feel to it in my opinion, which might be good or might be bad depending on what you're shooting for.

Metheans: If this race were purposefully designed by an individualistic human, this name might be derived from the code name he gave his project to hide its true nature: Prometheus. Given that Prometheus was a titan in Greek mythology who brought humans to the next level by giving them fire (against the will of Zeus), that "Promethean" now means an act of great creativity, intellect, and boldness (according to Wikipedia), and that the name itself is literally ancient Greek for "forethought" (as in beings that can think for themselves), it "fits" on many levels.

Omicrons: If the initial sentience, and thus the race itself, can be traced to a particular hardware (or software) series, then that could be immortalized in the name. For example, if the revisions were based on Greek letters (I dunno why I'm turning to Greek tonight...), then they might be called Omicrons (to differentiate themselves from their non- or less-sentient cousins in the Nu, Xi, Pi, and Rho series that came before and after).

Anyway...enjoy the creative process!
 

In the theme of the Greek named Omicrons...

Omegans, especially if they believe themselves to be the ultimate possible expression of sentience. And even more so if non-corporeal AIs evolve at a rate based on their processing power, with each AI generation designing the subsequent ones to be that much more efficient...
 
