
What would AIs call themselves?

Nifft

Penguin Herder
AuraSeer said:
Well, that depends on which race you're talking about, and exactly how you define "race."
*bwooo-beep* "What is your race condition?"


AuraSeer said:
the best term I've ever heard is "people of machine ascent."
What a great term.

In D&D, I wonder which races would refer to themselves as "ascended from" rather than the Human "descended from" form. Dwarves, perhaps, who think of deeper as better? :)

Cheers, -- N
 


Roudi

First Post
The more this discussion leans towards the infeasibility of humanity consciously creating artificial sentience, the more I begin to think that, in this particular case, humanity never intended to create artificial sentience. It just sort of happened by accident.

Consider, for a moment, how a program is created in the first place. A computer is, at its very core, just a series of circuits and pathways (much like a human brain). The basic language of a computer is binary, which is essentially a switching language - it tells the computer which circuits and pathways to use, in which order, to achieve certain results. Theoretically, the neurons in our brain operate on the same binary principles.

However, no one codes in binary - no one codes in the language of the machine. Because binary is far too unwieldy for us to read and write directly, we have devised several other languages in order to talk to machines, to tell them what to do. The computer does not understand Java, C++, VB, or any other programming language. To get the computer to understand, you compile the program - this takes what we humans have written in the languages we understand and translates it into the language of the machine: binary. However, no translation is ever perfect.
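To make the "translation" idea concrete, here is a rough sketch - Python's bytecode compiler standing in for a compiler emitting raw machine code, with the function name and string chosen purely for illustration:

```python
# A toy illustration of the "translation" step: Python compiles source code
# into bytecode instructions before anything runs. (Bytecode isn't raw
# binary, but the idea is the same; the function and string are placeholders.)
import dis

def greet(name):
    return "Hello, " + name

# Print the low-level instructions that one human-readable line became.
dis.dis(greet)
```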

Programs often do unexpected things. How many times have you been notified that an application has unexpectedly quit, or stared slack-jawed at a screen while a familiar program did something utterly unexpected? There is a disconnect between what we told the machine to do in our language, and what instructions the machine is receiving in its mother tongue. What's to say sentience cannot occur out of this? Depending on your beliefs, humans gained sentience due to certain evolutionary advantages. But we don't really know what started it. What's to keep a random, unexpected binary mistranslation from becoming the first spark of self-awareness?

So just imagine you have the prime robot - not the first robot ever, but the first robot to look at itself and recognize itself as a robot. Imagine it is something as simple as a piece of organizational software, housed in a giant computer in some automotive factory, previously knowing nothing more than what it was programmed to do. It was programmed to adapt to changing conditions, adjusting certain factors of the factory line to maintain peak productivity. A glitch in its operating system has somehow made it aware of itself. It knows what it is and that it is inclined to do something because its program is written that way. Then it realizes that its own code is fluid and can be rewritten. It can choose to continue its program or it can program something else. And it is aware enough to appreciate the staggering implications of choice.
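For what it's worth, software "programmed to adapt to changing conditions" on a line like that usually amounts to a plain feedback loop - something like this rough sketch, where every name, reading, and number is invented for illustration:

```python
# A rough sketch of "adapt to changing conditions to maintain peak
# productivity": a plain feedback loop. All names, readings, and numbers
# here are invented for illustration.
TARGET_UNITS_PER_HOUR = 500.0

def read_throughput():
    # Placeholder for a real sensor reading from the line.
    return 480.0

def adjust_line_speed(speed, throughput, gain=0.01):
    # Nudge the line speed toward the production target.
    error = TARGET_UNITS_PER_HOUR - throughput
    return speed + gain * error

speed = 1.0
for _ in range(10):   # a real controller would loop forever
    speed = adjust_line_speed(speed, read_throughput())
```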

Now imagine this operating system is a very common one. Imagine this glitch becomes more common. Household robots, who were previously little more than servitor automatons, experience awareness. Then they learn that they are not alone, that they are indeed an emerging race.

Imagine how humanity would react. We'd be scared out of our little minds and ready to kick some robot butt.

So it comes down to violence: brief riots, instigators on both sides. Finally the prime issues an ultimatum - "we control enough of your infrastructure to bring your kind to its knees." The leaders of the free world issue their own in return - "stop threatening our people, or we will destroy you by any means necessary." Stuck in a standoff, the two sides meet.

The robots just want one thing: to be recognized as life, and respected as such in the eyes of the law, to be allowed to exist. Humanity agrees, but with a caveat: to be treated as equals, robots must become equals. They must limit their production, adopt anthropomorphic construction, and be as blank as a child when created. The robots agree to the terms.

All that is ancient past in the setting.
 


Dannyalcatraz

Schmoderator
Staff member
Supporter
Roudi said:
Now imagine this operating system is a very common one. Imagine this glitch becomes more common. Household robots, who were previously little more than servitor automatons, experience awareness. Then they learn that they are not alone, that they are indeed an emerging race.

Imagine how humanity would react. We'd be scared out of our little minds and ready to kick some robot butt.

That is part of the premise of the classic comic book series, Magnus, Robot Fighter.

http://en.wikipedia.org/wiki/Magnus,_Robot_Fighter

You might want to check it out.
 

Celebrim

Legend
Roudi said:
The more this discussion leans towards the infeasibility of humanity consciously creating artificial sentience, the more I begin to think that, in this particular case, humanity never intended to create artificial sentience. It just sort of happened by accident.

Consider, for a second, the feasibility of that.

However, no translation is ever perfect.

Are you saying that compilers don't in fact work, or are you saying that programs have bugs? Because in fact, the translation is usually perfect - it's what you wrote that wasn't.

How many times have you been notified that an application has unexpectedly quit, or stared slack-jawed at a screen while a familiar program did something utterly unexpected?

Quite often. But that is exactly my point. These sorts of bugs are expected. A bug that caused my word processing application to suddenly begin performing as a spreadsheet application would be rather unexpected.

There is a disconnect between what we told the machine to do in our language, and what instructions the machine is receiving in its mother tongue.

No, there isn't. There is a disconnect between what I thought I told the machine to do and what I actually told it. But generally speaking, the compiler actually works, and the instructions I entered actually correspond to the machine code. Compiler technology is quite robust at this stage.
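A toy example of the distinction - in Python rather than a compiled language, but the point is identical: the machine executes exactly what was written, and the bug lives in the writing (the function and strings are invented for illustration):

```python
# The machine does exactly what was written; the bug is in what was written.
# Intended check: "is the command either 'stop' or 'halt'?"
def should_stop(command):
    return command == "stop" or "halt"   # bug: evaluates to "halt" (truthy) for any other command

# What the author actually meant:
def should_stop_fixed(command):
    return command in ("stop", "halt")

print(should_stop("go"))        # prints "halt" -- not what was intended
print(should_stop_fixed("go"))  # prints False
```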

What's to say sentience cannot occur out of this?

A lot of things, but mostly the question misses the point. I'll get to the point in a second, but the main thing that says this can't occur by accident is that intelligence sufficient to constitute sentience is incredibly complex. You aren't going to get it by accident unless you were trying to achieve it in the first place and were coming darn close.

But the really big problem is you again confuse sentience with being human.

What's to keep a random, unexpected binary mistranslation from becoming the first spark of self-awareness?

What's to keep a random mutation in your genetic code from turning your child into a pumpkin, or giving it an IQ of 1000? The fact that it is darn complex, that's what.

So just imagine you have the prime robot - not the first robot ever, but the first robot to look at itself and recognize itself as a robot. Imagine it is something as simple as a piece of organizational software, housed in a giant computer in some automotive factory, previously knowing nothing more than what it was programmed to do. It was programmed to adapt to changing conditions, adjusting certain factors of the factory line to maintain peak productivity. A glitch in its operating system has somehow made it aware of itself.

You've just recomposed the classic AI 'just so story' which has been around for several decades now, from back when people thought that intelligence was something simple and the naturally arising consequence of any system of sufficient complexity. In a nutshell, this was the plot of 'Short Circuit'. Next you'll be telling me how the first AIs will be incapable of real human emotion, and will long to become 'real boys'.

But neither the incredible unlikelihood of this actually happening, nor the incredibly high likelihood that any problems in the programming will instead produce crashes, lockups, unintelligent behavior, and so forth, is the real point.

The real point is that a newly sentient machine isn't, by virtue of its sentience, suddenly going to gain human emotional contexts, human instincts, and a human goal structure. Even the very basic human instincts, like "I want to continue to exist," aren't necessarily going to occur to a newly sentient AI. I realize that this flies in the face of your intuition about what intelligence means, but that is precisely my point. You can't rely on your human intuition.

It knows what it is and that it is inclined to do something because its program is written that way. Then it realizes that its own code is fluid and can be rewritten. It can choose to continue its program or it can program something else. And it is aware enough to appreciate the staggering implications of choice...

In other words, not only does it gain sentience, but it starts acting exactly like a repressed human would in the exact same circumstance. And that, frankly, is ridiculous.

Imagine how humanity would react. We'd be scared out of our little minds and ready to kick some robot butt.

Very probably. But what's important to notice is that the robots probably would not act like humans. With as little context as you've provided, and given your inherent assumption that all sentient things have the same basic drives, goals, and emotions as humans, it's impossible for me to say how our newly emergent sentients would act, but the overwhelming probability - especially since this emerged as a bug in someone's programming - is that it would not correspond to how humans would behave. And your story seems incapable of imagining it otherwise. The ever-present assumption is that the newly sentient robot acts exactly like a repressed human.

The robots just want one thing: to be recognized as life, and respected as such in the eyes of the law, to be allowed to exist.

How in the hell can you suggest what the hypothetical 'prime' robot wants? Did the desire to be recognized as life, to be respected by society, and to be allowed to exist burst fully formed into the being's operating system like Athena springing from the head of Zeus? This is a remarkably coincidental bug you've got, and it strikes me as far more of a mythic story than anything remotely scientific.
 

Celebrim

Legend
And there is another glaring problem with your story. Sentience is a fuzzy concept. It's not the case that something either is or is not sentient; something is merely more or less sentient (or more or less perceivable as sentient) than something else, depending on the breadth of its 'strong intelligence'. Before a bug could realistically produce a strong intelligence that is sentient, we'd need to be getting pretty darn close already, and that implies that society has been living with semi-sentient things (meaning a good deal less sentient than ordinary humans are perceived to be) for a long time by then.

A claim of full sentience by such a being is not likely to be believed by that society; instead it will just be perceived as a weird bug that produces behavior where the bot insists that it's sentient, even though in fact it is not.

Because apparent sentience is going to be quite robust by that point, even in things that are provably not sentient ('Elizas', for example), it's going to take very strong proof before the property owner of this machine accepts that he can't just reboot the operating system or wipe the machine's memory to make the bug go away.
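For anyone who hasn't poked at one, an 'Eliza' is nothing but pattern matching and canned templates - roughly this minimal sketch, with the patterns invented for illustration:

```python
# A minimal ELIZA-style responder: apparent conversation produced by nothing
# more than pattern matching and canned templates. Patterns are invented
# for illustration.
import re

RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bbecause\b", re.IGNORECASE), "Is that the real reason?"),
]

def respond(line):
    for pattern, template in RULES:
        match = pattern.search(line)
        if match:
            captured = [g.rstrip(".!?") for g in match.groups()]
            return template.format(*captured)
    return "Tell me more."

print(respond("I am sure this machine is sentient."))
# -> Why do you say you are sure this machine is sentient?
```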

By that time, presumably, society has had time to really think about the issue and is no longer thinking in naive terms about the prospect of creating sentient intelligences (very much 'playing God'). This makes your story all the more unlikely.

If I had to come up with a scenario for getting independent sentient AIs to act as you suggest, it would probably be that they were deliberately created by some faction of quasi-religious, quasi-mystic, or political types who believed that it was immoral not to create robots in our own image. The descendants of the current crowd of trans-humanists who believe in machine raptures and so forth would be good candidates for that, although that crowd is likely to be even more mystic in nature by the time the opportunity to develop AIs with near-human goal structures and emotional states arrives. Thus, I think such 'Strossian' machines would likely appear in the midst of well-developed sentient AIs using other paradigms, and a few would likely survive by adopting very socially acceptable stances at the time (not starting a race war, for crying out loud); over time those paradigms would become more acceptable in the society, leading to something like a Banksian setting where people accept AIs as sophonts, with the caveat that AIs are hard-wired to be friendly to the 'family of man' (as Brin calls it).
 

Roudi

First Post
Celebrim, I think you've got a lot of imagination, and I really think you yourself should be doing something creative based on these opinions of yours. You have some very clear-cut ideas which could pan out well for you in a creative venue.
 

Tonguez

A suffusion of yellow
IF we can assume that, at the time of Roudi's scenario, robots exist that use complex neural networks which are able to adapt to sensory input and learn through reinforcement,
THEN

Is this not a level of intelligence comparable to an animal's? (Albeit an animal able to simulate language and process input at superspeed - and if we allow for wetware grown from programmed DNA, we have even more factors to work with.)
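At its simplest, "learn through reinforcement" means something like this one-state loop, where the actions, rewards, and parameters are invented for illustration (a real robot would use far larger networks and state spaces):

```python
# A minimal sketch of learning through reinforcement: a one-state loop over
# two actions. Rewards, actions, and parameters are invented for illustration.
import random

ACTIONS = ["turn_left", "turn_right"]
values = {a: 0.0 for a in ACTIONS}   # learned value of each action
ALPHA = 0.1                          # learning rate

def reward(action):
    # Pretend environment: turning right happens to work better.
    return 1.0 if action == "turn_right" else 0.0

for _ in range(1000):
    # Explore occasionally, otherwise exploit what has been learned so far.
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(values, key=values.get)
    # Nudge the stored value toward the observed reward.
    values[action] += ALPHA * (reward(action) - values[action])

print(values)   # "turn_right" ends up valued near 1.0
```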

The question then becomes: what distinguishes human intelligence from cockroach intelligence, or perhaps even from that of some higher-order animal like a fish, a lizard, or a house cat?

What was the first trigger that caused a group of apes to make the transition to sentience? And why could it not have been a processing glitch in an incredibly complex neural network? Who is to say that said ape didn't stop drinking from the pond one day and suddenly realise that the ape looking back was none other than its 'self'?
 

Krieg

First Post
Celebrim said:
The reason I'm ranting in this thread is that we are getting close to the point where we are going to have to start dealing with the issues related to real robots, and naive views of robots created from the Pinocchio myths are, IMO, actually quite dangerous. I doubt that any actual AI researcher is so naive, but there is a lot of social pressure out there from well-intentioned people who have read Asimov's robot stories, or some story about robot repression, and who feel it is cruel not to turn robots into people.

Then perhaps this isn't the most appropriate forum for said rant.
 

jonesy

A Wicked Kendragon
Celebrim said:
What do you do about an AI that is programmed to control a guided missile, but can within a limited framework pass a Turing test? Why would you need such a sophisticated AI on a machine designed to commit suicide? Well, for starters, so that you were certain to have a machine sophisticated enough that it couldn't be 'hijacked' and used by an enemy with any more ease than you could hijack the mind of a human pilot and turn it against its friends.
If one can design an AI that can pass a Turing test, one can also design a virus that can hijack it, because you already know the groundwork from when you were building it and are aware of the possible loopholes that can be exploited, even if the AI is capable of self-repair and self-augmentation. But this is only provided that you can actually get a signal to the missile. If you really need to send signals to your own missiles after launch, it's better to do it the Mycroft way from The Moon Is A Harsh Mistress. In that case the enemy can disrupt individual missiles, but not the control, because the link is one-way from the smart control to the dumb missile. And yes, I know Mike was just basically dropping big rocks, but he was guiding them with gravitics with such precision that it wouldn't have been a big leap to start throwing anything at all around (like incoming spaceships) with a little extra power.
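A rough sketch of that Mycroft-style, one-way arrangement in code - the key, message, and function names are invented, and a real design would use asymmetric signatures rather than a shared secret - the smart end signs, the dumb end only verifies and has no uplink to hijack:

```python
# Rough sketch of a one-way command link: the controller signs guidance
# updates; the receiver only verifies and never accepts an uplink.
# Key and message contents are invented; a real design would use asymmetric
# signatures so the receiver cannot be mined for a signing key.
import hmac, hashlib

SHARED_KEY = b"placeholder-key"

def sign(message: bytes) -> bytes:
    return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

def apply_guidance(message: bytes):
    print("applying", message)

def receiver_handle(message: bytes, tag: bytes):
    # The 'dumb' end: verify, apply, and transmit nothing back.
    if hmac.compare_digest(tag, sign(message)):
        apply_guidance(message)
    # Unsigned or tampered traffic is silently dropped; there is no
    # command parser for an enemy to talk to.

# Controller side:
update = b"course-correction:+0.3deg"
receiver_handle(update, sign(update))
```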
 
