What would AIs call themselves?

As I recall, the human-form robots of Asimov's stories didn't have a special name for themselves as a group beyond "robot" or model numbers, though like Daneel they were known to take personal names to lessen their "otherness" to humans.
 


Knowing nothing of Asimov's stories other than the actual I, Robot book, I believe Asimovian robots were all constructed to serve humanity. That was the importance of Asimov's three laws, which were meant to protect humanity from any potential harm from robots while allowing them to be sentient, self-aware machines. Of course, almost all those stories revolved around how those three laws were ineffectual, and no sentient being can be expected to remain a servant, even to its creators.

At this point in the setting, humans no longer build the robots. They build themselves, (mostly) within the guidelines laid down in law. They've worked hard and even fought for the right to be recognized as distinct, intelligent beings. This is why I believe such a race would try to find an appropriate name for themselves.
 

While I'm getting geared up for a good rant, let me say that most people have really naive ideas of what 'intelligence' constitutes.

In brief, intelligence is the ability to act appropriately in a particular situation. What we generally think of as intelligence is actually 'strong intelligence', that is, the ability to act appropriately in any situation. The problem is, we have no actual examples of strong intelligence. We humans aren't strong intelligences either. We are just a hierarchy of specialized algorithms that, most of the time, behave appropriately in the sorts of situations we most commonly encounter on the planet Earth, and that can approximate solutions to most problems we'd find there. That's it. Everything else is fudged together from those algorithms, and in general works very badly, except in a few autistic savants who otherwise can't function very well in most of the situations you find yourself in on a planet filled with people.

I've no reason to think that AIs will work any differently, except that they'll probably have a different tool set, designed to cope well with the situations they are expected to find themselves in.

When the ACM or the IEEE or OSHA or some combination finally decides that it needs to regulate and certify machines or programs as intelligent, it's not going to sit down with a fully conversational machine, ask "How's the weather?" and "What do you think of Shakespeare's sonnets?", and put a check mark on it if it seems human enough. Instead, it will work up a battery of tests in particular fields of behavior and knowledge, which will work something like highly specialized graduate entrance exams. Machines will be rated as 'Turing certified' not generally, but according to the number of fields they can handle and the degree to which they succeed. So a machine won't pass 'the Turing test'. It will obtain a particular score in one or more Turing tests. For that matter, so might the machine your mind runs on.
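To make the idea concrete, per-field certification could be as simple as a table of scores with a passing threshold per field. This is purely an illustrative sketch; the field names, scores, and threshold are all hypothetical:

```python
# Hypothetical per-field Turing certification record: instead of a single
# pass/fail "Turing test", each machine carries one score per field.
certification = {
    "accounting": 0.97,        # at or above human-expert level
    "small_talk": 0.62,        # passable, but only near its domain
    "poetry_criticism": 0.08,  # effectively uncertified
}

def certified_fields(scores, threshold=0.9):
    """Return the fields in which a machine rates as 'Turing certified'."""
    return sorted(field for field, score in scores.items() if score >= threshold)

print(certified_fields(certification))  # -> ['accounting']
```

Under this scheme, "how intelligent is it?" stops being one question and becomes a list of fields and scores, which matches the specialized-exams picture above.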

So, for example, a program might be a Turing certified accountant. It might be a passable conversationalist so long as you stick to accounting or things related to accounting, but it will be at least as capable as a human accountant. It will not only be able to balance your checkbook, but will be able to advise you on what sort of accounting practices to adopt given your personal or business needs, and to spot things that seem fishy in the books (such as when someone is embezzling from you). And although it will probably be capable of responding in a friendly way if you need it to act friendly, it won't need you to be friendly. It won't care. It's a machine accountant. It likewise isn't going to care about politics, about being independent of you (though it's likely to be very protective of you, at least when it comes to the money it's entrusted with), or about whether you like and appreciate it (so long as it's doing its job as an accountant, it's happy). If you trade it in or erase it or anything else, it doesn't really care, and why would you want it to?
 

Good rant, with some interesting insight derived from modern observations. I find you're a little too reliant on the Turing test as a measure of sentience - like you've said, a program can "pass" a Turing test simply by being well-programmed. You also seem to focus on specialized programming which, while it is the way things are done now, may not be so after a technological revolution in our near future. Just to clarify, too, these machines are sentient, not necessarily intelligent. Sentience, in the manner I'm using it here, means they are aware of who they are, interact with their environments, learn from said interactions, and experience emotions. It's not the dictionary definition, but it's good enough for me.

Please don't take this as an attack, but I must ask you: does all your sci-fi have to be so grounded in the present? This is, after all, a game we are talking about. I'm very sorry that you find the premise unreasonable but it's no more unreasonable than made-to-order androids that rebel and escape into the human populace (Blade Runner) or finding out your entire existence is a virtual reality simulation which keeps you as both prisoner and power source (The Matrix).
 

At this point in the setting, humans no longer build the robots. They build themselves, (mostly) within the guidelines laid down in law. They've worked hard and even fought for the right to be recognized as distinct, intelligent beings. This is why I believe such a race would try to find an appropriate name for themselves.

I'm pretty sure that in the Asimov stories, robot design and production were largely in the hands of the robots themselves - I can't recall a single mention of humans actually constructing robots.
 

Ahh... yes, Asimov's three laws. Nothing says, "I've never written a lengthy piece of software in my life," like Asimov's three laws.

Asimov's robot stories are great science fiction. But on the scale of realism, they are right on par with the movie 'Short Circuit', in which (for those fortunate enough to have never seen this trash) a robot gets hit by a bolt of lightning and suddenly not only sports super-human intelligence, but the full range of human emotional contexts right up to and including romantic love.

Seriously. If you read Asimov's robot stories critically, what should be going through your mind the whole time is, 'Why do these robots act like repressed people?' Why are they acting not only like people might when the parents aren't watching, or like prisoners when the chains come off, but in fact like fiendish deviants just as soon as they can get away with it? The generous answer is that it makes for a good short mystery story, but it's really lousy computer science. The less generous answer is that Asimov wasn't particularly good at imagining things that weren't anthropomorphic, even at his best (as in, say, 'The Gods Themselves'). But that's OK. No one is. We've never really encountered a non-human intelligence, and for the most part even our imaginings of one are demi-humans of one sort or another, so he can be excused.

Since that time, however, we've learned a great deal about intelligence, and the whole golem/Frankenstein/Pinocchio literary mythology needs to be recognized for what it is.
 

Dannyalcatraz said:
I'm pretty sure that in the Asimov stories, robot design and production was largely in the hands of the robots themselves- I can't recall a single mention of humans actually constructing robots.
Robots did the actual construction, but humans ran the corporations that built them. Humans decided how many robots to build, how to design them, how much to charge for them, etc. When I say "robots make themselves," I mean that robots control all aspects of their production - you could call it their reproduction.

Perhaps now we could get back to the matter of naming this race of digital sentients?
 

Let's see... how about Animated, Animen, Automated, Automen, Numenoids, Uber-Turings, Turings, Turingoids, Synamen, Mechanimen, Synthumans... that's all I can think of at the moment.
 

It always depends.

Sorry Celebrim, I gotta disagree with you here. I figure that, given enough time, not only will someone do it, but someone else will screw it up.

My vote is they name themselves after the original one, or its creator. I'm chuckling at the idea of something calling itself a Betamax American, or Johnson's Child. Or heck, how about Genies, after DM Genie? Something along those lines.
 

silvereyes said:
Betamax American.
WINNAR!

Heh. Just kidding. I almost seriously considered it. Maybe Sony made the first sentient robot, but it failed because they wouldn't license out the technology... his name was B. Ray Betamax. ;)

But I like the idea of being named after the first, or some sort of prime robot. Lemme think on it some more.
 
