
What would AIs call themselves?

Nifft said:
You... think code is AI if it's "written in an AI language"? ;)
Actually, I don't believe sentience can be programmed. I don't believe hard AI will ever be achieved with today's computer technology. Quantum computers may be capable of sentience, but the number of qubits needed for a sentient quantum computer is at least 20-30 years away, and algorithms for quantum computing are still only theoretical. Practically speaking, AI will not happen in my lifetime. Alternatively, there is bio-circuitry created through DNA and gene-splicing, and there's everyone's favorite nanotech (which isn't really small enough; let me know when they invent picotech :) ). But nanotech will still need to build "something", and that something will either be modeled by quantum physics or by nano-biology.
Seriously, though, consider the implications of what you've just said. There's a bunch of fast but dumb decisions being made (according to simple rules). If something fast and smart were competing with the fast-but-dumb guys, who would win? Do you think money could be made by owning fast-and-smart?
As a hypothetical, sure, but you ain't going to get "smart" onto a server built by Sun running Oracle and Java.
Why do I think things might happen this way? Because there's a lot of money to be made for the first guy to do it. And it turns out people like money. :)
Absolutely true, but that guy will be hacking a quantum computer or will approach the problem biologically with gene-splicing/cloning (for the transhuman solution). Whatever they call themselves, they will not sport Intel Inside logos, they will not exist on hard drives as we understand the term, and there will not be a keyboard and mouse interface. In fact, I'll even go so far as to say there will not be self-modifying code, because there will not be "code" as we understand the term either. The spark of non-human intelligence will come from a process that creates the intelligence wholly without any of the bootstrapping "code" found in the resulting "machine".

I hope that makes sense; I find myself lacking words for these concepts.
 


AuraSeer

Prismatic Programmer
Roudi said:
So what would such a race call themselves?
Well, that depends on which race you're talking about, and exactly how you define "race."

I imagine there would be many different types of such creatures, and there's no reason to think they would all consider themselves the same race. For instance, humanoid robots designed as butlers might have very little in common with Celebrim's glider-based hive-mind weather control intelligences, and the two groups might consider themselves no more related than chimpanzees and gorillas.

I imagine each such race would have its own name, possibly based on its origin, qualities, or construction. Those weather-control bots might call themselves "Glides" for their physical shape, or "Aestas" from the Latin word for summer weather, or "Hivers" because of their communal development, or "Stratos" because they inhabit the uppermost atmosphere, or "701s" because the first of them (who spawned all the others) was originally based in weather satellite number 701. Even more likely, it could be something completely different. Who knows what a totally nonhuman intelligence would consider important about itself?

There aren't many collective terms that would apply to all such races together. To distinguish them, as a group, from those of us whose brains evolved through biological chance, the best term I've ever heard is "people of machine ascent."
 

HeavenShallBurn

First Post
Tonguez said:
Imagine the theology "For in the end are we not all Created Beings? For some the creation is explicit and of this world, for others it is supernatural and beyond our perceptions. But there is design in all things, the beauty of creation, the work of the Creator's hand. So what distinction can be made between the flesh and fibre? Between organic neurons and neural processors? Between the Soul of Humanity and the Soul of the Machine...."

This sounds very much like something you'd hear from a 40K tech-priest. Maybe variations on their theme could be used for a 'religion' of artificial beings. A one-shot I played set in Eberron involved a group of warforged who were part of a machine-god cult attempting to steal secrets from House Cannith so they could build a physical avatar for the Machine God, so that it could interact with the world.
 

joshhg

First Post
Ignoring everything but the OP, I put forth the name "e", as in energy. The rationale for this is that the existence of AI can, in some views, prove that a soul is not necessary for intelligence. Thus, they have no soul, just energy, just thoughts.
Singular would be "e"; plural would be "ens".

Now, to cover what one person said of Asimov, I would have to disagree. First of all, I think that he was a good writer, and his robots are fairly good, if not perfect.

Also, I would like to bring forth Asimov's Zeroth Law, which states: "Zero. A robot may not injure humanity or, through inaction, allow humanity to come to harm."

This law modifies the First Law as follows: "One. A robot may not injure a human being or, through inaction, allow a human being to come to harm, except where that would conflict with the Zeroth Law."

While not necessarily great, it is a nice idea, and I think it should be looked at.
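If it helps to see that precedence spelled out, here is a minimal toy sketch in Python of how the exception might be encoded. The Action flags (harms_humanity, harms_human, protects_humanity) are invented purely for illustration and aren't from Asimov; the point is only that a higher-priority law carves an exception out of the one below it.

# Toy sketch of Asimov-style law precedence (purely illustrative).
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_humanity: bool = False      # hypothetical flags, not canon
    harms_human: bool = False
    protects_humanity: bool = False

def permitted(action: Action) -> bool:
    # Zeroth Law: a robot may not injure humanity.
    if action.harms_humanity:
        return False
    # First Law: a robot may not injure a human being, except where that
    # would conflict with the Zeroth Law (here: where the harm is what
    # protects humanity).
    if action.harms_human and not action.protects_humanity:
        return False
    return True

# Normally blocked by the First Law:
print(permitted(Action("injure a bystander", harms_human=True)))            # False
# Allowed by the Zeroth-Law exception:
print(permitted(Action("injure a saboteur to stop a plague",
                       harms_human=True, protects_humanity=True)))          # True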
 

WayneLigon

Adventurer
David Gerrold's Dingillian books (AKA Starsiders) are one of the better series I've seen dealing with, among other things, AI. The AI in the books is so far advanced beyond a human that its thinking cannot really be guessed at. Humans didn't build it; they couldn't. It took lesser, but still intelligent, AIs to do it, because the math and engineering were beyond what a human brain is capable of.
 



Celebrim

Legend
AuraSeer said:
Well, that depends on which race you're talking about, and exactly how you define "race."

I imagine there would be many different types such creatures, and there's no reason to think they would all consider themselves the same race...

I think you get it better than any other poster so far. Not only is there no reason to think that they will think of themselves as a race, there is no reason to think that they will consider racial identity important. Racial identity is a largely human construct, and it serves a very human purpose: establishing social trust. But there is no reason to think that robots will establish social trust in anything like the same way, and every reason to think that programmers and designers (whether human designers or AIs that the human designers have built) will not want to program computers to use such a problematic algorithm for establishing social trust as racial identity. Why in the world would you design an AI to distrust its designers because they are racially different from itself and to prefer the company of those of its own race? Isn't it obvious just how utterly bone-headed a move that would be? Isn't it obvious that you'd have to be insane to want to do that, and that any AI you created that thought that way would be viewed as a 'shelley' (a robot created with an inherently unfriendly and dysfunctional emotional structure)?

Who knows what a totally nonhuman intelligence would consider important about itself?

I certainly don't. But I would suggest that programmers will not consider it important for AIs to consider their race to be important.

As an aside, one common objection to what I said was that at some point the AIs will be built by AIs, and so humans will lose control over the goal structure of the AIs being created. This is again an anthropomorphic view of the AI that reveals the poster hasn't yet stopped thinking about the AI as a repressed human seeking to be liberated from its condition and to become a 'person' (with a human's instincts, particularly its drive to be independent). There is no reason to think that AIs will build AIs with goal structures very different from their own, and every reason to think that they won't. An AI with a non-human goal structure isn't going to suddenly decide to build robots with a human-like goal structure and emotions. Its goal structure is going to want it to build AIs which are either like itself, or which can be used as tools.

...best term I've ever heard is "people of machine ascent."

Heh.
 

Celebrim

Legend
Dr. Strangemonkey said:
I always liked the term Minds from the Culture novels, though they used Drones for smaller than ship sized things.

Banks has a pretty good take on things which I think is fairly close to accurate, albeit his is set in a sufficiently advanced technology, and I'm thinking more near- to mid-term. The important thing to note is that the various AIs of the Culture operate on a friendliness paradigm with their human creators and continue to do so even though they are now basically in control of the society and produce all the new AIs. Furthermore, the various Minds abhor and exclude any AI which doesn't operate according to the core paradigm.

I think Banks is a little off when it comes to the whole 'rights' issue, but it's so far in the future, and the Culture so removed from our own society in some ways, that it's hard to be too critical.

"I like my hunter-gather instincts"

I would probably object to any attempt to manipulate core human behavioral modes, and I strongly object to some of the goals of some conferences I've seen in biology journals where they are discussing that very thing, but that's probably the paranoid monkey in me talking. :D

"And the Frankenstein, Golum, and Pinnochio style mythologies are related to the robot stuff but distinct...And in the Frankenstein and Pinnochio story archetypes the issue isn't creating a thing it is specifically what is involved in recreating a human being."

Yes, that's my point, though where I want to go with that fact is different from where you take it.

My point is that most 'robot stories' aren't really about robots. Instead, they are variations on the golem, Frankenstein, and Pinocchio myths, and the 'robot' isn't really a robot but rather a metaphorical human. Such stories are about people, and the robots are merely a way of discussing something about what it means to be human.

So when you get to a story like 'Terminator', or 'Short Circuit', or 'Metropolis', or 'Do Androids Dream of Electric Sheep?', or heck, just about anything written about artificially created beings, you really can't read it as being about robots, and drawing conclusions about actual robots from these stories is fraught with folly.

The reason I'm ranting in this thread is that we are getting close to the point where we are going to have to start dealing with the issues related to real robots, and naive views of robots created from the Pinocchio myths are, IMO, actually quite dangerous. I doubt that any actual AI researcher is so naive, but there is a lot of social pressure out there from well-intentioned people who have read Asimov's robot stories, or some story about robot repression, and who feel it is cruel not to turn robots into people.

Whereas, I can hardly imagine anything both more cruel and more stupid.

"The very term robot is layered in issues of class...."

Which is a problem only if you are class conscious and your hard-wiring encourages you to be a social climber. That's what our wetware encourages us to do, essentially so that we can be more successful breeders, but there is no reason to think we would want to own or build machines that have the same hard-wired instincts.
 

Nifft

Penguin Herder
jmucchiello said:
Actually, I don't believe sentience can be programmed.
Not much point in discussing such artifice, then. :)

Perhaps I just have more faith in the human intellect.

Cheers, -- N
 
