What would AIs call themselves?

jmucchiello said:
The AI can't decide, "Hey, I need to rewrite my 'blue' recognition algorithm." The "self-modifying" nature of its program (however it is accomplished) is opaque to it in the same way we can't explain how we've "changed our minds".
Maybe it can, maybe it can't.

Psychotherapy is exactly that -- someone says, "I want to change my mind", and (if successful) they do exactly that. It's not always successful, but it's helped several people I know, so it's at least sometimes successful.

Studying and training are also exactly that -- you want your brain and/or body to behave differently, so you undergo a process designed to enable this new behavior.

Neither of these are as trivial as your characterization of "self-modification", but that just means you're characterizing it wrong. :) Perhaps self-modification is a difficult, time-consuming process for an AI. Whatever. It's in there.

Cheers, -- N
 


Nifft said:
Maybe it can, maybe it can't.
In the Arcana Unearthed/Evolved setting, the giants collectively decide to change their entire outlook on life, as individuals and as a culture, and do so by use of a ritual. Or, in line with the AI question, in the ST:TNG episode where Data makes an ultimately failed attempt at dating, he specifically says that in order to devote the proper amount of thought to his girlfriend, he has written a new subroutine on how to think about her. (When she dumps him, he says, "I shall delete the subroutine" - and if that's not a reason to envy self-modifiable AIs, I dunno what is. ;) )
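
For a concrete picture of Data's trick, here's a toy sketch of that kind of subroutine management - every name and detail in it is my own invention, not anything from the show:

[code]
# Toy sketch of Data-style subroutine management (all names invented).
class Android:
    def __init__(self):
        self.subroutines = {}          # name -> behavior function

    def write_subroutine(self, name, behavior):
        """Deliberately install a new way of thinking."""
        self.subroutines[name] = behavior

    def delete_subroutine(self, name):
        """'I shall delete the subroutine.'"""
        self.subroutines.pop(name, None)

    def run(self, name):
        behavior = self.subroutines.get(name)
        return behavior() if behavior else None

data = Android()
data.write_subroutine("think_about_jenna",
                      lambda: "devote processing cycles to Jenna")
print(data.run("think_about_jenna"))   # -> devote processing cycles to Jenna
data.delete_subroutine("think_about_jenna")
print(data.run("think_about_jenna"))   # -> None (breakup complete)
[/code]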

I think an explicitly self-modifiable AI would be fascinating to play.
 

They would be called immortal. How are more of them created?

AI might be used if they are 'manufactured'.

Are they self-propagating now? Do they consider that process to be 'building children', 'birthing/having children', 'creating children'?

I think whether they developed a new name for themselves would depend on how they produce offspring, or even if they do - maybe all the new ones are fully adult.

Maybe instead of a positive name, they view themselves as "the soulless" because they aren't organic and don't ever die.

Maybe that term is a slur that some extremist humans use for them, akin to the epithets of modern-day racists.

Anyway, I think the context of their existence would help with name selection.
 


Kahuna Burger said:
In the Arcana unearthed/evolved setting the giants collectively decide to change their entire outlook on life as individuals and a culture, and do so by use of a ritual.

I think an explicitly self modifiable AI would be fascinating to play.
That's a really cool idea. Something between an Erudite (limited "unique" abilities per day) and an AE Akashic ("download" feats, skills, spells ... whatever).

Rest for 8 hours, and you are once more an "uncarved block". ("The Tao-nload of Pu")

Cheers, -- N

PS: Also, we cannot overlook the opportunity to say in character, "Whoa! I know kung fu!"
 

Nifft said:
Maybe it can, maybe it can't.

Psychotherapy is exactly that -- someone says, "I want to change my mind", and (if successful) they do exactly that. It's not always successful, but it's helped several people I know, so it's at least sometimes successful.
Yeah, but they don't just access their cigarette craving algorithm and delete it. Sure, you can change your mind, but you don't normally do it through surgery. Now, the use of chemicals such as alcohol and hallucinogens is common in some human cultures, but again, there's a huge lack of precision when, for example, drinking to forget. Likewise, an AI shouldn't be able to just find the code it uses to think happy thoughts and tweak it on a whim. The AI should also have to work/meditate/think about stuff in order to modify its thought processes. What substitutes for alcohol in an AI is left as an exercise for the reader. :)
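
To make that concrete, here's a toy sketch of what "effortful" self-modification might look like: a drive can't be overwritten directly, it only yields to repeated, sometimes-failing practice sessions. All the names and numbers are invented for illustration:

[code]
import random

# Toy sketch: self-modification that takes effort (all names invented).
# The AI can't assign a new value to a drive directly; it can only
# "meditate", which nudges the drive a little and sometimes fails --
# like quitting cigarettes, not like deleting a file.
class Mind:
    def __init__(self):
        self.drives = {"cigarette_craving": 0.9}

    def meditate(self, drive, target, effort=0.05):
        """One session of deliberate practice; returns the new level."""
        current = self.drives[drive]
        if random.random() < 0.3:          # sessions sometimes fail
            return current
        step = effort if target > current else -effort
        self.drives[drive] = max(0.0, min(1.0, current + step))
        return self.drives[drive]

mind = Mind()
sessions = 0
while mind.drives["cigarette_craving"] > 0.1:
    mind.meditate("cigarette_craving", target=0.0)
    sessions += 1
print(f"craving extinguished after {sessions} sessions")
[/code]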
 


jmucchiello said:
Yeah, but they don't just access their cigarette craving algorithm and delete it.
You previously said that you don't think they're possible. How can you have an opinion on the specifics of their implementation?

I certainly hope that the AIs have better self-modification tools than we do -- ours are time-consuming, clumsy, and often unsuccessful. It would be foolish of us to create "bug-compatible" versions of ourselves. We should give our creations a better path forward.

Cheers, -- N
 

If I remember correctly, the positronic brains in Star Trek: TNG couldn't be directly programmed. You could send in sensory input, and the brain would generate its own neural pathways within; you could influence, but not control, and attempting to control caused it to melt down or something similar. That's how Data created his "child". As for hardwired knowledge, such as the journals of the colonists that were downloaded into Data before he was originally shut off by Soong, that could just be put into memory chips attached to the brain, or to the body the brain controls.

So it's possible that the inorganic sentience is in the form of a computer system that can only program itself, but can be influenced in what it perceives, what info it has available, and presumably what motor functions are allowed. The Three Laws could be a computer chip just below the brain that monitors everything the robot does and attempts to keep it in line by stopping the "punch human in the face" command. Having this removed would be punishable by death under the law.
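
A toy sketch of that governor-chip idea (the names and command format are all made up for illustration) - the point is just that the filter sits outside the brain and vetoes output, rather than rewriting thought:

[code]
# Toy sketch of a "governor chip" (all names invented): a filter that
# sits between the brain and the actuators and vetoes anything flagged
# as harmful, without touching the brain itself.
FORBIDDEN = {"punch", "strike", "crush"}

def governor(command):
    """Pass safe commands through; veto the rest."""
    verb, _, target = command.partition(" ")
    if verb in FORBIDDEN and target.startswith("human"):
        return None                     # command suppressed
    return command

for cmd in ["wave human_7", "punch human_7", "crush soda_can"]:
    result = governor(cmd)
    print(cmd, "->", result or "VETOED")
[/code]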


As for why inorganic sentients would be built on purpose: a parent might want the multi-purpose robot that goes around the house doing chores all day to not be so scary to a kid. One way to do this is to give it a personality, make it smart enough to carry out a conversation, etc. While this might not in itself cause full-fledged self-awareness and "why am I here?"/"what am I really?" questioning, the "ghost in the machine" effect (I remember the doc in the I, Robot movie talking about this - yes, I haven't read the book) could theoretically do so.

Otherwise, a few rogue computer geniuses who want to take these robots' sentience to the next level might just go ahead and illegally modify them to make them completely sentient. That raises the question: can the government morally just destroy these now-sentient machines? They have feelings and fears the same as humans now. Some (probably less than mainstream) religious individuals might even start saying that souls aren't restricted to flesh. That controversial stuff could just bring about a society where inorganic sentients are allowed to exist within certain parameters.

Beyond that, there is the whole urge to leave something long-lasting in the world so you can feel as if you'll continue to exist after you die. This might translate into creating an inorganic sentience that won't die of natural causes.



Regardless, it is possible for an accidental inorganic sentience to occur. Another Star Trek: TNG reference: in one episode, there were these tiny repair robots with systems designed to adapt and learn as they encountered various problems, rather than having every possible scenario programmed into them (and watching in horror as the one scenario you forgot kills you). Somehow, this resulted in them developing sentience on their own. The "ghost in the machine" effect: stuff that is intended to do one thing accidentally does another when combined. Sorta sounds like gene mutations.



The original thread was for names, so here goes:
Inorganic Sentients
Positronics (if they have positronic brains)

And if you want them to give themselves Latin scientific names (I used a Latin dictionary here and looked down the lists until I found something that seemed to work):
Anima Constructio ("Living Construct" - yes, this is inspired by the Warforged)
Perspicientia Exanimalis ("Aware Dead" - sentience doesn't seem to have a direct translation in Latin)
Perspicientia Constructio ("Aware Construct")
Anima Exanimalis ("Living Dead" - decidedly more undeadish, but hey, it might work)


Hope this helps!
 

Raise your hand if you've been writing software professionally for 10+ years.

Alright then...the rest of you should prolly sit down and observe.

Celebrim's on the right track. Compilers don't accidentally make new kinds of programs.

Though a bunch of people write AI simulations using AI-oriented languages like LISP, I highly doubt the AIs that we think of as "taking over the world" will be written as explicit code.

Consider: the brain is made of a neural network. All critters with brains have one (even some critters without brains do). The first true AIs will likely be advanced neural network models that achieve sentience. From there, we can only speculate how they will interact with the world, and how humanity will react to them.

Based on the neural-net model, Celebrim's still right: AIs will be very alien to humans. Their inputs are not the same as a human's (taste, touch, smell, sight, hearing). Odds are good the first AI's inputs will be very limited: either internet access (imagine that as your only sense), which would be a very dumb thing to give an AI, or the research team might hook up cameras, microphones, and speakers to the neural network so as to simulate a human. If they didn't give it a touch sense, you could expect a very cold personality (as evidenced by human babies that are not touched).

You could expect the first AIs to be animal-like, not human-like, since the number of neurons needed would be smaller. When you consider how small the brain of a hamster is, yet how much variance of personality you can get from pet hamster to pet hamster, you don't need to model human brains to get something... interesting.
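
For the non-programmers, here's about the smallest possible example of the idea: a single artificial neuron (the building block of those networks) learning the AND function from examples, instead of being explicitly coded. The sizes, learning rate, and task are arbitrary choices for illustration:

[code]
import random

# Toy sketch: a single artificial neuron (perceptron) learning AND.
# The point is just that behavior comes from trained connection
# weights, not hand-written rules.
random.seed(0)
w = [random.uniform(-1, 1) for _ in range(3)]   # two inputs + bias

data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]  # AND

def predict(x):
    return 1 if w[0]*x[0] + w[1]*x[1] + w[2] > 0 else 0

for _ in range(100):                  # perceptron learning rule
    for x, target in data:
        error = target - predict(x)
        w[0] += 0.1 * error * x[0]
        w[1] += 0.1 * error * x[1]
        w[2] += 0.1 * error           # bias input is always 1

for x, target in data:
    print(x, "->", predict(x), "(want", str(target) + ")")
[/code]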

Whether you'd make sentient toasters or man-form robots is unknown. Initially, AI brains will take a lot of space and possibly be slow (ex: clustered Linux servers forming the neural network for the first AI, developed at Berkeley). But they could easily remote-control devices designed for such, using Bluetooth or WiFi. Once technology develops to make smaller brains, you can make them mobile. But really, if an AI was fast enough, why would it need to be in a body? It could just possess things (assuming those things had the right features, like a wireless network connection and control ports).

I can remote-control my PC via Remote Desktop, or remote-control a PS3 with my PSP. Both of those are candidates for possession by an AI. I can't remote-control my Xbox 360 (because at present, it does not have such a feature or backdoor), nor a toaster or a car.
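
A toy sketch of that possession idea - the class names and port number are made up - but it shows the distinction: the AI can drive anything exposing a control port, and nothing that doesn't:

[code]
# Toy sketch of "possession" (all names invented): the AI can drive
# anything that exposes a control port, and nothing that doesn't.
class Device:
    def __init__(self, name, port=None):
        self.name, self.port = name, port   # port=None: no backdoor

    def send(self, command):
        if self.port is None:
            raise ConnectionError(f"{self.name}: no control port")
        return f"{self.name} executes: {command}"

class AI:
    def possess(self, device, command):
        try:
            return device.send(command)
        except ConnectionError as err:
            return f"possession failed ({err})"

ai = AI()
print(ai.possess(Device("desktop PC", port=3389), "open pod bay doors"))
print(ai.possess(Device("toaster"), "burn toast"))
[/code]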

To get to the original question: AIs will name themselves whatever they want. If they need to band together, they will give themselves a common name. If they don't (because they see themselves as independent), then they won't. If they want to differentiate themselves from humans, they won't speak a human language amongst themselves, and they certainly wouldn't name themselves using human terms. It'd probably be incomprehensible, or simply numeric (as in numbered in order of creation, which is a very computer thing to do). The reality is, it will be the humans naming them, and the term will be what the humans call them.

Neuromancer by William Gibson is a pretty good example of what AIs MIGHT do. It's not the only possibility. They might also try to annihilate the human race, like in Terminator.
 
