What would AIs call themselves?

Janx said:
Neuromancer by William Gibson is a pretty good example of what AIs MIGHT do. It's not the only possibility. They might also try to annihilate the human race, like in Terminator.
Trust the Computer. The Computer is your friend.
 


Janx said:
Raise your hand if you've been writing software professionally for 10+ years.

Alright then... the rest of you should probably sit down and observe.
Or possibly just skip the rest of any post that starts with that sort of attitude.
 

Kahuna Burger said:
Or possibly just skip the rest of any post that starts with that sort of attitude.

You're right. You can do that.

There are plenty of good ideas and philosophical thoughts in this thread. What's disorienting is non-technical people espousing how something works. How the AIs live and behave is quite separate from their technical make-up (or can be). Folks who can't make an AI should stick to being vague about how they work.
 

Janx said:
You're right. You can do that.

There are plenty of good ideas and philosophical thoughts in this thread. What's disorienting is non-technical people espousing how something works. How the AIs live and behave is quite separate from their technical make-up (or can be). Folks who can't make an AI should stick to being vague about how they work.

There are a lot of good ideas and a lot of discussion in this thread, but to be honest, as far as REAL AI is concerned, you are no more qualified than I am to comment on it. Being the smartest dog on the sled team (on one subject) still means you're a dog. Basically, you're a caveman trying to comment on how a starship would work. Don't criticize others for their opinions on a gaming forum.

As for the original question I do agree with you that humans would name them. There'd be no motivation for them to name themselves beyond a serial number or something like an IP address.
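
To make the "serial number or IP address" idea concrete, here's a toy sketch (entirely hypothetical, my own invention, not anything from this thread) of the kind of self-identifier a machine could derive without ever performing an act of naming:

```python
# Hypothetical sketch: a machine "name" assembled from hardware and network
# facts rather than chosen. Nothing here is an actual AI design.
import socket
import uuid

mac = uuid.getnode()                 # the host's 48-bit hardware (MAC) address
try:
    ip = socket.gethostbyname(socket.gethostname())
except socket.gaierror:
    ip = "0.0.0.0"                   # no resolvable address; fall back

# Yields something like "unit-9a2f3c1b04d5@192.168.1.17":
# a serial number with an address attached, not a name anyone chose.
print(f"unit-{mac:012x}@{ip}")
```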
 

Janx said:
You're right. You can do that.

There are plenty of good ideas and philosophical thoughts in this thread. What's disorienting is non-technical people espousing how something works. How the AIs live and behave is quite separate from their technical make-up (or can be). Folks who can't make an AI should stick to being vague about how they work.
So everyone should shut up since no AI has ever been made.

Oh, and 15+ years as a Programmer/Analyst with a Master's degree in Computing, so I hope you weren't referring to me.
 

Nifft said:
You previously said that you don't think they're possible. How can you have an opinion on the specifics of their implementation?

I certainly hope that the AIs have better self-modification tools than we do -- ours are time-consuming, clumsy, and often unsuccessful. It would be foolish of us to create "bug-compatible" versions of ourselves. We should give our creations a better path forward.
Again, I don't believe in Elves or Unicorns; shall I write some articles about their mating habits? (Among their own kind, not elf-on-unicorn action... er... um...)

Moving on... I did state that AI could be built using non-von Neumann computers, and posited that AIs might be made using quantum computers or through biological computing. These systems will still run algorithms and have analogs to things like procedures, and as such the AI will still not be able to "change its mind" by locating the implementation of a certain behavior and deleting it.

It's only the computer on your desktop and its ilk that I believe can never possess true sentience.
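
As a side note, here's a tiny illustration (my own sketch, not anyone's actual architecture) of that point about behaviors not having a locatable implementation: a perceptron that learns to flag "blue" RGB values ends up storing that skill as four numbers, with no recognize_blue() routine anywhere to find and delete.

```python
# Toy illustration: a learned behavior lives in numeric weights,
# not in a named procedure the system could locate and delete.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((500, 3))       # random RGB triples in [0, 1)
y = X[:, 2] > 0.5              # label: the blue channel is bright

w, b = np.zeros(3), 0.0
for _ in range(25):            # classic perceptron updates
    for xi, yi in zip(X, y):
        err = int(yi) - int((xi @ w + b) > 0)
        w += err * xi
        b += err

print(w, b)  # "blue recognition" is just these four numbers; there is
             # no recognize_blue() routine anywhere to find and delete.
```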
 

jmucchiello said:
At this point I think we've converged in agreement from diverse definitions far more than from diverse opinions.

The only thing I would reject completely is the concept that the AI in any way has access to its own thought process, any more than you or I do. The AI can't decide, "Hey, I need to rewrite my 'blue' recognition algorithm." The "self-modifying" nature of its program (however it is accomplished) is opaque to it, in the same way we can't explain how we've "changed our minds".

I would agree. I think that it would be folly on several levels to give an AI complete access to what we would call 'instincts'. There are some things that you'll want to teach the AI that you'll never want it to unlearn.

Not only is this a good safety feature, but it's just good design. A lot of its low-level processes will be the AI equivalent of breathing, and you just don't want to risk them being tinkered with directly. Even if the AI sees a way to make its low-level processes more efficient, you wouldn't want to allow it, because there is no reason to assume that an AI is going to be a perfect programmer. It will also produce bugs, and as such, you don't want to market a household bot that occasionally shuts down because it decided to modify its power recharging/regulation routines and introduced a fatal bug.

Similarly, you never want an AI that could tinker with the instinctive relationship it has to its legal guardian.
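
A minimal sketch of the kind of guard being described here (the design, and names like Mind and PROTECTED, are my own invention for illustration): self-modification is permitted only for routines outside a protected "instinct" set.

```python
# Hypothetical design sketch: core "instincts" such as power regulation
# and the guardian bond are immutable; everything else may be rewritten.
from types import MappingProxyType

PROTECTED = frozenset({"power_regulation", "guardian_bond"})

class Mind:
    def __init__(self):
        self._routines = {
            "power_regulation": lambda: "manage charge cycles",
            "guardian_bond": lambda: "defer to legal guardian",
            "small_talk": lambda: "chat about the weather",
        }

    @property
    def routines(self):
        # Read-only view: the AI can inspect its routines but not mutate them.
        return MappingProxyType(self._routines)

    def rewrite(self, name, new_impl):
        if name in PROTECTED:
            raise PermissionError(f"{name} is instinct-level and immutable")
        self._routines[name] = new_impl

m = Mind()
m.rewrite("small_talk", lambda: "chat about sports")       # allowed
try:
    m.rewrite("power_regulation", lambda: "overclock!")    # refused
except PermissionError as e:
    print(e)
```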

And speaking of which, legal responsibility is just another reason why independent sentient robots are problematic. From the standpoint of the law, the real question isn't just proving sentience; proving that the entity is independent in its motivations is extremely difficult for a created being. One could easily imagine a political group creating large numbers of AIs with a subtle bias in their programming toward certain beliefs. Should we now recognize these beings as fully enfranchised beings, with all the same civil rights to exercise as anyone else? Virtually any sort of hidden dependence like this creates all sorts of problems, and in a sufficiently complex system it would probably be more difficult to recognize than sentience (and that's hard enough).
 
