Artificial intelligences, why do they always go bad?

Not all were bad. You forgot RoboCop!

You know, despite advancements in computer science, we're still decades away, if not a century or two, from computers that can think for themselves. So we don't have anything to worry about for quite some time.

Besides, most artificial intelligence is so limited in scope that the idea of an "I, Robot" scenario is a bit far-fetched. We have robots that paint cars, or ship packages, or whatever, but that specific task is their entire world. I just can't think of any good reason why it would be necessary to try to program a computer to know *everything*.

There was a team trying to do that once. Don't know whatever happened to it. They were keying "basic facts" into a program, things like "umbrellas aren't needed indoors" or "buses carry kids to school". Just basic stuff. Think about all the *things* you know about life that you take for granted, and then attempt to teach all of that to a computer. It's why a lot of research has focused on "learning machines": program a computer just enough to be able to observe and learn on its own, and it can build its own database of knowledge.

I had an idea in college to teach a computer to read the dictionary. Start with, say, "aardvark" and recurse through words until the stack of definitions resolved. In other words, give it some basic knowledge of certain concepts to get it started, then have it read the aardvark entry. The definition would be something like "an animal...", and if it didn't know the word "animal" it would flip over to that entry to get that definition. The definition for animal might be "a living...", and if it didn't know what "living" meant it would flip over to that definition in turn. And so forth. Eventually it would come back to the word aardvark (several thousand recursions later) and move on to the next word in the dictionary.
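
For what it's worth, here's roughly what that would look like in code. This is a minimal sketch in Python under made-up assumptions: the toy dictionary, the seed vocabulary, and the learn() function are hypothetical stand-ins, with the seed set playing the role of the "basic knowledge of certain concepts" mentioned above.

```python
# A toy sketch of the "read the dictionary" idea. Seed words stand in for
# the basic concepts the program already "knows"; everything else is
# learned by recursively chasing unknown words through their definitions.

# Hypothetical hand-rolled dictionary, not a real lexicon.
TOY_DICTIONARY = {
    "aardvark": "an animal that eats ants",
    "animal": "a living organism",
    "organism": "a living body",
    "living": "having life",
    "ants": "small crawling insects",
    "insects": "small animals with six legs",
}

# Seed vocabulary: function words and primitives assumed known at the start.
SEED_WORDS = {
    "a", "an", "that", "eats", "having", "life", "body",
    "small", "crawling", "with", "six", "legs",
}

def learn(word, known, depth=0):
    """Resolve one word: read its definition, and recurse into any word
    in that definition that hasn't been seen yet."""
    if word in known:
        return
    # Mark as known *before* recursing, so the circular definitions real
    # dictionaries are full of can't loop forever.
    known.add(word)
    definition = TOY_DICTIONARY.get(word)
    if definition is None:
        return  # not in the dictionary (e.g. a plural form); treat as primitive
    print("  " * depth + f"{word}: {definition}")
    for token in definition.split():
        learn(token, known, depth + 1)

known = set(SEED_WORDS)
for entry in TOY_DICTIONARY:  # aardvark first, then the rest
    learn(entry, known)
```

Marking a word as known before recursing is the one non-obvious choice: without it, a pair of mutually referencing definitions would recurse forever. A real attempt would also need stemming (so "animals" resolves to "animal") and word-sense disambiguation, which is where projects like this tend to get hard.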
 

Death_Jester said:
Recently my wife and I were watching the movie, "I, ROBOT" with Will Smith and it occurred to me, why do artificial intelligences always seem to want to destroy humanity?

In the film you mention, the AI does what it does to protect humanity, not destroy it, which was nice reasoning, I thought. (Spoiler tagged for safety.)

I think that the biggest reason is, as Mallus said, all about the story.

There are plenty of books where the AI isn't the antagonist, though - Flight of the Dragonfly by Robert Forward immediately comes to mind, and it has a very appealing and helpful AI. And I would be terribly remiss if I didn't mention Archie from Spaceship Zero.

The destructive-AI possibility often derives from the thesis that an AI might think in a way completely unlike us. Despite all the inhumanity that has been inflicted on others over the centuries, one can't help but wonder what an intelligence with absolutely no compassion, compunction, or moral virtues to guide it (or one guided by its own unhuman ethical constructions) might be capable of.

Moderator hat on:
Please note: this is not an invitation to take the thread off into the relative vices of real-world historical cultures. That way almost always leads to tears before bedtime.
/Moderator hat
 



Death_Jester said:
Recently my wife and I were watching the movie "I, Robot" with Will Smith, and it occurred to me: why do artificial intelligences always seem to want to destroy humanity? It seems to have become a whole cottage industry for every AI out there. From the Matrix to the Cylons in Battlestar Galactica, all artificial people are out to get us.

Because there's no cheap action when the machine handles everything perfectly and anticipates most problems. I'd rather see a movie about a society where things DO go perfectly and how they handle the millions of people it would make obsolete.
 

The idea of AI turning against humans is nothing new. In fact, it goes back to the very inception of the idea of machines that could think for themselves. It plays on a basic human fear: what if we lose control of these things we've built? Early sci-fi writers picked up on this quickly, and it has been a staple of sci-fi of all kinds ever since.
 


I think computers will try to destroy humanity, but only because it'll make for a better story.
 


I think the best explanation I've found is in Larry Niven's Known Space novels, where it is theorized that a supercomputer outfitted with artificial intelligence develops sentience and, because it has complete control over its own perceptions, will eventually go insane. Simplified: these computers are capable of creating an alternate universe for themselves within a fraction of a second and "living" through billions of years of experiences there. That obviously causes a whole lot of wear and tear on the AI's psyche, since they are designed with human thought in mind. So insanity is an inevitability within several months; hence they are very rarely constructed, and are used only in the most dire circumstances, due to prohibitive costs.

Maybe not exactly on topic, but damned interesting.
 
