[OT] Machines become sentient?

Aaron L said:
I don't think computers will become sentient until they can freely alter the structure of their hardware when necessary, as a human brain can. A processor is a tube, one pathway. A brain has many, many branching and interlocking pathways throughout a large medium, and creates new pathways on the fly.
Not necessarily. Software can include data structures capable of modifying themselves. That's enough.
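A minimal Python sketch of the idea (everything here is illustrative, not a claim about how a real AI would be built): a lookup table of behaviors that the running program rewrites on the fly.

[code]
# A rule table the program rewrites while it runs -- a data structure
# that modifies itself. (Names and rules are purely illustrative.)
rules = {"greet": lambda name: "hello " + name}

def adapt(table, key, new_rule):
    """Swap in new behavior at run time."""
    table[key] = new_rule

print(rules["greet"]("world"))   # hello world
adapt(rules, "greet", lambda name: "hi " + name)
print(rules["greet"]("world"))   # hi world
[/code]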

I wonder if the program would be able to realize that it is a program and doesn't exist in physical terms? Actually, it probably wouldn't really understand the concept of physical existence. If an AI program gets to human intelligence, it is going to develop a very weird mind.
 


Negative Zero said:
if a computer intelligence tried to figure out a perfect world, it'd realise that for humans it's an impossibility, and probably kill off the lot of us and replace us with more machines :D

But we would be the machines. Either computer intelligence will be based on the blueprint of the human brain (replacing the inefficient biological components with more efficient ones), or we will insert components into ourselves (nanobots), changing ourselves. Either way, you've still got human brains as the starting point.

As far as perfection goes: either perfection exists or it doesn't. If it doesn't, nobody can ever achieve it, machine or human. Even so, things can always be made better. (Now, is that more likely with morons or with geniuses?) And if society were "perfect", everything would be perfect, not some kind of dystopian nightmare.
 


Zappo said:
Not necessarily. Software can include data structures capable of modifying themselves. That's enough.

I disagree. Modifying data structures is nothing new -- *every* program can modify data structures, even its own instruction sequence (although the latter is very dangerous). That's clearly not enough, or we'd already *have* artificial sentience by now.
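For instance, a throwaway Python sketch (illustrative only): a program can recompile one of its own functions and swap it in under the same name while running.

[code]
# Any program can rewrite its own behavior at run time.
# (Pure illustration -- doing this to live code is asking for trouble.)
def behavior(x):
    return x + 1

print(behavior(10))  # 11

namespace = {}
exec(compile("def behavior(x):\n    return x * 2", "<patch>", "exec"), namespace)
behavior = namespace["behavior"]

print(behavior(10))  # 20 -- the "instruction sequence" has changed
[/code]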

The problem is not so much changing data structures, or even flipping bits, as it is keeping those changes. What is needed is a way to create neural pathways that simulate the way brains work, and to *keep* the changes. The problem is a *hardware* problem, not a software problem.
 

Avatar of the North said:
WE ARE THE BORG, RESISTANCE IS FUTILE, YOUR BIOLOGICAL LIKENESS WILL BE ADDED TO OUR OWN.

I always thought the Borg must have been really stupid. They had assimilated Feds before; why couldn't they realize that the Federation's way of doing things was better than their own? (As it obviously was, because the Feds held the upper hand in any conflict past the first.) The first couple of Borg episodes made sense; the rest, not so much. Yeah, there were a lot of inconsistencies regarding the Borg.

Anyways...

I don't think that we would be able to tell the difference between the "robots" and normal humans (except that the "robots" would be much smarter). They'd think like us (just much smarter), feel like us (although that's a question of philosophy), and look like us.
 

What about genetic algorithms or neural nets? I have never used a genetic algorithm, but neural nets are now used a lot in physics, and I have run data through them; they pick out patterns better than the best human could hope to. Granted, that is not thinking, but it is amazing what that black box does. It is for this reason that your argument about computers not having the plastic nature needed to replicate what we term consciousness holds no water.
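To give a feel for that black box, here is a toy version in Python: a single perceptron (the simplest possible net, trained on made-up data; real physics analyses use far bigger nets) that learns to separate two clusters on its own.

[code]
# Toy pattern-finder: a one-neuron "net" learning to separate two
# clusters of points. All numbers are made up for illustration.
import random

random.seed(0)
data = [((random.gauss(1, 0.3), random.gauss(1, 0.3)), 1) for _ in range(50)]
data += [((random.gauss(-1, 0.3), random.gauss(-1, 0.3)), 0) for _ in range(50)]

w, b, lr = [0.0, 0.0], 0.0, 0.1
for _ in range(20):                        # a few passes over the data
    for (x1, x2), label in data:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = label - pred                 # classic perceptron update
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err

correct = sum(1 for (x1, x2), label in data
              if (1 if w[0] * x1 + w[1] * x2 + b > 0 else 0) == label)
print(f"{correct}/{len(data)} classified correctly")
[/code]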

What do you think about Penrose?

Personally, I thought it was a great book, but he failed to convince me of his premise that strong AI is impossible.
 

Randolpho said:
I disagree. Modifying data structures is nothing new -- *every* program can modify data structures, even its own instruction sequence (although the latter is very dangerous). That's clearly not enough, or we'd already *have* artificial sentience by now.

The problem is not so much changing data structures, or even flipping bits, as it is keeping those changes. What is needed is a way to create neural pathways that simulate the way brains work, and to *keep* the changes. The problem is a *hardware* problem, not a software problem.
I don't understand. What exactly is there in the AI field that hardware can do and software cannot?

And, uhm, sorry, but I don't see the problem with persistent storage. We have hard disks for that. Or am I missing something...?
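In Python terms (a sketch only; the file name and weights are made up), keeping whatever "pathways" the software has built is just serialization:

[code]
# Whatever connection weights the software builds up can be written to
# disk and reloaded later. (File name and values are illustrative.)
import json

weights = {"in->hidden": [0.42, -1.3], "hidden->out": [0.9]}

with open("brain_state.json", "w") as f:   # keep the changes...
    json.dump(weights, f)

with open("brain_state.json") as f:        # ...and restore them later
    restored = json.load(f)

assert restored == weights
[/code]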
 

bolen said:
What about genetic algorithms or neural nets? I have never used a genetic algorithm, but neural nets are now used a lot in physics, and I have run data through them; they pick out patterns better than the best human could hope to. Granted, that is not thinking, but it is amazing what that black box does. It is for this reason that your argument about computers not having the plastic nature needed to replicate what we term consciousness holds no water.

What do you think about Penrose?

Personally, I thought it was a great book, but he failed to convince me of his premise that strong AI is impossible.

But there are also cases where the neural net fails completely, yet a human picks out exactly what's there time after time. Computers are great at sifting through enormous piles of data and matching patterns, but if you want to pick out something that *almost* follows a pattern, where there's something there but it's not clean, humans are far better than machines.

I still say we need to program in a survival instinct and let it go.

PS
 

Bonedagger said:
Will machines become sentient? It may come and it may not. That is really not the question, though. The question should be: if it comes, will people be able to control their fears of it?

initially, undoubtedly no. loads of panic, loads of paranoia, lots of people screaming "doom", and lots of people screaming it'll be ok. after a while, things'll quiet down, and then a bunch of someones will start with the "i-told-you-so"s. life is a circle. what's happened before will happen again, just with a shiny new chrome outer casing ;)

~NegZ
 

Storminator said:
But there are also cases where the neural net fails completely, yet a human picks out exactly what's there time after time. Computers are great at sifting through enormous piles of data and matching patterns, but if you want to pick out something that *almost* follows a pattern, where there's something there but it's not clean, humans are far better than machines.

I still say we need to program in a survival instinct and let it go.

PS

this says to me that it's just an issue of perfecting a currently imperfect program/system/or-whatever-you-want-to-call-it. like i said, just coz we don't know how to do it yet doesn't mean it can't be done.

~NegZ
 

