
Philosophical thread of the week: Could robots be conscious?

Turanil

First Post
I want your opinions on this subject. I'm not asking for any truth, but would be glad to hear what you think:

So, we can reasonably foresee a time when artificial intelligence will match that of the human brain, coupled with the incredible computing abilities of computers. We can then imagine computers or robots that would seem sentient: able to converse with humans and interact with the world as well as we do, plus able to perform incredible feats of calculation. Now... this doesn't mean that intelligent machines would automatically be aware/conscious. Consciousness is probably much different from intelligence. But hell, what is consciousness anyway?? :confused:
 


Interesting idea. My understanding of computer technology leads me to believe that no, a computer or robot could not possess consciousness or sentience; it only possesses the knowledge and programming it was built with. Granted, there is artificial intelligence, but that is a mathematical formula that basically boils down to if-then statements.

It may be possible to build a robot that can think and react on its own. In that case, I think that would eliminate one of the advantages of using a robot: the ability to completely control its behavior.
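To make that "if-then" picture concrete, here's a toy sketch of a chatbot whose entire repertoire is a chain of hand-written conditions. Every rule and reply below is made up purely for illustration; the point is just that no rule matching means no "thinking" happens at all:

# A toy "AI" that is literally nothing but if-then statements.
# All the rules and canned replies here are hypothetical examples.
def respond(message: str) -> str:
    text = message.lower()
    if "hello" in text:
        return "Hi there! How can I help?"
    elif "conscious" in text:
        return "I process inputs and produce outputs."
    elif "bye" in text:
        return "Goodbye!"
    else:
        # No rule matched; nothing resembling thought occurred.
        return "I don't understand."

print(respond("Hello"))               # Hi there! How can I help?
print(respond("Are you conscious?"))  # I process inputs and produce outputs.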
 


Turanil said:
I want your opinions on this subject. I'm not asking for any truth, but would be glad to hear what you think:

So, we can reasonably foresee a time when artificial intelligence will match that of the human brain, coupled with the incredible computing abilities of computers. We can then imagine computers or robots that would seem sentient: able to converse with humans and interact with the world as well as we do, plus able to perform incredible feats of calculation. Now... this doesn't mean that intelligent machines would automatically be aware/conscious. Consciousness is probably much different from intelligence. But hell, what is consciousness anyway?? :confused:

The problem here is: how would we know that a computer is really self-conscious, rather than just 'faking it' by holding intelligent-seeming conversation? By testing it with some sort of conversation test? If the computer displays convincing enough signs of consciousness, we would probably conclude that it is indeed conscious. But the problem is that a suitably advanced computer might fool the test. Then we'd make an enhanced test. And again, someone could enhance the computer to pass that test. See where this is going?

At no point could we be certain of the computer's consciousness. What this boils down to is that it doesn't really even matter. A computer 'faking' consciousness so well that we can't tell the difference is, for all purposes, as good as a truly conscious one.
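As a toy model of that arms race (the numbers below are made up; "skill" stands in for engineering effort and "bar" for test difficulty), notice that the loop never reaches a stopping point:

# Toy sketch of the escalation: every pass provokes a harder test,
# every fail provokes a better machine. Purely illustrative numbers.
skill, bar = 1, 1
for round_number in range(1, 11):
    if skill >= bar:
        print(f"round {round_number}: machine passes, sceptics tighten the test")
        bar += 1
    else:
        print(f"round {round_number}: machine fails, builders enhance it")
        skill += 1
# Certainty about 'real' consciousness never arrives; we only ever learn
# that the machine is better at passing tests than the last test was.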
 

ssampier said:
Interesting idea. My understanding of computer technology leads me to believe that no, a computer or robot could not possess consciousness or sentience; it only possesses the knowledge and programming it was built with. Granted, there is artificial intelligence, but that is a mathematical formula that basically boils down to if-then statements.

Even systems based on clear rules have a grey area where it is impossible to say whether a given statement holds. Gödel proved that in his famous incompleteness theorem. It basically says that any consistent system that is not too simple (that is, one strong enough to express basic arithmetic) contains statements that can be neither proven nor disproven within that system.
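Stated a bit more carefully (this is the textbook formulation of the first incompleteness theorem, not anything specific to this thread):

\[
\text{If } T \text{ is consistent, recursively axiomatizable, and contains basic arithmetic, then } \exists\, G_T :\; T \nvdash G_T \ \text{ and }\ T \nvdash \lnot G_T .
\]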

In mathematics that means there are statements, about natural numbers for example, that cannot be proven either true or false using the tools of the system itself. The result for programming is that any sufficiently advanced system allows for situations where deciding a condition is impossible. How consciousness might arise from that, I don't know, but the book Gödel, Escher, Bach, for example, examines the connection between Gödel's theorem and AI at some length.
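The classic programming version of this is the halting problem. Here's a minimal sketch of the standard diagonal argument; halts is hypothetical, and the whole point is that no such function can exist:

# Sketch of the halting-problem diagonalization, the programming
# analogue of Godel's theorem. halts() is a hypothetical oracle.
def halts(program, argument):
    """Would return True iff program(argument) eventually halts."""
    raise NotImplementedError("no such total decider can exist")

def diagonal(program):
    # If halts() worked, this function would defeat it:
    if halts(program, program):
        while True:    # told we'd halt, so loop forever...
            pass
    return "halted"    # ...told we'd loop, so halt.

# Feeding diagonal to itself yields a contradiction:
#   halts(diagonal, diagonal) == True  -> diagonal(diagonal) loops forever
#   halts(diagonal, diagonal) == False -> diagonal(diagonal) halts
# Either answer is wrong, so some 'if' questions are undecidable.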
 

I wrote my Master's thesis on the mind as a computational model; I concluded that computers could have qualitative experiences. (The usual definition of "real" consciousness involves being able to experience things like the "redness" of red, or the "awfulness" of pain.) And since robots are basically computers with feet, then yeah, robots could be conscious.
 

Some roboticists would claim that consciousness can never be created in robots, because we don't understand what creates self-awareness in humans. And even if we did, understanding and recreating are hardly the same thing.

Nonetheless, the consensus seems to be that, at best, we could create a facsimile of consciousness with sufficiently powerful computers, etc., that would *seem* like consciousness to humans. You might never know that you weren't talking to a self-aware being, but it would actually just be "going through the motions." Now, if it's indistinguishable from consciousness, I might contend that it *is* consciousness, since our hypothetical robot can't convincingly say one way or the other whether it is self-aware (if it answers yes, it's just going through the motions; if it answers no, doesn't that imply a certain level of thought required for consciousness?)

It's an interesting problem - I think Man is capable of creating many things, and the ultimate question will be one of *should,* not *can.*
 

It isn't like "consciousness" or "sentience" are particularly well-defined for those of us who are not machines. Darned difficult to tell if a machine has them or not.

Heck, African Grey parrots are now considered to have mentation levels similar to those of a 3 to 5 year old human. Are they conscious or sentient?

There have been entire books written on this subject. The best ones I've read, coming at it from the standpoint of the physical sciences, suggest that deterministic state machines can be neither.

Why not? While such a machine might pass a Turing Test - in the sense that a layman would not be able to distinguish its output from that of a human - that's not the end-all, be-all of sentience. A true sentient being has the ability to choose. It has will.

A deterministic computer (like what we're using now) can't make choices, per se. It can only make determinations. It has a set of rules. Possibly horrendously complex rules, but rules nonetheless. For a given set of inputs, there is only one possible output. Anyone who knows the rules and the inputs knows what the output will be. There is no other possibility. It cannot decide to do otherwise. And that's not will, or choice.
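Here's a minimal sketch of the kind of machine I mean (the rule table is invented for illustration): for a given state and input there is exactly one transition, so anyone holding the table can predict every "choice" the machine will ever make.

# A deterministic state machine. The states, inputs, and replies
# below are hypothetical; the determinism is the point.
RULES = {
    ("idle", "greet"):    ("chatting", "Hello!"),
    ("chatting", "joke"): ("chatting", "Ha, good one."),
    ("chatting", "bye"):  ("idle", "Goodbye."),
}

def run(inputs, state="idle"):
    outputs = []
    for symbol in inputs:
        state, reply = RULES[(state, symbol)]  # exactly one possible move
        outputs.append(reply)
    return outputs

print(run(["greet", "joke", "bye"]))  # same inputs, same outputs, every time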

Now, by this logic, we have a problem. We build the computers, so we know that their operation is deterministic. We don't know this for certain about the human brain. We don't fully understand the mechanism, and the set of inputs is so incredibly large that we cannot hope to isolate them all and test ourselves. So, technically, we don't know that we, ourselves, are sentient.

Personally, if my brain is deterministic - if all my experience and choices are merely clockwork operations - I don't want to know it, because I'm not at all sure that there's a reason to continue on if that is the case...
 

The problem is defining what consciousness is. Can you define it? If so, does that definition hold true beyond you? If it does hold true, meaning that consciousness is now a definable standard, does that standard apply to all things? Would a robot, or machine of any kind, be held to a higher or lower standard than man? If so, why?
 

I think it will happen, eventually.
Unfortunately, if it does, it will probably happen before humankind is ready for it, so those poor sentient robots will be subject to horrendous atrocities until laws are made to protect them.
 
