Philosophical thread of the week: Could robots be conscious?

A lot of the time the word "consciousness" is used to mean "awareness." To be conscious of something is to be aware of it. In this sense consciousness is distinguished from being deeply asleep or in a coma. But some people who suffer damage to their visual cortex can be aware of stimuli without being conscious of them: they can "guess" the orientation (vertical vs. horizontal) of a line with great accuracy while claiming they can't see it. This is called blindsight (different from the D&D term!) and is usually cited as evidence that awareness is not what is meant by consciousness.

Philosophers who talk about consciousness are usually referring to "phenomenal consciousness." This consists of having qualitative states; it is "like something" to have those states. An entity's having a qualitative state does not necessarily mean that it is easy, or even possible, for us to know what it is like to have that state. For example, it has been argued (famously by Thomas Nagel) that the question "what is it like to be a bat?" cannot be answered. An entity can also have consciousness without being able to verbally report the content of those states; no one thinks, for example, that someone who has aphasia lacks consciousness.

The same two examples of qualitative states are used over and over again in the literature: the experience of color (specifically the color red) and the experience of pain. If an entity can experience color and/or pain, then it would have phenomenal consciousness.

As for free will (which I also discuss in my thesis), there does not seem to be any relevant physical difference between artificial computers and human brains. On the neural level the brain is as deterministic as a computer chip. The main difference lies in the complexity of the wiring.

Now it may be that there are indeterminacies in the operation of neurons - perhaps quantum fluctuations are amplified (in a Geiger-counter-like manner) to make real differences in the output of the neurons. But this kind of indeterminism does not seem to capture the nature of genuine freedom. And even if it did, I don't see why a mechanical equivalent (with a real Geiger counter, say) couldn't function in a similar manner.
 

I had a program a LONG time ago that was an AI simulator for a cockroach. It was a neural net program that attempted to recreate a cockroach as it moved around a smallish room, navigating by feeling and sensing its environment. It had a little graphical image of a roach with some antennae, and that was about it. I thought it was wicked cool. The important point about this program is that it was state of the art for its time, and recreated the exact neural network (as well as it could be understood, I guess) of an actual cockroach. The kicker was that it ran just fine on my 286 computer.

I imagine that a high end computer nowadays might be able to recreate something much more sophisticated like the neural network of a butterfly or maybe even a shrew.

But we are still a long way from being able to accurately recreate a human's thought processes. First off, we don't even understand a human's thought processes, so at best we can only simulate them.

I think the important thing to understand is that a computer has to learn just like a human has to learn. I think the only way this can be accomplished is to create a program that has the capacity to observe and form impressions about the world, and build a database of information. Just like a toddler, it would learn as it advanced and gathered information from around itself. You have to start somewhere. I think it's naive to believe that we can just create a genius computer. Think about all the little facts that you know about the world you live in and take for granted - things like "umbrellas keep me dry in the rain" or "carpet can be used to generate static electricity".

I had an idea in college of creating a recursive program that would read through dictionary entries. When it encountered a word, it would recurse to that word to understand it, and then go back to the original entry in order to build a complete picture. So, if it started with "aardvark" it would read "a small animal" - maybe it didn't understand what "small" meant, so it would flip over to understand "small", and assuming it got that, it would come back and read "animal", then recurse down to try to understand what an "animal" was, and, having figured that out, come back and finish the definition of an aardvark. You would have to build in thousands of base words just so it could get somewhere and not infinitely recurse on itself. Theoretically, once it was done, the program would "know" everything in the world, since it would have its own internal dictionary of information that gave it its view of the world. Computers are nowhere near being able to do this.
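(For what it's worth, here's a minimal sketch of that idea in Python. The toy DEFINITIONS dictionary and the BASE_WORDS set are invented purely for illustration - a real attempt would need the thousands of grounded base words mentioned above.)

```python
# Toy dictionary; real entries would be far messier.
DEFINITIONS = {
    "aardvark": "a small animal",
    "animal": "a living organism",
    "organism": "a living thing",
}

# Words the program is assumed to "understand" already, so the
# recursion bottoms out instead of looping forever.
BASE_WORDS = {"a", "small", "living", "thing"}

def understand(word, known=None):
    """Recursively expand a word's definition until every word in it
    is either a base word or has already been expanded."""
    if known is None:
        known = set(BASE_WORDS)
    if word in known:
        return known
    known.add(word)  # mark before recursing, so cycles terminate
    for part in DEFINITIONS.get(word, "").split():
        understand(part, known)
    return known

print(understand("aardvark"))
# e.g. {'a', 'small', 'living', 'thing', 'aardvark', 'animal', 'organism'}
```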

I doubt we'll see it in our lifetime.
 

Cheiromancer said:
On the neural level the brain is as deterministic as a computer chip. The main difference lies in the complexity of the wiring.

As I understand the state of neurophysics today, this is not necessarily true. Actually, as I understand it, it isn't true at all.

The switches in computers are deterministic - if you apply the voltage, the switch flips, period, end of discussion.

The switches in the human brain are not deterministic. The signal comes down from one nerve to the junction, and the next neuron fires. Usually. But not always. There are the vagaries of neurotransmitter action at work - several layers of statistical probability, such that the next neuron may not fire. There are few enough misfires (and enough redundancy) that the system is stable, but there may be enough that the system is not deterministic.
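(To make that concrete, here's a toy simulation. The 90% firing probability and the five-way redundancy are made-up numbers, not neurophysiology - the point is just that unreliable parts plus redundancy give a stable whole.)

```python
import random

def synapse_fires(p_fire=0.9):
    """Toy model of a probabilistic relay: the downstream neuron
    usually fires when signalled, but not always."""
    return random.random() < p_fire

def redundant_signal(n_synapses=5):
    """Majority vote over several parallel synapses, standing in for
    the redundancy that keeps the real system stable."""
    votes = sum(synapse_fires() for _ in range(n_synapses))
    return votes > n_synapses // 2

trials = 10_000
misfires = sum(not redundant_signal() for _ in range(trials))
print(f"single synapse fails ~10% of the time; "
      f"redundant group fails ~{misfires / trials:.2%}")
```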

Now it may be that there are indeterminacies in the operation of neurons - perhaps quantum fluctuations are amplified (in a Geiger-counter-like manner) to make real differences in the output of the neurons. But this kind of indeterminism does not seem to capture the nature of genuine freedom.

Given the possibility of the amplification, why doesn't it capture the nature of genuine freedom? Freedom is the state of not being locked into a particular behavior. Freedom is the ability to not follow rules. The mechanism underlying that ability is not the issue - merely the result.

And even if it did, I don't see why a mechanical equivalent (with a real Geiger counter, say) couldn't function in a similar manner.

With a single Geiger counter, or even many counters, you're still talking only about input and process. In essence, you're talking about a computer that has the input of an "eye" that sees radiation, and reacts to it in a predictable manner. You may be able to reach the desired goal in this way, but there's another route that is more likely: make your computer like the human brain - non-deterministic in the details of its operation. Make the switches non-deterministic.

We are just starting to build such things, called "quantum computers" in the common parlance. Each of their "switches" is non-deterministic.
 

I think that, yes, eventually they will develop consciousness and awareness. And they will probably develop it, rather than have us program and engineer it - probably as a result of a program (or whatever you wish to call it; it probably won't really resemble programming as we now know it) in a 'fuzzy logic' computer, one that has three states rather than the current two: yes, no, and maybe.
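(Strictly speaking, a discrete yes/no/maybe system is three-valued logic rather than continuous fuzzy logic. Here's a quick sketch of the three-state idea using Kleene's three-valued connectives, with None standing in for "maybe":)

```python
def tri_and(a, b):
    """AND over {True, False, None}: a definite False dominates;
    otherwise any unknown makes the result unknown."""
    if a is False or b is False:
        return False
    if a is None or b is None:
        return None
    return True

def tri_or(a, b):
    """OR over {True, False, None}: a definite True dominates."""
    if a is True or b is True:
        return True
    if a is None or b is None:
        return None
    return False

print(tri_and(True, None))  # None - still "maybe"
print(tri_or(True, None))   # True - known despite the unknown input
```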

One of the best depictions I've seen of a truly sentient machine is in David Gerrold's series Jumping Off the Planet, Bouncing Off the Moon, and Leaping to the Stars. In that near future, machines build and program other machines, because they're the only ones that can grasp the insane level of complexity and deal with the computations. Once the seed programs are in place, they run and develop on their own, creating and seeking new solutions just as a human child does from the instant of birth. Only of course they live thousands of times faster than a person and can create and experience many more 'connections' than a human ever could. They're intelligent, but not very 'human', if you get my drift.
 

Umbran said:
We are just starting to build such things, called "quantum computers" in the common parlance. Each of their "switches" is non-deterministic.

A quantum computer is a computer that uses quantum bits to store information. As I understand it, it's no different from a normal computer; it just manages to store things at a scale never before achieved, due to the technological hurdles.

No, you'd want a "fuzzy" computer - one that utilizes "fuzzy" logic. Which is really kind of a buzzword that died out in the mid-90s.
 

As a current student in Artificial Intelligence at MIT, I thought I'd weigh in here:

The current state of AI is nowhere near any capability that would allow for 'consciousness'. That said, we do have models that act similarly to how Umbran described the neuron, but they haven't seen much use since the boom of the 80s, when it seemed like neural networks were some sort of magic confluence of cognitive science, neuroscience, and AI.

That said, the following thought experiment is interesting:

Imagine a civilisation more advanced than our own that can produce a robotic replica of a neuron - one that acts in exactly the same way as a particular neuron in your head. Now we take out that biological neuron and replace it with the robotic replica. Your consciousness is still fine, because the mechanical neuron has been defined to work identically to the one it replaced, and everything else stayed the same. Now we switch out one more neuron. And another... and another, until your entire brain is made of the replica neurons, which work identically to the biological ones they replaced. And you have retained your human consciousness throughout the process, including at the end. Have we just transformed you into a cyborg with a cyberbrain?
 

der_kluge said:
I imagine that a high end computer nowadays might be able to recreate something much more sophisticated like the neural network of a butterfly or maybe even a shrew.

Have you learned nothing from Master Yoda? Size matters not!

Don't count the shrew so cheaply. He may be small, but his net of neurons is many orders of magnitude larger and more complex than that of any insect (probably that of any arthropod). This is one of the major differences between the chordates and all the other living things on the planet - their neural complexity.

I think the only way this can be accomplished is to create a program that has the capacity to observe and form impressions about the world, and build a database of information.

Such things exist in computer science today - they are called "neural networks", because they are designed to try to mimic the action of real neural networks. And they have to be "trained", rather like a toddler's brain.
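(A minimal sketch of what "training" means here: a single artificial neuron - a perceptron - learning the logical AND function from examples. The learning rate and the number of passes are arbitrary choices:)

```python
import random

# Training data: inputs and the target output for logical AND.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [random.uniform(-1, 1), random.uniform(-1, 1)]  # random start
bias = 0.0
lr = 0.1  # learning rate

for _ in range(50):  # repeated exposure, like a toddler practising
    for (x1, x2), target in examples:
        out = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
        err = target - out  # nudge weights toward the right answer
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        bias += lr * err

for (x1, x2), target in examples:
    pred = 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0
    print((x1, x2), "->", pred, "(want", target, ")")
```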

I had an idea in college of creating a recursive program that would read through dictionary entries. When it encountered a word, it would recurse to that word to understand it, and then go back to the original entry in order to build a complete picture.

Well, you noticed the first issue - bootstrapping. You can't use the dictionary alone to understand the dictionary; that would be circular reference, a logical loop that gets you nowhere. The second issue is, honestly, that we'd need to define "understanding", and then build a machine capable of that at all. :)
 

der_kluge said:
A quantum computer is a computer that uses quantum bits to store information. As I understand it, it's no different from a normal computer; it just manages to store things at a scale never before achieved, due to the technological hurdles.

No, I actually refer to a computer that uses quantum interference processes between bits to enact its logic. The logic is not merely "fuzzy"; it is not deterministic.

The problem with such a computer is that it is not guaranteed to get the right result for any particular operation. You ask it what one plus one is, and there's a probability distribution for the answers it'll give that has a central peak around two, but the probabilities for zero, one, three, and four are non-zero. And on a series of individual operations, it is notably slower than its normal digital counterpart.
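(The usual way around that unreliability is repetition: run the noisy operation many times and take the most common answer. A toy simulation - the error distribution here is made up purely for illustration:)

```python
import random
from collections import Counter

def noisy_add(a, b):
    """Toy model of an unreliable adder: the right answer most of the
    time, with invented probabilities for nearby wrong answers."""
    outcomes = [a + b - 2, a + b - 1, a + b, a + b + 1, a + b + 2]
    weights = [0.05, 0.10, 0.70, 0.10, 0.05]
    return random.choices(outcomes, weights)[0]

# Repeat the unreliable operation and trust the majority.
samples = [noisy_add(1, 1) for _ in range(101)]
answer, count = Counter(samples).most_common(1)[0]
print(f"1 + 1 = {answer}  (seen in {count} of {len(samples)} runs)")
```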

However, where a normal computer must do its operations pretty much in series, a quantum computer is massively parallel (much like a human brain). Where a normal computer has to do millions of floating point operations one after another, the quantum computer does its work as a collective whole, with all the bits interfering with each other at once.

This means that (at least in theory) the quantum computer ends up much better at tasks that would require vast numbers of individual operations from a standard digital computer - like pattern recognition and travelling salesman problems.
 

Numion said:
Even systems based on clear rules have a grey area where it is impossible to say whether a statement is true. Gödel proved that in his famous incompleteness theorem. It basically says that any system that is not too simple (trivial) contains statements that cannot be proven true or false within that system.

In mathematics, that means there are statements - about the natural numbers, for example - that cannot be proven either true or false using mathematical tools. The result for programming is that any advanced system allows for situations where deciding an 'if' statement is impossible. How consciousness might arise from that, I don't know, but the book Gödel, Escher, Bach, for example, examines the connection between Gödel's theorem and AI at quite some length.

Fascinating stuff.
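The programming face of this is the halting problem (Turing's result, a close cousin of Gödel's): no program can decide, for every program, whether it will ever finish - one precise sense in which "deciding an 'if' statement" can be impossible. Here's a sketch of the classic diagonal argument, built around a hypothetical halts() oracle that provably cannot exist:

```python
# Assume a hypothetical oracle halts(f) that returns True exactly when
# calling f() would eventually finish. No such function can be written;
# the function below shows why.

def halts(f):
    raise NotImplementedError("assumed oracle - provably impossible")

def paradox():
    # Whatever halts(paradox) answers is wrong: if it says "halts",
    # we loop forever; if it says "loops", we halt immediately.
    if halts(paradox):
        while True:
            pass
```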
 

I think the problem with achieving true computer consciousness is that (almost?) all of the researchers and scientists seem to be trying to replicate just the functionality of the human brain, and they ignore the malfunctionality - the neurons that fire but go to the wrong target or just miss entirely, the cells that reproduce but don't quite get a perfect copy of the data stored within, and similar phenomena.

When we achieve an AI capable of experiencing an "um, why did I come into the kitchen again?" moment, that's when we'll be onto something. :D
 
