Computer becomes first to pass Turing Test

Yeah, not buying it. Once again, we have a prime example of the media misrepresenting the truth.

First off, it wasn't a computer...it was a chatbot, and it only barely cleared the 30% bar (IIRC, Cleverbot once fooled over 50% of its judges).

Secondly, it was one test...one, and it was done with judges that the event organizer got to pick. It needs to be peer reviewed, and then it needs to be repeated. Science is not science unless it's repeatable. I can say that I've created cold fusion until I'm blue in the face, but if no one can repeat my experiments, then my claim is worthless.

Lastly, and most importantly in my mind, the event was organized by Kevin Warwick. For those who are unfamiliar with him, Warwick is a sensationalist who has been caught making ridiculous claims on numerous occasions, claims the media then laps up. For example, back in 1998, he implanted a microchip into his arm and then claimed he was the world's first cyborg. People now not-so-affectionately refer to him as Captain Cyborg.

If you're curious, here's an interesting site that discusses Warwick. It's archived, but you'll get the gist:

https://web.archive.org/web/20040829131505/http://www.kevinwarwick.org.uk/
 


Zombie_Babies

First Post
Yeah, that was kind of lame. I don't think Turing meant for us to spend 5 minutes deciding whether a twit on the internet was a troll or a bot.

I think the spirit of his intent was that an adult talking to an adult on the other end of a chat box couldn't tell the difference due to the sophistication and behavior of the chat bot.

Bear in mind, his idea came before "computers" as we think of them existed. Eliza, Dr. Sbaitso, and online chatting didn't exist yet, so the ease of developing a program that responds to entered dialog wasn't obvious at the time.

For this Eugene Goostman, the online version fails at:

Me: Where are you from?
Goostman: Ukraine.
Me: Oh, I'm from Ukraine.
Goostman: I've never been there.

I think if Turing were conducting the AI test, his 5-minute conversation would be testing for memory, consistency, and abstract problem solving, such as being able to follow a syllogism or complete an analogy.
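
To make that concrete, here's a rough sketch of the sort of probing I have in mind (Python; the `ask` callable is a made-up stand-in for whatever chat interface a given bot exposes, so this is an illustration, not anyone's actual test harness):

```python
# A rough sketch of that kind of 5-minute probe: memory, consistency,
# and a one-step syllogism. `ask` is any function that takes a prompt
# string and returns the bot's reply string (a made-up interface).

def probe(ask):
    # Memory: plant a fact, distract, then ask for it back.
    ask("My sister's name is Olena; please remember that.")
    ask("What's the weather like where you are?")  # distraction
    remembers = "olena" in ask("What did I say my sister's name was?").lower()

    # Consistency: the same factual question, asked twice.
    first = ask("Where are you from?").strip().lower()
    second = ask("Remind me - where did you say you were from?").strip().lower()
    consistent = first == second  # a human judge would compare meaning, not strings

    # Abstract problem solving: a simple syllogism.
    reasons = "yes" in ask("All cats are animals. Tom is a cat. Is Tom an animal?").lower()

    return remembers and consistent and reasons
```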

Exactly. This was a trick, not science.
 

Janx

Hero
Exactly. This was a trick, not science.

This got me thinking. Way back when I was a young lad, I wrote what today would be considered a chat bot. It was based on a point Asimov made that memory is often mistaken for intelligence (per some short story he wrote). So I designed it to parrot whatever I told it to say in response to something. With enough time spent "talking" to it, it would have a perfect memory of something to say.

It was a simple enough pet project, and I've had ideas on improving it, but the trick of it was that it was a chat bot, and not an AI as I would have called it back then.
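
The whole trick fits in a handful of lines. A minimal sketch of the idea (in Python rather than whatever I originally wrote it in, and the exact-match lookup is my guess at the simplest version - I don't have the original code):

```python
# The memory-mistaken-for-intelligence trick in miniature: the bot
# parrots whatever reply it has been taught for a given prompt.

class ParrotBot:
    def __init__(self):
        self.memory = {}  # prompt -> taught reply

    def respond(self, prompt):
        key = prompt.strip().lower()
        if key in self.memory:
            return self.memory[key]
        return "I don't know what to say to that yet. What should I say?"

    def teach(self, prompt, reply):
        # With enough time spent "talking" to it, it has a remembered
        # answer for everything you tend to say.
        self.memory[prompt.strip().lower()] = reply

bot = ParrotBot()
bot.teach("how are you?", "Can't complain. You?")
print(bot.respond("How are you?"))  # -> Can't complain. You?
```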

The core problem with the concept of the Turing Test is that people are writing chat bots to take the test, instead of designing true AIs and then having those true AIs take the test.
 

Umbran

Mod Squad
Staff member
Supporter
The core problem with the concept of the Turing Test is that people are writing chat bots to take the test, instead of designing true AIs and then having those true AIs take the test.

I think the core problem with what is in the news is that it has actually done an end-run around the concept of the Turing Test.

You complain that they didn't build a "True AI" - Turing would say that the idea of a "true AI" is rubbish. "True AI" carries around a bunch of preconceptions as to what intelligence is. The point of the Turing Test is that we don't know how to measure thought, so we cannot define a "true AI" to begin with. Turing basically approached the problem with the idea that, "if it looks, walks, and quacks like a duck, it is probably a duck". We know humans think by way of their responses to their world, so we should apply the same to computers. Turing suggested that if a computer could consistently fool humans, then it was, for all intents and purposes, a thinking machine.

The big point here is *consistently*. By reducing the threshold for passing to fooling a mere 30% of judges, and by allowing the "pretend to not be a native speaker" trick, the organizers actually tossed the check for doing this consistently out the window. The chatbot in question can only manage to win the game in the confines of a carefully controlled situation, not a general conversation.
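
Some quick numbers on why the threshold matters (a back-of-the-envelope sketch, assuming 30 independent judges purely for illustration - not necessarily the event's actual protocol):

```python
# Back-of-the-envelope: how often does a bot that fools each judge
# with probability p clear a given bar? Assumes 30 independent
# judges, purely for illustration.
from math import comb

def pass_probability(p, judges=30, need=10):
    # P(fooling at least `need` of `judges`, each fooled independently
    # with probability p) - a simple binomial tail sum.
    return sum(comb(judges, k) * p**k * (1 - p)**(judges - k)
               for k in range(need, judges + 1))

# A mediocre bot that fools each judge only 25% of the time still
# clears a 30%-of-judges bar (10 of 30) in roughly a fifth of runs...
print(round(pass_probability(0.25, need=10), 2))
# ...but has essentially no chance against a 90% bar (27 of 30).
print(round(pass_probability(0.25, need=27), 10))
```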

Now, eventually, we may be able to create a chatbot good enough to pass a real Turing Test. Then, we'll have something interesting - and perhaps we'll learn what "true AI" is, rather than defining it before we begin.
 



Janx

Hero
I think the core problem with what is in the news is that it has actually done an end-run around the concept of the Turing Test.

You complain that they didn't build a "True AI" - Turing would say that the idea of a "true AI" is rubbish. "True AI" carries around a bunch of preconceptions as to what intelligence is. The point of the Turing Test is that we don't know how to measure thought, so we cannot define a "true AI" to begin with. Turing basically approached the problem with the idea that, "if it looks, walks, and quacks like a duck, it is probably a duck". We know humans think by way of their responses to their world, so we should apply the same to computers. Turing suggested that if a computer could consistently fool humans, then it was, for all intents and purposes, a thinking machine.

I disagree here that a "True AI" isn't a model we are comparing to. It's not that nobody knows what a True AI is, so much as that there are obvious things that are not a True AI. A rock is not a True AI, nor is a plain chatbot.

Even the Duck test has to exclude gaming: a thing designed specifically to fool the Duck test's parameters is not automatically granted recognition as a Duck.

Especially when you consider the scope of undocumented attributes being tested for when you look at the Duck Candidate. You're looking at its feathers, its breathing, who knows what else.

It's my belief (or opinion, which holds as much weight as it's worth to you) that Turing expected to see technology progress to where the AI really did have the mental faculties of a real person, or close enough to it in the early stages. His idea was the modern equivalent of "I think, therefore I am": he wouldn't count a chatbot parroting that onto the screen; he expected to find that, for all intents and purposes, the AI was thinking, and thus was sentient.

Bear in mind, for the purposes of discussion, I don't care about the media or Captain Cyborg aspects of the original story. I'm talking about the point of the test, and why a chatbot designed to fool a human performing the Turing Test cannot qualify.

If nothing else, because the intent of the Goostman project was to cheat the Turing Test, it is effectively disqualified from passing it.
 

Umbran

Mod Squad
Staff member
Supporter
I disagree here that a "True AI" isn't a model we are comparing to. It's not that nobody knows what a True AI is, so much as that there are obvious things that are not a True AI. A rock is not a True AI, nor is a plain chatbot.

Well, now you've added a qualifier - "plain".

Imagine a bot that was designed to talk with people, but raise the threshold for "passing" to something like 90%. Put in the rules that it must speak in the native language of the questioner. Now it has to be a whole lot better at chatting. It must be able to parse and speak in something like natural language. It has to respond reasonably to arbitrary input, in a non-repetitive way. It has to have memory of the conversation. It probably needs access to a font of information about the world equivalent to that in the mind of an adult human. It must have a persona and personal history that can be referenced to create reasonable-sounding responses.

Is this now what you'd call a "simple" chatbot? All it does is chat, after all. It just does it really, really well, using all the systems a human would, with a different implementation.

If it has all the systems of a human, what does that mean?
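
To put some shape on that list, here's a hedged skeleton of what such a bot implies (the component names are mine, invented for illustration, not anyone's published design):

```python
# Skeleton of the "really good" chatbot described above. Every method
# body is a stub standing in for a genuinely hard subsystem.
from dataclasses import dataclass, field

@dataclass
class ComplexChatbot:
    persona: dict                               # name, age, personal history
    world_knowledge: dict                       # adult-human-scale store of facts
    memory: list = field(default_factory=list)  # the conversation so far

    def respond(self, utterance: str) -> str:
        self.memory.append(("them", utterance))
        meaning = self.parse(utterance)
        reply = self.compose(meaning)
        self.memory.append(("me", reply))
        return reply

    def parse(self, utterance: str) -> str:
        # Stub: real natural-language understanding goes here.
        return utterance.lower()

    def compose(self, meaning: str) -> str:
        # Stub: a real system would draw on persona, world knowledge,
        # and memory to stay non-repetitive and in character.
        return "Tell me more."
```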

Even the Duck test has to exclude gaming: a thing designed specifically to fool the Duck test's parameters is not automatically granted recognition as a Duck.

We know, to great levels of precision, what a duck actually is, right? Even if someone built a duck robot (a duckdroid?) you know that someone with a scalpel could cut into it, and know very quickly if that duck was really a duck. We could do a genetic analysis, to see if it *really* was a duck (unless that duck was built by the Cylons, I guess...), and not another waterfowl subjected to Moreau-level manipulations...

Turing's point in positing the test was that we do *NOT* have that same understanding of intelligence and thought. We do not know how to measure it, in general. We don't know what's really required to make a thing act in what we call an "intelligent" manner. The Turing Test really is a "proof is in the pudding" thing - if it really does act in a way that we'd call intelligent, well, maybe that's all that's really required for intelligence.

This includes the possibility that "intelligence" isn't really as awe-inspiring as we'd like to think. Turing was prepared to be disappointed that maybe human intelligence isn't very grand.
 

Janx

Hero
Well, now you've added a qualifier - "plain".

Imagine a bot that was designed to talk with people, but raise the threshold for "passing" to something like 90%. Put in the rules that it must speak in the native language of the questioner. Now it has to be a whole lot better at chatting. It must be able to parse and speak in something like natural language. It has to respond reasonably to arbitrary input, in a non-repetitive way. It has to have memory of the conversation. It probably needs access to a font of information about the world equivalent to that in the mind of an adult human. It must have a persona and personal history that can be referenced to create reasonable-sounding responses.

Is this now what you'd call a "simple" chatbot? All it does is chat, after all. It just does it really, really well, using all the systems a human would, with a different implementation.

If it has all the systems of a human, what does that mean?
At some point, that becomes a complex chatbot. When it takes on more and more capabilities of a human, it may in turn reach AI classification.

I posit that a guy writing a chatbot to pass the Turing Test has not built an AI. A guy building an AI that he presents to the Turing Test may be said to have succeeded if it passes the Turing Test.

The assumption (and it is an assumption) is that the guy trying to build an AI is trying to build something that is intelligent; the guy building a chatbot is building an illusion of intelligence.


Turing's point in positing the test was that we do *NOT* have that same understanding of intelligence and thought. We do not know how to measure it, in general. We don't know what's really required to make a thing act in what we call an "intelligent" manner. The Turing Test really is a "proof is in the pudding" thing - if it really does act in a way that we'd call intelligent, well, maybe that's all that's really required for intelligence.

This includes the possibility that "intelligence" isn't really as awe-inspiring as we'd like to think. Turing was prepared to be disappointed that maybe human intelligence isn't very grand.

I disagree here. Turing thought we'd have AIs in 20 years' time. He very likely thought we had or would develop a greater understanding of intelligence and thought in the same time frame. His test, to me, means that if you think your thing is an AI, it should be able to pass the Test.
 

Umbran

Mod Squad
Staff member
Supporter
I disagree here. Turing thought we'd have AIs in 20 years' time.

When he posited the initial forms of the test in 1950, he thought the number was 50 years (see below) - so, by his estimates, we should have had them over a decade ago.

He very likely thought we had or would develop a greater understanding of intelligence and thought in the same time frame.

Here's the essential point where I disagree.

In his 1950 paper, "Computing Machinery and Intelligence," Turing said the following:

"I propose to consider the question, "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think." The definitions might be framed so as to reflect so far as possible the normal use of the words, but this attitude is dangerous, If the meaning of the words "machine" and "think" are to be found by examining how they are commonly used it is difficult to escape the conclusion that the meaning and the answer to the question, "Can machines think?" is to be sought in a statistical survey such as a Gallup poll. But this is absurd. Instead of attempting such a definition I shall replace the question by another, which is closely related to it and is expressed in relatively unambiguous words."

He then goes on to replace the question, "Can machines think?" with the question, "Can a machine imitate a human to the point where we cannot tell the difference?" by way of the Imitation Game.

Later in the paper, he says:

"I believe that in about fifty years' time it will be possible, to programme computers, with a storage capacity of about 10^9, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning. The original question, "Can machines think?" I believe to be too meaningless to deserve discussion. Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted. "

(We note that 10^9 binary digits - a gigabit, or roughly 125 megabytes - was surpassed long ago; we now think in terms of machines with tera- and peta-bytes of memory, and talk about *transfer* speeds in the gigabit-per-second range. Another point on which Turing turned out to be optimistic.)
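
A quick worked comparison (the 8 TB figure is just a representative modern drive, not anything from Turing):

```python
# Putting Turing's storage figure in modern terms, in bytes.
turing_bytes = 10**9 / 8       # 10^9 bits is 1.25e8 bytes, about 125 MB
print(turing_bytes / 1e6)      # -> 125.0 megabytes
print(8e12 / turing_bytes)     # a representative 8 TB drive holds 64,000x that
```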

So, an explicit statement that he expects folks can create machines to play the imitation game, but *not* a statement that they should be programmed to do anything else. The machine does not need to be built for some other purpose and then also happen to be put to the game.

Now, he had his own personal beliefs, and he calls them out as such - he expected the best way to get a machine that'll play the game well would be to create a machine that could learn, and that "intelligence" would arise as an emergent quality. But he allowed that he could be incorrect as to the path to reach the goal. He also noted that his own learning machine would be self-modifying. A programmer might write code and start it, but he would not know what it looked like at the end.

As a scientist, then, he must have a test that will determine the quality *without* basing it in the details of the implementation!
 
