Computer becomes first to pass Turing Test




Janx

Hero
Turing never created a test.

And that's probably the persnickety detail. Turing wrote an article (Umbran posted a chunk of it above).

Other people designed a test based on his article and named it after him.

The Turing Test need not hew perfectly to what Turing wrote, given that it is an expansion on his idea.
 

Morrus

Well, that was fun
Staff member
Janx said:
And that's probably the persnickety detail. Turing wrote an article (Umbran posted a chunk of it above).

Other people designed a test based on his article and named it after him.

The Turing Test need not hew perfectly to what Turing wrote, given that it is an expansion on his idea.

I personally feel (as a complete layman) that the test sounds kinda pointless. I guess it helps drive innovation. But 30% of people fooled into thinking it's a human is the test? I mean, if we ever meet aliens, and they are clearly intelligent, but it's also clear from the way they speak that they're not human, they fail the test.

I think better tests would be directed towards sentience, but I'm not sure that's possible. I'm no AI expert. A dog is clearly sentient, but it can't fool a human into thinking it's a human. Making a machine with the sentience of a dog would be amazing, though. I did hear one AI talk (was it a TED talk? Dunno!) which suggested showing a scene and then having the AI describe it, answer questions about it, and derive conclusions from it.

I guess folks who understand this stuff better than I can tell me where my errors are, but it just seems silly to me.
 

Janx

Hero
Morrus said:
I personally feel (as a complete layman) that the test sounds kinda pointless. I guess it helps drive innovation. But 30% of people fooled into thinking it's a human is the test? I mean, if we ever meet aliens, and they are clearly intelligent, but it's also clear from the way they speak that they're not human, they fail the test.

I think better tests would be directed towards sentience, but I'm not sure that's possible. I'm no AI expert. A dog is clearly sentient, but it can't fool a human into thinking it's a human. Making a machine with the sentience of a dog would be amazing, though. I did hear one AI talk (was it a TED talk? Dunno!) which suggested showing a scene and then having the AI describe it, answer questions about it, and derive conclusions from it.

I guess folks who understand this stuff better than I can tell me where my errors are, but it just seems silly to me.

I suspect the idea of talking to the AI was to detect whether it was sentient. I'm not happy with the 30% threshold either, but I suspect it's some interpretation of Turing's 70% remark (the inverse condition). It may also be a counter-bias measure, as anybody acting as a judge in a Turing Test may be inclined to vote No simply because they know most of the entrants are not human.
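
To make that inverse reading concrete, here's a minimal sketch (the function and names are my own, not anything from Turing or the contest organizers): a judge with no more than a 70% chance of a right identification is fooled at least 30% of the time.

    def machine_passes(verdicts):
        # verdicts: one "human" or "machine" call per judge who questioned the machine
        fooled = sum(1 for v in verdicts if v == "human")
        return fooled / len(verdicts) >= 0.30  # at least 30% of judges misidentified it

    # A judge who is right 70% of the time is fooled the other 30%:
    print(machine_passes(["human"] * 3 + ["machine"] * 7))  # True, right at the line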


There's also the matter that the specific event that prompted this discussion was run by Captain Cyborg, a well-known attention whore, at his college. The whole thing is kinda rigged.
 

tomBitonti

Adventurer
Huh,

The sample chat, above, is unconvincing. The chat uses a lot of tricks to deflect from having to engage with the questions.

Also, using typed text as the test, while making for an easier-to-define test, really is cheating, as the bandwidth of text is so much less than that of an actual face-to-face conversation.

But I thought that (at least) two points of the "Turing Test" were that "thinking" is undefined, with the only available measure being "can the system emulate a human," and that the result is measured by observation.

Following on from that, what I've gathered about perspectives on the Turing Test, as defined, is that if a system can emulate a person to an arbitrarily high standard, then how the system works doesn't matter, as the system must have, somewhere, sufficient complexity to match whatever a person does.

The definition of the test does create a problem with regard to measuring "thinking," in shifting the burden all onto one side. The person is (as a given) thinking. That the person perceives a system as also thinking may be an illusion of perception; a fault of how brains work. That gets into questions of perception and how thinking works. (E.g., all of a person's awareness is a construct of the brain. What we think we are aware of is all brain state, conveyed through physical processes for sure, but at the perception level, brain state.)

Thx!

TomB
 

Umbran

Mod Squad
Staff member
Supporter
Turing never created a test.

In the paper I reference above, Turing suggested the "imitation game":

"The new form of the problem can be described in terms of a game which we call the 'imitation game." It is played with three people, a man (A), a woman (B), and an interrogator (C) who may be of either sex. The interrogator stays in a room apart front the other two. The object of the game for the interrogator is to determine which of the other two is the man and which is the woman. He knows them by labels X and Y, and at the end of the game he says either "X is A and Y is B" or "X is B and Y is A." The interrogator is allowed to put questions to A and B thus:

C: Will X please tell me the length of his or her hair?

Now suppose X is actually A, then A must answer. It is A's object in the game to try and cause C to make the wrong identification. His answer might therefore be:

"My hair is shingled, and the longest strands are about nine inches long."

In order that tones of voice may not help the interrogator the answers should be written, or better still, typewritten. The ideal arrangement is to have a teleprinter communicating between the two rooms. Alternatively the question and answers can be repeated by an intermediary. The object of the game for the third player (B) is to help the interrogator. The best strategy for her is probably to give truthful answers. She can add such things as "I am the woman, don't listen to him!" to her answers, but it will avail nothing as the man can make similar remarks.

We now ask the question, "What will happen when a machine takes the part of A in this game?" Will the interrogator decide wrongly as often when the game is played like this as he does when the game is played between a man and a woman? These questions replace our original, "Can machines think?"


Then later, as I quoted, he posits:

"I believe that in about fifty years' time it will be possible, to programme computers, with a storage capacity of about 10^9, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning."


Morrus said:
But 30% of people fooled into thinking it's a human is the test?

Note that he's not saying that this would indicate that the machine was thinking! He was just predicting that playing the game that well would be possible by the year 2000.
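
For anyone who wants the structure spelled out, here's a rough sketch of the game as Turing describes it. The function, class, and names below are my own invention, and the coin-flip judge is only a placeholder to show how a hit rate over many games gets measured, not a real interrogator:

    import random

    def imitation_game(player_a, player_b, judge, n_questions=5):
        # Hide A and B behind labels X and Y at random, run a text-only Q&A,
        # then ask the judge which label conceals player A.
        labels = {"X": player_a, "Y": player_b}
        if random.random() < 0.5:
            labels = {"X": player_b, "Y": player_a}
        for q in range(n_questions):
            for label, respond in labels.items():
                judge.observe(label, respond("question %d" % q))
        return labels[judge.identify()] is player_a  # correct identification?

    class CoinFlipJudge:
        # Placeholder judge: ignores the answers and guesses at random.
        def observe(self, label, answer): pass
        def identify(self): return random.choice(["X", "Y"])

    player_a = lambda q: "I am the woman, honest."  # A, trying to deceive
    player_b = lambda q: "Don't listen to him!"     # B, trying to help
    judge = CoinFlipJudge()
    games = [imitation_game(player_a, player_b, judge) for _ in range(1000)]
    print(sum(games) / len(games))  # ~0.5 here; Turing's predicted bound was 0.7

The point of the simulation framing: Turing's figure is a statement about the judge's hit rate over many games, not a property any single conversation can establish.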

Morrus said:
I think better tests would be directed towards sentience, but I'm not sure that's possible.

As mentioned - Turing felt that "thinking" was not a measurable thing*. By extension, "sentience" is similarly not measurable. Sentience is like pornography - we can't define it, but we know it when we see it. You can think of the Turing Test as just a framing of that observation.




*He said, "The original question, 'Can machines think?' I believe to be too meaningless to deserve discussion."
 
