Twelve reasons to toss the Turing test
I posted this one day to comp.ai.philosophy, because it seemed to me that the Turing test had passed from being an interesting philosophical exercise to being a stumbling block that actually impedes thinking about AI and about intelligence.
- Its definition is hopelessly vague; I've seen 'Turing Test' used
for anything from teletype exchanges limited to 5 minutes, to any kind
of external behavior, to any kind of physical observable whatsoever,
to unobservable phenomena as well (e.g. "thinking"). Such a wide range
of denotations does not amount to a "test" in any useful sense.
- It does nothing to guide AI research. AI does not proceed by
throwing up candidates for the Turing Test, but by attempting to solve
particular practical problems, or by emulating biological intelligence.
The TT thus does no good within the field of AI.
- If a machine "passed the Turing Test", the people who already believe
a machine can't think would only say "who cares?" A test whose results
have no uncontroversial reading is no test. The TT therefore does no
good (or has failed) in preparing people outside the field for the
possibility of AI.
- It's fatally subjective. There is no demonstration that results
are reproducible, even with a single observer. Calling it a "test" at
all in fact gives it a pseudo-scientific air it does not deserve;
far from eliminating human prejudice, it elevates it into the arbiter
of objective fact.
- It's easy to fool. Turing seemed to think that people would not, on
the whole, accept "intelligence" in machines. On the contrary, many
people accept it all too readily, or even figure it's already been done.
(People unfamiliar with computers generally assume that the machine is
much smarter than it is.)
- Focussing on external behavior as it does, the TT encourages the
notion that only algorithmic structure, rather than any physical fact
about human brains, produces intelligence. That may be, but it should
be a matter for investigation, not an initial assumption.
- It's biased toward language use, rather than any other demonstration
of intelligence-- e.g. the ability to read a map, fix a bicycle,
play a violin, draw a portrait, survive in the jungle, or calm a baby.
- If it worked at all, it would be because humans have an ability
to determine what is or is not "intelligence"; an ability which we
should examine to see where it comes from and how it works, not
mindlessly take as an unanalyzable given.
- Better definitions of intelligence exist; for instance, it can be
analyzed as a combination of capacities to remember, to learn, to
reason, to use language, to create, to plan, to know a good deal about
the world, and to execute everyday tasks.
- Indeed, the reification of 'intelligence' (as well as the related notion of IQ) is really a statistical misunderstanding. (See Gould's The Mismeasure of Man for the whole story.)
- One of the central tasks of AI (and cognitive science in general) is
to give us a complete theory of mind, which would include an explanation
of intelligence and how to search for it. Far from being necessary for
AI, the TT would be superseded by any successful AI (which would have to
be built on a theory of mind incorporating a far better explanation of
what intelligence is).
- Not the least use of a theory is the counter-arguments raised
against it, which are often useful for refining our understanding.
The counter-examples suggested by the Turing Test-- roomfuls of monkeys,
humongous lookup tables, Chinese Rooms-- are diverting, but shed little
light on what it takes to build an intelligent machine.
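The lookup-table counter-example can be made concrete in a few lines of code. This is a toy sketch, not any real system; the names and canned replies are invented for illustration. The point is that a program of this shape could, with an absurdly large table, mimic any finite set of exchanges while obviously embodying no theory of intelligence at all:

```python
# A toy "lookup table conversationalist": every input is mapped to a
# canned reply, with a vague dodge for anything not in the table.
# Scale the dict up far enough and it handles any finite dialogue --
# which is exactly why such counter-examples shed so little light.

CANNED_REPLIES = {
    "hello": "Hi there! How are you today?",
    "what is your name?": "I'm just a lookup table, really.",
    "do you think?": "What an interesting question!",
}

def reply(utterance: str) -> str:
    """Look up the input (normalized), falling back to a stock dodge."""
    key = utterance.strip().lower()
    return CANNED_REPLIES.get(key, "Tell me more about that.")

print(reply("Hello"))                 # table hit
print(reply("Why is the sky blue?"))  # fallback dodge
```

The sketch also shows why the test is "easy to fool": the fallback line alone can carry a surprising amount of conversation with a charitable judge.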
Turing originally proposed the test in "Computing Machinery and Intelligence", Mind, Vol. LIX, No. 236 (1950). The article is reprinted in The Mind's I, eds. Hofstadter and Dennett.