
How to Win a Turing Tournament

A computer could be taken for a person if presented as an autistic savant.

An article in today's Daily Telegraph makes much the same point about computers needing to be able to acquire mentalistic skills—and full speech comprehension in particular—that I made in a recent post. But if computers could be engineered to be mentalistic enough to understand their users’ speech and interpret their intentions, then there seems little reason why they should not eventually also become mentalistic enough to read books—and, perhaps more importantly, to begin to understand them.

Source: Wikimedia Commons

The advances in machine-reading ability that we are imagining would be the automated equivalent of so-called hyperlexia. This is the opposite of dyslexia, and describes a precocious ability to read, often without much insight into the meaning of a text, and often found in conjunction with ASD. The late Kim Peek (pictured) was hyperlexic to an extreme degree and had taught himself to read by the time he was 16 months old. Indeed, tests revealed that he could read the left- and right-hand pages of an open book with his left and right eyes simultaneously, with 98 per cent comprehension, in 8 to 10 seconds. Thanks to such astonishing skills, he got through Tom Clancy’s novel, The Hunt for Red October, in one hour and 25 minutes and was still able to give verbatim quotations in response to specific questions of factual detail four months later!

However, despite having this amazing, machine-like ability to reproduce text verbatim, Peek found it very difficult to recount the content of what he had read in his own words and, in a way that is typical of hyperlexia, showed severe deficits when it came to understanding, rather than simply repeating, what he had read. And here again an arresting parallel is found in computers, which are nowadays also hyperlexic in the sense that they can read text back to you, albeit without any understanding of its meaning whatsoever!

The basic difficulty for computer design in relation to engineering a machine’s ability to read and understand a book is closely allied to that of engineering an ability to understand a person’s speech. Both rely on language ability, but more particularly on the capacity to understand mentalistic terminology and to appreciate mental states such as belief, knowledge, and intention. However, once such terminology became accessible to a computer through the engineering of a mentalistic user-interface as suggested in the previous post, so too would the vast repository of human knowledge encoded in the world’s books. The real problem is the considerable amount of common-sense knowledge that is also required to interpret what you read: not just what words mean in the dictionary, but what they mean in their social, cultural, and psychological context.

Like autistic people, computers can also be expected to have mentalistic deficits where comprehension is concerned—and nowhere more obviously so than in relation to another fundamental aspect of mentalism where you find deficits in autism: the appreciation of humour. Following a lecture Peek gave, a member of the audience asked him a question about Abraham Lincoln’s Gettysburg Address, to which Peek replied: “Will’s House, 227 North West Front Street. But he stayed there only one night—he gave the speech the next day.” The laughter that greeted this remark surprised Peek at first, but having seen the joke himself, he then regularly recycled the comment for its comic effect.

In questioning an intelligent computer about Lincoln’s Gettysburg Address, you could readily understand how the system might make exactly the same mistake. But given that misunderstandings like this are very likely to happen, software engineers will be faced with the problem of how to handle such breakdowns in communication between user and machine, and here an obvious fix would be to imitate nature and give the machine a capacity to laugh off its own mistakes (not to mention finding its user’s witticisms amusing). As a minimum requirement, a competent mentalistic interface would have to be able to appreciate irony (another major deficit in autistics), and a truly intelligent system would certainly have to be able to understand humour in all its forms if it were to attempt to comprehend its human users.

And in any event, unintentional hilarity is as likely to be produced by talking computers as it is by young children. Engineers intent on making their mentalistic user-interfaces seem more grown-up in this respect would be certain to build on developments already under way to improve the system’s ability to handle humour, and this would demand not simply the avoidance of childish solecisms and derisory double entendres, but an appreciation of real jokes, and perhaps even the ability to tell them. Indeed, you might even envisage the system’s sense of humour being a user-defined parameter, with settings ranging from the wildly wacky to the tersely Teutonic!
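
To make the idea concrete, here is a minimal, purely illustrative sketch in Python of what such a user-defined humour setting might look like; the HumourSettings class, its playfulness parameter, and the should_attempt_joke check are invented for this example rather than taken from any existing system.

from dataclasses import dataclass

@dataclass
class HumourSettings:
    # Hypothetical, user-defined humour parameter: 0.0 is "tersely Teutonic",
    # 1.0 is "wildly wacky". All names here are invented for illustration.
    playfulness: float = 0.5

    def should_attempt_joke(self, risk_of_misfire: float) -> bool:
        # Only attempt a witticism when the user's chosen playfulness
        # outweighs the estimated risk of the joke falling flat.
        return self.playfulness > risk_of_misfire

# Example: a cautious configuration that rarely jokes.
cautious = HumourSettings(playfulness=0.1)
print(cautious.should_attempt_joke(risk_of_misfire=0.4))  # prints False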

Comparable limitations in computers’ ability to handle contextual, common-sense knowledge mean that, at present, Turing tests devised to see if a computer's responses to questions can be distinguished from those of a person cannot be completely open with regard to the topic of discussion. Machines have done quite well with wine, politics, and religion as subjects for conversation (presumably because these are topics about which you can talk complete nonsense and still be taken seriously). Indeed, one program has even passed a Turing test to the extent of convincing five out of ten judges that it was a human psychotherapist (admittedly, something that might say as much about psychotherapy as it does about a machine’s intelligence)!

Yet if computers could access and acquire any knowledge on any subject as easily as any human being could, there might be no need to restrict the subjects of conversation in the test—and presumably no way in which a person could use their peculiarly human knowledge to judge whether they were conversing with a machine or with another person. On the contrary, if the computer’s literacy were of a higher order than that of the human judge, the machine might have the advantage.

But of course, the machine might still fail—perhaps because it seemed to know too much, or still seemed somewhat “autistic” by comparison to the average human being. Nevertheless, even if such systems could not pass for completely normal persons, they might easily become the machine equivalents of autistic savants like Kim Peek. His expertise was primarily an encyclopaedic knowledge of hundreds of books, and presumably a mentalistically programmed computer that managed to read an equivalent amount could achieve comparable feats and present itself as a similar kind of savant. Indeed, these considerations suggest an obvious ploy for programmers intent on writing Turing-test-winning software: explain away both the strengths and weaknesses of your system’s cognitive style by having it masquerade as an autistic savant!

(Extracted and condensed from my forthcoming book, The Diametric Mind: Insights into AI, IQ, society, and consciousness: a sequel to The Imprinted Brain.)
