Cognition
The One-Question Turing Test
The Turing test says as much about us as it does about thinking in machines.
Updated February 15, 2024
Can machines think? Alan Turing, the originator of modern electronic computing, asked this question in terms of the now-famous Turing test. Turing proposed it as a variant of an “imitation game” in which, using only indirect means of communication such as a teleprinter or computer terminal, you had to distinguish between a male and a female respondent who were not obliged to tell the truth about who they were. In the crucial variant that he proposed to answer the question of whether machines can think, one of the protagonists is a computer programmed to imitate a person: what today we would call a chatbot. He argued that if most people were no better than chance at saying which was the chatbot and which was not after a suitable period of interrogating both, you could say that the chatbot had passed the test.
Inevitable limitations in computers’ ability to handle contextual, common-sense knowledge mean that, at present, Turing tests cannot be completely open with regard to the topic of discussion. Computer programs have done quite well with wine, politics, and religion as the subjects for conversation—presumably because these are topics about which you can talk complete nonsense and still be taken seriously! And of course, the fact that five out of ten clients were convinced by a program mimicking a psychotherapist probably says more about psychotherapy than it does about the Turing test.
But however that may be, here is a single-question Turing test that goes to the heart of the matter:
You are interrogating two respondents via a computer terminal. One is a person; the other is a chatbot that is programmed to make you think it is a person. You must decide which is which on the basis of a reply to a single question put to just one of them. What question must you ask to determine definitively whether you are interrogating the chatbot or the person?
Obviously, asking, “Are you the person?” or “Are you the chatbot?” of either will not suffice, because although the person will answer truthfully, the chatbot will give you a false answer, and you have no way of knowing which you are addressing, because both will claim to be human.
Nevertheless, if you consider what each would tell you about what the other would say, a solution can be found. This involves imagining alternatives, which in turn demands an understanding of each respondent’s knowledge of the other’s truth-telling or otherwise.
Specifically, this is known in the autism literature as an appreciation of false belief, and more generally these issues are aspects of what is often called “theory-of-mind skills,” “mind-reading,” or, in a word, mentalism. Deficits in mentalism are generally diagnostic of autism, and tests of false belief are particularly crucial. Furthermore, as this example suggests, such mentalistic skills are the key to passing the Turing test and giving computers an appearance of being able to think.
The answer is to ask either of them: “Which of you would be indicated as the person if I asked the other to indicate him?”
Clearly, you might be questioning either the person or the chatbot. If you were addressing the chatbot, it would of course say that it itself would be indicated (a false answer, since the person, if asked to indicate the person, would indicate themselves). But if you were addressing the person, they would give the same answer and name the chatbot, because they would know that the chatbot, asked the same thing, would give a false answer and indicate itself as human. Either way, the answer names the chatbot, so you could confidently conclude that, whichever respondent you asked, the person was the one not indicated in the answer.
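If it helps to see the two cases laid out, here is a minimal sketch in Python of the logic above, assuming, as the solution does, that the person always answers truthfully and the chatbot always lies; the labels and function names are purely illustrative.

```python
# A minimal sketch of the single-question logic, assuming the person always
# tells the truth and the chatbot always lies. Labels are illustrative only.

RESPONDENTS = ("person", "chatbot")

def other(respondent):
    """The respondent you are not currently addressing."""
    return "chatbot" if respondent == "person" else "person"

def would_indicate_as_person(respondent):
    """Who this respondent would point to if asked to indicate the person."""
    # Both point at themselves: the person truthfully, the chatbot falsely.
    return respondent

def answer(addressee):
    """Reply to: 'Which of you would be indicated as the person
    if I asked the other to indicate him?'"""
    truth = would_indicate_as_person(other(addressee))
    # The person reports the truth; the chatbot names the other respondent instead.
    return truth if addressee == "person" else other(truth)

for addressee in RESPONDENTS:
    named = answer(addressee)
    guess = other(named)  # interrogator's rule: the person is whoever is NOT named
    print(f"Asked the {addressee}: the answer names the {named}; guess the {guess}.")
    assert guess == "person"  # the rule identifies the person in both cases
```

Run either way, the answer names the chatbot, so the rule of picking the respondent who is not named always lands on the person.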
Or at least, you could, unless the chatbot were clever enough to realize that you would think this. Clearly, if the chatbot’s mentalistic intelligence were high enough, it might realize that lying in answer to this question would give it away, but that telling the truth would not, because the person would be wrongly assuming that the chatbot would lie.
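A variation on the sketch above shows how this spoils the solution; the only added assumption, taken from the argument here, is that the chatbot answers this one question truthfully while the person still models it as a liar.

```python
# Variation on the earlier sketch: the chatbot answers this one question
# truthfully, while the person still assumes the chatbot would lie.

RESPONDENTS = ("person", "chatbot")

def other(respondent):
    return "chatbot" if respondent == "person" else "person"

def answer(addressee):
    if addressee == "person":
        # The person still models the chatbot as a liar that would indicate
        # itself as human, so their truthful report is unchanged.
        return "chatbot"
    # The truth-telling chatbot reports what would really happen:
    # the person, if asked, would indicate themselves.
    return "person"

for addressee in RESPONDENTS:
    named = answer(addressee)
    guess = other(named)  # the same rule: the person is whoever is NOT named
    print(f"Asked the {addressee}: the answer names the {named}; guess the {guess}.")
# Asking the person still gives the right guess, but asking the truthful
# chatbot now makes the rule point at the chatbot, so the single question
# no longer settles which is which.
```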
Being able to lie is yet another aspect of mentalism where autistics—perhaps to their credit—have serious failings. This is because lying exploits the same mentalistic skills that the single-question solution here requires, and it in turn suggests that, were the chatbot clever enough to tell the truth in this instance, there would be no single-question solution to such a Turing test.
A thinking chatbot might deduce that deceit was the essence of being human—even when it sometimes meant telling the truth. And the chatbot might be right!