Lipreading for the FBI
We all read lips. Our brains can’t help it.
Posted March 16, 2010
Sue Thomas was sure she was being fired from the FBI. She had been analyzing fingerprints for just a week, but had found the work unbearably tedious. When asked to report to her supervisor’s office, she was prepared for bad news. Instead, she was confronted with seven of the FBI’s top officers who were interested in discussing, of all things, her lipreading. Sue is completely deaf, and her supervisors had noticed that during their interactions, she seemed particularly good at lipreading. Now they were interested in whether she could also lipread from videos. She told them that she could, as long as there was a clear view of the face.
Sue was shown a surveillance video of a suspected illegal gambling transaction involving organized crime. The video didn’t contain an audio track, so it was Sue’s job to transcribe the dialogue from lipreading. She succeeded, and with that she became the FBI’s first professional lipreader. This story is conveyed in Thomas’ engaging memoir Silent Night, as well as in the syndicated television show based on her book, Sue Thomas: F.B.Eye.
Sue Thomas is a superb lipreader. When I met with her, we chatted for 45 minutes and she asked me to repeat myself only once. What makes her so good? Sue was never formally taught to lipread. But she believes that the intensive speech therapy she received for her own speech production helped her lipreading skill. Also, Sue went deaf when she was 18 months old, before most of her language acquisition had occurred. Research shows that individuals who rely on lipreading to acquire language are at a strong advantage for superior lipreading throughout life. Sue has two other attributes that could play a role in her skill: she’s a woman and she’s very bright. Both of these characteristics have been shown to correlate with superior lipreading.
But what about the rest of us? While most of us are not as skilled as Sue Thomas, we all lipread. In fact, we do it all the time. When you’re talking to someone in a noisy restaurant, you’re lipreading. When you’re in a quieter environment, talking to someone with a thick accent, you’re lipreading. Even if it’s quiet and the talker doesn’t have an accent, if they’re discussing something complicated, you’re lipreading. You lipread when you’re learning a new language, and you were lipreading when you learned your first language. Research shows that whenever you can see a talker’s face, you watch and use their visible speech movements, and that you’ve been doing so all your life. You lipread to enhance the speech you hear.
The speech you lipread can even override the speech you hear. Watch the video on the left (with the sound on). What syllables do you hear? Most people hear “ba, va, tha, and ga”. Now replay the video with your eyes closed. You’ll notice that the soundtrack contains just one repeated syllable. But even knowing this fact, if you again watch the video with your eyes open, the speech you see will influence the speech you hear. This demonstration is known as the ‘McGurk Effect’, and it is very strong. In our own laboratory, we find that a visible 'va' overrides an audible 'ba' 98% of the time. The effect works on infants and on adults of every language background for which it’s been tested (although the specific types of speech integration depend on the language). The effect works if the voice and face are of different genders, and even if the perceiver doesn’t know they’re looking at a face.
The McGurk effect offers compelling evidence that we lipread automatically. It’s also consistent with neuroimaging research showing that the brain responds to lipread information as if it’s hearing speech. Seeing a face silently articulate induces activity in the ‘hearing’ areas of the brain, including the auditory cortex and auditory brainstem. This is true for all of us, whether hearing or deaf. These findings are part of the now overwhelming evidence that speech perception evolved as a multisensory function. And as I described in an earlier blog post (How Words Feel), we can even perceive speech from touching faces. Your speech brain is hungry for information about speech articulation and is willing to get it from wherever it can.
So you may not be able to lipread from a silent video well enough to perform FBI surveillance. But the lipreading you do, typically for enhancing heard speech, is constant and automatic. Your brain can’t help it.
Lawrence Rosenblum is a Professor of Psychology at the University of California, Riverside. He studies multimodal speech perception and general auditory perception. His new book on hidden perceptual skills, “See What I’m Saying: The Extraordinary Powers of Our Five Senses” (www.lawrencerosenblum.com), is published by W. W. Norton (2010).
References
Auer, E., & Bernstein, L. (2007). Enhanced visual speech perception in individuals with early-onset hearing impairment. Journal of Speech, Language, and Hearing Research, 50, 1157–1165.
Calvert, G.A., Bullmore, E., Brammer, M.J., Campbell, R., Iversen, S.D., Woodruff, P., et al. (1997). Silent lipreading activates the auditory cortex. Science, 276, 593–596.
Green, K.P., Kuhl, P.K., Meltzoff, A.N., & Stevens, E.B. (1991). Integrating speech information across talkers, gender, and sensory modality: Female faces and male voices in the McGurk effect. Perception & Psychophysics, 50, 524–536.
Massaro, D. W., Cohen, M. M., Gesi, A., Heredia, R., & Tsuzaki, M. (1993). Bimodal speech perception: An examination across languages. Journal of Phonetics, 21, 445-478.
McGurk, H., & MacDonald, J.W. (1976). Hearing lips and seeing voices. Nature, 264, 746–748.
Musacchia, G., Sams, M., Nicol, T., & Kraus, N. (2005). Seeing speech affects acoustic information processing in the human brainstem. Experimental Brain Research, 168, 1–10.
Rosenblum, L.D. (2008). Speech Perception as a Multimodal Phenomenon. Current Directions in Psychological Science, 17, 405-409.
Rosenblum, L.D., & Saldaña, H.M. (1996). An audiovisual test of kinematic primitives for visual speech perception. Journal of Experimental Psychology: Human Perception and Performance, 22(2), 318–331.
Rosenblum, L.D., Schmuckler, M.A., & Johnson, J.A. (1997). The McGurk effect in infants. Perception & Psychophysics, 59(3), 347-357.