Volume 8, Issue 3, pp. 307–316
RESEARCH ARTICLE

“Look who's talking!” Gaze Patterns for Implicit and Explicit Audio-Visual Speech Synchrony Detection in Children With High-Functioning Autism

Ruth B. Grossman (Corresponding Author)

Emerson College, Department of Communication Sciences and Disorders, 120 Boylston Street, Boston, Massachusetts

University of Massachusetts Medical School Shriver Center, 200 Trapelo Rd, Waltham, Massachusetts

Address for correspondence and reprints: Ruth B. Grossman, Emerson College, 120 Boylston Street, Boston, MA 02116. E-mail: [email protected]
Erin Steinhart

University of Massachusetts Medical School Shriver Center, 200 Trapelo Rd, Waltham, Massachusetts

Teresa Mitchell

University of Massachusetts Medical School Shriver Center, 200 Trapelo Rd, Waltham, Massachusetts

William McIlvane

University of Massachusetts Medical School Shriver Center, 200 Trapelo Rd, Waltham, Massachusetts

First published: 24 January 2015

Abstract

Conversation requires integration of information from faces and voices to fully understand the speaker's message. To detect auditory-visual asynchrony of speech, listeners must integrate visual movements of the face, particularly the mouth, with auditory speech information. Individuals with autism spectrum disorder may be less successful at such multisensory integration, despite their demonstrated preference for looking at the mouth region of a speaker. We showed participants, individuals with and without high-functioning autism (HFA) aged 8–19, a split-screen video of two identical individuals speaking side by side. Only one of the speakers was in synchrony with the corresponding audio track, and synchrony switched between the two speakers every few seconds. Participants were asked either to watch the video without further instructions (implicit condition) or to specifically watch the in-synch speaker (explicit condition). We recorded which part of the screen and face their eyes targeted. Both groups looked at the in-synch video significantly more with explicit instructions. However, participants with HFA looked at the in-synch video less than typically developing (TD) peers and did not increase their gaze time as much as TD participants in the explicit task. Importantly, the HFA group looked significantly less at the mouth than their TD peers, and significantly more at non-face regions of the image. There were no between-group differences for eye-directed gaze. Overall, individuals with HFA spend less time looking at the crucially important mouth region of the face during auditory-visual speech integration, which is maladaptive gaze behavior for this type of task. Autism Res 2015, 8: 307–316. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.
