Volume 11, Issue 3 pp. 409-418
ORIGINAL ARTICLE

Audiovisual synchrony detection for fluent speech in early childhood: An eye-tracking study

Han-yu Zhou

Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China

Department of Psychology, University of Chinese Academy of Sciences, Beijing, China

Han-xue Yang

Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China

Department of Psychology, University of Chinese Academy of Sciences, Beijing, China

Zhen Wei

Affiliated Shenzhen Maternity and Child Healthcare Hospital, Shenzhen, China

Guo-bin Wan

Affiliated Shenzhen Maternity and Child Healthcare Hospital, Shenzhen, China

Simon S. Y. Lui

Department of Psychiatry, The University of Hong Kong, Hong Kong Special Administrative Region, China

Raymond C. K. Chan

Corresponding Author

Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China

Department of Psychology, University of Chinese Academy of Sciences, Beijing, China

Correspondence

Professor Raymond C. K. Chan, Institute of Psychology, Chinese Academy of Sciences, 16 Lincui Road, Beijing 100101, China.

Email: [email protected]

First published: 29 March 2022

Funding information: National Natural Science Foundation of China, Grant/Award Number: 31970997

Abstract

During childhood, the ability to detect audiovisual synchrony gradually sharpens for simple stimuli such as flash-beeps and single syllables. However, little is known about how children perceive synchrony in natural, continuous speech. This study investigated young children's gaze patterns while they watched movies of two identical speakers telling stories side by side. Only one speaker's lip movements matched the voice; the other's either led or lagged the soundtrack by 600 ms. Children aged 3–6 years (n = 94, 52.13% male) showed an overall preference for the synchronous speaker, with no age-related change in synchrony-detection sensitivity, as indicated by similar gaze patterns across ages. However, viewing time for the synchronous speech was significantly longer in the auditory-leading (AL) condition than in the visual-leading (VL) condition, suggesting that asymmetric sensitivity to AL versus VL asynchrony is already established in early childhood. When further examining gaze patterns on the dynamic faces, we found that greater attention to the mouth region was an adaptive strategy for reading visual speech signals and was therefore associated with longer viewing of the synchronous videos. Attention to detail, a dimension of autistic traits characterized by local processing, was correlated with poorer speech synchrony processing. These findings extend previous research by charting the development of speech synchrony perception in young children, and may have implications for clinical populations (e.g., autism) with impaired multisensory integration.

CONFLICT OF INTEREST

The authors declare that there are no conflicts of interest.