Centre d'Information et de documentation du CRA Rhône-Alpes
Practical information
-
Address
Centre d'information et de documentation
du CRA Rhône-Alpes
Centre Hospitalier le Vinatier
Bât. 211
95, Bd Pinel
69678 Bron Cedex
Hours
Monday to Friday
9:00-12:00 and 13:30-16:00
Contact
Tel.: +33 (0)4 37 91 54 65
Email
Fax: +33 (0)4 37 91 54 37
-
Search results
1 result for the keyword 'audio-visual integration'
“Look who's talking!” Gaze Patterns for Implicit and Explicit Audio-Visual Speech Synchrony Detection in Children With High-Functioning Autism / Ruth B. GROSSMAN in Autism Research, 8-3 (June 2015)
[article]
Title: “Look who's talking!” Gaze Patterns for Implicit and Explicit Audio-Visual Speech Synchrony Detection in Children With High-Functioning Autism
Document type: Printed and/or electronic text
Authors: Ruth B. GROSSMAN, Author; Erin STEINHART, Author; Teresa MITCHELL, Author; William MCILVANE, Author
Pages: p.307-316
Language: English (eng)
Keywords: face perception; audio-visual integration; high-functioning autism; eye tracking; mouth-directed gaze
Decimal index: PER Periodicals
Abstract: Conversation requires integration of information from faces and voices to fully understand the speaker's message. To detect auditory-visual asynchrony of speech, listeners must integrate visual movements of the face, particularly the mouth, with auditory speech information. Individuals with autism spectrum disorder may be less successful at such multisensory integration, despite their demonstrated preference for looking at the mouth region of a speaker. We showed participants (individuals with and without high-functioning autism (HFA) aged 8–19) a split-screen video of two identical individuals speaking side by side. Only one of the speakers was in synchrony with the corresponding audio track, and synchrony switched between the two speakers every few seconds. Participants were asked to watch the video without further instructions (implicit condition) or to specifically watch the in-synch speaker (explicit condition). We recorded which part of the screen and face their eyes targeted. Both groups looked at the in-synch video significantly more with explicit instructions. However, participants with HFA looked at the in-synch video less than typically developing (TD) peers and did not increase their gaze time as much as TD participants in the explicit task. Importantly, the HFA group looked significantly less at the mouth than their TD peers, and significantly more at non-face regions of the image. There were no between-group differences for eye-directed gaze. Overall, individuals with HFA spend less time looking at the crucially important mouth region of the face during auditory-visual speech integration, which is maladaptive gaze behavior for this type of task.
Online: http://dx.doi.org/10.1002/aur.1447
Permalink: https://www.cra-rhone-alpes.org/cid/opac_css/index.php?lvl=notice_display&id=261