
Centre d'Information et de documentation du CRA Rhône-Alpes (CRA)
Practical information

Address
Centre d'information et de documentation du CRA Rhône-Alpes
Centre Hospitalier le Vinatier, bât 211
95, Bd Pinel
69678 Bron Cedex

Opening hours
Monday to Friday, 9:00-12:00 and 13:30-16:00

Contact
Tel: +33(0)4 37 91 54 65
Fax: +33(0)4 37 91 54 37
Author: Zivile BERNOTAITE

Documents available written by this author (3)

Auditory and Semantic Processing of Speech-in-Noise in Autism: A Behavioral and EEG Study / Jiayin LI in Autism Research, 18-10 (October 2025)
[article]
Title: Auditory and Semantic Processing of Speech-in-Noise in Autism: A Behavioral and EEG Study
Document type: printed text
Authors: Jiayin LI, Author; Maleeha SUJAWAL, Author; Zivile BERNOTAITE, Author; Ian CUNNINGS, Author; Fang LIU, Author
In: Autism Research > 18-10 (October 2025), p.2011-2030
Languages: English (eng)
Keywords: autism; N400; neural tracking; speech-in-noise; temporal response functions
Decimal index: PER Periodicals
Abstract: Autistic individuals often struggle to recognize speech in noisy environments, but the neural mechanisms behind these challenges remain unclear. Effective speech-in-noise (SiN) processing relies on auditory processing, which tracks target sounds amidst noise, and semantic processing, which further integrates relevant acoustic information to derive meaning. This study examined these two processes in autism. Thirty-one autistic and 31 non-autistic adults completed a sentence judgment task under three conditions: quiet, babble noise, and competing speech. Auditory processing was measured using EEG-derived temporal response functions (TRFs), which tracked how the brain follows speech sounds, while semantic processing was assessed via behavioral accuracy and the N400 component, a neural marker of semantic processing. Autistic participants showed reduced TRF responses and delayed N400 onset, indicating less efficient auditory processing and slower semantic processing, despite similar N400 amplitude and behavioral performance. Moreover, non-autistic participants demonstrated a trade-off between auditory and semantic processing resources. In the competing speech condition, they showed enhanced semantic integration but reduced neural tracking of auditory information when managing linguistic competition introduced by intelligible speech noise. In contrast, the autistic group showed no modulation of neural responses, suggesting reduced flexibility in adjusting auditory and semantic demands. These findings highlight distinct neural processing patterns in autistic individuals during SiN tasks, providing new insights into how atypical auditory and semantic processing shape SiN perception in autism.
Online: https://doi.org/10.1002/aur.70097
Permalink: https://www.cra-rhone-alpes.org/cid/opac_css/index.php?lvl=notice_display&id=569
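The auditory measure in this abstract, the temporal response function (TRF), is in essence a regression of time-lagged stimulus features onto the EEG signal. The sketch below is not the authors' pipeline; it is a minimal illustration of the idea, assuming a single EEG channel, a speech-envelope regressor, synthetic data, and illustrative lag and ridge-regularisation settings.

```python
# Minimal TRF sketch (not the study's analysis): ridge regression of
# time-lagged stimulus-envelope samples onto one EEG channel.
# All data are synthetic; fs, lag window and alpha are illustrative.
import numpy as np

def lagged_design(stim, lags):
    """Stack time-shifted copies of the stimulus as regression columns."""
    n = len(stim)
    X = np.zeros((n, len(lags)))
    for j, lag in enumerate(lags):
        X[lag:, j] = stim[:n - lag] if lag > 0 else stim
    return X

def fit_trf(stim, eeg, fs=128, tmin=0.0, tmax=0.4, alpha=1.0):
    """Ridge solution w = (X'X + alpha*I)^-1 X'y; w is the TRF over the lag window."""
    lags = np.arange(int(tmin * fs), int(tmax * fs) + 1)
    X = lagged_design(stim, lags)
    w = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ eeg)
    return lags / fs, w

# Synthetic demo: EEG that follows the envelope with a ~100 ms delay,
# so the recovered TRF should peak near 0.1 s.
rng = np.random.default_rng(0)
fs = 128
envelope = np.abs(rng.standard_normal(fs * 60))
eeg = np.roll(envelope, int(0.1 * fs)) + 0.5 * rng.standard_normal(len(envelope))
times, trf = fit_trf(envelope, eeg, fs=fs)
print("TRF peak latency (s):", times[np.argmax(trf)])
```

In practice, a "reduced TRF response" as described in the abstract would correspond to smaller peak weights in this lag profile; dedicated toolboxes add cross-validation over alpha and multichannel handling, which this sketch omits.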
Linguistic and Musical Syntax Processing in Autistic and Non-Autistic Individuals: An Event-Related Potential (ERP) Study / Anna PETROVA ; Zivile BERNOTAITE ; Maleeha SUJAWAL ; Chen ZHAO ; Hiba AHMED ; Cunmei JIANG ; Fang LIU in Autism Research, 18-6 (June 2025)
[article]
Title: Linguistic and Musical Syntax Processing in Autistic and Non-Autistic Individuals: An Event-Related Potential (ERP) Study
Document type: printed text
Authors: Anna PETROVA, Author; Zivile BERNOTAITE, Author; Maleeha SUJAWAL, Author; Chen ZHAO, Author; Hiba AHMED, Author; Cunmei JIANG, Author; Fang LIU, Author
In: Autism Research > 18-6 (June 2025), p.1245-1256
Languages: English (eng)
Keywords: autism; language; music; P600; syntax
Decimal index: PER Periodicals
Abstract: Syntactic processing in both language and music involves combining elements, such as words or chords, into coherent structures. The Shared Syntactic Integration Resource Hypothesis (SSIRH) was introduced based on observations of similar neural responses to syntactic violations across both domains. This hypothesis suggests that difficulties in syntactic processing in one domain may result in similar challenges in the other. The current study tested the SSIRH in autism, a neurodevelopmental condition often associated with language difficulties but relatively preserved musical abilities. Thirty-one autistic and 31 non-autistic participants judged the acceptability of syntactically congruent and incongruent sentences and musical sequences while their neural responses were recorded using electroencephalography. Autistic participants exhibited a reduced and delayed P600 effect (a marker of syntactic integration) across both domains, despite achieving similar behavioral accuracy to the non-autistic group. These findings suggest parallel difficulties in syntactic processing in autism for both language and music, providing support for the SSIRH. This is the first study to directly examine real-time syntactic integration in both domains in autistic individuals, offering novel insights into cross-domain syntactic processing in autism and contributing to a deeper understanding of language and music processing more broadly.
Online: https://doi.org/10.1002/aur.70038
Permalink: https://www.cra-rhone-alpes.org/cid/opac_css/index.php?lvl=notice_display&id=558
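The P600 reported here is an event-related potential: EEG epochs time-locked to congruent and incongruent items are baseline-corrected, averaged, and compared in a latency window. The sketch below illustrates that generic recipe only; the single channel, synthetic data, and the 500-800 ms window are assumptions, not the authors' preprocessing or statistics.

```python
# Minimal ERP sketch (not the study's analysis): epoch, baseline-correct,
# average, and take a mean amplitude of the incongruent-minus-congruent
# difference wave in an assumed P600 window (0.5-0.8 s).
import numpy as np

def epoch(eeg, onsets, fs, tmin=-0.2, tmax=1.0):
    """Cut baseline-corrected epochs around stimulus onsets (in samples)."""
    pre, post = int(-tmin * fs), int(tmax * fs)
    eps = []
    for o in onsets:
        if o - pre < 0 or o + post > len(eeg):
            continue
        e = eeg[o - pre:o + post].copy()
        e -= e[:pre].mean()          # subtract the pre-stimulus baseline
        eps.append(e)
    return np.array(eps)

def mean_amplitude(erp, fs, tmin_epoch, win):
    """Average the ERP over a latency window given in seconds."""
    i0 = int((win[0] - tmin_epoch) * fs)
    i1 = int((win[1] - tmin_epoch) * fs)
    return erp[i0:i1].mean()

fs = 256
rng = np.random.default_rng(1)
eeg = rng.standard_normal(fs * 120)            # synthetic single-channel EEG
congruent = rng.integers(fs, fs * 110, 40)     # synthetic trial onsets
incongruent = rng.integers(fs, fs * 110, 40)

erp_diff = epoch(eeg, incongruent, fs).mean(0) - epoch(eeg, congruent, fs).mean(0)
print("P600-window mean amplitude of the difference wave:",
      mean_amplitude(erp_diff, fs, tmin_epoch=-0.2, win=(0.5, 0.8)))
```

A "reduced and delayed" P600, as described in the abstract, would show up here as a smaller window mean and a later positive peak in the difference wave.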
Listening in a noisy world: The impact of acoustic cues and background music on speech perception in autism / Jiayin LI in Autism, 30-1 (January 2026)
[article]
Title: Listening in a noisy world: The impact of acoustic cues and background music on speech perception in autism
Document type: printed text
Authors: Jiayin LI, Author; Maleeha SUJAWAL, Author; Zivile BERNOTAITE, Author; Ian CUNNINGS, Author; Fang LIU, Author
In: Autism > 30-1 (January 2026), p.134-149
Languages: English (eng)
Keywords: acoustic cue; autism; background music; speech-in-noise processing
Abstract: Recognising speech in noise involves focusing on a target speaker while filtering out competing voices and sounds. Acoustic cues, such as vocal characteristics and spatial location, help differentiate between speakers. However, autistic individuals may process these cues differently, making it more challenging for them to perceive speech in such conditions. This study investigated how autistic individuals use acoustic cues to follow a target speaker and whether background music increases processing demands. Thirty-six autistic and 36 non-autistic participants, recruited in the United Kingdom, identified information from a target speaker while ignoring a competing speaker and background music. The competing speaker's gender and location either matched or differed from the target. The autistic group exhibited lower mean accuracy across cue conditions, indicating general challenges in recognising speech in noise. Trial-level analyses revealed that while both groups showed accuracy improvements over time without acoustic cues, the autistic group demonstrated smaller gains, suggesting greater difficulty in tracking the target speaker without distinct acoustic features. Background music did not disproportionately affect autistic participants but had a greater impact on those with stronger local processing tendencies. Using a naturalistic paradigm mimicking real-life scenarios, this study provides insights into speech-in-noise processing in autism, informing strategies to support speech perception in complex environments.
Lay abstract: This study examined how autistic and non-autistic adults understand speech when other voices or music were playing in the background. Participants focused on one main speaker while another voice played simultaneously. Sometimes, the second voice differed from the main one in gender or where the sound was coming from. These differences made it easier to tell the voices apart and understand what the main speaker was saying. Both autistic and non-autistic participants did better when these differences were present. But autistic individuals struggled more when the two voices were the same gender and came from the same location. Background music also made it harder to understand speech for everyone, but it especially affected autistic participants who tended to focus more on small details. These findings help us understand how autistic individuals process speech in noisy environments and could lead to better ways to support communication.
Online: https://dx.doi.org/10.1177/13623613251376484
Permalink: https://www.cra-rhone-alpes.org/cid/opac_css/index.php?lvl=notice_display&id=578
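The "trial-level analyses" mentioned in this abstract amount to modelling per-trial accuracy as a function of group and trial number, where a group-by-trial interaction captures a smaller improvement over trials in one group. The sketch below uses simulated data and a plain logistic regression via statsmodels as an illustration only; the published analysis would include random effects and cue-condition terms that are omitted here.

```python
# Minimal sketch (not the study's statistics): trial-level logistic model of
# accuracy with a group x trial interaction. Data are simulated; effect sizes,
# trial counts, and the absence of random effects are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
rows = []
for group, slope in [("autistic", 0.01), ("non_autistic", 0.03)]:
    for subj in range(36):                    # 36 simulated participants per group
        for trial in range(1, 41):            # 40 simulated trials each
            p = 1 / (1 + np.exp(-(-0.5 + slope * trial)))   # accuracy drifts up over trials
            rows.append({"group": group, "trial": trial,
                         "correct": rng.binomial(1, p)})
df = pd.DataFrame(rows)

# The group:trial coefficient tests whether the improvement over trials differs by group.
model = smf.logit("correct ~ group * trial", data=df).fit(disp=False)
print(model.summary())
```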

