[article]
| Title: |
Advanced ASD detection through facial and fMRI data integration with attention guidance |
| Document type: |
printed text |
| Authors: |
B. MAGESH KUMAR, Author; K. PREMALATHA, Author; S. JOTHIMANI, Author |
| Article pages: |
p.202766 |
| Languages: |
English (eng) |
| Keywords: |
Autism Spectrum Disorder; Multimodal Fusion; fMRI; Facial Images; Attention Mechanism; Deep Learning; Neuroimaging |
| Decimal index: |
PER Periodicals |
| Abstract: |
Diagnosing Autism Spectrum Disorder (ASD) quickly and reliably has long challenged clinicians, because the condition arises from complex brain development and its diagnosis depends almost entirely on behavioural signs. To address this problem, we present a hybrid deep-learning system that pairs facial-image analysis with resting-state fMRI scans to detect ASD more precisely. Each input type passes through its own pre-processing chain, mitigating the noise, misalignment, and inter-subject variability that can cloud analysis. Facial images are aligned using keypoint landmarks and contrast enhancement, while fMRI volumes undergo motion correction, Gaussian smoothing, and ICA-AROMA-based artifact removal. Discriminative features are then extracted from the two channels by separate convolutional networks and integrated by an attention-driven fusion layer that learns to highlight the most informative regions. As a result, the final multimodal classifier can use complementary facial and neural cues to draw a sharper boundary between typical and atypical development. Experimental evaluation using stratified 5-fold cross-validation demonstrated that the proposed multimodal framework achieved a test accuracy of 95.24 %, precision of 95.00 %, recall of 95.50 %, and F1-score of 95.25 %, outperforming the unimodal baseline. Visualization of intermediate feature maps confirmed that the model focuses on salient regions in both the facial and neurofunctional modalities. These results highlight the potential of the proposed multimodal approach as a reliable and interpretable diagnostic tool for clinical detection of ASD. |
| Online: |
https://doi.org/10.1016/j.reia.2025.202766 |
| Permalink: |
https://www.cra-rhone-alpes.org/cid/opac_css/index.php?lvl=notice_display&id=579 |
in Research in Autism > 130 (February 2026) . - p.202766
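The attention-driven fusion layer described in the abstract can be sketched roughly as follows. This is a minimal NumPy illustration, not the authors' implementation: the feature vectors and the scoring weights `w_face` and `w_fmri` are hypothetical stand-ins for the CNN outputs and learned attention parameters.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_fusion(face_feat, fmri_feat, w_face, w_fmri):
    """Fuse two modality feature vectors with attention weights.

    Scores each modality against its weight vector, normalises the
    scores with softmax, and returns the attention-weighted
    concatenation that a downstream classifier would consume.
    """
    scores = np.array([face_feat @ w_face, fmri_feat @ w_fmri])
    alpha = softmax(scores)  # attention weights, one per modality
    return np.concatenate([alpha[0] * face_feat, alpha[1] * fmri_feat])

# Toy example with random 8-dimensional features in place of CNN outputs.
rng = np.random.default_rng(0)
face = rng.standard_normal(8)
fmri = rng.standard_normal(8)
fused = attention_fusion(face, fmri,
                         rng.standard_normal(8), rng.standard_normal(8))
print(fused.shape)  # (16,)
```

In the paper's framework the weights would be trained jointly with the two convolutional branches, so that the softmax gate learns to emphasise whichever modality carries the more informative cues for a given subject.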