PubMed digest of 28/11/25
1. Sonn JY, Kim W, Iwanaszko M, Aoi Y, Li Y, Qi G, Parkitny L, Brissette JL, Weiner L, Botas J, Al-Ramahi I, Shilatifard A, Zoghbi HY. MeCP2 interacts with the super elongation complex to regulate transcription. Sci Adv. 2025; 11(48): eadt5937.
Loss-of-function mutations in methyl-CpG binding protein 2 (MECP2) cause Rett syndrome. While we know that MeCP2 binds to methylated cytosines on DNA, the full breadth of the molecular mechanisms by which MeCP2 regulates gene expression remains incompletely understood. Here, using a genetic modifier screen, we identify the super elongation complex (SEC), a P-TEFb-containing elongation factor that releases promoter-proximally paused RNA polymerase II, as a genetic interactor of MECP2. MeCP2 physically interacts with SEC subunits and, via its transcriptional repression domain, directly binds AFF4, the scaffold of the SEC. Furthermore, MeCP2 facilitates the binding of AFF4 on a subset of genes regulating synaptic plasticity in the mouse brain and concordantly promotes the binding of RNA polymerase II on these genes. Last, while Aff4 haploinsufficiency alone does not produce behavioral deficits in mice, it exacerbates the impaired contextual learning of Mecp2 hypomorphic mice. We propose a previously unknown mechanism by which MeCP2 regulates the gene expression underlying synaptic plasticity.
Link to full text (Open Access or subscription)
2. Wu Q. New intelligent music therapy method for applications of enhancing social skills of autism children based on TL-GCN and deep learning. Sci Rep. 2025; 15(1): 42364.
To address the long-standing challenges children with autism face in social skills and emotional regulation, this study introduces the Emotion-based Music Intelligent Network (EmoMusik-Net), a deep learning model designed for intelligent music therapy. The model focuses on the emotional impairments exhibited during social interactions, integrating Transformer-based temporal modeling with a Transfer Learning-based Graph Convolutional Network (TL-GCN). This combination enables high-precision recognition of facial expression sequences and supports a dynamically adaptive, closed-loop mechanism for personalized music recommendation. EmoMusik-Net was trained and optimized on three publicly available emotional video datasets. A pre- and post-intervention study, conducted in collaboration with the families of 182 children with autism, used questionnaire-based assessments to systematically evaluate the model's real-world feasibility and effectiveness. Experimental results showed that EmoMusik-Net achieved an emotion recognition accuracy above 0.970, an F1-score consistently over 0.960, and an area under the curve (AUC) of 0.978. The model also proved robust on large-scale datasets, with a stability score of 0.994, indicating strong classification performance and generalizability. In terms of intervention outcomes, boys aged 1-6 showed a marked increase in social interest scores, from 1.280 to 2.540 (a 98.44% improvement), and girls aged 7-12 showed significant gains in emotional response scores, from 1.670 to 3.120 (an 86.77% increase). Further statistical analysis using a mixed-effects model for repeated measures (MMRM) and bootstrap confidence interval estimation confirmed that the intervention was both statistically and clinically significant, with particularly strong effects in younger participants. Blind expert evaluations further validated the system's effectiveness, showing high consistency in rhythm and emotion matching: the intraclass correlation coefficient (ICC) ranged from 0.75 to 0.91, and matching accuracy exceeded 94% in certain subgroups. EmoMusik-Net not only addresses the current research gap in integrating intelligent emotion recognition with music-based interventions but also offers a responsive, technology-driven support tool for parents, educators, and clinicians, with strong potential to move autism spectrum disorder interventions toward personalized, data-driven methodologies.
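Note on the reported effect sizes: the percentage gains are simply relative changes of group means, e.g. (2.540 - 1.280) / 1.280 ≈ 98.4% for the boys aged 1-6. As a rough illustration of the bootstrap confidence-interval step mentioned in the abstract, here is a minimal sketch in Python. This is not the authors' code; the paired scores, sample size, and variances are simulated assumptions chosen only to match the reported group means.

```python
"""Minimal sketch of a pre/post effect estimate with a bootstrap
percentile confidence interval, as described in the abstract.
All data below are simulated, illustrative assumptions."""
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical paired scores for one subgroup (one row per child),
# centered on the abstract's reported means of 1.280 and 2.540.
pre = rng.normal(1.28, 0.30, size=60)
post = rng.normal(2.54, 0.45, size=60)

def pct_improvement(pre, post):
    """Relative change of the group means, as a percentage."""
    return 100.0 * (post.mean() - pre.mean()) / pre.mean()

# Nonparametric bootstrap over children: resample paired observations
# with replacement and recompute the statistic each time.
n = len(pre)
boot = np.array([
    pct_improvement(pre[idx], post[idx])
    for idx in (rng.integers(0, n, size=n) for _ in range(10_000))
])

point = pct_improvement(pre, post)
lo, hi = np.percentile(boot, [2.5, 97.5])  # 95% percentile CI
print(f"improvement: {point:.2f}% (95% bootstrap CI: {lo:.2f}% to {hi:.2f}%)")
```

Resampling children (rather than individual ratings) keeps each child's pre/post pair intact, which is the appropriate unit for repeated-measures data like this; the abstract's MMRM analysis addresses the same within-subject correlation model-based rather than by resampling.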