
Dual-Modal Deep Learning with In-Domain Training and Attention for Infant Brain Myelination Prediction.

February 18, 2026

Authors

Sri Harshitha M, G M, Thomas A, Francis B, P Gopi V, Sehrawat A

Affiliations (3)

  • Department of Electronics and Communication Engineering, National Institute of Technology Tiruchirappalli, 620015, Tiruchirappalli, Tamil Nadu, India.
  • Department of Electronics and Communication Engineering, National Institute of Technology Tiruchirappalli, 620015, Tiruchirappalli, Tamil Nadu, India. [email protected].
  • Department of Radiodiagnosis and Imaging, All India Institute of Medical Sciences, Bhopal, Madhya Pradesh, India.

Abstract

Myelin plays a critical role in the central nervous system, and its maturation is essential for understanding brain development. However, assessing myelin progression remains challenging due to variability across age groups. Radiologists typically rely on developmental atlases and age-based milestones, but manual evaluation is time-consuming and prone to inter-observer variability. This paper presents a novel dual-input deep learning framework that leverages both T1- and T2-weighted MRI modalities for automated myelin maturation assessment. Each modality is processed through an in-domain trained DenseNet121 feature extractor, followed by Channel and Multi-Head Attention Blocks to enhance feature prioritization and spatial contextualization. Cross-Attention enables effective inter-modality information exchange, while early fusion via concatenation integrates structural insights from both contrasts. The fused features are refined using Global Average Pooling and passed to a regression-optimized dense layer. Trained on 710 samples and tested on 123 from a publicly available dataset (833 total), the model achieved a Mean Absolute Error (MAE) of 1.18 months, a Pearson Correlation Coefficient (PCC) of 0.98, a Coefficient of Determination (R²) of 0.96, and a Concordance Correlation Coefficient (CCC) of 0.98. Visual interpretability through Grad-CAM revealed the model's focus on clinically meaningful brain regions, with abnormal cases showing heightened activation in peripheral and ventral areas. These findings confirm the model's ability to deliver accurate and interpretable predictions, supporting its potential for real-world diagnostic integration in pediatric neuroimaging.
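The dual-input pipeline described above can be sketched in PyTorch. This is a minimal illustrative skeleton, not the authors' implementation: the small convolutional stack stands in for the in-domain trained DenseNet121 backbones, and the feature dimension, head count, and attention wiring are assumptions chosen to show the flow (per-modality backbone → channel attention → multi-head self-attention → cross-attention → concatenation fusion → global average pooling → regression head).

```python
import torch
import torch.nn as nn


class DualModalMyelinNet(nn.Module):
    """Hedged sketch of the dual-contrast regression pipeline.

    The Conv2d stack is a placeholder for the DenseNet121 feature
    extractors; feat_dim and heads are illustrative, not from the paper.
    """

    def __init__(self, feat_dim: int = 64, heads: int = 4):
        super().__init__()

        def backbone() -> nn.Sequential:
            # Stand-in for an in-domain trained DenseNet121 backbone
            return nn.Sequential(
                nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            )

        self.backbone_t1 = backbone()   # T1-weighted branch
        self.backbone_t2 = backbone()   # T2-weighted branch

        # Channel attention (squeeze-and-excitation style re-weighting)
        self.chan_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(feat_dim, feat_dim), nn.Sigmoid(),
        )
        # Multi-head self-attention and cross-attention over spatial tokens
        self.self_att = nn.MultiheadAttention(feat_dim, heads, batch_first=True)
        self.cross_att = nn.MultiheadAttention(feat_dim, heads, batch_first=True)
        # Regression head applied after fusion + global average pooling
        self.head = nn.Linear(2 * feat_dim, 1)

    def _tokens(self, x: torch.Tensor, backbone: nn.Module) -> torch.Tensor:
        f = backbone(x)                                  # (B, C, H, W)
        w = self.chan_att(f).unsqueeze(-1).unsqueeze(-1)
        f = f * w                                        # channel re-weighting
        return f.flatten(2).transpose(1, 2)              # (B, H*W, C) tokens

    def forward(self, t1: torch.Tensor, t2: torch.Tensor) -> torch.Tensor:
        a = self._tokens(t1, self.backbone_t1)
        b = self._tokens(t2, self.backbone_t2)
        a, _ = self.self_att(a, a, a)
        b, _ = self.self_att(b, b, b)
        # Cross-attention: each modality queries the other contrast
        a2, _ = self.cross_att(a, b, b)
        b2, _ = self.cross_att(b, a, a)
        # Global average pooling over tokens, then early fusion by concat
        fused = torch.cat([a2.mean(dim=1), b2.mean(dim=1)], dim=-1)
        return self.head(fused).squeeze(-1)              # predicted age (months)


net = DualModalMyelinNet()
pred = net(torch.randn(2, 1, 32, 32), torch.randn(2, 1, 32, 32))
```

A single forward pass with a batch of two dummy 32×32 slices per contrast returns one scalar prediction per subject.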
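The four evaluation metrics reported in the abstract (MAE, PCC, R², and Lin's CCC) can be computed as follows; this is a generic NumPy sketch of their standard definitions, not code from the paper.

```python
import numpy as np


def regression_metrics(y_true, y_pred):
    """MAE, Pearson r (PCC), R^2, and Lin's concordance (CCC)
    for a regression target such as age in months."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)

    # Mean Absolute Error
    mae = np.mean(np.abs(y_true - y_pred))

    # Pearson Correlation Coefficient
    pcc = np.corrcoef(y_true, y_pred)[0, 1]

    # Coefficient of Determination (R^2)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot

    # Lin's Concordance Correlation Coefficient: penalizes both
    # poor correlation and systematic bias between predictions and truth
    var_t, var_p = y_true.var(), y_pred.var()
    ccc = (2 * pcc * np.sqrt(var_t * var_p)) / (
        var_t + var_p + (y_true.mean() - y_pred.mean()) ** 2
    )
    return {"MAE": mae, "PCC": pcc, "R2": r2, "CCC": ccc}
```

Unlike PCC, the CCC drops below 1.0 when predictions are well correlated with but systematically offset from the true ages, which is why both are reported.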

Topics

Deep Learning, Brain, Myelin Sheath, Magnetic Resonance Imaging, Attention, Journal Article
