
Dynamic multi-scale deep learning with mixture of experts for differentiating iNPH and PSP using MRI.

November 18, 2025

Authors

Sawa F, Fujita D, Shimada K, Aihara H, Uehara T, Koide Y, Kawasaki R, Ishii K, Kobashi S

Affiliations (7)

  • Graduate School of Engineering, University of Hyogo, Kobe, Hyogo, Japan.
  • Neurocognitive Disorders Center, Hyogo Prefectural Harima-Himeji General Medical Center, Himeji, Hyogo, Japan.
  • Department of Neurosurgery, Hyogo Prefectural Harima-Himeji General Medical Center, Himeji, Hyogo, Japan.
  • Department of Neurology, Hyogo Prefectural Harima-Himeji General Medical Center, Himeji, Hyogo, Japan.
  • Department of Radiology, Hyogo Prefectural Harima-Himeji General Medical Center, Himeji, Hyogo, Japan.
  • Department of Radiology, Faculty of Medicine, Kindai University, Osaka, Japan.
  • Graduate School of Engineering, University of Hyogo, Kobe, Hyogo, Japan. [email protected].

Abstract

Distinguishing idiopathic normal pressure hydrocephalus (iNPH) from progressive supranuclear palsy (PSP) presents a clinical challenge due to overlapping clinical symptoms such as gait disturbances and cognitive decline. This study presents a novel multi-scale deep learning framework that integrates global and local magnetic resonance imaging (MRI) features using a mixture of experts (MoE) mechanism, enhancing diagnostic accuracy and minimizing interobserver variability. The proposed framework combines a 3D convolutional neural network (CNN) for capturing global volumetric features with a 2.5D recurrent CNN focusing on disease-specific regions of interest (ROIs), including the lateral ventricles, high convexity sulci, midbrain, and Sylvian fissures. The MoE mechanism dynamically weights global and local features, optimizing the classification process. Model performance was assessed using stratified fivefold cross-validation on T1-weighted MRI from 118 patients (53 iNPH, 65 PSP) to ensure balanced class distributions across training folds. The MoE model using ResNet-34 achieved an accuracy of 0.983 (95% CI 0.875-1.000), a recall of 0.985 (95% CI 0.750-1.000), a precision of 0.986 (95% CI 0.769-1.000), and an area under the curve (AUC) of 1.000 (95% CI 1.000-1.000), outperforming traditional morphological markers and single-branch deep learning models. The MoE mechanism allowed adaptive weighting of global and local features, contributing to both improved robustness and interpretability. Grad-CAM visualizations highlighted disease-specific regions, demonstrating that the model focused on relevant features in both successful and failed classifications by the 3D CNN expert for iNPH and PSP. The dynamic integration of global and local MRI features through the MoE framework offers a powerful, robust, and interpretable tool for differentiating iNPH from PSP. This approach reduces reliance on subjective visual assessments and has the potential for broader clinical application through dataset expansion and multicenter validation.
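The core idea of the MoE fusion described above is that a gating function assigns softmax weights to the two experts (the global 3D CNN and the local 2.5D recurrent CNN) and sums their weighted outputs. The paper does not publish its implementation, so the following is only a minimal illustrative sketch of softmax-gated expert fusion; the scores, gate logits, and function names are hypothetical and not taken from the study.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of gate logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def moe_combine(expert_outputs, gate_logits):
    """Fuse per-expert class-score vectors into one prediction by
    weighting each expert with its softmax gate weight and summing."""
    weights = softmax(gate_logits)
    n_classes = len(expert_outputs[0])
    fused = [0.0] * n_classes
    for w, scores in zip(weights, expert_outputs):
        for i, s in enumerate(scores):
            fused[i] += w * s
    return weights, fused

# Hypothetical class probabilities (iNPH, PSP) from the two experts:
global_scores = [0.2, 0.8]   # e.g. 3D CNN over the whole volume
local_scores = [0.9, 0.1]    # e.g. 2.5D recurrent CNN over the ROIs
# In the paper the gate is learned per input; fixed logits here for illustration.
weights, fused = moe_combine([global_scores, local_scores], [0.5, 1.5])
```

Because the gate logits are produced per input in a dynamic MoE, samples where the global volumetric cues dominate get more weight on the 3D expert, while ROI-driven cases shift weight to the local expert.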

Topics

Journal Article
