
LLM-guided multimodal attention network for robust multiclass Parkinson's disease diagnosis.

March 3, 2026

Authors

Zeng T, Ye Y, Ding B, Huang Y, Umar MI, Chipusu K, Huang J

Affiliations (5)

  • Faculty of Mathematics and Computer Science, Quanzhou Normal University, Quanzhou, China.
  • Fujian Provincial Key Laboratory of Data Intensive Computing, Quanzhou, China.
  • Key Laboratory of Intelligent Computing and Information Processing, Fujian Province University, Quanzhou, China.
  • Department of Diagnostic Radiology, Huaqiao University Affiliated Strait Hospital, Quanzhou, Fujian, China.
  • School of Artificial Intelligence and Automation, Huazhong University of Science and Technology, Wuhan, China.

Abstract

Accurate identification of Parkinson's disease (PD), particularly during its prodromal stage, remains a major clinical challenge due to heterogeneous symptom presentation and overlapping neurological patterns. This study proposes an LLM-Guided Multimodal Attention Network (LLM-MAN) to improve PD staging by jointly modeling structural MRI and clinical/cognitive metadata. We develop a unified multimodal framework that encodes structural MRI using a ResNet-18 backbone enhanced with Convolutional Block Attention Modules (CBAM) for discriminative neuroimaging feature extraction, and represents clinical/cognitive metadata using an LLM-based text encoder (pre-trained BERT) for contextualized semantic modeling. A Meta-Guided Cross-Attention (MGCA) module is introduced to align clinical semantic knowledge with imaging features, enabling robust cross-modal fusion for multiclass classification (Normal Control, prodromal PD, and diagnosed PD). The model is evaluated on the Parkinson's Progression Markers Initiative (PPMI) dataset and further validated on an independent external cohort. On the PPMI dataset, LLM-MAN achieved an accuracy of 95.68% for distinguishing Normal Control, prodromal PD, and diagnosed PD. External validation on an independent cohort yielded 94.10% accuracy, indicating strong generalization performance across datasets. LLM-guided multimodal fusion via MGCA provides a reliable and interpretable approach for PD staging, substantially improving prodromal PD identification by integrating semantic clinical knowledge with neuroimaging representations.
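To make the described pipeline concrete, below is a minimal PyTorch sketch of an architecture matching the abstract: a ResNet-18 imaging branch, a pre-trained BERT branch for clinical/cognitive metadata, and a cross-attention fusion step feeding a three-way classifier. This is an illustrative reconstruction, not the authors' released code: the CBAM blocks are omitted for brevity, 2D MRI slices are assumed as input, and all module internals and hyperparameters (`embed_dim`, `num_heads`, the `MetaGuidedCrossAttention` design) are assumptions.

```python
# Minimal sketch of an LLM-MAN-style model, assuming PyTorch, torchvision,
# and Hugging Face transformers. Illustrative only; hyperparameters and
# module internals are assumptions, not the paper's specification.
import torch
import torch.nn as nn
from torchvision.models import resnet18
from transformers import BertModel

class MetaGuidedCrossAttention(nn.Module):
    """Assumed MGCA design: clinical text tokens act as queries that attend
    over MRI feature patches (keys/values), aligning semantic metadata with
    imaging features before fusion."""
    def __init__(self, embed_dim=256, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, text_tokens, image_tokens):
        fused, _ = self.attn(query=text_tokens, key=image_tokens, value=image_tokens)
        return self.norm(fused + text_tokens)  # residual connection

class LLMMAN(nn.Module):
    def __init__(self, embed_dim=256, num_classes=3):  # NC, prodromal PD, PD
        super().__init__()
        # Imaging branch: ResNet-18 backbone; the paper's CBAM blocks (channel
        # + spatial attention around each residual stage) are omitted here.
        cnn = resnet18(weights=None)
        self.image_encoder = nn.Sequential(*list(cnn.children())[:-2])  # keep spatial map
        self.image_proj = nn.Conv2d(512, embed_dim, kernel_size=1)
        # Text branch: pre-trained BERT over serialized clinical metadata.
        self.text_encoder = BertModel.from_pretrained("bert-base-uncased")
        self.text_proj = nn.Linear(self.text_encoder.config.hidden_size, embed_dim)
        self.mgca = MetaGuidedCrossAttention(embed_dim)
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, mri, input_ids, attention_mask):
        # MRI slice (B, 3, H, W) -> (B, N_patches, D) token sequence
        img = self.image_proj(self.image_encoder(mri))
        img_tokens = img.flatten(2).transpose(1, 2)
        # Clinical metadata -> contextualized token embeddings -> (B, T, D)
        txt = self.text_encoder(input_ids=input_ids, attention_mask=attention_mask)
        txt_tokens = self.text_proj(txt.last_hidden_state)
        fused = self.mgca(txt_tokens, img_tokens)
        return self.classifier(fused.mean(dim=1))  # pooled fusion -> 3-way logits

# Illustrative forward pass:
# model = LLMMAN()
# logits = model(torch.randn(2, 3, 224, 224), input_ids, attention_mask)
```

Having the metadata tokens query the imaging patches (rather than the reverse) is one plausible reading of "meta-guided": the clinical semantics steer which neuroimaging regions contribute to the fused representation.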

Topics

Journal Article
