
An interpretable machine learning framework with data-informed imaging biomarkers for diagnosis and prediction of Alzheimer's disease.

February 6, 2026

Authors

Kang W, Li B, Jiskoot LC, De Deyn PP, Biessels GJ, Koek HL, Claassen JAHR, Middelkoop HAM, van der Flier WM, Jansen WJ, Klein S, Bron EE

Affiliations (10)

  • Biomedical Imaging Group Rotterdam, Department of Radiology & Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands. Electronic address: [email protected].
  • Biomedical Imaging Group Rotterdam, Department of Radiology & Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands; Harvard Medical School, Boston, MA, USA.
  • Department of Neurology, Erasmus MC, Rotterdam, The Netherlands.
  • Department of Neurology & Alzheimer Center, University Medical Center Groningen, Groningen, The Netherlands.
  • Department of Geriatric Medicine, University Medical Center Utrecht, Utrecht, The Netherlands.
  • Radboud University Medical Center, Nijmegen, The Netherlands.
  • Department of Neurology & Neuropsychology, Leiden University Medical Center, Leiden, The Netherlands; Institute of Psychology, Health, Medical and Neuropsychology Unit, Leiden University, The Netherlands.
  • Amsterdam University Medical Center, location VUmc, Amsterdam, The Netherlands.
  • Alzheimer Center Limburg, School for Mental Health and Neuroscience (MHeNS), Maastricht University Medical Center, Maastricht, The Netherlands.
  • Biomedical Imaging Group Rotterdam, Department of Radiology & Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands.

Abstract

Machine learning methods based on imaging and other clinical data have shown great potential for improving the early and accurate diagnosis of Alzheimer's disease (AD). However, for most deep learning models, especially those using high-dimensional imaging data, the decision-making process remains largely opaque, which limits clinical applicability. Explainable Boosting Machines (EBMs) are inherently interpretable machine learning models, but are typically applied only to low-dimensional data. In this study, we propose an interpretable machine learning framework that integrates data-driven feature extraction based on Convolutional Neural Networks (CNNs) with the intrinsic transparency of EBMs for AD diagnosis and prediction. The framework enables interpretation at both the group and individual levels by identifying the imaging biomarkers that contribute to predictions. We validated the framework on the Alzheimer's Disease Neuroimaging Initiative (ADNI) cohort, achieving an area under the curve (AUC) of 0.969 for AD vs. control classification and 0.750 for predicting conversion of mild cognitive impairment (MCI). External validation on an independent cohort yielded AUCs of 0.871 for AD vs. subjective cognitive decline (SCD) classification and 0.666 for MCI conversion prediction. The proposed framework achieves performance comparable to state-of-the-art black-box models while offering transparent decision-making, a critical requirement for clinical translation. Our code is available at: https://gitlab.com/radiology/neuro/interpretable_ad_classification.
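To make the abstract's core idea concrete, here is a minimal, self-contained sketch of an EBM-style classifier: an additive model with one binned shape function per feature, fitted by cyclic gradient boosting, so every prediction decomposes into per-feature contributions (the individual-level explanations the paper refers to). This is not the authors' implementation (which lives at the GitLab link above and uses CNN-derived imaging features); the data here is synthetic, and the feature names are purely hypothetical stand-ins.

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class TinyEBM:
    """EBM-style additive classifier: one binned shape function per
    feature, fitted by cyclic gradient boosting on the logistic loss."""

    def __init__(self, n_bins=8, lr=0.5, rounds=200):
        self.n_bins, self.lr, self.rounds = n_bins, lr, rounds

    def _bin(self, x, j):
        lo, hi = self.lims[j]
        if hi == lo:
            return 0
        b = int((x - lo) / (hi - lo) * self.n_bins)
        return min(max(b, 0), self.n_bins - 1)

    def fit(self, X, y):
        n, d = len(X), len(X[0])
        self.lims = [(min(r[j] for r in X), max(r[j] for r in X)) for j in range(d)]
        self.scores = [[0.0] * self.n_bins for _ in range(d)]
        base = sum(y) / n
        self.intercept = math.log(base / (1 - base))
        for _ in range(self.rounds):
            for j in range(d):  # cycle over features, one small update each
                p = [sigmoid(self._logit(r)) for r in X]
                grad_sum = [0.0] * self.n_bins
                cnt = [0] * self.n_bins
                for i, r in enumerate(X):
                    b = self._bin(r[j], j)
                    grad_sum[b] += y[i] - p[i]  # gradient of log-likelihood
                    cnt[b] += 1
                for b in range(self.n_bins):
                    if cnt[b]:
                        self.scores[j][b] += self.lr * grad_sum[b] / cnt[b]
        return self

    def _logit(self, row):
        return self.intercept + sum(self.scores[j][self._bin(x, j)]
                                    for j, x in enumerate(row))

    def explain(self, row):
        """Individual-level explanation: additive contribution per feature."""
        return [self.scores[j][self._bin(x, j)] for j, x in enumerate(row)]

    def predict_proba(self, row):
        return sigmoid(self._logit(row))

# Synthetic stand-ins for CNN-derived imaging features (hypothetical):
# feature 0 mimics an atrophy-like score shifted between groups,
# feature 1 is pure noise.
random.seed(0)
X, y = [], []
for _ in range(400):
    label = random.random() < 0.5
    vol = random.gauss(-1.0 if label else 1.0, 0.7)
    X.append([vol, random.gauss(0, 1)])
    y.append(1 if label else 0)

ebm = TinyEBM().fit(X, y)
acc = sum((ebm.predict_proba(r) > 0.5) == bool(t) for r, t in zip(X, y)) / len(X)
```

Because the model is purely additive, `explain(row)` recovers exactly the terms that sum (with the intercept) to the logit of the prediction, which is what makes EBM-style models transparent by construction rather than post hoc.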

Topics

Journal Article
