Skull-stripping induces shortcut learning in MRI-based Alzheimer's disease classification.
Authors
Affiliations (6)
- Department of Neurology, Medical University of Graz, Graz, Austria. [email protected].
- Department of Neurology, Medical University of Graz, Graz, Austria.
- Institute of Biomedical Imaging, Graz University of Technology, Graz, Austria.
- BioTechMed-Graz, Graz, Austria.
- Department of Neurology, Medical University of Graz, Graz, Austria. [email protected].
- BioTechMed-Graz, Graz, Austria. [email protected].
Abstract
High classification accuracy of Alzheimer's disease (AD) from structural MRI has been achieved using deep neural networks, yet the specific image features contributing to these decisions remain unclear. In this study, the contributions of T1-weighted (T1w) gray-white matter texture, volumetric information, and preprocessing, particularly skull-stripping, were systematically assessed. A dataset of 990 matched T1w MRIs from AD patients and cognitively normal controls from the ADNI database was used. Preprocessing was varied through skull-stripping and intensity binarization to isolate texture and shape contributions. A 3D convolutional neural network was trained on each configuration, and classification performance was compared using exact McNemar tests with discrete Bonferroni-Holm correction. Feature relevance was analyzed using Layer-wise Relevance Propagation, image similarity metrics, and spectral clustering of relevance maps. Despite substantial differences in image content, classification accuracy, sensitivity, and specificity remained stable across preprocessing conditions. Models trained on binarized images preserved performance, indicating minimal reliance on gray-white matter texture. Instead, the models consistently relied on volumetric features, particularly the brain contours introduced through skull-stripping. This behavior reflects a shortcut learning phenomenon in which preprocessing artifacts act as unintended cues. The resulting Clever Hans effect underscores the importance of interpretability tools for revealing hidden biases and ensuring robust, trustworthy deep learning in medical imaging.

We investigated the mechanisms underlying deep learning-based disease classification using a widely used Alzheimer's disease dataset. Our findings reveal a reliance on features induced by skull-stripping, highlighting the need for careful preprocessing to ensure clinically relevant and interpretable models.

Highlights
- Shortcut learning is induced by skull-stripping applied to T1-weighted MRIs.
- Explainable deep learning and spectral clustering estimate the bias.
- Understanding the dataset, image preprocessing, and the deep learning model is essential for interpretation and validation.
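As a rough illustration of the comparison described in the abstract, the sketch below (Python, not the authors' code) binarizes a skull-stripped T1w volume and compares paired predictions from models trained on different preprocessing conditions with exact McNemar tests followed by a Holm correction. All names (`binarize_t1w`, `y_true`, `preds`, the condition labels) are hypothetical placeholders, and the standard Holm step from statsmodels stands in for the paper's discrete Bonferroni-Holm variant.

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar
from statsmodels.stats.multitest import multipletests


def binarize_t1w(skull_stripped_volume):
    """Binarize a skull-stripped T1w volume (nonzero brain voxels -> 1),
    discarding gray-white matter texture while keeping brain shape/volume."""
    return (skull_stripped_volume > 0).astype(np.uint8)


def exact_mcnemar_p(y_true, preds_a, preds_b):
    """Exact McNemar p-value for two classifiers evaluated on the same paired test set."""
    correct_a = preds_a == y_true
    correct_b = preds_b == y_true
    # 2x2 table of joint correctness (both correct, only A, only B, neither)
    table = np.array([
        [np.sum(correct_a & correct_b), np.sum(correct_a & ~correct_b)],
        [np.sum(~correct_a & correct_b), np.sum(~correct_a & ~correct_b)],
    ])
    return mcnemar(table, exact=True).pvalue


rng = np.random.default_rng(0)

# Example: turn a (hypothetical) skull-stripped volume into a shape-only volume.
volume = rng.random((8, 8, 8)) * (rng.random((8, 8, 8)) > 0.5)
shape_only = binarize_t1w(volume)

# Hypothetical paired predictions from CNNs trained on three preprocessing conditions.
y_true = rng.integers(0, 2, size=200)
preds = {name: rng.integers(0, 2, size=200)
         for name in ("skull_stripped", "binarized", "raw")}

pairs = [("skull_stripped", "binarized"),
         ("skull_stripped", "raw"),
         ("binarized", "raw")]
pvals = [exact_mcnemar_p(y_true, preds[a], preds[b]) for a, b in pairs]
_, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="holm")
for (a, b), p in zip(pairs, p_adj):
    print(f"{a} vs {b}: Holm-adjusted p = {p:.3f}")
```

In this setup, non-significant adjusted p-values across conditions would mirror the paper's observation that performance is preserved even when gray-white matter texture is removed by binarization.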