Gaining Brain Insights by Tapping into the Black Box: Linking Structural MRI Features to Age and Cognition using Shapley-Based Interpretation Methods.
Authors
Affiliations (2)
- Center for Lifespan Changes in Brain and Cognition, Department of Psychology, University of Oslo, Oslo, Norway. [email protected].
- Center for Lifespan Changes in Brain and Cognition, Department of Psychology, University of Oslo, Oslo, Norway.
Abstract
Global interpretability in machine learning holds great potential for extracting meaningful insights from neuroimaging data and improving our understanding of brain function. Although various approaches exist for identifying key contributing features at both local and global levels, the high dimensionality and strong correlations of neuroimaging data require careful selection of interpretability methods to obtain reliable global insights. In this study, we evaluate several interpretability techniques: SHAP, which assumes feature independence; recent extensions that account for feature dependence when deriving global explanations; and inherently global methods such as SAGE. To demonstrate their practical application, we trained XGBoost models to predict age and fluid intelligence from neuroimaging measures in the UK Biobank dataset. Applying these interpretability methods, we found that mean intensities in subcortical regions are consistently and significantly associated with brain aging, whereas the prediction of fluid intelligence is driven by contributions from the hippocampus and the cerebellum, alongside established regions in the frontal and temporal lobes. These results underscore the value of interpretable machine learning methods for understanding brain function through a data-driven approach.
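For concreteness, the following is a minimal sketch, not the authors' code, of the kind of pipeline the abstract describes: an XGBoost regressor fit to tabular brain-MRI features, explained globally by aggregating per-subject SHAP attributions (mean |SHAP|) and by computing SAGE values. The data are synthetic stand-ins for UK Biobank measures, and all hyperparameters and sample sizes are illustrative assumptions; it uses the shap, sage-importance, and xgboost packages.

```python
# Illustrative sketch with synthetic data; hyperparameters are hypothetical,
# not taken from the study.
import numpy as np
import shap                     # pip install shap
import sage                     # pip install sage-importance
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
n_subjects, n_features = 1000, 20
X = rng.normal(size=(n_subjects, n_features))               # stand-in MRI features
y = 2.0 * X[:, 0] + X[:, 1] + rng.normal(size=n_subjects)  # stand-in age target

model = XGBRegressor(n_estimators=200, max_depth=4).fit(X, y)

# --- SHAP: local attributions aggregated to a global ranking ---
# feature_perturbation="interventional" corresponds to the feature-independence
# assumption discussed in the abstract; "tree_path_dependent" instead conditions
# on the tree structure and partially respects feature dependence.
explainer = shap.TreeExplainer(model, X[:128], feature_perturbation="interventional")
shap_values = explainer.shap_values(X)          # shape: (n_subjects, n_features)
global_shap = np.abs(shap_values).mean(axis=0)  # mean |SHAP| per feature

# --- SAGE: an inherently global, loss-based importance measure ---
imputer = sage.MarginalImputer(model, X[:128])          # background data for marginalizing features
estimator = sage.PermutationEstimator(imputer, "mse")   # importance w.r.t. predictive loss
sage_values = estimator(X, y).values                    # one value per feature

# Compare the two global rankings for the top features.
for i in np.argsort(global_shap)[::-1][:5]:
    print(f"feature {i}: mean|SHAP|={global_shap[i]:.3f}, SAGE={sage_values[i]:.3f}")
```

Note the conceptual difference the sketch makes visible: mean |SHAP| is a post hoc aggregation of local attributions, while SAGE directly quantifies each feature's contribution to the model's predictive loss, which is why the two rankings need not coincide on correlated features.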