Efficient Deep Learning Models for Predicting Individualized Task Activation from Resting-State Functional Connectivity
Authors
Affiliations
- Stanford University
Abstract
Deep learning has shown promise in predicting task-evoked brain activation patterns from resting-state fMRI. In this study, we replicate the state-of-the-art BrainSurfCNN model using data from the Human Connectome Project and explore biologically motivated extensions aimed at improving prediction performance and computational efficiency. Specifically, we evaluate two model variants: BrainSERF, which integrates a Squeeze-and-Excitation attention mechanism into the U-Net backbone, and BrainSurfGCN, a lightweight graph neural network architecture that leverages the cortical mesh topology for efficient message passing. Both variants achieve prediction performance comparable to BrainSurfCNN, with BrainSERF offering modest improvements in subject identification accuracy and BrainSurfGCN delivering substantial reductions in model size and training time. We also investigate factors contributing to inter-individual variability in prediction accuracy and identify task performance and data quality as significant modulators. Our findings highlight new architectural avenues for improving the scalability of brain decoding models and underscore the need to account for individual variability when evaluating prediction fidelity.
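
For readers unfamiliar with the attention mechanism referenced above, the following is a minimal sketch of a Squeeze-and-Excitation block of the kind BrainSERF adds to the U-Net backbone. It is written in PyTorch for illustration only; the layer sizes, the reduction ratio, and the assumed (batch, channels, vertices) layout of surface features are our assumptions, not the exact implementation used in this work.

```python
import torch
import torch.nn as nn


class SqueezeExcitation(nn.Module):
    """Channel-wise Squeeze-and-Excitation gating (illustrative sketch)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Two-layer bottleneck MLP that produces one gating weight per channel.
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Assumed layout: (batch, channels, vertices) features on the surface mesh.
        squeezed = x.mean(dim=-1)          # "squeeze": global average over vertices
        weights = self.fc(squeezed)        # "excitation": per-channel weights in (0, 1)
        return x * weights.unsqueeze(-1)   # re-scale each channel's feature map
```

In a surface U-Net, such a block would typically be inserted after each convolutional stage so the network can re-weight feature channels before passing them to the next resolution level.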