Reducing annotation burden in medical imaging with ADGNET: A semi-supervised deep learning strategy.
Affiliations (1)
- Department of Information Science and Technology, Zhejiang Shuren University, Hangzhou, Zhejiang, P. R. China.
Abstract
We propose ADGNET, a semi-supervised framework for Alzheimer's disease (AD) diagnosis that jointly optimizes image reconstruction and classification through shared feature representations. The architecture integrates a residual backbone with attention modulation for dynamic feature selection, an encoder-decoder reconstruction branch for unsupervised representation learning, and a classification branch with focal loss to address class imbalance. This dual-task design enables effective feature learning from limited annotations. On two public MRI datasets, KACD (2D, 6,400 images) and ROAD (3D, 532 scans), ADGNET achieves average performance improvements of 4.1% and 7.2% over state-of-the-art methods (ResNeXt WSL, SimCLR) across six metrics. Interpretability analysis using Grad-CAM and attention visualization confirms that the model focuses on clinically relevant neuroanatomical structures, particularly the hippocampus and temporal lobes, with strong correlation to established AD pathology (r = 0.67, p < 0.001). These results validate the model's generalization capability and feature representation effectiveness across multimodal medical imaging data, offering an efficient solution for few-shot medical image analysis.
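The abstract's dual-task objective (classification with focal loss plus a reconstruction term) can be illustrated with a minimal NumPy sketch. This is an illustrative reconstruction of the standard binary focal loss and a generic weighted joint objective, not the paper's exact formulation; the function names, the weighting parameter `lam`, and the default `alpha`/`gamma` values are assumptions, since the abstract does not specify them.

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss: FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t).

    p: predicted probability of the positive (AD) class; y: label in {0, 1}.
    The (1 - p_t)^gamma factor down-weights easy, well-classified examples
    so training concentrates on hard or minority-class cases.
    alpha/gamma defaults follow common practice, not the paper.
    """
    p_t = np.where(y == 1, p, 1.0 - p)          # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)

def dual_task_loss(cls_loss, rec_loss, lam=1.0):
    """Joint objective: classification loss + lam * reconstruction loss.

    lam (the branch-weighting coefficient) is a hypothetical hyperparameter;
    the abstract only states that the two tasks are optimized jointly.
    """
    return cls_loss + lam * rec_loss
```

Note how the focusing term behaves: a confident correct prediction (p = 0.9 for a positive example) contributes a far smaller loss than an uncertain one (p = 0.5), which is the mechanism that lets training emphasize the rare AD-positive scans.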