Page 20 of 160 · 1593 results

Invisible Attributes, Visible Biases: Exploring Demographic Shortcuts in MRI-based Alzheimer's Disease Classification

Akshit Achara, Esther Puyol Anton, Alexander Hammers, Andrew P. King

arXiv preprint · Sep 11 2025
Magnetic resonance imaging (MRI) is the gold standard for brain imaging. Deep learning (DL) algorithms have been proposed to aid in the diagnosis of diseases such as Alzheimer's disease (AD) from MRI scans. However, DL algorithms can suffer from shortcut learning, in which spurious features, not directly related to the output label, are used for prediction. When these features are related to protected attributes, they can lead to performance bias against underrepresented protected groups, such as those defined by race and sex. In this work, we explore the potential for shortcut learning and demographic bias in DL-based AD diagnosis from MRI. We first investigate whether DL algorithms can identify race or sex from 3D brain MRI scans, to establish the presence or absence of race- and sex-based distributional shifts. Next, we investigate whether training set imbalance by race or sex can cause a drop in model performance, indicating shortcut learning and bias. Finally, we conduct a quantitative and qualitative analysis of feature attributions in different brain regions for both the protected attribute and AD classification tasks. Through these experiments, and using multiple datasets and DL models (ResNet and SwinTransformer), we demonstrate the existence of both race- and sex-based shortcut learning and bias in DL-based AD classification. Our work lays the foundation for fairer DL diagnostic tools in brain MRI. The code is provided at https://github.com/acharaakshit/ShortMR
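At the metric level, the bias analysis described above reduces to comparing model performance across protected subgroups. A minimal, purely illustrative sketch of a per-group accuracy gap (hypothetical labels and group assignments, not the paper's code):

```python
# Illustrative only: quantify performance bias as the spread of
# per-group accuracy across protected subgroups (made-up data).

def subgroup_accuracy_gap(y_true, y_pred, groups):
    """Return per-group accuracy and the max-min gap across groups."""
    per_group = {}
    for g in set(groups):
        idx = [i for i, gi in enumerate(groups) if gi == g]
        correct = sum(y_true[i] == y_pred[i] for i in idx)
        per_group[g] = correct / len(idx)
    gap = max(per_group.values()) - min(per_group.values())
    return per_group, gap

# Hypothetical binary AD labels, predictions, and subgroup tags.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
acc, gap = subgroup_accuracy_gap(y_true, y_pred, groups)
```

A large gap for an otherwise accurate model is the kind of signal that motivates the shortcut-learning experiments in the paper.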

Resource-Efficient Glioma Segmentation on Sub-Saharan MRI

Freedmore Sidume, Oumayma Soula, Joseph Muthui Wacira, YunFei Zhu, Abbas Rabiu Muhammad, Abderrazek Zeraii, Oluwaseun Kalejaye, Hajer Ibrahim, Olfa Gaddour, Brain Halubanza, Dong Zhang, Udunna C Anazodo, Confidence Raymond

arXiv preprint · Sep 11 2025
Gliomas are the most prevalent type of primary brain tumors, and their accurate segmentation from MRI is critical for diagnosis, treatment planning, and longitudinal monitoring. However, the scarcity of high-quality annotated imaging data in Sub-Saharan Africa (SSA) poses a significant challenge for deploying advanced segmentation models in clinical workflows. This study introduces a robust and computationally efficient deep learning framework tailored for resource-constrained settings. We leveraged a 3D Attention UNet architecture augmented with residual blocks and enhanced through transfer learning from pre-trained weights on the BraTS 2021 dataset. Our model was evaluated on 95 MRI cases from the BraTS-Africa dataset, a benchmark for glioma segmentation in SSA MRI data. Despite the limited data quality and quantity, our approach achieved Dice scores of 0.76 for the Enhancing Tumor (ET), 0.80 for Necrotic and Non-Enhancing Tumor Core (NETC), and 0.85 for Surrounding Non-Functional Hemisphere (SNFH). These results demonstrate the generalizability of the proposed model and its potential to support clinical decision making in low-resource settings. The compact architecture, approximately 90 MB, and sub-minute per-volume inference time on consumer-grade hardware further underscore its practicality for deployment in SSA health systems. This work contributes toward closing the gap in equitable AI for global health by empowering underserved regions with high-performing and accessible medical imaging solutions.
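The Dice scores reported above measure overlap between predicted and reference tumor masks. A minimal NumPy sketch of the metric on toy binary masks (not from the authors' pipeline):

```python
import numpy as np

def dice_score(pred, target, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2*|A ∩ B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# Toy 2D masks: prediction covers 4 voxels, reference covers 6, overlap 4.
pred = np.zeros((4, 4), dtype=int)
pred[1:3, 1:3] = 1
target = np.zeros((4, 4), dtype=int)
target[1:3, 1:4] = 1
score = dice_score(pred, target)  # 2*4 / (4+6) = 0.8
```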

Training With Local Data Remains Important for Deep Learning MRI Prostate Cancer Detection.

Carere SG, Jewell J, Nasute Fauerbach PV, Emerson DB, Finelli A, Ghai S, Haider MA

PubMed paper · Sep 11 2025
Domain shift has been shown to have a major detrimental effect on AI model performance; however, prior studies of domain shift in MRI prostate cancer segmentation have been limited to small or heterogeneous cohorts. Our objective was to assess whether prostate cancer segmentation models trained on local MRI data continue to outperform those trained on external data when cohorts exceed 1000 exams. We simulated a multi-institutional consortium using the public PICAI dataset (PICAI-TRAIN: <i>1241 exams</i>, PICAI-TEST: <i>259</i>) and a local dataset (LOCAL-TRAIN: <i>1400 exams</i>, LOCAL-TEST: <i>308</i>). IRB approval was obtained and consent waived. We compared nnUNet-v2 models trained on the combined data (CENTRAL-TRAIN) and separately on PICAI-TRAIN and LOCAL-TRAIN. Accuracy was evaluated using the open-source PICAI Score on LOCAL-TEST. Significance was tested using bootstrapping. Just 22% (309/1400) of LOCAL-TRAIN exams would be sufficient to match the performance of a model trained on PICAI-TRAIN. The CENTRAL-TRAIN performance was similar to LOCAL-TRAIN performance, with PICAI Scores [95% CI] of 65 [58-71] and 66 [60-72], respectively. Both of these models exceeded the model trained on PICAI-TRAIN alone, which had a score of 58 [51-64] (<i>P</i> < .002). Reducing training set size did not alter these relative trends. Domain shift limits MRI prostate cancer segmentation performance even when training with over 1000 exams from 3 external institutions. Use of local data is paramount at these scales.
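The abstract notes that significance was tested with bootstrapping. A simple percentile-bootstrap sketch for the mean paired difference between two models' per-exam correctness (synthetic scores, not the study's data or its exact procedure):

```python
import numpy as np

def bootstrap_ci(scores_a, scores_b, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI for the mean paired difference (A - B)."""
    rng = np.random.default_rng(seed)
    diffs = np.asarray(scores_a, float) - np.asarray(scores_b, float)
    n = len(diffs)
    boots = np.empty(n_boot)
    for i in range(n_boot):
        resample = diffs[rng.integers(0, n, n)]  # sample n indices with replacement
        boots[i] = resample.mean()
    lo, hi = np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

# Hypothetical per-exam correctness for two models (1 = correct).
scores_a = [1] * 80 + [0] * 20   # model A: 80% correct
scores_b = [1] * 55 + [0] * 45   # model B: 55% correct
lo, hi = bootstrap_ci(scores_a, scores_b)
```

A confidence interval excluding zero indicates a significant difference, analogous to the <i>P</i> < .002 comparison reported above.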

A Gabor-enhanced deep learning approach with dual-attention for 3D MRI brain tumor segmentation.

Chamseddine E, Tlig L, Chaari L, Sayadi M

PubMed paper · Sep 11 2025
Robust 3D brain tumor MRI segmentation is critical for diagnosis and treatment. However, tumor heterogeneity, irregular shapes, and complex textures make the task challenging. Deep learning has transformed medical image analysis by extracting features directly from the data, greatly enhancing segmentation accuracy. The capability of deep models can be complemented by adding modules such as texture-sensitive customized convolution layers and attention mechanisms. These components allow the model to focus on pertinent locations and boundary definition problems. In this paper, a texture-aware deep learning method is proposed that improves the U-Net structure by adding a trainable Gabor convolution layer at the input for rich textural feature capture. These features are fused in parallel with standard convolutional outputs to better represent tumors. The model also employs dual attention modules: Squeeze-and-Excitation blocks in the encoder for dynamically recalibrating channel-wise features, and Attention Gates that strengthen skip connections by suppressing trivial areas and weighting tumor areas. The behavior of each module is explored through explainable artificial intelligence methods to ensure interpretability. To address class imbalance, a weighted combined loss function is applied. The model achieves Dice coefficients of 91.62%, 89.92%, and 88.86% for whole tumor, tumor core, and enhancing tumor, respectively, on the BraTS2021 dataset. Large-scale quantitative and qualitative evaluations on BraTS2021, validated on BraTS benchmarks, demonstrate the accuracy and robustness of the proposed model. The results are superior to the benchmark U-Net and other state-of-the-art segmentation methods, offering a robust and interpretable solution for clinical use.
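A Gabor convolution layer initializes its kernels as Gaussian-modulated sinusoids, which respond strongly to oriented texture. A minimal NumPy sketch of one such 2D kernel, using the standard Gabor parameterization (illustrative; the paper's layer is 3D and trainable):

```python
import numpy as np

def gabor_kernel(size=7, sigma=2.0, theta=0.0, lam=4.0, psi=0.0, gamma=0.5):
    """2D Gabor kernel: a Gaussian envelope times a cosine carrier.
    theta = orientation, lam = wavelength, gamma = spatial aspect ratio."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates by theta.
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + (gamma * yr) ** 2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * xr / lam + psi)
    return envelope * carrier

k = gabor_kernel()  # 7x7 kernel; the center tap equals exp(0)*cos(0) = 1
```

In a trainable layer, sigma, theta, lam, and the other parameters become learned weights rather than fixed constants.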

Neurodevelopmental deviations in schizophrenia: Evidence from multimodal connectome-based brain ages.

Fan YS, Yang P, Zhu Y, Jing W, Xu Y, Xu Y, Guo J, Lu F, Yang M, Huang W, Chen H

PubMed paper · Sep 11 2025
Pathologic schizophrenia processes originate early in brain development, leading to detectable brain alterations via structural and functional magnetic resonance imaging (MRI). Recent MRI studies have sought to characterize disease effects from a brain age perspective, but developmental deviations from the typical brain age trajectory in youths with schizophrenia remain unestablished. This study investigated brain development deviations in early-onset schizophrenia (EOS) patients by applying machine learning algorithms to structural and functional MRI data. Multimodal MRI data, including T1-weighted MRI (T1w-MRI), diffusion MRI, and resting-state functional MRI (rs-fMRI) data, were collected from 80 antipsychotic-naive first-episode EOS patients and 91 typically developing (TD) controls. The morphometric similarity connectome (MSC), structural connectome (SC), and functional connectome (FC) were separately constructed by using these three modalities. According to these connectivity features, eight brain age estimation models were first trained with the TD group, the best of which was then used to predict brain ages in patients. Individual brain age gaps were assessed as brain ages minus chronological ages. Both the SC and MSC features performed well in brain age estimation, whereas the FC features did not. Compared with the TD controls, the EOS patients showed increased absolute brain age gaps when using the SC or MSC features, with opposite trends between childhood and adolescence. These increased brain age gaps for EOS patients were positively correlated with the severity of their clinical symptoms. These findings from a multimodal brain age perspective suggest that advanced brain age gaps exist early in youths with schizophrenia.
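The brain age gap used above is simply the predicted brain age minus the chronological age, with the absolute gap measuring deviation magnitude in either direction. A trivial sketch with made-up ages:

```python
import numpy as np

def brain_age_gaps(predicted_ages, chronological_ages):
    """Signed and absolute brain age gaps (predicted minus chronological).
    Positive signed gaps indicate an 'older-appearing' brain."""
    pred = np.asarray(predicted_ages, float)
    chron = np.asarray(chronological_ages, float)
    gap = pred - chron
    return gap, np.abs(gap)

# Hypothetical ages for three subjects.
gap, abs_gap = brain_age_gaps([15.2, 13.1, 17.9], [14.0, 14.0, 16.0])
```

The study's group comparison then contrasts these gaps between patients and typically developing controls and correlates them with symptom severity.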

Attention Gated-VGG with deep learning-based features for Alzheimer's disease classification.

Moorthy DK, Nagaraj P

PubMed paper · Sep 10 2025
Alzheimer's disease (AD) is a neurodegenerative disease associated with cognitive deficits and dementia. Early detection of AD should therefore be a high priority. Here, images undergo a pre-processing phase that integrates image resizing and the application of median filters. The processed images are then subjected to data augmentation. Features from a WOA-based ResNet, together with convolutional neural network (CNN) features extracted from the pre-processed images, are used to train the proposed DL model to classify AD. Classification is performed by the proposed Attention Gated-VGG model. The proposed method outperformed standard methodologies when tested, achieving an accuracy of 96.7%, sensitivity of 97.8%, and specificity of 96.3%. These results show that the Attention Gated-VGG model is a promising technique for classifying AD.
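The pre-processing stage combines resizing with median filtering, which suppresses impulse noise while preserving edges. A naive 2D median filter sketch (illustrative only; a real pipeline would use an optimized library routine and operate on 3D volumes):

```python
import numpy as np

def median_filter2d(img, k=3):
    """Naive k x k median filter with edge replication at the borders."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

# An isolated "salt" spike is removed: its 3x3 window median is 0.
noisy = np.zeros((5, 5))
noisy[2, 2] = 100.0
clean = median_filter2d(noisy)
```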

3D-CNN Enhanced Multiscale Progressive Vision Transformer for AD Diagnosis.

Huang F, Chen N, Qiu A

PubMed paper · Sep 10 2025
Vision Transformer (ViT) applied to structural magnetic resonance images has demonstrated success in the diagnosis of Alzheimer's disease (AD) and mild cognitive impairment (MCI). However, three key challenges have yet to be well addressed: 1) ViT requires a large labeled dataset to mitigate overfitting, while most current AD-related sMRI datasets fall short in sample size. 2) ViT neglects within-patch feature learning, e.g., local brain atrophy, which is crucial for AD diagnosis. 3) While ViT can better capture local features by reducing the patch size and increasing the number of patches, its computational complexity increases quadratically with the number of patches, incurring prohibitive overhead. To this end, this paper proposes a 3D-convolutional neural network (CNN) Enhanced Multiscale Progressive ViT (3D-CNN-MPVT). First, a 3D-CNN is pre-trained on sMRI data to extract detailed local image features and alleviate overfitting. Second, an MPVT module is proposed with an inner CNN module to explicitly characterize the within-patch interactions that are conducive to AD diagnosis. Third, a stitch operation is proposed to merge cross-patch features and progressively reduce the number of patches. The inner CNN alongside the stitch operation in the MPVT module enhances local feature characterization while mitigating computational costs. Evaluations using the Alzheimer's Disease Neuroimaging Initiative dataset with 6610 scans and the Open Access Series of Imaging Studies-3 with 1866 scans demonstrated its superior performance. With minimal preprocessing, our approach achieved 90% accuracy in AD classification and 80% in MCI conversion prediction, surpassing recent baselines.
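Challenge 3 rests on the self-attention score matrix costing on the order of n² · d operations for n patches of dimension d, so halving the patch count at each stitch stage cuts attention cost roughly fourfold. A back-of-the-envelope sketch (hypothetical patch counts and embedding dimension, not the paper's configuration):

```python
def attention_flops(num_patches, dim):
    """Rough cost of one self-attention layer's score matrix:
    O(n^2 * d) multiply-adds for n patches of dimension d."""
    return num_patches ** 2 * dim

# Hypothetical numbers: 512 patches before a stitch stage, 256 after.
full = attention_flops(512, 64)
merged = attention_flops(256, 64)
ratio = full / merged  # quadratic scaling: (512/256)^2 = 4x saving
```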

An Explainable Deep Learning Model for Focal Liver Lesion Diagnosis Using Multiparametric MRI.

Shen Z, Chen L, Wang L, Dong S, Wang F, Pan Y, Zhou J, Wang Y, Xu X, Chong H, Lin H, Li W, Li R, Ma H, Ma J, Yu Y, Du L, Wang X, Zhang S, Yan F

PubMed paper · Sep 10 2025
<i>"Just Accepted" papers have undergone full peer review and have been accepted for publication in <i>Radiology: Artificial Intelligence</i>. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content.</i> Purpose To assess the effectiveness of an explainable deep learning (DL) model, developed using multiparametric MRI (mpMRI) features, in improving diagnostic accuracy and efficiency of radiologists for classification of focal liver lesions (FLLs). Materials and Methods FLLs ≥ 1 cm in diameter at mpMRI were included in the study. nnU-Net and Liver Imaging Feature Transformer (LIFT) models were developed using retrospective data from one hospital (January 2018-August 2023). nnU-Net was used for lesion segmentation and LIFT for FLL classification. External testing was performed on data from three hospitals (January 2018-December 2023), with a prospective test set obtained from January 2024 to April 2024. Model performance was compared with radiologists and the impact of model assistance on junior and senior radiologist performance was assessed. Evaluation metrics included the Dice similarity coefficient (DSC) and accuracy. Results A total of 2131 individuals with FLLs (mean age, 56 ± [SD] 12 years; 1476 female) were included in the training, internal test, external test, and prospective test sets. Average DSC values for liver and tumor segmentation across the three test sets were 0.98 and 0.96, respectively. Average accuracies for feature and lesion classification across the three test sets were 93% and 97%, respectively. LIFT-assisted readings improved diagnostic accuracy (average 5.3% increase, <i>P</i> < .001), reduced reading time (average 34.5 seconds decrease, <i>P</i> < .001), and enhanced confidence (average 0.3-point increase, <i>P</i> < .001) of junior radiologists.
Conclusion The proposed DL model accurately detected and classified FLLs, improving diagnostic accuracy and efficiency of junior radiologists. ©RSNA, 2025.

Symmetry Interactive Transformer with CNN Framework for Diagnosis of Alzheimer's Disease Using Structural MRI

Zheng Yang, Yanteng Zhang, Xupeng Kou, Yang Liu, Chao Ren

arXiv preprint · Sep 10 2025
Structural magnetic resonance imaging (sMRI) combined with deep learning has achieved remarkable progress in the prediction and diagnosis of Alzheimer's disease (AD). Existing studies have used CNNs and transformers to build well-performing networks, but most of them rely on pretraining or ignore the asymmetry caused by brain disorders. We propose an end-to-end network for the detection of disease-related asymmetry induced by left and right brain atrophy, consisting of a 3D CNN Encoder and a Symmetry Interactive Transformer (SIT). Following the inter-equal grid block fetch operation, the corresponding left and right hemisphere features are aligned and subsequently fed into the SIT for diagnostic analysis. SIT helps the model focus on the regions of asymmetry caused by structural changes, thus improving diagnostic performance. We evaluated our method on the ADNI dataset, and the results show that it achieves better diagnostic accuracy (92.5%) than several CNN methods and CNNs combined with a general transformer. The visualization results show that our network pays more attention to regions of brain atrophy, especially the asymmetric pathological characteristics induced by AD, demonstrating the interpretability and effectiveness of the method.
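The left-right asymmetry that SIT targets can be illustrated by comparing a volume with its mirror image along the left-right axis. A toy NumPy sketch (assumes a midline-aligned array; this is not the paper's grid-block operation):

```python
import numpy as np

def hemispheric_asymmetry(volume):
    """Absolute left-right asymmetry map: |volume - its L-R mirror|.
    Assumes the last axis runs left to right and the midline is centered."""
    mirrored = volume[..., ::-1]  # flip along the left-right axis
    return np.abs(volume - mirrored)

# Toy 2D "slice": row 0 is symmetric, row 1 is not.
vol = np.array([[1.0, 2.0, 2.0, 1.0],
                [1.0, 3.0, 2.0, 1.0]])
asym = hemispheric_asymmetry(vol)
```

A perfectly symmetric region yields zeros, while atrophy confined to one hemisphere produces nonzero values at the mismatched locations.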

Few-shot learning for highly accelerated 3D time-of-flight MRA reconstruction.

Li H, Chiew M, Dragonu I, Jezzard P, Okell TW

PubMed paper · Sep 10 2025
To develop a deep learning-based reconstruction method for highly accelerated 3D time-of-flight MRA (TOF-MRA) that achieves high-quality reconstruction with robust generalization using extremely limited acquired raw data, addressing the challenge of time-consuming acquisition of high-resolution, whole-head angiograms. A novel few-shot learning-based reconstruction framework is proposed, featuring a 3D variational network specifically designed for 3D TOF-MRA that is pre-trained on simulated complex-valued, multi-coil raw k-space datasets synthesized from diverse open-source magnitude images and fine-tuned using only two single-slab experimentally acquired datasets. The proposed approach was evaluated against existing methods on acquired retrospectively undersampled in vivo k-space data from five healthy volunteers and on prospectively undersampled data from two additional subjects. The proposed method achieved superior reconstruction performance on experimentally acquired in vivo data over comparison methods, preserving most fine vessels with minimal artifacts with up to eight-fold acceleration. Compared to other simulation techniques, the proposed method generated more realistic raw k-space data for 3D TOF-MRA. Consistently high-quality reconstructions were also observed on prospectively undersampled data. By leveraging few-shot learning, the proposed method enabled highly accelerated 3D TOF-MRA relying on minimal experimentally acquired data, achieving promising results on both retrospective and prospective in vivo data while outperforming existing methods. Given the challenges of acquiring and sharing large raw k-space datasets, this holds significant promise for advancing research and clinical applications in high-resolution, whole-head 3D TOF-MRA imaging.
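Eight-fold acceleration means acquiring roughly one in eight phase-encode lines of k-space. A toy 1D undersampling-mask sketch (a uniform pattern with a fully sampled center is an assumption for illustration; the paper does not specify its sampling scheme here):

```python
import numpy as np

def uniform_undersampling_mask(n_lines, accel, center_lines=8):
    """1D phase-encode mask: every `accel`-th line plus a fully
    sampled low-frequency center block for calibration."""
    mask = np.zeros(n_lines, dtype=bool)
    mask[::accel] = True
    c = n_lines // 2
    mask[c - center_lines // 2 : c + center_lines // 2] = True
    return mask

mask = uniform_undersampling_mask(256, accel=8)
effective_R = mask.size / mask.sum()  # slightly below 8 due to the center block
```

The reconstruction network's job is then to recover the unacquired lines that this mask zeroes out.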