ResGSNet: Enhanced local attention with Global Scoring Mechanism for the early detection and treatment of Alzheimer's Disease.

Chen T, Li X

PubMed · Aug 28, 2025
Transformers have recently been widely used in medical image analysis for their competitive performance when given enough data. However, the Transformer attends on a global scale, applying self-attention across all input patches, and therefore requires substantial computation and memory, especially for large 3D images such as MRI volumes. In this study, we propose the Residual Global Scoring Network (ResGSNet), a novel architecture that combines ResNet with a Global Scoring Module (GSM), achieving high computational efficiency while incorporating both local and global features. First, the proposed GSM uses local attention to exchange information within local brain regions and then assigns a global score to each of these regions, encapsulating local and global information with a reduced computational burden and superior performance compared to existing methods. Second, we use Grad-CAM++ and the attention maps to interpret model predictions, uncovering brain regions related to Alzheimer's disease (AD) detection. Third, extensive experiments on the ADNI dataset show that ResGSNet achieves 95.1% accuracy in predicting AD, a 1.3% increase over state-of-the-art methods, and 93.4% accuracy for mild cognitive impairment (MCI). The MCI model can potentially serve as a screening tool for identifying individuals at high risk of developing AD, allowing early intervention. Furthermore, Grad-CAM++ and the attention maps not only identified brain regions commonly associated with AD and MCI but also revealed previously unreported regions, including the putamen, cerebellar cortex, and caudate nucleus, holding promise for further research into the etiology of AD.
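
To make the idea concrete, here is a minimal PyTorch sketch of a local-attention-plus-global-scoring block in the spirit described above; the module name, region partitioning, and dimensions are illustrative assumptions, not the authors' ResGSNet code.

```python
import torch
import torch.nn as nn

class GlobalScoringBlock(nn.Module):
    """Toy sketch: local attention within regions, then a learned global
    score that re-weights each region. Shapes and names are illustrative."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.local_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.scorer = nn.Sequential(nn.Linear(dim, dim // 2), nn.GELU(),
                                    nn.Linear(dim // 2, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_regions, tokens_per_region, dim)
        b, r, t, d = x.shape
        tokens = x.reshape(b * r, t, d)
        # Local attention: tokens only attend within their own region.
        attended, _ = self.local_attn(tokens, tokens, tokens)
        attended = attended.reshape(b, r, t, d)
        # One descriptor per region, then a softmax "global score".
        region_desc = attended.mean(dim=2)                   # (b, r, d)
        scores = torch.softmax(self.scorer(region_desc), 1)  # (b, r, 1)
        # Re-weight each region's tokens by its global score.
        return attended * scores.unsqueeze(-1)

if __name__ == "__main__":
    block = GlobalScoringBlock(dim=64)
    feats = torch.randn(2, 27, 8, 64)    # e.g. 27 local brain regions
    print(block(feats).shape)            # torch.Size([2, 27, 8, 64])
```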

Reverse Imaging for Wide-spectrum Generalization of Cardiac MRI Segmentation

Yidong Zhao, Peter Kellman, Hui Xue, Tongyun Yang, Yi Zhang, Yuchi Han, Orlando Simonetti, Qian Tao

arXiv preprint · Aug 28, 2025
Pretrained segmentation models for cardiac magnetic resonance imaging (MRI) struggle to generalize across different imaging sequences due to significant variations in image contrast. These variations arise from changes in imaging protocols, yet the same fundamental spin properties, including proton density, T1, and T2 values, govern all acquired images. Building on this core principle, we introduce Reverse Imaging, a novel physics-driven method for cardiac MRI data augmentation and domain adaptation that addresses the generalization problem at its root. Our method infers the underlying spin properties from observed cardiac MRI images by solving ill-posed nonlinear inverse problems regularized by the prior distribution of spin properties. We acquire this "spin prior" by learning a generative diffusion model from the multiparametric SAturation-recovery single-SHot acquisition sequence (mSASHA) dataset, which offers joint cardiac T1 and T2 maps. Our method yields approximate but meaningful spin-property estimates from MR images, which provide an interpretable "latent variable" that enables highly flexible synthesis of arbitrary novel sequences. We show that Reverse Imaging enables highly accurate segmentation across vastly different image contrasts and imaging protocols, realizing wide-spectrum generalization of cardiac MRI segmentation.
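
For intuition about the synthesis step, the sketch below uses a textbook spin-echo signal approximation to generate arbitrary contrasts from hypothetical PD/T1/T2 maps; it is a stand-in for, not a reproduction of, the paper's mSASHA-based forward model and diffusion-prior inversion.

```python
import numpy as np

def spin_echo_signal(pd, t1, t2, tr_ms=4000.0, te_ms=80.0):
    """Textbook spin-echo approximation (not the paper's forward model):
    S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2). Inputs are voxel-wise maps."""
    return pd * (1.0 - np.exp(-tr_ms / t1)) * np.exp(-te_ms / t2)

# Hypothetical spin-property maps for a 4x4 patch of myocardium-like tissue.
rng = np.random.default_rng(0)
pd = rng.uniform(0.7, 1.0, size=(4, 4))
t1 = rng.uniform(900.0, 1200.0, size=(4, 4))   # ms
t2 = rng.uniform(40.0, 60.0, size=(4, 4))      # ms

# Once PD/T1/T2 are estimated, any (TR, TE) combination can be synthesized,
# which is the flexibility the abstract describes.
t2_weighted = spin_echo_signal(pd, t1, t2, tr_ms=4000.0, te_ms=80.0)
t1_weighted = spin_echo_signal(pd, t1, t2, tr_ms=600.0, te_ms=15.0)
print(t2_weighted.round(3), t1_weighted.round(3), sep="\n")
```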

An MRI Atlas of the Human Fetal Brain: Reference and Segmentation Tools for Fetal Brain MRI Analysis.

Bagheri M, Velasco-Annis C, Wang J, Faghihpirayesh R, Khan S, Calixto C, Jaimes C, Vasung L, Ouaalam A, Afacan O, Warfield SK, Rollins CK, Gholipour A

PubMed · Aug 28, 2025
Accurate characterization of in-utero brain development is essential for understanding typical and atypical neurodevelopment. Building upon previous efforts to construct spatiotemporal fetal brain MRI atlases, we present the CRL-2025 fetal brain atlas, a spatiotemporal (4D) atlas of the developing fetal brain between 21 and 37 gestational weeks. The atlas is constructed from carefully processed MRI scans of 160 fetuses with typically developing brains using a diffeomorphic deformable registration framework integrated with kernel regression on gestational age. CRL-2025 uniquely includes detailed tissue segmentations, transient white matter compartments, and parcellation into 126 anatomical regions. It offers significantly enhanced anatomical detail over the CRL-2017 atlas and is released together with the CRL diffusion MRI atlas, with newly created tissue segmentations and labels, as well as deep learning-based multiclass segmentation models for fine-grained fetal brain MRI segmentation. The CRL-2025 atlas and its associated tools provide a robust and scalable platform for fetal brain MRI segmentation, groupwise analysis, and early neurodevelopmental research, and these materials are publicly released to support the broader research community.
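
As a rough illustration of the kernel-regression component, the snippet below computes Gaussian weights over gestational age and forms an age-specific average of already-registered images; the bandwidth and data are placeholders, not the CRL-2025 pipeline.

```python
import numpy as np

def age_kernel_weights(scan_ages, target_age, sigma_weeks=1.0):
    """Gaussian kernel-regression weights over gestational age (weeks).
    The bandwidth sigma_weeks is an illustrative choice, not the CRL-2025 value."""
    w = np.exp(-0.5 * ((scan_ages - target_age) / sigma_weeks) ** 2)
    return w / w.sum()

# Hypothetical cohort: gestational ages and already-registered 2D "images".
ages = np.array([20.5, 21.0, 22.3, 23.1, 24.0, 25.2])
images = np.random.default_rng(1).random((6, 8, 8))

# Age-specific template = weighted average of spatially normalized scans.
weights = age_kernel_weights(ages, target_age=22.0)
template_22w = np.tensordot(weights, images, axes=1)  # shape (8, 8)
print(weights.round(3), template_22w.shape)
```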

Seeking Common Ground While Reserving Differences: Multiple Anatomy Collaborative Framework for Undersampled MRI Reconstruction.

Yan J, Yu C, Chen H, Xu Z, Huang J, Li X, Yao J

PubMed · Aug 27, 2025
Recently, deep neural networks have greatly advanced undersampled magnetic resonance image (MRI) reconstruction, with most studies following a one-anatomy-one-network convention, i.e., each expert network is trained and evaluated for a specific anatomy. Apart from the inefficiency of training multiple independent models, this convention ignores the shared de-aliasing knowledge across anatomies from which each could benefit. To exploit this shared knowledge, one naive approach is to combine data from all anatomies to train an all-round network. Unfortunately, despite the existence of shared de-aliasing knowledge, we show that the knowledge exclusive to each anatomy can deteriorate specific reconstruction targets, yielding overall performance degradation. Motivated by this observation, we present a novel deep MRI reconstruction framework with both anatomy-shared and anatomy-specific parameterized learners, aiming to "seek common ground while reserving differences" across anatomies. The primary anatomy-shared learners are exposed to all anatomies to model rich shared de-aliasing knowledge, while the efficient anatomy-specific learners are trained only on their target anatomy to capture exclusive knowledge. Four implementations of the anatomy-specific learners are presented and explored on top of our framework in two MRI reconstruction networks. Comprehensive experiments on brain, knee, and cardiac MRI datasets demonstrate that three of these learners enhance reconstruction performance via multiple-anatomy collaborative learning. Extensive studies show that our strategy also benefits multiple-pulse-sequence MRI reconstruction by integrating sequence-specific learners.
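
One plausible (but hypothetical) way to realize anatomy-shared and anatomy-specific learners is a shared convolution plus lightweight per-anatomy adapters, as in this sketch; it is not the paper's exact parameterization.

```python
import torch
import torch.nn as nn

class SharedSpecificBlock(nn.Module):
    """Toy sketch: a shared convolution models de-aliasing knowledge common to
    all anatomies; a small per-anatomy adapter holds exclusive knowledge."""

    def __init__(self, channels: int, anatomies=("brain", "knee", "cardiac")):
        super().__init__()
        self.shared = nn.Conv2d(channels, channels, 3, padding=1)
        self.specific = nn.ModuleDict({
            name: nn.Conv2d(channels, channels, 1) for name in anatomies
        })
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor, anatomy: str) -> torch.Tensor:
        # The shared path sees every anatomy; the adapter sees only its own.
        return self.act(self.shared(x) + self.specific[anatomy](x))

if __name__ == "__main__":
    block = SharedSpecificBlock(channels=16)
    features = torch.randn(1, 16, 64, 64)          # features of an undersampled input
    print(block(features, anatomy="knee").shape)    # torch.Size([1, 16, 64, 64])
```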

MRI-based machine-learning radiomics of the liver to predict liver-related events in hepatitis B virus-associated fibrosis.

Luo Y, Luo Q, Wu Y, Zhang S, Ren H, Wang X, Liu X, Yang Q, Xu W, Wu Q, Li Y

PubMed · Aug 27, 2025
The onset of liver-related events (LREs) in fibrosis indicates a poor prognosis and worsens patients' quality of life, making the prediction and early detection of LREs crucial. The aim of this study was to develop a radiomics model using liver magnetic resonance imaging (MRI) to predict LRE risk in patients undergoing antiviral treatment for chronic fibrosis caused by hepatitis B virus (HBV). Patients with HBV-associated liver fibrosis and liver stiffness measurements ≥ 10 kPa were included. Feature selection and dimensionality reduction techniques identified discriminative features from three MRI sequences. Radiomics models were built using eight machine learning techniques and evaluated for performance. Shapley additive explanation and permutation importance techniques were applied to interpret the model output. A total of 222 patients (mean age 49 ± 10 years; 175 male) were evaluated, 41 of whom experienced LREs. The radiomics model, incorporating 58 selected features, outperformed traditional clinical tools in prediction accuracy. Built with a support vector machine classifier, the model achieved areas under the receiver operating characteristic curve of 0.94 and 0.93 in the training and test sets, respectively, with good calibration. Machine learning effectively predicted LREs in patients with HBV-associated fibrosis, offering comparable accuracy across algorithms and supporting personalized care decisions for HBV-related liver disease. Radiomics models based on multisequence liver MRI can thus improve risk prediction and management of patients with HBV-associated chronic fibrosis, offering valuable prognostic insights and aiding informed clinical decisions.
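
The modeling workflow could resemble the following scikit-learn sketch: scaling, univariate selection of 58 features, and an SVM with probability outputs evaluated by ROC AUC. The feature table is synthetic and the pipeline is illustrative rather than the authors' implementation.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical radiomics table: 222 patients x 300 MRI features, 41 with LREs.
rng = np.random.default_rng(42)
X = rng.normal(size=(222, 300))
y = np.zeros(222, dtype=int)
y[:41] = 1
rng.shuffle(y)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# Scaling -> univariate selection (58 features, as in the abstract) -> SVM.
model = Pipeline([
    ("scale", StandardScaler()),
    ("select", SelectKBest(f_classif, k=58)),
    ("svm", SVC(kernel="rbf", probability=True, random_state=0)),
]).fit(X_tr, y_tr)

print("test AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]).round(3))
```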

Improved brain tumor classification through DenseNet121 based transfer learning.

Rasheed M, Jaffar MA, Akram A, Rashid J, Alshalali TAN, Irshad A, Sarwar N

PubMed · Aug 27, 2025
Brain tumors, in which abnormal cells grow unchecked in the brain, have a major impact on a person's health, so early and accurate diagnosis is essential for effective treatment. Many current diagnostic methods are time-consuming, rely primarily on manual interpretation, and frequently yield unsatisfactory results. This work detects brain tumors in MRI data using the DenseNet121 architecture with transfer learning. Model training used a Kaggle dataset. In the preprocessing stage, the MRI images were resized and denoised to help the model perform better. From a single MRI scan, the proposed approach classifies brain tissue into four groups: benign tumors, gliomas, meningiomas, and pituitary tumors. We assessed model performance in terms of accuracy, precision, recall, and F1-score. The proposed approach proved successful in multi-class brain tumor classification, attaining an average accuracy of 96.90%. Compared with previous diagnostic techniques, such as visual inspection and other machine learning models, the proposed DenseNet121-based approach is more accurate, takes less time to analyze, and requires less human input. The automated method ensures consistent and reproducible results, whereas conventional methods are more variable because of human error. This paper thus proposes an automated, MRI-based method for brain tumor classification using transfer learning, improving the precision and speed of brain tumor diagnosis and benefiting both MRI-based classification research and clinical use. Further development of deep learning models may improve tumor identification and prognosis prediction even more.
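
A typical DenseNet121 transfer-learning setup of the kind described might look like the torchvision sketch below, with the classifier head replaced for four classes; the hyperparameters and the dummy batch are assumptions, not the paper's training configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained DenseNet121 and adapt it to 4 tumor classes.
num_classes = 4  # e.g. benign, glioma, meningioma, pituitary
net = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1)

# Freeze the convolutional feature extractor; train only the new classifier.
for p in net.features.parameters():
    p.requires_grad = False
net.classifier = nn.Linear(net.classifier.in_features, num_classes)

optimizer = torch.optim.Adam(net.classifier.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of resized MRI slices.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_classes, (8,))
loss = criterion(net(images), labels)
loss.backward()
optimizer.step()
print(float(loss))
```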

Mitigating MRI Domain Shift in Sex Classification: A Deep Learning Approach with ComBat Harmonization

Peyman Sharifian, Mohammad Saber Azimi, AliReza Karimian, Hossein Arabi

arXiv preprint · Aug 27, 2025
Deep learning models for medical image analysis often suffer from performance degradation when applied to data from different scanners or protocols, a phenomenon known as domain shift. This study investigates this challenge in the context of sex classification from 3D T1-weighted brain magnetic resonance imaging (MRI) scans using the IXI and OASIS3 datasets. While models achieved high within-domain accuracy (around 0.95) when trained and tested on a single dataset (IXI or OASIS3), we demonstrate a significant performance drop to chance level (about 0.50) when models trained on one dataset are tested on the other, highlighting the presence of a strong domain shift. To address this, we employed the ComBat harmonization technique to align the feature distributions of the two datasets. We evaluated three state-of-the-art 3D deep learning architectures (3D ResNet18, 3D DenseNet, and 3D EfficientNet) across multiple training strategies. Our results show that ComBat harmonization effectively reduces the domain shift, leading to a substantial improvement in cross-domain classification performance. For instance, the cross-domain balanced accuracy of our best model (ResNet18 3D with Attention) improved from approximately 0.50 (chance level) to 0.61 after harmonization. t-SNE visualization of extracted features provides clear qualitative evidence of the reduced domain discrepancy post-harmonization. This work underscores the critical importance of domain adaptation techniques for building robust and generalizable neuroimaging AI models.
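
ComBat itself estimates site effects with empirical-Bayes shrinkage while preserving biological covariates; the simplified location-scale sketch below only conveys the core idea of aligning per-scanner feature distributions and is not a substitute for the full method. The dataset names are used here purely as batch labels.

```python
import numpy as np

def simple_location_scale_harmonize(features, batch):
    """Simplified ComBat-style adjustment: align each scanner/batch to the
    pooled mean and variance, feature by feature. Real ComBat additionally
    applies empirical-Bayes shrinkage and preserves biological covariates."""
    features = np.asarray(features, dtype=float)
    harmonized = features.copy()
    grand_mean = features.mean(axis=0)
    grand_std = features.std(axis=0) + 1e-8
    for b in np.unique(batch):
        idx = batch == b
        mu = features[idx].mean(axis=0)
        sd = features[idx].std(axis=0) + 1e-8
        harmonized[idx] = (features[idx] - mu) / sd * grand_std + grand_mean
    return harmonized

# Two hypothetical "scanners" with shifted feature distributions.
rng = np.random.default_rng(0)
ixi = rng.normal(0.0, 1.0, size=(100, 5))
oasis3 = rng.normal(1.5, 2.0, size=(100, 5))
X = np.vstack([ixi, oasis3])
batch = np.array(["IXI"] * 100 + ["OASIS3"] * 100)

X_h = simple_location_scale_harmonize(X, batch)
print(X[:100].mean().round(2), X[100:].mean().round(2),
      X_h[:100].mean().round(2), X_h[100:].mean().round(2))
```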

MedNet-PVS: A MedNeXt-Based Deep Learning Model for Automated Segmentation of Perivascular Spaces

Zhen Xuen Brandon Low, Rory Zhang, Hang Min, William Pham, Lucy Vivash, Jasmine Moses, Miranda Lynch, Karina Dorfman, Cassandra Marotta, Shaun Koh, Jacob Bunyamin, Ella Rowsthorn, Alex Jarema, Himashi Peiris, Zhaolin Chen, Sandy R. Shultz, David K. Wright, Dexiao Kong, Sharon L. Naismith, Terence J. O'Brien, Ying Xia, Meng Law, Benjamin Sinclair

arXiv preprint · Aug 27, 2025
Enlarged perivascular spaces (PVS) are increasingly recognized as biomarkers of cerebral small vessel disease, Alzheimer's disease, stroke, and aging-related neurodegeneration. However, manual segmentation of PVS is time-consuming and subject to moderate inter-rater reliability, while existing automated deep learning models have moderate performance and typically fail to generalize across diverse clinical and research MRI datasets. We adapted MedNeXt-L-k5, a Transformer-inspired 3D encoder-decoder convolutional network, for automated PVS segmentation. Two models were trained: one on a homogeneous dataset of 200 T2-weighted (T2w) MRI scans from the Human Connectome Project-Aging (HCP-Aging) dataset, and another on 40 heterogeneous T1-weighted (T1w) MRI volumes from seven studies across six scanners. Model performance was evaluated using internal 5-fold cross-validation (5FCV) and leave-one-site-out cross-validation (LOSOCV). MedNeXt-L-k5 models trained on the T2w images of the HCP-Aging dataset achieved voxel-level Dice scores of 0.88 ± 0.06 in white matter (WM), comparable to the reported inter-rater reliability of that dataset and the highest yet reported in the literature. The same models trained on the T1w images of the HCP-Aging dataset achieved a substantially lower Dice score of 0.58 ± 0.09 (WM). Under LOSOCV, the model had voxel-level Dice scores of 0.38 ± 0.16 (WM) and 0.35 ± 0.12 (BG), and cluster-level Dice scores of 0.61 ± 0.19 (WM) and 0.62 ± 0.21 (BG). MedNeXt-L-k5 provides an efficient solution for automated PVS segmentation across diverse T1w and T2w MRI datasets. It did not outperform nnU-Net, however, indicating that the attention-based mechanisms that transformer-inspired models use to provide global context are not required for high accuracy in PVS segmentation.
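
The voxel-level Dice metric reported above can be computed as in this small sketch on synthetic masks; it is a generic implementation, not the study's evaluation code.

```python
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-8) -> float:
    """Voxel-level Dice between two binary masks (e.g. predicted vs manual PVS)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

# Tiny synthetic example: a 3D patch with a few "PVS" voxels.
rng = np.random.default_rng(3)
manual = rng.random((16, 16, 16)) > 0.97
predicted = manual.copy()
predicted[0, 0, :3] = ~predicted[0, 0, :3]   # introduce a few disagreements
print(round(dice_score(predicted, manual), 3))
```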

FaithfulNet: An explainable deep learning framework for autism diagnosis using structural MRI.

Sujana DS, Augustine DP

PubMed · Aug 27, 2025
Explainable artificial intelligence (XAI) can decode 'black box' models, enhancing trust in clinical decision-making by making the predictions of deep learning models interpretable, transparent, and trustworthy. This study employed XAI techniques to explain the predictions of a deep learning model for diagnosing autism and to identify the memory regions associated with children's academic performance. The study used publicly available sMRI data from the ABIDE-II repository. First, a deep learning model, FaithfulNet, was developed to aid in the diagnosis of autism. Next, gradient-based class activation maps and the SHAP gradient explainer were employed to generate explanations for the model's predictions. These explanations were integrated to develop a novel, faithful visual explanation, Faith_CAM. Finally, this explanation was quantified using the pointing game score and analyzed with cortical and subcortical structure masks to identify impaired regions in the autistic brain. The study achieved a classification accuracy of 99.74% with an AUC of 1.0. In addition to facilitating autism diagnosis, it assesses the degree of impairment in memory regions associated with children's academic performance, contributing to the development of personalized treatment plans.
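
The pointing game quantification could be implemented roughly as below: an explanation scores a hit when its most salient voxel falls inside the annotated region mask. The saliency volumes and mask here are synthetic placeholders, not outputs of Faith_CAM.

```python
import numpy as np

def pointing_game_hit(saliency: np.ndarray, region_mask: np.ndarray) -> bool:
    """Pointing game: an explanation scores a hit if its most salient voxel
    lies inside the annotated region of interest."""
    peak = np.unravel_index(np.argmax(saliency), saliency.shape)
    return bool(region_mask[peak])

# Hypothetical saliency volumes and one subcortical region mask.
rng = np.random.default_rng(7)
mask = np.zeros((32, 32, 32), dtype=bool)
mask[10:20, 10:20, 10:20] = True
saliency_maps = [rng.random((32, 32, 32)) for _ in range(20)]

# Pointing game score = fraction of subjects whose saliency peak hits the mask.
score = np.mean([pointing_game_hit(s, mask) for s in saliency_maps])
print(round(float(score), 2))
```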

A robust deep learning framework for cerebral microbleeds recognition in GRE and SWI MRI.

Hassanzadeh T, Sachdev S, Wen W, Sachdev PS, Sowmya A

PubMed · Aug 27, 2025
Cerebral microbleeds (CMB) are small hypointense lesions visible on gradient-echo (GRE) or susceptibility-weighted (SWI) MRI, serving as critical biomarkers for various cerebrovascular and neurological conditions. Accurate quantification of CMB is essential, as their number correlates with the severity of conditions such as small vessel disease, stroke risk, and cognitive decline. Current detection methods depend on manual inspection, which is time-consuming and prone to variability. Automated detection using deep learning offers a transformative solution but faces challenges due to the heterogeneous appearance of CMB, high false-positive rates, and their similarity to other artefacts. This study applies deep learning to public (ADNI and AIBL) and private (OATS and MAS) datasets, leveraging the GRE and SWI MRI modalities to enhance CMB detection accuracy, reduce false positives, and ensure robustness in both clinical and normal cases (i.e., scans without cerebral microbleeds). A 3D convolutional neural network (CNN) was developed for automated detection, complemented by a You Only Look Once (YOLO)-based approach to address false-positive cases in more complex scenarios. The pipeline incorporates extensive preprocessing and validation, demonstrating robust performance across diverse datasets. The proposed method performs strongly on all four datasets: ADNI: balanced accuracy 0.953, AUC 0.955, precision 0.954, sensitivity 0.920, F1-score 0.930; AIBL: balanced accuracy 0.968, AUC 0.956, precision 0.956, sensitivity 0.938, F1-score 0.946; MAS: balanced accuracy 0.889, AUC 0.889, precision 0.948, sensitivity 0.779, F1-score 0.851; OATS: balanced accuracy 0.930, AUC 0.930, precision 0.949, sensitivity 0.862, F1-score 0.900. These results highlight the potential of deep learning models to improve early diagnosis and support treatment planning for conditions associated with CMB.
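
The reported metrics (balanced accuracy, AUC, precision, sensitivity, F1-score) can be computed with scikit-learn as in this sketch on dummy candidate-level predictions; the numbers it prints are unrelated to the study's results.

```python
import numpy as np
from sklearn.metrics import (balanced_accuracy_score, f1_score,
                             precision_score, recall_score, roc_auc_score)

# Dummy candidate-level labels/scores: 1 = true microbleed, 0 = mimic/artefact.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=500)
scores = np.clip(y_true * 0.6 + rng.normal(0.2, 0.25, size=500), 0, 1)
y_pred = (scores >= 0.5).astype(int)

print("balanced accuracy:", round(balanced_accuracy_score(y_true, y_pred), 3))
print("AUC:", round(roc_auc_score(y_true, scores), 3))
print("precision:", round(precision_score(y_true, y_pred), 3))
print("sensitivity:", round(recall_score(y_true, y_pred), 3))
print("F1-score:", round(f1_score(y_true, y_pred), 3))
```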