Artificial Intelligence for Alzheimer's disease diagnosis through T1-weighted MRI: A systematic review.

Basanta-Torres S, Rivas-Fernández MÁ, Galdo-Alvarez S

PubMed · Sep 2, 2025
Alzheimer's disease (AD) is a leading cause of dementia worldwide, characterized by heterogeneous neuropathological changes and progressive cognitive decline. Despite numerous studies, there are still no effective treatments beyond those that aim to slow progression and compensate for the impairment. Neuroimaging techniques provide a comprehensive view of brain changes, with magnetic resonance imaging (MRI) playing a key role due to its non-invasive nature and wide availability. The T1-weighted MRI sequence is frequently used due to its prevalence in most MRI protocols, generating large datasets that are ideal for artificial intelligence (AI) applications. AI, particularly machine learning (ML) and deep learning (DL) techniques, has been increasingly utilized to model these datasets and classify individuals along the AD continuum. This systematic review evaluates studies using AI to classify more than two stages of AD based on T1-weighted MRI data. Convolutional neural networks (CNNs) are the most widely applied, achieving an average classification accuracy of 85.93% (range: 51.80-100%; median: 87.70%). These strong results reflect CNNs' ability to extract hierarchical features directly from raw imaging data, reducing the need for extensive preprocessing. Non-convolutional neural networks and traditional ML approaches also demonstrated strong performance, with mean accuracies of 82.50% (range: 57.61-99.38%; median: 86.67%) and 84.22% (range: 33-99.10%; median: 87.75%), respectively, underscoring the importance of input data selection. Despite promising outcomes, challenges remain, including methodological heterogeneity, overfitting risks, and a reliance on the ADNI database, which limits dataset diversity. Addressing these limitations is critical to advancing AI's clinical application for early detection, improved classification, and enhanced patient outcomes.

Optimizing and Evaluating Robustness of AI for Brain Metastasis Detection and Segmentation via Loss Functions and Multi-dataset Training

Han, Y., Pathak, P., Award, O., Mohamed, A. S. R., Ugarte, V., Zhou, B., Hamstra, D. A., Echeverria, A. E., Mekdash, H. A., Siddiqui, Z. A., Sun, B.

medRxiv preprint · Sep 2, 2025
Purpose: Accurate detection and segmentation of brain metastases (BM) from MRI are critical for the appropriate management of cancer patients. This study investigates strategies to enhance the robustness of artificial intelligence (AI)-based BM detection and segmentation models. Method: A DeepMedic-based network with a loss function tunable via a sensitivity/specificity tradeoff weighting factor α was trained on T1 post-contrast MRI datasets from two institutions (514 patients, 4520 lesions). Robustness was evaluated on an external dataset from a third institution (91 patients, 397 lesions), featuring ground truth annotations from two physicians. We investigated the impact of the loss-function weighting factor α and of training dataset combinations. Detection performance (sensitivity, precision, F1 score) and segmentation accuracy (Dice similarity and 95% Hausdorff distance (HD95)) were evaluated using one physician's contours as the reference standard. The optimal AI model was then directly compared to the performance of the second physician. Results: Varying α demonstrated a trade-off between sensitivity (higher α) and precision (lower α), with α = 0.5 yielding the best F1 score (0.80 ± 0.04 vs. 0.78 ± 0.04 for α = 0.95 and 0.72 ± 0.03 for α = 0.99) on the external dataset. The optimally trained model achieved detection performance comparable to the physician (F1: AI = 0.83 ± 0.04, physician = 0.83 ± 0.04), but slightly underperformed in segmentation (Dice: physician 0.79 ± 0.04 vs. AI 0.74 ± 0.03; HD95: physician 2.8 ± 0.14 mm vs. AI 3.18 ± 0.16 mm, p < 0.05). Conclusion: The derived optimal model achieves detection and segmentation performance comparable to an expert physician in a parallel comparison.
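The abstract does not give the exact form of the α-tunable loss; a minimal sketch of one common construction consistent with the description is a binary cross-entropy whose false-negative and false-positive terms are weighted by α and 1 − α (all names and tensor shapes below are illustrative, not the paper's implementation):

```python
import torch

def alpha_weighted_bce(logits, targets, alpha=0.5):
    """Binary cross-entropy split into a false-negative term (weighted by
    alpha, driving sensitivity) and a false-positive term (weighted by
    1 - alpha, driving specificity). Illustrative, not the paper's exact loss."""
    p = torch.sigmoid(logits)
    eps = 1e-7
    fn_term = -alpha * targets * torch.log(p + eps)
    fp_term = -(1 - alpha) * (1 - targets) * torch.log(1 - p + eps)
    return (fn_term + fp_term).mean()

# Higher alpha raises the cost of missing lesion voxels, trading precision
# for sensitivity, matching the reported behavior at alpha = 0.95 and 0.99.
logits = torch.randn(2, 1, 8, 8, 8)                      # toy 3D patch logits
targets = torch.randint(0, 2, (2, 1, 8, 8, 8)).float()   # toy lesion masks
print(alpha_weighted_bce(logits, targets, alpha=0.95))
```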

Overcoming Site Variability in Multisite fMRI Studies: An Autoencoder Framework for Enhanced Generalizability of Machine Learning Models.

Almuqhim F, Saeed F

PubMed · Sep 2, 2025
Harmonizing multisite functional magnetic resonance imaging (fMRI) data is crucial for eliminating the site-specific variability that hinders the generalizability of machine learning models. Traditional harmonization techniques, such as ComBat, depend on additive and multiplicative factors and may struggle to capture the non-linear interactions between scanner hardware, acquisition protocols, and signal variations across imaging sites. In addition, these statistical techniques require data from all sites during model fitting, which can introduce data leakage into ML models trained on the harmonized data; such models may show low reliability and reproducibility when tested on unseen datasets, limiting their applicability for general clinical use. In this study, we propose autoencoders (AEs) as an alternative for harmonizing multisite fMRI data. Our framework leverages the non-linear representation learning capabilities of AEs to reduce site-specific effects while preserving biologically meaningful features. Our evaluation on the Autism Brain Imaging Data Exchange I (ABIDE-I) dataset, containing 1,035 subjects collected from 17 centers, demonstrates statistically significant improvements in leave-one-site-out (LOSO) cross-validation evaluations. All AE variants (AE, SAE, TAE, and DAE) significantly outperformed the baseline model (p < 0.01), with mean accuracy improvements ranging from 3.41% to 5.04%. Our findings demonstrate the potential of AEs to effectively harmonize multisite neuroimaging data, enabling robust downstream analyses across various neuroscience applications while reducing data leakage and preserving neurobiological features. Our open-source code is available at https://github.com/pcdslab/Autoencoder-fMRI-Harmonization.
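For intuition, a bare-bones version of the idea might look like the sketch below: an autoencoder reconstructs flattened connectivity features, and downstream classifiers train on the latent code rather than the raw features. The layer sizes, input dimensionality (upper triangle of a 200-region connectome), and plain reconstruction loss are assumptions for illustration, not the paper's architecture:

```python
import torch
import torch.nn as nn

class HarmonizingAE(nn.Module):
    """Minimal autoencoder whose bottleneck is intended to carry a
    site-robust representation of fMRI connectivity features."""
    def __init__(self, in_dim=19900, latent_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 1024), nn.ReLU(),
            nn.Linear(1024, latent_dim), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 1024), nn.ReLU(),
            nn.Linear(1024, in_dim),
        )

    def forward(self, x):
        z = self.encoder(x)              # latent features used downstream
        return self.decoder(z), z

model = HarmonizingAE()
x = torch.randn(8, 19900)                # toy batch of connectivity vectors
recon, latent = model(x)
loss = nn.functional.mse_loss(recon, x)  # classifiers then train on `latent`
```

Because the AE is fit without pooling label or site statistics across held-out sites, it sidesteps the leakage concern raised above for ComBat-style harmonization.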

Systematic Integration of Attention Modules into CNNs for Accurate and Generalizable Medical Image Diagnosis

Zahid Ullah, Minki Hong, Tahir Mahmood, Jihie Kim

arXiv preprint · Sep 2, 2025
Deep learning has become a powerful tool for medical image analysis; however, conventional Convolutional Neural Networks (CNNs) often fail to capture the fine-grained and complex features critical for accurate diagnosis. To address this limitation, we systematically integrate attention mechanisms into five widely adopted CNN architectures, namely VGG16, ResNet18, InceptionV3, DenseNet121, and EfficientNetB5, to enhance their ability to focus on salient regions and improve discriminative performance. Specifically, each baseline model is augmented with either a Squeeze-and-Excitation (SE) block or a hybrid Convolutional Block Attention Module (CBAM), allowing adaptive recalibration of channel and spatial feature representations. The proposed models are evaluated on two distinct medical imaging datasets: a brain tumor MRI dataset comprising multiple tumor subtypes, and a Products of Conception histopathological dataset containing four tissue categories. Experimental results demonstrate that attention-augmented CNNs consistently outperform baseline architectures across all metrics. In particular, EfficientNetB5 with hybrid attention achieves the highest overall performance, delivering substantial gains on both datasets. Beyond improved classification accuracy, attention mechanisms enhance feature localization, leading to better generalization across heterogeneous imaging modalities. This work contributes a systematic comparative framework for embedding attention modules in diverse CNN architectures and rigorously assesses their impact across multiple medical imaging tasks. The findings provide practical insights for the development of robust, interpretable, and clinically applicable deep learning-based decision support systems.
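For concreteness, a minimal Squeeze-and-Excitation block of the kind the study inserts into its backbones is sketched below; the channel count and reduction ratio are illustrative defaults, not the paper's settings:

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: globally pool each channel, then learn
    per-channel gates that recalibrate the feature map."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = x.mean(dim=(2, 3))            # squeeze: global average pool -> (b, c)
        w = self.fc(w).view(b, c, 1, 1)   # excitation: per-channel gates in (0, 1)
        return x * w                      # recalibrate channel responses

feats = torch.randn(4, 64, 56, 56)        # toy feature map from a CNN stage
out = SEBlock(64)(feats)
```

CBAM extends this pattern with an additional spatial-attention map applied after the channel gates.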

A Preliminary Study on an Intelligent Segmentation and Classification Model for Amygdala-Hippocampus MRI Images in Alzheimer's Disease.

Liu S, Zhou K, Geng D

PubMed · Sep 2, 2025
This study developed a deep learning model for segmenting and classifying the amygdala-hippocampus in Alzheimer's disease (AD), using a large-scale neuroimaging dataset to improve early AD detection and intervention. We collected 1000 healthy controls (HC) and 1000 AD patients as internal training data from 15 Chinese medical centers. The independent external validation dataset was sourced from another three centers. All subjects underwent neuroimaging and neuropsychological assessments. A semi-automated annotation pipeline was used: the amygdala-hippocampus regions of 200 cases in each group were manually annotated to train the U²-Net segmentation model, followed by model annotation of the remaining 800 cases with iterative refinement. A DenseNet-121 architecture was adopted for automated classification. The robustness of the model was evaluated using an external validation set. All 18 medical centers were distributed across diverse geographical regions in China. AD patients had lower MMSE/MoCA scores, and amygdala and hippocampal volumes were smaller in AD. Semi-automated annotation improved segmentation, with Dice similarity coefficients (DSC) all exceeding 0.88 (P < 0.001). The final DSC of the 2000-case cohort was 0.914 in the training set and 0.896 in the testing set. The classification model achieved an AUC of 0.905. The external validation set comprised 100 cases per group, on which the model achieved an AUC of 0.835. The deep learning-based semi-automated annotation approach and classification model may improve amygdala-hippocampus recognition precision, supporting AD evaluation, diagnosis, and clinical AI applications.
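The DSC values reported above are the standard Dice overlap between predicted and reference masks; a minimal sketch:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0                        # both masks empty: perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy 3D masks standing in for amygdala-hippocampus segmentations
a = np.zeros((32, 32, 32), dtype=np.uint8); a[8:20, 8:20, 8:20] = 1
b = np.zeros_like(a);                        b[10:22, 8:20, 8:20] = 1
print(round(dice_coefficient(a, b), 3))
```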

An Artificial Intelligence System for Staging the Spheno-Occipital Synchondrosis.

Milani OH, Mills L, Nikho A, Tliba M, Ayyildiz H, Allareddy V, Ansari R, Cetin AE, Elnagar MH

PubMed · Sep 2, 2025
The aim of this study was to develop, test, and validate automated interpretable deep learning algorithms for the assessment and classification of spheno-occipital synchondrosis (SOS) fusion stages from cone beam computed tomography (CBCT). The sample consisted of 723 CBCT scans of orthodontic patients from private practices in the midwestern United States. The SOS fusion stages were classified by two orthodontists and an oral and maxillofacial radiologist. The deep learning models employed consisted of ResNet, EfficientNet, and ConvNeXt. Additionally, a new attention-based model, ConvNeXt + Conv Attention, was developed to enhance classification accuracy by integrating attention mechanisms for capturing subtle medical imaging features. Lastly, YOLOv11 was integrated for fully automated region detection and segmentation. ConvNeXt + Conv Attention outperformed the other models, achieving an 88.94% accuracy with manual cropping and 82.49% accuracy in a fully automated workflow. This study introduces a novel artificial intelligence-based pipeline that reliably automates the classification of SOS fusion stages using advanced deep learning models, with the highest accuracy achieved by ConvNeXt + Conv Attention. These models enhance the efficiency, scalability and consistency of SOS staging while minimising manual intervention from the clinician, underscoring the potential for AI-driven solutions in orthodontics and clinical workflows.
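A rough sketch of the two-stage workflow described above follows: a detector proposes the SOS region, and a classifier stages the crop. The bounding box, slice shape, and five-stage head are assumptions for illustration, and a plain torchvision ConvNeXt stands in for the paper's ConvNeXt + Conv Attention model:

```python
import torch
import torch.nn.functional as F
from torchvision.models import convnext_tiny

def crop_region(image, box):
    """Crop a detected bounding box (x1, y1, x2, y2) from a 2D slice."""
    x1, y1, x2, y2 = box
    return image[..., y1:y2, x1:x2]

classifier = convnext_tiny(num_classes=5)  # one logit per assumed fusion stage

slice_img = torch.randn(1, 3, 512, 512)    # toy CBCT-derived slice
box = (180, 200, 360, 380)                 # stand-in for a YOLO-style detection
roi = F.interpolate(crop_region(slice_img, box), size=(224, 224))
predicted_stage = classifier(roi).argmax(dim=1)
```

The reported accuracy gap (88.94% manual vs. 82.49% automated) suggests the detection stage, not the classifier, is the main source of error in the end-to-end pipeline.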

3D MR Neurography of Craniocervical Nerves: Comparing Double-Echo Steady-State and Postcontrast STIR with Deep Learning-Based Reconstruction at 1.5T.

Ensle F, Zecca F, Kerber B, Lohezic M, Wen Y, Kroschke J, Pawlus K, Guggenberger R

PubMed · Sep 2, 2025
3D MR neurography is a useful diagnostic tool in head and neck disorders, but neurographic imaging remains challenging in this region. Optimal sequences for nerve visualization have not yet been established and may also differ between nerves. While deep learning (DL) reconstruction can enhance nerve depiction, particularly at 1.5T, studies in the head and neck are lacking. The purpose of this study was to compare double-echo steady-state (DESS) and postcontrast STIR sequences in DL-reconstructed 3D MR neurography of the extraforaminal cranial and spinal nerves at 1.5T. Eighteen consecutive examinations of 18 patients undergoing head and neck MRI at 1.5T were retrospectively included (mean age: 51 ± 14 years, 11 women). 3D DESS and postcontrast 3D STIR sequences were obtained as part of the standard protocol and reconstructed with a prototype DL algorithm. Two blinded readers qualitatively evaluated visualization of the inferior alveolar, lingual, facial, hypoglossal, greater occipital, lesser occipital, and greater auricular nerves, as well as overall image quality, vascular suppression, and artifacts. Additionally, apparent SNR and contrast-to-noise ratios of the inferior alveolar and greater occipital nerves were measured. Visual ratings and quantitative measurements were compared between sequences using the Wilcoxon signed-rank test. DESS demonstrated significantly improved visualization of the lesser occipital nerve, greater auricular nerve, and proximal greater occipital nerve (P < .015). Postcontrast STIR showed significantly enhanced visualization of the lingual nerve, hypoglossal nerve, and distal inferior alveolar nerve (P < .001). The facial nerve, proximal inferior alveolar nerve, and distal greater occipital nerve did not demonstrate significant differences in visualization between sequences (P > .08). There was also no significant difference in overall image quality and artifacts. Postcontrast STIR achieved superior vascular suppression, reaching statistical significance for one reader (P = .039). Quantitatively, there was no significant difference between sequences (P > .05). Our findings suggest that 3D DESS generally provides improved visualization of spinal nerves, while postcontrast 3D STIR facilitates enhanced delineation of extraforaminal cranial nerves.
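The per-nerve comparisons above rely on the Wilcoxon signed-rank test on paired reader scores; a minimal sketch with invented Likert-style ratings (the values below are toy data, not the study's):

```python
from scipy.stats import wilcoxon

# Paired per-patient visualization scores for one nerve under the two
# sequences (toy 1-4 ratings; illustrative only).
dess_scores = [4, 3, 4, 4, 3, 4, 2, 4, 3, 4, 4, 3, 4, 3, 4, 4, 3, 4]
stir_scores = [3, 3, 3, 4, 2, 3, 2, 4, 3, 3, 4, 2, 3, 3, 3, 4, 2, 3]

stat, p = wilcoxon(dess_scores, stir_scores)
print(f"W = {stat}, p = {p:.3f}")
```

The signed-rank test suits this design because ratings are ordinal and paired within patients, so a parametric t-test would be inappropriate.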

Application and assessment of deep learning to routine 2D T2 FLEX spine imaging at 1.5T.

Shaikh IS, Milshteyn E, Chulsky S, Maclellan CJ, Soman S

PubMed · Sep 2, 2025
2D T2 FSE is an essential routine spine MRI sequence, allowing assessment of fractures, soft tissues, and pathology. Fat suppression using a DIXON-type approach (2D FLEX) improves water/fat separation. Recently, a deep learning (DL) reconstruction (AIR™ Recon DL, GE HealthCare) became available for 2D FLEX, offering increased signal-to-noise ratio (SNR), reduced artifacts, and sharper images. This study aimed to compare DL-reconstructed versus non-DL-reconstructed spine 2D T2 FLEX images for diagnostic image quality and quantitative metrics at 1.5T. Forty-one patients with clinically indicated cervical or lumbar spine MRI were scanned between May and August 2023 on a 1.5T Voyager (GE HealthCare). A 2D T2 FLEX sequence was acquired, and DL-based reconstruction (noise reduction strength: 75%) was applied. Raw data were also reconstructed without DL. Three readers (CAQ-neuroradiologist, PGY-6 neuroradiology fellow, PGY-2 radiology resident) rated diagnostic preference (0 = non-DL, 1 = DL, 2 = equivalent) for 39 cases. Quantitative measures (SNR, total variation [TV], number of edges, and fat fraction [FF]) were compared using paired t-tests with significance set at p < .05. Among evaluations, 79.5% preferred DL, 11% found images equivalent, and 9.4% favored non-DL, with strong inter-rater agreement (p < .001, Fleiss' Kappa = 0.99). DL images had higher SNR, lower TV, and fewer edges (p < .001), indicating effective noise reduction. FF remained statistically unchanged in subcutaneous fat (p = .25) but differed slightly in vertebral bodies (1.4% difference, p = .01). DL reconstruction notably improved image quality by enhancing SNR and reducing noise without clinically meaningful changes in fat quantification. These findings support the use of DL-enhanced 2D T2 FLEX in routine spine imaging at 1.5T. Incorporating DL-based reconstruction into standard spine MRI protocols can increase diagnostic confidence and workflow efficiency. Further studies with larger cohorts and diverse pathologies are warranted to refine this approach and explore potential benefits for clinical decision-making.
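Two of the quantitative metrics above, total variation and apparent SNR, can be computed as in the sketch below; the ROI selection and scaling are simplified assumptions rather than the study's measurement protocol:

```python
import numpy as np

def total_variation(img):
    """Anisotropic total variation: sum of absolute neighbor differences.
    Lower TV after DL reconstruction is consistent with noise removal."""
    return np.abs(np.diff(img, axis=0)).sum() + np.abs(np.diff(img, axis=1)).sum()

def apparent_snr(signal_roi, noise_roi):
    """Apparent SNR: mean signal in a tissue ROI over the standard
    deviation in a background/noise ROI."""
    return signal_roi.mean() / noise_roi.std()

rng = np.random.default_rng(0)
img = rng.normal(100, 10, size=(256, 256))        # toy image slice
print(total_variation(img),
      apparent_snr(img[50:80, 50:80], img[:20, :20]))
```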

Predictive modeling of hematoma expansion from non-contrast computed tomography in spontaneous intracerebral hemorrhage patients

Ironside, N., El Naamani, K., Rizvi, T., Shifat-E-Rabbi, M., Kundu, S., Becceril-Gaitan, A., Pas, K., Snyder, H., Chen, C.-J., Langefeld, C., Woo, D., Mayer, S. A., Connolly, E. S., Rohde, G. K., VISTA-ICH and ERICH investigators

medRxiv preprint · Sep 2, 2025
Hematoma expansion is a consistent predictor of poor neurological outcome and mortality after spontaneous intracerebral hemorrhage (ICH). An incomplete understanding of its biophysiology has limited early preventative intervention. Transport-based morphometry (TBM) is a mathematical modeling technique that uses a physically meaningful metric to quantify and visualize discriminating image features that are not readily perceptible to the human eye. We hypothesized that TBM could discover relationships between hematoma morphology on initial non-contrast computed tomography (NCCT) and hematoma expansion. 170 spontaneous ICH patients enrolled in the multi-center international Virtual International Stroke Trials Archive (VISTA-ICH) with time-series NCCT data were used for model derivation. Its performance was assessed on a test dataset of 170 patients from the Ethnic/Racial Variations of Intracerebral Hemorrhage (ERICH) study. A unique transport-based representation was produced from each presentation NCCT hematoma image to identify morphological features of expansion. The principal hematoma features identified by TBM were larger size, density heterogeneity, shape irregularity, and peripheral density distribution. These were consistent with clinician-identified features of hematoma expansion, corroborating the hypothesis that morphological characteristics of the hematoma promote future growth. Incorporating these traits into a multivariable model comprising morphological, spatial, and clinical information achieved an AUROC of 0.71 for quantifying 24-hour hematoma expansion risk in the test dataset. This outperformed existing clinician protocols and alternative machine learning methods, suggesting that TBM detects features with greater precision than visual inspection alone. This pre-clinical study presents a quantitative and interpretable method for the discovery and visualization of NCCT biomarkers of hematoma expansion in ICH patients. Because TBM has a direct physical meaning, its modeling of NCCT hematoma features can inform hypotheses about hematoma expansion mechanisms. It has potential future application as a clinical risk stratification tool.
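TBM builds on the cumulative distribution transform; a 1D sketch of that transform is below as an illustration of the underlying idea only, since the study applies the transport framework to full 2D/3D NCCT hematoma images:

```python
import numpy as np

def cdt_1d(signal, grid):
    """1D cumulative distribution transform: normalize a nonnegative
    signal to a density, then return its inverse CDF, i.e., the optimal
    transport map from a uniform reference to the signal."""
    dx = grid[1] - grid[0]
    density = signal / (signal.sum() * dx)            # integrate to 1
    cdf = np.cumsum(density) * dx                     # approximate CDF
    probs = (np.arange(grid.size) + 0.5) / grid.size  # uniform reference levels
    return np.interp(probs, cdf, grid)                # inverse CDF = transport map

x = np.linspace(0, 1, 200)
hematoma_profile = np.exp(-((x - 0.6) ** 2) / 0.01)   # toy intensity profile
transform = cdt_1d(hematoma_profile, x)               # features for morphometry
```

Because the transform is a transport map, distances and directions in transform space correspond to physical rearrangements of image intensity, which is what makes the discriminating features visualizable and interpretable.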

Advanced Deep Learning Architecture for the Early and Accurate Detection of Autism Spectrum Disorder Using Neuroimaging

Ud Din, A., Fatima, N., Bibi, N.

medRxiv preprint · Sep 2, 2025
Autism Spectrum Disorder (ASD) is a neurological condition that affects the brain, leading to challenges in speech, communication, social interaction, repetitive behaviors, and motor skills. This research aims to develop a deep learning-based model for the accurate diagnosis and classification of autistic symptoms in children, thereby benefiting both patients and their families. Existing literature indicates that classification methods typically analyze region-based summaries of functional magnetic resonance imaging (fMRI); however, few studies have explored the diagnosis of ASD directly from brain imaging. The complexity and heterogeneity of biomedical data modeling for big-data analysis related to ASD remain unclear. In the present study, the Autism Brain Imaging Data Exchange 1 (ABIDE-1) dataset was utilized, comprising 1,112 participants, including 539 individuals with ASD and 573 controls from 17 different sites. The dataset, originally in NIfTI format, required conversion to a format readable by the pipeline. For ASD classification, the researchers proposed and implemented a VGG20 architecture. This deep learning VGG20 model was applied to neuroimages to distinguish ASD from non-ASD cases. Four evaluation metrics were employed: recall, precision, F1-score, and accuracy. Experimental results indicated that the proposed model achieved an accuracy of 61%. Prior to this work, machine learning algorithms had been applied to the ABIDE-1 dataset, but deep learning techniques had not been extensively utilized; this research applies them to facilitate the early diagnosis of ASD.
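The NIfTI-to-input conversion step mentioned above typically looks like the sketch below using nibabel; the filename and z-score normalization are illustrative assumptions:

```python
import nibabel as nib
import numpy as np
import torch

def nifti_to_tensor(path):
    """Load a NIfTI volume and z-score normalize it for a CNN input."""
    volume = nib.load(path).get_fdata(dtype=np.float32)
    volume = (volume - volume.mean()) / (volume.std() + 1e-6)
    return torch.from_numpy(volume).unsqueeze(0)  # add a channel dimension

# tensor = nifti_to_tensor("sub-0001_T1w.nii.gz")  # hypothetical filename
```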