
Decision Strategies in AI-Based Ensemble Models in Opportunistic Alzheimer's Detection from Structural MRI.

Hammonds SK, Eftestøl T, Kurz KD, Fernandez-Quilez A

PubMed · Sep 17 2025
Alzheimer's disease (AD) is a neurodegenerative condition and the most common form of dementia. Recent developments in AD treatment call for robust diagnostic tools to facilitate medical decision-making. Despite progress in early diagnostic tests, uncertainty remains about their clinical use. Structural magnetic resonance imaging (MRI), as a readily available imaging tool in the current AD diagnostic pathway, offers, in combination with artificial intelligence, opportunities for added value beyond symptomatic evaluation. However, MRI studies in AD tend to suffer from small datasets and consequently limited generalizability. Although ensemble models combine the strengths of several models to improve performance and generalizability, little is known about how different ensemble models compare in performance, or about the relationship between detection performance and model calibration. The latter is especially relevant for clinical translatability. In our study, we applied three ensemble decision strategies with three different deep learning architectures for multi-class AD detection with structural MRI. For two of the three architectures, the weighted average was the best decision strategy in terms of balanced accuracy and calibration error. In contrast to the base models, the results of the ensemble models showed that the best detection performance corresponded to the lowest calibration error, independent of the architecture. For each architecture, the best ensemble model reduced the estimated calibration error relative to the base-model average from (1) 0.174±0.01 to 0.164±0.04, (2) 0.182±0.02 to 0.141±0.04, and (3) 0.269±0.08 to 0.240±0.04, and increased the balanced accuracy from (1) 0.527±0.05 to 0.608±0.06, (2) 0.417±0.03 to 0.456±0.04, and (3) 0.348±0.02 to 0.371±0.03.
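
The decision strategies compared here are standard ensemble combiners. As a rough illustration of how majority voting, a simple average, and a weighted average of base-model probabilities can be formed, and how the expected calibration error (ECE) is typically estimated, consider the following NumPy sketch; the weighting scheme and binning are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of three ensemble decision strategies plus ECE (illustrative only).
import numpy as np

def majority_vote(probs):
    # probs: (n_models, n_samples, n_classes) softmax outputs
    votes = probs.argmax(axis=2)                      # (n_models, n_samples)
    n_classes = probs.shape[2]
    counts = np.apply_along_axis(
        lambda v: np.bincount(v, minlength=n_classes), 0, votes)
    return counts.argmax(axis=0)                      # per-sample class

def simple_average(probs):
    return probs.mean(axis=0)                         # (n_samples, n_classes)

def weighted_average(probs, weights):
    # weights: one non-negative scalar per base model (e.g., validation accuracy)
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return np.tensordot(w, probs, axes=1)             # (n_samples, n_classes)

def expected_calibration_error(probs, labels, n_bins=10):
    # Gap between confidence and accuracy, averaged over confidence bins.
    conf = probs.max(axis=1)
    correct = (probs.argmax(axis=1) == labels).astype(float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (conf > lo) & (conf <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - conf[mask].mean())
    return ece
```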

Multi-Atlas Brain Network Classification through Consistency Distillation and Complementary Information Fusion.

Xu J, Lan M, Dong X, He K, Zhang W, Bian Q, Ke Y

PubMed · Sep 16 2025
Brain network analysis plays a crucial role in identifying distinctive patterns associated with neurological disorders. Functional magnetic resonance imaging (fMRI) enables the construction of brain networks by analyzing correlations in blood-oxygen-level-dependent (BOLD) signals across different brain regions, known as regions of interest (ROIs). These networks are typically constructed using atlases that parcellate the brain based on various hypotheses of functional and anatomical divisions. However, there is no standard atlas for brain network classification, leading to limitations in detecting abnormalities in disorders. Recent methods leveraging multiple atlases fail to ensure consistency across atlases and lack effective ROI-level information exchange, limiting their efficacy. To address these challenges, we propose the Atlas-Integrated Distillation and Fusion network (AIDFusion), a novel framework designed to enhance brain network classification using fMRI data. AIDFusion introduces a disentangle Transformer to filter out inconsistent atlas-specific information and distill meaningful cross-atlas connections. Additionally, it enforces subject- and population-level consistency constraints to improve cross-atlas coherence. To further enhance feature integration, AIDFusion incorporates an inter-atlas message-passing mechanism that facilitates the fusion of complementary information across brain regions. We evaluate AIDFusion on four resting-state fMRI datasets encompassing different neurological disorders. Experimental results demonstrate its superior classification performance and computational efficiency compared to state-of-the-art methods. Furthermore, a case study highlights AIDFusion's ability to extract interpretable patterns that align with established neuroscience findings, reinforcing its potential as a robust tool for multi-atlas brain network analysis. The code is publicly available at https://github.com/AngusMonroe/AIDFusion.
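
For readers unfamiliar with consistency constraints of this kind, the sketch below shows one plausible form of subject- and population-level consistency losses between two atlas-specific embeddings. It is a hedged illustration of the general idea only; AIDFusion's actual objective is defined in the repository linked above.

```python
# Hypothetical cross-atlas consistency terms (PyTorch); not AIDFusion's exact losses.
import torch
import torch.nn.functional as F

def subject_consistency_loss(emb_a, emb_b):
    """emb_a, emb_b: (batch, dim) embeddings of the same subjects from two atlases.
    Pulls the two views of each subject together via cosine distance."""
    return (1.0 - F.cosine_similarity(emb_a, emb_b, dim=1)).mean()

def population_consistency_loss(emb_a, emb_b):
    """Aligns the pairwise-similarity structure of the batch across atlases."""
    sim_a = F.normalize(emb_a, dim=1) @ F.normalize(emb_a, dim=1).T
    sim_b = F.normalize(emb_b, dim=1) @ F.normalize(emb_b, dim=1).T
    return F.mse_loss(sim_a, sim_b)

# Usage with random stand-ins for two atlas encoders' outputs:
emb_a, emb_b = torch.randn(8, 128), torch.randn(8, 128)
loss = subject_consistency_loss(emb_a, emb_b) + population_consistency_loss(emb_a, emb_b)
```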

CT-based deep learning platform combined with clinical parameters for predicting different discharge outcomes in spontaneous intracerebral hemorrhage.

Wu TC, Chan MH, Lin KH, Liu CF, Chen JH, Chang RF

PubMed · Sep 16 2025
This study aims to enhance the prognostic prediction of spontaneous intracerebral hemorrhage (sICH) by comparing the accuracy of three models: a CT-based deep learning model, a clinical variable-based machine learning model, and a hybrid model that integrates both approaches. The goal is to evaluate their performance across different outcome thresholds: poor outcome (mRS 3-6), loss of independence (mRS 4-6), and severe disability or death (mRS 5-6). A retrospective analysis was conducted on 1,853 sICH patients from a stroke center database (2008-2021). Patients were divided into two datasets: Dataset A (958 patients) for training/testing the clinical and hybrid models, and Dataset B (895 patients) for training the deep learning model. The imaging model used a 3D ResNet-50 architecture with attention modules, while the clinical model incorporated 19 clinical variables. The hybrid model combined the clinical data with the prediction probability from the imaging model. Performance metrics were compared using the DeLong test. The hybrid model consistently outperformed the other models across all outcome thresholds. For predicting severe disability and death, loss of independence, and poor outcome, the hybrid model achieved accuracies of 82.6%, 79.5%, and 80.6%, with AUC values of 0.897, 0.871, and 0.873, respectively. GCS scores and the imaging model's prediction probability were the most significant predictors. The hybrid model, combining CT-based deep learning with clinical variables, offers superior prognostic prediction for sICH outcomes. This integrated approach shows promise for improving clinical decision-making, though further validation in prospective studies is needed. Trial registration: not applicable, as this is a retrospective study rather than a clinical trial.
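
The hybrid design essentially treats the imaging model's output probability as one more tabular feature alongside the clinical variables. A minimal sketch of that idea, with synthetic data and a logistic-regression head standing in for the study's actual classifier:

```python
# Hybrid-model sketch: imaging probability appended to clinical features.
# Data and the logistic-regression head are assumptions, not the paper's pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 200
clinical = rng.normal(size=(n, 19))        # 19 clinical variables, as in the study
img_prob = rng.uniform(size=(n, 1))        # imaging model's prediction probability
y = (img_prob[:, 0] + 0.1 * clinical[:, 0] + rng.normal(0, 0.3, n) > 0.6).astype(int)

X_hybrid = np.hstack([clinical, img_prob])  # hybrid feature matrix
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_hybrid, y)
print(model.predict_proba(X_hybrid[:3]))    # per-patient outcome probabilities
```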

Prediction of cerebrospinal fluid intervention in fetal ventriculomegaly via AI-powered normative modelling.

Zhou M, Rajan SA, Nedelec P, Bayona JB, Glenn O, Gupta N, Gano D, George E, Rauschecker AM

PubMed · Sep 16 2025
Fetal ventriculomegaly (VM) is common and largely benign when isolated. However, it can occasionally progress to hydrocephalus, a more severe condition associated with increased mortality and neurodevelopmental delay that may require postnatal surgical intervention. Accurate differentiation between VM and hydrocephalus is essential but remains challenging, relying on subjective assessment and limited two-dimensional measurements. Deep learning-based segmentation offers a promising solution for objective and reproducible volumetric analysis. This work presents an AI-powered method for segmentation, volume quantification, and classification of the ventricles in fetal brain MRI to predict the need for postnatal intervention. This retrospective study included 222 patients with singleton pregnancies. An nnUNet was trained to segment the fetal ventricles on 20 manually segmented institutional fetal brain MRIs combined with 80 studies from a publicly available dataset. The validated model was then applied to 138 normal fetal brain MRIs to generate a normative reference range across gestational ages (18-36 weeks). Finally, it was applied to 64 fetal brains with VM (14 of which required postnatal intervention). ROC curves and AUC for predicting VM and the need for postnatal intervention were calculated. The nnUNet-predicted segmentations of the fetal ventricles in the reference dataset were accurate and of high quality (median Dice score 0.96, IQR 0.93-0.99). A normative reference range of ventricular volumes across gestational ages was developed using the automated segmentation volumes. The optimal threshold for identifying VM was 2 standard deviations from normal, with a sensitivity of 92% and specificity of 93% (AUC 0.97, 95% CI 0.91-0.98). When normalized to intracranial volume, fetal ventricular volume was higher and subarachnoid volume lower among those who required postnatal intervention (p<0.001, p=0.003). The optimal threshold for identifying the need for postnatal intervention was 11 standard deviations from normal, with a sensitivity of 86% and specificity of 100% (AUC 0.97, 95% CI 0.86-1.00). This work introduces a deep learning-based method for fast and accurate quantification of ventricular volumes in fetal brain MRI. A normative reference standard derived using this method can predict VM and the need for postnatal CSF intervention. Increased ventricular volume is a strong predictor of the need for postnatal intervention. VM = ventriculomegaly, 2D = two-dimensional, 3D = three-dimensional, ROC = receiver operating characteristic, AUC = area under the curve.
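
Conceptually, the normative-modelling step reduces to fitting ventricular volume against gestational age in normal fetuses and expressing new cases as z-scores relative to that trend. A toy sketch of this step; the quadratic trend and all numbers are synthetic assumptions, not the study's data:

```python
# Normative-modelling sketch: volume-vs-gestational-age reference and z-scores.
import numpy as np

# Synthetic normative cohort: gestational age (weeks) and ventricular volume (mL)
ga = np.random.default_rng(1).uniform(18, 36, 138)
vol = 0.02 * ga**2 + np.random.default_rng(2).normal(0, 0.8, 138)

coeffs = np.polyfit(ga, vol, deg=2)      # mean trend across gestational age
sigma = (vol - np.polyval(coeffs, ga)).std()  # residual SD = reference spread

def z_score(ga_new, vol_new):
    """How many SDs a new case's volume sits above the age-matched norm."""
    return (vol_new - np.polyval(coeffs, ga_new)) / sigma

# Flagging rule analogous to the paper's 2-SD ventriculomegaly threshold:
z = z_score(28.0, 25.0)
print("ventriculomegaly" if z > 2 else "normal", f"(z = {z:.1f})")
```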

Head-to-Head Comparison of Two AI Computer-Aided Triage Solutions for Detecting Intracranial Hemorrhage on Non-Contrast Head CT.

Garcia GM, Young P, Dawood L, Elshikh M

PubMed · Sep 16 2025
This study provides a comprehensive comparison of the performance and reproducibility of two commercially available artificial intelligence (AI) computer-aided triage and notification solutions, Vendor A (Aidoc) and Vendor B (Viz.ai), for the detection of intracranial hemorrhage (ICH) on non-contrast enhanced head CT (NCHCT) scans performed within a single academic institution. The retrospective analysis was conducted on a large patient cohort from multiple healthcare settings within a single academic institution, utilizing standardized scanning protocols. Sensitivity, specificity, false positive, and false negative rates were evaluated for both vendors. Outputs assessed included AI-generated case-level classification. Among 4,081 scans, 595 were positive for ICH. Vendor A demonstrated a sensitivity of 94.4%, specificity of 97.4%, PPV of 85.9%, and NPV of 99.1%. Vendor B showed a sensitivity of 59.5%, specificity of 99.0%, PPV of 90.0%, and NPV of 92.6%. Vendor A had 20 false negatives, which primarily involved subdural and intraparenchymal hemorrhages, and 97 false positives, which appeared to be related to motion artifact. Vendor B had 145 false negatives, largely comprising subdural and subarachnoid hemorrhages, and 36 false positives, which appeared to be related to motion artifact and calcified or dense lesions. Eighteen cases were false negatives and 11 cases were false positives for both AI solutions. The findings of this study provide valuable information for clinicians and healthcare institutions considering the implementation of AI software for computer-aided triage and notification in the detection of intracranial hemorrhage. The discussion encompasses the implications of the results, the importance of evaluating AI findings in context (especially in the absence of explainability tools), potential areas for improvement, and the relevance of standardized scanning protocols in ensuring the reliability of AI-based diagnostic tools in clinical practice. ICH = Intracranial Hemorrhage; NCHCT = Non-contrast Enhanced Head CT; AI = Artificial Intelligence; SDH = Subdural Hemorrhage; SAH = Subarachnoid Hemorrhage; IPH = Intraparenchymal Hemorrhage; IVH = Intraventricular Hemorrhage; PPV = Positive Predictive Value; NPV = Negative Predictive Value; CADt = Computer-Aided Triage; PACS = Picture Archiving and Communication System; FN = False Negative; FP = False Positive; CI = Confidence Interval.
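
All four reported metrics derive directly from each vendor's case-level confusion matrix. A small helper makes the definitions concrete; the counts in the example are hypothetical, not the study's data:

```python
# Case-level triage metrics from confusion-matrix counts (hypothetical example).
def triage_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),   # positives correctly flagged
        "specificity": tn / (tn + fp),   # negatives correctly passed
        "ppv": tp / (tp + fp),           # flagged cases that are true ICH
        "npv": tn / (tn + fn),           # passed cases that are truly negative
    }

print(triage_metrics(tp=560, fp=90, fn=35, tn=3396))  # hypothetical counts
```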

Automated brain extraction for canine magnetic resonance images.

Lesta GD, Deserno TM, Abani S, Janisch J, Hänsch A, Laue M, Winzer S, Dickinson PJ, De Decker S, Gutierrez-Quintana R, Subbotin A, Bocharova K, McLarty E, Lemke L, Wang-Leandro A, Spohn F, Volk HA, Nessler JN

PubMed · Sep 16 2025
Brain extraction is a common preprocessing step when working with intracranial medical imaging data. While several tools exist to automate the preprocessing of magnetic resonance imaging (MRI) of the human brain, none are available for canine MRIs. We present a pipeline mapping separate 2D scans to a 3D image, and a neural network for canine brain extraction. The training dataset consisted of T1-weighted and contrast-enhanced images from 68 dogs of different breeds, all cranial conformations (mesaticephalic, dolichocephalic, brachycephalic), with several pathological conditions, taken at three institutions. Testing was performed on a similarly diverse group of 10 dogs with images from a fourth institution. The model achieved excellent results in terms of Dice ([Formula: see text]) and Jaccard ([Formula: see text]) metrics and generalised well across different MRI scanners, the three aforementioned skull types, and variations in head size and breed. The pipeline was effective for a combination of one to three acquisition planes (i.e., transversal, dorsal, and sagittal). Aside from the T1-weighted training datasets, the model also performed well on other MRI sequences, with Jaccard indices and median Dice scores ranging from 0.86 to 0.89 and 0.92 to 0.94, respectively. Our approach was robust for automated brain extraction. Variations in canine anatomy and performance degradation in multi-scanner data can largely be mitigated through normalisation and augmentation techniques. Brain extraction, as a preprocessing step, can improve the accuracy of an algorithm for abnormality classification in MRI slices.
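
Dice and Jaccard are the standard overlap scores for evaluating extraction masks against manual ground truth. A minimal NumPy sketch of both (not the authors' evaluation code):

```python
# Overlap metrics for binary segmentation masks.
import numpy as np

def dice(a, b):
    # Dice = 2|A∩B| / (|A| + |B|)
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def jaccard(a, b):
    # Jaccard = |A∩B| / |A∪B|
    a, b = a.astype(bool), b.astype(bool)
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

# Toy 3D masks standing in for predicted and manual brain masks:
pred = np.zeros((64, 64, 64), dtype=bool); pred[10:50, 10:50, 10:50] = True
true = np.zeros((64, 64, 64), dtype=bool); true[12:52, 10:50, 10:50] = True
print(f"Dice={dice(pred, true):.3f}, Jaccard={jaccard(pred, true):.3f}")
```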

Data fusion of medical imaging in neurological disorders.

Mirzaei G, Gupta A, Adeli H

PubMed · Sep 16 2025
Medical imaging plays a crucial role in the accurate diagnosis and prognosis of various medical conditions, with each modality offering unique and complementary insights into the body's structure and function. However, no single imaging technique can capture the full spectrum of necessary information. Data fusion has emerged as a powerful tool to integrate information from different perspectives, including multiple modalities, views, temporal sequences, and spatial scales. By combining data, fusion techniques provide a more comprehensive understanding, significantly enhancing the precision and reliability of clinical analyses. This paper presents an overview of data fusion approaches, covering multi-view, multi-modal, and multi-scale strategies, across imaging modalities such as MRI, CT, PET, SPECT, EEG, and MEG, with a particular emphasis on applications in neurological disorders. Furthermore, we highlight the latest advancements in data fusion methods and key studies published since 2016, illustrating the progress and growing impact of this interdisciplinary field.
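
As a concrete anchor for the taxonomy the survey covers, the sketch below contrasts the two canonical strategies: feature-level (early) fusion by concatenation and decision-level (late) fusion by averaging per-modality scores. All shapes and data are illustrative assumptions.

```python
# Early vs. late fusion of two imaging modalities (toy example).
import numpy as np

feat_mri = np.random.default_rng(0).normal(size=(4, 64))  # per-subject MRI features
feat_pet = np.random.default_rng(1).normal(size=(4, 32))  # per-subject PET features

# Early fusion: one joint representation fed to a single downstream classifier.
early = np.hstack([feat_mri, feat_pet])                   # shape (4, 96)

# Late fusion: each modality is classified separately; decisions are combined.
p_mri = np.random.default_rng(2).dirichlet(np.ones(3), size=4)  # class probabilities
p_pet = np.random.default_rng(3).dirichlet(np.ones(3), size=4)
late = (p_mri + p_pet) / 2                                # averaged decisions
print(early.shape, late.argmax(axis=1))
```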

Multi-filter stacking in inception V3 for enhanced Alzheimer's severity classification.

Iqbal A, Iqbal K, Shah YA, Ullah F, Khan J, Yaqoob S

PubMed · Sep 16 2025
Alzheimer's disease, a progressive neurodegenerative disorder, is characterized by a decline in brain volume and neuronal loss, with early symptoms often presenting as short-term memory impairment. Automated classification of Alzheimer's disease remains a significant challenge due to inter-patient variability in brain morphology, aging effects, and overlapping anatomical features across different stages. While traditional machine learning techniques, such as Support Vector Machines (SVMs) and various Deep Neural Network (DNN) models, have been explored, the need for more accurate and efficient classification techniques persists. In this study, we propose a novel approach that integrates Multi-Filter Stacking with the Inception V3 architecture, referred to as CASFI (Classifying Alzheimer's Severity using Filter Integration). This method leverages diverse convolutional filter sizes to capture multiscale spatial features, enhancing the model's ability to detect subtle structural variations associated with different Alzheimer's disease stages. Applied to MRI data, CASFI achieved an accuracy of 97.27%, outperforming baseline deep learning models and traditional classifiers in both accuracy and robustness. This approach supports early diagnosis and informed clinical decision-making, providing a valuable tool to assist healthcare professionals in managing and planning treatment for Alzheimer's patients.
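
The core of multi-filter stacking is running parallel convolution branches with different kernel sizes over the same input and concatenating their outputs along the channel axis, in the spirit of Inception modules. A hedged PyTorch sketch of such a block; branch widths and kernel sizes are assumptions, and the paper's CASFI block may differ in detail:

```python
# Inception-style multi-filter block (illustrative, not the CASFI implementation).
import torch
import torch.nn as nn

class MultiFilterBlock(nn.Module):
    def __init__(self, in_ch, branch_ch=32):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, branch_ch, kernel_size=1)
        self.b3 = nn.Conv2d(in_ch, branch_ch, kernel_size=3, padding=1)
        self.b5 = nn.Conv2d(in_ch, branch_ch, kernel_size=5, padding=2)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # Each branch sees the same input at a different receptive field,
        # capturing multiscale structure; outputs are stacked channel-wise.
        return self.act(torch.cat([self.b1(x), self.b3(x), self.b5(x)], dim=1))

x = torch.randn(2, 3, 224, 224)
print(MultiFilterBlock(3)(x).shape)   # torch.Size([2, 96, 224, 224])
```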

MambaDiff: Mamba-Enhanced Diffusion Model for 3D Medical Image Segmentation.

Liu Y, Feng Y, Cheng J, Zhan H, Zhu Z

PubMed · Sep 15 2025
Accurate 3D medical image segmentation is crucial for diagnosis and treatment. Diffusion models demonstrate promising performance in medical image segmentation tasks due to the progressive nature of the generation process and the explicit modeling of data distributions. However, weak guidance from conditional information and insufficient feature extraction in diffusion models lead to the loss of fine-grained features and structural consistency in the segmentation results, thereby affecting the accuracy of medical image segmentation. To address this challenge, we propose a Mamba-enhanced diffusion model for 3D medical image segmentation. We extract multilevel semantic features from the original images using an encoder and tightly integrate them with the denoising process of the diffusion model through a Semantic Hierarchical Embedding (SHE) mechanism, to capture the intricate relationship between the noisy label and image data. Meanwhile, we design a Global-Slice Perception Mamba (GSPM) layer, which integrates multi-dimensional perception mechanisms to endow the model with comprehensive spatial reasoning and feature extraction capabilities. Experimental results show that our proposed MambaDiff achieves more competitive performance than prior methods with substantially fewer parameters on four public medical image segmentation datasets: BraTS 2021, BraTS 2024, LiTS, and MSD Hippocampus. The source code of our method is available at https://github.com/yuliu316316/MambaDiff.
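
The SHE idea amounts to conditioning each denoising step on encoder features of the image being segmented, so the reverse process is guided by anatomy rather than noise alone. The toy PyTorch sketch below uses simple additive conditioning as a stand-in for the paper's actual mechanism; every layer and shape here is an illustrative assumption:

```python
# Toy conditional denoiser: image features injected into the denoising path.
import torch
import torch.nn as nn

class ConditionedDenoiser(nn.Module):
    def __init__(self, ch=16):
        super().__init__()
        self.enc_img = nn.Conv3d(1, ch, 3, padding=1)   # image-feature encoder
        self.enc_lbl = nn.Conv3d(1, ch, 3, padding=1)   # noisy-label embedding
        self.denoise = nn.Sequential(
            nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv3d(ch, 1, 3, padding=1))

    def forward(self, noisy_label, image):
        cond = self.enc_img(image)                # semantic features of the image
        h = self.enc_lbl(noisy_label) + cond      # additive conditioning
        return self.denoise(h)                    # predicted denoised label map

x_t = torch.randn(1, 1, 32, 32, 32)   # noisy segmentation at some diffusion step
img = torch.randn(1, 1, 32, 32, 32)   # the 3D image being segmented
print(ConditionedDenoiser()(x_t, img).shape)      # torch.Size([1, 1, 32, 32, 32])
```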

Trade-Off Analysis of Classical Machine Learning and Deep Learning Models for Robust Brain Tumor Detection: Benchmark Study.

Tian Y

PubMed · Sep 15 2025
Medical image analysis plays a critical role in brain tumor detection, but training deep learning models often requires large, labeled datasets, which can be time-consuming and costly to produce. This study presents a comparative analysis of machine learning and deep learning models for brain tumor classification, focusing on whether deep learning models are necessary for small medical datasets and whether self-supervised learning can reduce annotation costs. The primary goal is to evaluate trade-offs between traditional machine learning and deep learning, including self-supervised models, on small medical image datasets. The secondary goal is to assess model robustness, transferability, and generalization through evaluation on unseen data within and across domains. Four models were compared: (1) a support vector machine (SVM) with histogram of oriented gradients (HOG) features, (2) a convolutional neural network based on ResNet18, (3) a transformer-based model using a vision transformer (ViT-B/16), and (4) a self-supervised learning approach using Simple Contrastive Learning of Visual Representations (SimCLR). These models were selected to represent diverse paradigms: SVM+HOG represents traditional feature engineering with low computational cost, ResNet18 serves as a well-established convolutional neural network with strong baseline performance, ViT-B/16 leverages self-attention to capture long-range spatial features, and SimCLR enables learning from unlabeled data, potentially reducing annotation costs. The primary dataset consisted of 2870 brain magnetic resonance images across 4 classes: glioma, meningioma, pituitary, and nontumor. All models were trained under consistent settings, including data augmentation, early stopping, and 3 independent runs using different random seeds to account for performance variability. Performance metrics included accuracy, precision, recall, F1-score, and convergence. To assess robustness and generalization capability, evaluation was performed on unseen test data from both the primary and cross-domain datasets. No retraining or test augmentations were applied to the external data, thereby reflecting realistic deployment conditions. The models demonstrated consistently strong performance in both within-domain and cross-domain evaluations. The results revealed distinct trade-offs: ResNet18 achieved the highest validation accuracy (mean 99.77%, SD 0.00%) and the lowest validation loss, along with a weighted test accuracy of 99% within-domain and 95% cross-domain. SimCLR reached a mean validation accuracy of 97.29% (SD 0.86%) and achieved up to 97% weighted test accuracy within-domain and 91% cross-domain, despite requiring a 2-stage training process of contrastive pretraining followed by linear evaluation. ViT-B/16 reached a mean validation accuracy of 97.36% (SD 0.11%), with a weighted test accuracy of 98% within-domain and 93% cross-domain. SVM+HOG maintained a competitive validation accuracy of 96.51%, with 97% within-domain test accuracy, though its accuracy dropped to 80% cross-domain. The study reveals meaningful trade-offs between model complexity, annotation requirements, and deployment feasibility, all critical factors for selecting models in real-world medical imaging applications.
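
Of the four models, the SVM+HOG baseline is simple enough to sketch end-to-end. The following uses scikit-image's HOG extractor and scikit-learn's SVC on synthetic arrays standing in for the MRI slices; the HOG parameters, image sizes, and data are all illustrative assumptions, not the study's configuration:

```python
# SVM+HOG baseline sketch on stand-in data (not the study's exact setup).
import numpy as np
from skimage.feature import hog
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
images = rng.random((40, 128, 128))      # stand-in grayscale MRI slices
labels = rng.integers(0, 4, 40)          # 4 classes, as in the study

# Hand-crafted features: one HOG descriptor per image.
feats = np.array([hog(im, orientations=9, pixels_per_cell=(16, 16),
                      cells_per_block=(2, 2)) for im in images])

X_tr, X_te, y_tr, y_te = train_test_split(feats, labels, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```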