
A Deep Learning Model to Detect Acute MCA Occlusion on High Resolution Non-Contrast Head CT.

Fussell DA, Lopez JL, Chang PD

PubMed | Aug 8, 2025
To assess the feasibility and accuracy of a deep learning (DL) model to identify acute middle cerebral artery (MCA) occlusion using high-resolution non-contrast CT (NCCT) imaging data. In this study, a total of 4,648 consecutive exams (July 2021 to December 2023) were retrospectively used for model training and validation, while an additional 1,011 consecutive exams (January 2024 to August 2024) were used for independent testing. Using high-resolution NCCT acquired at 1.0 mm slice thickness or less, MCA thrombus was labeled using same-day CTA as ground truth. A 3D DL model was trained for per-voxel thrombus segmentation, with the sum of positive voxels used to estimate the likelihood of acute MCA occlusion. For detection of MCA M1 segment acute occlusion, the model yielded an AUROC of 0.952 [0.904-1.00], accuracy of 93.6% [88.1-98.2], sensitivity of 90.9% [83.1-100], and specificity of 93.6% [88.0-98.3]. Inclusion of M2 segment occlusions reduced performance only slightly, yielding an AUROC of 0.884 [0.825-0.942], accuracy of 93.2% [85.1-97.2], sensitivity of 77.4% [69.3-92.2], and specificity of 93.6% [85.1-97.8]. A DL model can detect acute MCA occlusion from high-resolution NCCT with accuracy approaching that of CTA. Using this tool, a majority of candidate thrombectomy patients may be identified with NCCT alone, which could aid stroke triage in settings that lack CTA or are otherwise resource constrained. DL = deep learning.
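
A minimal sketch of the voxel-count aggregation described above, in PyTorch; the probability threshold, the use of a raw voxel count, and the names `net` and `VOXEL_COUNT_CUTOFF` are illustrative assumptions rather than the authors' implementation.

```python
import torch

def occlusion_score(prob_map: torch.Tensor, threshold: float = 0.5) -> torch.Tensor:
    """Sum suprathreshold voxels of a per-voxel thrombus probability map.

    prob_map: (D, H, W) sigmoid output of a 3D segmentation network.
    The threshold and the raw voxel count are assumptions for illustration;
    the paper only states that the sum of positive voxels estimates the
    likelihood of acute MCA occlusion.
    """
    positive = (prob_map > threshold).float()
    return positive.sum()

# Hypothetical usage with a trained 3D segmentation model `net`:
# with torch.no_grad():
#     prob_map = torch.sigmoid(net(ncct_volume))[0, 0]   # (D, H, W)
# score = occlusion_score(prob_map)
# suspected_occlusion = score > VOXEL_COUNT_CUTOFF       # cutoff tuned on validation data
```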

SamRobNODDI: q-space sampling-augmented continuous representation learning for robust and generalized NODDI.

Xiao T, Cheng J, Fan W, Dong E, Wang S

PubMed | Aug 8, 2025
Neurite Orientation Dispersion and Density Imaging (NODDI) microstructure estimation from diffusion magnetic resonance imaging (dMRI) is of great significance for the discovery and treatment of various neurological diseases. Current deep learning-based methods accelerate NODDI parameter estimation and improve its accuracy. However, most methods require the number and coordinates of gradient directions during testing and training to remain strictly consistent, significantly limiting the generalization and robustness of these models in NODDI parameter estimation. It is therefore imperative to develop methods that perform robustly under varying diffusion gradient directions. In this paper, we propose a q-space sampling augmentation-based continuous representation learning framework (SamRobNODDI) to achieve robust and generalized NODDI. Specifically, a continuous representation learning method based on q-space sampling augmentation is introduced to fully explore the information between different gradient directions in q-space. Furthermore, we design a sampling consistency loss to constrain the outputs of different sampling schemes, ensuring that the outputs remain as consistent as possible, thereby further enhancing performance and robustness to varying q-space sampling schemes. SamRobNODDI is also a flexible framework that can be applied to different backbone networks. SamRobNODDI was compared against seven state-of-the-art methods across 18 diverse q-space sampling schemes. Extensive experimental validations were conducted under both identical and diverse sampling schemes for training and testing, as well as across varying sampling rates, different loss functions, and multiple network backbones. Results demonstrate that the proposed SamRobNODDI has better performance, robustness, generalization, and flexibility in the face of varying q-space sampling schemes.
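
A minimal sketch of how a sampling consistency loss of this kind could be written in PyTorch; the network interface, the L2 form of both terms, and the single weighting factor are assumptions for illustration, not the exact SamRobNODDI formulation.

```python
import torch
import torch.nn.functional as F

def sampling_consistency_loss(model, dwi_scheme_a, dwi_scheme_b, target_noddi, weight=1.0):
    """Supervised loss plus a consistency term between two q-space samplings.

    dwi_scheme_a / dwi_scheme_b: the same subject's dMRI signal sampled under
    two different gradient schemes. target_noddi: reference NODDI maps.
    The L2 form of both terms and the single weight are illustrative choices.
    """
    pred_a = model(dwi_scheme_a)
    pred_b = model(dwi_scheme_b)
    supervised = F.mse_loss(pred_a, target_noddi) + F.mse_loss(pred_b, target_noddi)
    consistency = F.mse_loss(pred_a, pred_b)  # outputs should agree across sampling schemes
    return supervised + weight * consistency
```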

Machine learning diagnostic model for amyotrophic lateral sclerosis analysis using MRI-derived features.

Gil Chong P, Mazon M, Cerdá-Alberich L, Beser Robles M, Carot JM, Vázquez-Costa JF, Martí-Bonmatí L

PubMed | Aug 8, 2025
Amyotrophic lateral sclerosis (ALS) is a devastating motor neuron disease characterized by its diagnostic difficulty. Currently, no reliable biomarkers exist for the diagnostic process. In this scenario, our purpose is to apply machine learning algorithms to MRI-derived imaging variables to develop diagnostic models that facilitate and shorten the process. A dataset of 211 patients (114 ALS, 45 mimic, 22 genetic carriers, and 30 control) with MRI-derived features of volumetry, cortical thickness, and local iron (via T2* mapping and visual assessment of susceptibility imaging) was used. A binary classification approach was taken to classify patients with and without ALS. A sequential modeling methodology, understood from an iterative improvement perspective, was followed, analyzing each group's performance separately to adequately improve modeling. Feature filtering techniques, dimensionality reduction techniques (PCA, kernel PCA), oversampling techniques (SMOTE, ADASYN), and classification techniques (logistic regression, LASSO, Ridge, ElasticNet, support vector classifier, K-neighbors, random forest) were included. Three subsets of the available data were used for each proposed architecture: a subset containing automatically retrieved MRI-derived data, a subset containing the variables from the visual analysis of the susceptibility imaging, and a subset containing all features. The best results were attained with all the available data through a voting classifier composed of five different classifiers: accuracy = 0.896, AUC = 0.929, sensitivity = 0.886, specificity = 0.929. These results confirm the potential of ML techniques applied to imaging variables of volumetry, cortical thickness, and local iron for developing diagnostic models as a clinical tool for decision-making support.
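
A hedged scikit-learn sketch of the kind of soft-voting ensemble described above; the five member estimators, their hyperparameters, and the SMOTE oversampling step are illustrative assumptions, since the abstract does not list the exact components of its voting classifier.

```python
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from imblearn.over_sampling import SMOTE

def build_voting_model():
    """Soft-voting ensemble of five classifiers; the member list is illustrative."""
    members = [
        ("lr", make_pipeline(StandardScaler(), LogisticRegression(max_iter=5000))),
        ("svc", make_pipeline(StandardScaler(), SVC(probability=True))),
        ("knn", make_pipeline(StandardScaler(), KNeighborsClassifier())),
        ("rf", RandomForestClassifier(n_estimators=300, random_state=0)),
        ("et", ExtraTreesClassifier(n_estimators=300, random_state=0)),
    ]
    return VotingClassifier(estimators=members, voting="soft")

# Hypothetical training: X holds MRI-derived features (volumetry, cortical thickness,
# T2*-based iron, visual susceptibility scores), y the binary ALS label.
# X_res, y_res = SMOTE(random_state=0).fit_resample(X_train, y_train)
# model = build_voting_model().fit(X_res, y_res)
# probs = model.predict_proba(X_test)[:, 1]
```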

Artificial intelligence in forensic neuropathology: A systematic review.

Treglia M, La Russa R, Napoletano G, Ghamlouch A, Del Duca F, Treves B, Frati P, Maiese A

PubMed | Aug 7, 2025
In recent years, artificial intelligence (AI) has gained prominence as a robust tool for clinical decision-making and diagnostics, owing to its capacity to process and analyze large datasets with high accuracy. More specifically, deep learning and its subclasses have shown significant potential in image processing, including medical imaging and histological analysis. In forensic pathology, AI has been employed for the interpretation of histopathological data, identifying conditions such as myocardial infarction, traumatic injuries, and heart rhythm abnormalities. This review aims to highlight key advances in AI's role, particularly machine learning (ML) and deep learning (DL) techniques, in forensic neuropathology, with a focus on its ability to interpret instrumental and histopathological data to support professional diagnostics. A systematic review of the literature on applications of AI in forensic neuropathology was carried out according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) standards. We selected 34 articles regarding the main applications of AI in this field, dividing them into two categories: those addressing traumatic brain injury (TBI), including intracranial hemorrhage or cerebral microbleeds, and those focusing on epilepsy and SUDEP, including brain disorders and central nervous system neoplasms capable of inducing seizure activity. In both cases, the application of AI techniques demonstrated promising results in the forensic investigation of cerebral pathology, providing a valuable computer-assisted diagnostic tool to aid in post-mortem computed tomography (PMCT) assessments of cause of death and in histopathological analyses. In conclusion, this paper presents a comprehensive overview of the key neuropathology areas where the application of artificial intelligence can be valuable in investigating causes of death.

Best Machine Learning Model for Predicting Axial Symptoms After Unilateral Laminoplasty: Based on C2 Spinous Process Muscle Radiomics Features and Sagittal Parameters.

Zheng B, Zhu Z, Liang Y, Liu H

PubMed | Aug 7, 2025
Study Design: Retrospective study. Objective: To develop a machine learning model for predicting axial symptoms (AS) after unilateral laminoplasty by integrating C2 spinous process muscle radiomics features and cervical sagittal parameters. Methods: In this retrospective study of 96 cervical myelopathy patients (30 with AS, 66 without) who underwent unilateral laminoplasty between 2018 and 2022, we extracted radiomics features from preoperative MRI of the C2 spinous muscles using PyRadiomics. Clinical data including the C2-C7 Cobb angle, cervical sagittal vertical axis (cSVA), T1 slope (T1S), and C2 muscle fat infiltration were collected for clinical model construction. After LASSO regression feature selection, we constructed six machine learning models (SVM, KNN, Random Forest, ExtraTrees, XGBoost, and LightGBM) and evaluated their performance using ROC curves and AUC. Results: The AS group demonstrated significantly lower preoperative C2-C7 Cobb angles (12.80° ± 7.49° vs 18.02° ± 8.59°, P = .006), higher cSVA (3.01 ± 0.87 cm vs 2.46 ± 1.19 cm, P = .026), higher T1S (26.68° ± 5.12° vs 23.66° ± 7.58°, P = .025), and higher C2 muscle fat infiltration (23.73 ± 7.78 vs 20.62 ± 6.93, P = .026). Key radiomics features included local binary pattern texture features and wavelet transform characteristics. The combined model integrating radiomics and clinical parameters achieved the best performance, with a test AUC of 0.881, sensitivity of 0.833, and specificity of 0.786. Conclusion: The machine learning model based on C2 spinous process muscle radiomics features and clinical parameters (C2-C7 Cobb angle, cSVA, T1S, and C2 muscle fat infiltration) effectively predicts AS occurrence after unilateral laminoplasty, providing clinicians with a valuable tool for preoperative risk assessment and personalized treatment planning.
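
A hedged sketch of the radiomics-plus-clinical pipeline outlined above, using PyRadiomics for feature extraction and LASSO-based selection in scikit-learn; the extractor settings, variable names, and the choice of an SVM as the downstream classifier are assumptions rather than the study's exact configuration.

```python
import pandas as pd
from radiomics import featureextractor            # PyRadiomics
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LassoCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

extractor = featureextractor.RadiomicsFeatureExtractor()

def radiomics_row(image_path: str, mask_path: str) -> dict:
    """Extract default PyRadiomics features for one C2 muscle ROI."""
    feats = extractor.execute(image_path, mask_path)
    return {k: float(v) for k, v in feats.items() if k.startswith("original_")}

# Hypothetical assembly of the combined feature table and model:
# rows = [radiomics_row(img, msk) for img, msk in cases]
# X = pd.DataFrame(rows).assign(cobb=cobb, csva=csva, t1s=t1s, fat_infiltration=fat)
# y = axial_symptom_labels
# model = make_pipeline(
#     StandardScaler(),
#     SelectFromModel(LassoCV(cv=5)),   # LASSO regression feature selection
#     SVC(probability=True),            # one of the six evaluated classifiers
# )
# model.fit(X, y)
```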

Clinical Decision Support for Alzheimer's: Challenges in Generalizable Data-Driven Approach.

Gao T, Madanian S, Templeton J, Merkin A

PubMed | Aug 7, 2025
This paper reviews current research on Alzheimer's disease and the use of deep learning, particularly 3D convolutional neural networks (3D-CNNs), in analyzing brain images. It presents a predictive model based on MRI and clinical data from the ADNI dataset, showing that deep learning can improve diagnostic accuracy and sensitivity. We also discuss potential applications in biomarker discovery, disease progression prediction, and personalised treatment planning, highlighting the ability to identify sensitive features for early diagnosis.
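
A minimal PyTorch sketch of a 3D-CNN classifier of the kind referenced above; the two-block depth, channel counts, and binary output are illustrative assumptions, since the abstract does not specify the architecture.

```python
import torch
import torch.nn as nn

class Simple3DCNN(nn.Module):
    """Minimal 3D-CNN for binary AD vs. control classification from an MRI volume.

    Channel counts and the two-block depth are illustrative assumptions.
    """
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.BatchNorm3d(16), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.BatchNorm3d(32), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(32, 2)

    def forward(self, x):                        # x: (B, 1, D, H, W)
        return self.classifier(self.features(x).flatten(1))
```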

Coarse-to-Fine Joint Registration of MR and Ultrasound Images via Imaging Style Transfer

Junyi Wang, Xi Zhu, Yikun Guo, Zixi Wang, Haichuan Gao, Le Zhang, Fan Zhang

arXiv preprint | Aug 7, 2025
We developed a pipeline for registering pre-surgery Magnetic Resonance (MR) images and post-resection Ultrasound (US) images. Our approach leverages unpaired style transfer using 3D CycleGAN to generate synthetic T1 images, thereby enhancing registration performance. Additionally, our registration process employs both affine and local deformable transformations for a coarse-to-fine registration. The results demonstrate that our approach improves the consistency between MR and US image pairs in most cases.
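
A hedged SimpleITK sketch of the coarse-to-fine (affine then local deformable) registration stage; the mutual-information metric, optimizer settings, and B-spline grid size are assumptions, and the 3D CycleGAN style-transfer step is assumed to have already produced the synthetic T1 passed in as `synthetic_t1`.

```python
import SimpleITK as sitk

def coarse_to_fine_register(fixed: sitk.Image, moving: sitk.Image):
    """Affine (coarse) then B-spline deformable (fine) registration."""
    fixed = sitk.Cast(fixed, sitk.sitkFloat32)
    moving = sitk.Cast(moving, sitk.sitkFloat32)

    # Coarse stage: affine registration driven by mutual information.
    affine = sitk.ImageRegistrationMethod()
    affine.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    affine.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
    affine.SetInterpolator(sitk.sitkLinear)
    affine.SetInitialTransform(
        sitk.CenteredTransformInitializer(fixed, moving, sitk.AffineTransform(3),
                                          sitk.CenteredTransformInitializerFilter.GEOMETRY))
    affine_tx = affine.Execute(fixed, moving)

    # Fine stage: local deformable refinement with a B-spline transform.
    moving_affine = sitk.Resample(moving, fixed, affine_tx, sitk.sitkLinear, 0.0)
    deform = sitk.ImageRegistrationMethod()
    deform.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
    deform.SetOptimizerAsLBFGSB(numberOfIterations=100)
    deform.SetInterpolator(sitk.sitkLinear)
    deform.SetInitialTransform(sitk.BSplineTransformInitializer(fixed, [8, 8, 8]))
    deform_tx = deform.Execute(fixed, moving_affine)
    return affine_tx, deform_tx

# Hypothetical usage: register the post-resection US volume against the synthetic T1
# generated by the 3D CycleGAN from the pre-surgery MR.
# affine_tx, deform_tx = coarse_to_fine_register(us_image, synthetic_t1)
```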

Novel radiotherapy target definition using AI-driven predictions of glioblastoma recurrence from metabolic and diffusion MRI.

Tran N, Luks TL, Li Y, Jakary A, Ellison J, Liu B, Adegbite O, Nair D, Kakhandiki P, Molinaro AM, Villanueva-Meyer JE, Butowski N, Clarke JL, Chang SM, Braunstein SE, Morin O, Lin H, Lupo JM

PubMed | Aug 7, 2025
The current standard-of-care (SOC) practice for defining the clinical target volume (CTV) for radiation therapy (RT) in patients with glioblastoma still employs an isotropic 1-2 cm expansion of the T2-hyperintensity lesion, without considering the heterogeneous infiltrative nature of these tumors. This study aims to improve RT CTV definition in patients with glioblastoma by incorporating biologically relevant metabolic and physiologic imaging acquired before RT along with a deep learning model that can predict regions of subsequent tumor progression by either the presence of contrast-enhancement or T2-hyperintensity. The results were compared against two standard CTV definitions. Our multi-parametric deep learning model significantly outperformed the uniform 2 cm expansion of the T2-lesion CTV in terms of specificity (0.89 ± 0.05 vs 0.79 ± 0.11; p = 0.004), while also achieving comparable sensitivity (0.92 ± 0.11 vs 0.95 ± 0.08; p = 0.10), sparing more normal brain. Model performance was significantly enhanced by incorporating lesion size-weighted loss functions during training and including metabolic images as inputs.
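
A minimal PyTorch sketch of one way a lesion size-weighted loss could be implemented on top of a Dice loss; the inverse-size weighting is an illustrative assumption, as the paper does not spell out its exact formulation.

```python
import torch

def size_weighted_dice_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6):
    """Dice loss with each case weighted by the inverse of its lesion size.

    pred:   (B, D, H, W) predicted probabilities of subsequent tumor progression.
    target: (B, D, H, W) binary progression masks.
    Weighting by inverse lesion size (so small lesions are not swamped by large
    ones) is an illustrative choice; the paper only states that size-weighted
    loss functions improved training.
    """
    dims = (1, 2, 3)
    intersection = (pred * target).sum(dims)
    dice = (2 * intersection + eps) / (pred.sum(dims) + target.sum(dims) + eps)
    lesion_size = target.sum(dims)
    weights = 1.0 / (lesion_size + 1.0)          # smaller lesions get larger weight
    weights = weights / weights.sum()
    return ((1.0 - dice) * weights).sum()
```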

Few-Shot Deployment of Pretrained MRI Transformers in Brain Imaging Tasks

Mengyu Li, Guoyao Shen, Chad W. Farris, Xin Zhang

arXiv preprint | Aug 7, 2025
Machine learning using transformers has shown great potential in medical imaging, but its real-world applicability remains limited due to the scarcity of annotated data. In this study, we propose a practical framework for the few-shot deployment of pretrained MRI transformers in diverse brain imaging tasks. By utilizing the Masked Autoencoder (MAE) pretraining strategy on a large-scale, multi-cohort brain MRI dataset comprising over 31 million slices, we obtain highly transferable latent representations that generalize well across tasks and datasets. For high-level tasks such as classification, a frozen MAE encoder combined with a lightweight linear head achieves state-of-the-art accuracy in MRI sequence identification with minimal supervision. For low-level tasks such as segmentation, we propose MAE-FUnet, a hybrid architecture that fuses multiscale CNN features with pretrained MAE embeddings. This model consistently outperforms other strong baselines in both skull stripping and multi-class anatomical segmentation under data-limited conditions. With extensive quantitative and qualitative evaluations, our framework demonstrates efficiency, stability, and scalability, suggesting its suitability for low-resource clinical environments and broader neuroimaging applications.
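
A minimal PyTorch sketch of the frozen-encoder linear-probe setup used for the high-level classification tasks; the encoder interface, mean-pooling step, and optimizer choice are assumptions for illustration, not the paper's released code.

```python
import torch
import torch.nn as nn

class FrozenMAEClassifier(nn.Module):
    """Frozen pretrained MAE encoder with a lightweight linear head.

    `mae_encoder` is any pretrained encoder returning (B, N_tokens, D) token
    embeddings; its interface, the mean pooling, and the embedding size are
    assumptions for illustration.
    """
    def __init__(self, mae_encoder: nn.Module, embed_dim: int, num_classes: int):
        super().__init__()
        self.encoder = mae_encoder
        for p in self.encoder.parameters():      # freeze the pretrained weights
            p.requires_grad = False
        self.head = nn.Linear(embed_dim, num_classes)

    def forward(self, x):
        with torch.no_grad():
            tokens = self.encoder(x)             # (B, N, D)
        return self.head(tokens.mean(dim=1))     # pool tokens, then linear probe

# Few-shot usage: only the linear head is optimized.
# optimizer = torch.optim.AdamW(model.head.parameters(), lr=1e-3)
```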

Multimodal Deep Learning Approaches for Early Detection of Alzheimer's Disease: A Comprehensive Systematic Review of Image Processing Techniques.

Amine JM, Mourad M

PubMed | Aug 7, 2025
Alzheimer's disease (AD) is the most common form of dementia, and it is important to diagnose the disease at an early stage to help people with the condition and their families. Recently, artificial intelligence, especially deep learning approaches applied to medical imaging, has shown potential in enhancing AD diagnosis. This comprehensive review investigates the current state of the art in multimodal deep learning for the early diagnosis of Alzheimer's disease using image processing. The research underpinning this review spanned several months. Numerous deep learning architectures are examined, including CNNs, transfer learning methods, and combined models that use different imaging modalities, such as structural MRI, functional MRI, and amyloid PET. The latest work on explainable AI (XAI) is also reviewed to improve the understandability of the models and identify the particular regions of the brain related to AD pathology. The results indicate that multimodal approaches generally outperform single-modality methods, and three-dimensional (volumetric) data provides a better form of representation compared to two-dimensional images. Current challenges are also discussed, including insufficient and/or poorly prepared datasets, computational expense, and the lack of integration with clinical practice. The findings highlight the potential of applying deep learning approaches for early AD diagnosis and for directing future research pathways. The integration of multimodal imaging with deep learning techniques presents an exciting direction for developing improved AD diagnostic tools. However, significant challenges remain in achieving accurate, reliable, and understandable clinical applications.