FetalDenseNet: multi-scale deep learning for enhanced early detection of fetal anatomical planes in prenatal ultrasound.

Dey SK, Howlader A, Haider MS, Saha T, Setu DM, Islam T, Siddiqi UR, Rahman MM

PubMed · Sep 24, 2025
The study aims to improve the classification of fetal anatomical planes using deep learning (DL) methods to enhance the accuracy of fetal ultrasound interpretation. Five convolutional neural network (CNN) architectures (VGG16, ResNet50, InceptionV3, DenseNet169, and MobileNetV2) are evaluated on a large-scale, clinically validated dataset of 12,400 ultrasound images from 1,792 patients. Preprocessing steps, including scaling, normalization, label encoding, and augmentation, are applied, and the dataset is split into 80% for training and 20% for testing. Each model is fine-tuned and compared on classification accuracy. DenseNet169 achieved the highest classification accuracy, 92%, among all tested models. The study shows that CNN-based models, particularly DenseNet169, can substantially improve diagnostic accuracy in fetal ultrasound interpretation, reducing error rates and supporting clinical decision-making in prenatal care.
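
A minimal sketch of the kind of transfer-learning pipeline the abstract describes, assuming a PyTorch/torchvision setup; the folder name, image size, and hyperparameters are illustrative assumptions, not the authors' settings.

```python
# Hedged sketch (not the authors' code): fine-tuning a pretrained DenseNet169 for
# fetal-plane classification with an 80/20 split.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, models, transforms

tfms = transforms.Compose([
    transforms.Resize((224, 224)),                       # scaling
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406],
                         [0.229, 0.224, 0.225]),         # normalization
])

full = datasets.ImageFolder("fetal_planes/", transform=tfms)   # hypothetical folder of class subdirs
n_train = int(0.8 * len(full))
train_ds, test_ds = random_split(full, [n_train, len(full) - n_train])

model = models.densenet169(weights="IMAGENET1K_V1")
model.classifier = nn.Linear(model.classifier.in_features, len(full.classes))

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for x, y in DataLoader(train_ds, batch_size=32, shuffle=True):   # one epoch shown for brevity
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()
```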

A Contrastive Learning Framework for Breast Cancer Detection

Samia Saeed, Khuram Naveed

arXiv preprint · Sep 24, 2025
Breast cancer, the second leading cause of cancer-related deaths globally, accounts for a quarter of all cancer cases [1]. To lower this death rate, it is crucial to detect tumors early, as early-stage detection significantly improves treatment outcomes. Advances in non-invasive imaging have made early detection possible through computer-aided detection (CAD) systems that rely on traditional image analysis to identify malignancies. There is, however, a growing shift toward deep learning methods because of their superior effectiveness. Despite their potential, deep learning methods often struggle with accuracy due to the limited availability of large labeled training datasets. To address this issue, our study introduces a Contrastive Learning (CL) framework, which excels with smaller labeled datasets. We pre-train a ResNet-50 encoder in a semi-supervised CL setting, using a similarity objective on a large amount of unlabeled mammogram data together with a range of augmentations and transformations that improve performance. Finally, we fine-tune the model on a small labeled set and outperform the existing state of the art, observing 96.7% accuracy in breast cancer detection on the benchmark datasets INbreast and MIAS.
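
A hedged sketch of the recipe summarized above, assuming a SimCLR-style NT-Xent objective; the paper's exact similarity index and augmentations may differ.

```python
# Contrastive pre-training of a ResNet-50 encoder on unlabeled mammograms,
# followed by fine-tuning on a small labeled set (sketch under assumptions).
import torch
import torch.nn.functional as F
from torchvision import models

encoder = models.resnet50(weights=None)
encoder.fc = torch.nn.Linear(encoder.fc.in_features, 128)   # projection head

def nt_xent(z1, z2, tau=0.5):
    """Contrastive loss pulling the two augmented views of each image together."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    sim = z @ z.t() / tau
    sim.fill_diagonal_(float("-inf"))                        # ignore self-similarity
    n = z1.size(0)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# x1, x2 = two augmentations (crop/flip/intensity jitter) of the same unlabeled batch
# loss = nt_xent(encoder(x1), encoder(x2))
# After pre-training, swap the projection head for a 2-class classifier and
# fine-tune on the small labeled INbreast/MIAS subset.
```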

Development and clinical validation of a novel deep learning-based mediastinal endoscopic ultrasound navigation system for quality control: a single-center, randomized controlled trial.

Huang S, Chen X, Tian L, Chen X, Yang Y, Sun Y, Zhou Y, Qu W, Wang R, Wang X

PubMed · Sep 24, 2025
Endoscopic ultrasound (EUS) is crucial for diagnosing and managing mediastinal diseases but lacks effective quality control. This study developed and evaluated an artificial intelligence (AI) system to assist in anatomical landmark identification and scanning guidance, aiming to improve quality control of mediastinal EUS examinations in clinical practice. The AI system was trained on 11,230 annotated images from 120 patients and validated internally (1,972 images) and externally (824 images from three institutions). A single-center randomized controlled trial then evaluated its effect on quality control: patients requiring mediastinal EUS were enrolled and randomized 1:1 to an AI-assisted group or a control group. Four endoscopists performed EUS, with the AI group receiving real-time AI feedback. The primary outcome was standard station completeness; secondary outcomes included structure completeness, procedure time, and adverse events. Blinded analysis ensured objectivity. Between 16 September 2023 and 28 February 2025, 148 patients were randomly assigned and analyzed, 72 in the AI-assisted group and 76 in the control group. Overall station completeness was significantly higher in the AI-assisted group than in the control group (1.00 [IQR, 1.00-1.00] vs. 0.80 [IQR, 0.60-0.80]; p < 0.001), and the AI-assisted group also demonstrated significantly higher anatomical structure completeness (1.00 [IQR, 1.00-1.00] vs. 0.85 [IQR, 0.62-0.92]; p < 0.001). However, no significant differences were found for station 2 (subcarinal area) or average procedural time, and no adverse events were reported. The AI system significantly improved scan completeness and shows promise for enhancing EUS quality control.
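
As an illustration only (not the trial's software), the primary outcome can be understood as the fraction of predefined standard stations in which the expected landmark was recognized during a procedure; the station labels below are hypothetical.

```python
# Station completeness for one mediastinal EUS procedure (illustrative sketch).
STANDARD_STATIONS = ["station_1", "station_2_subcarinal", "station_3",
                     "station_4", "station_5"]          # hypothetical labels

def station_completeness(detected: set) -> float:
    """Share of standard stations covered in one procedure."""
    return sum(s in detected for s in STANDARD_STATIONS) / len(STANDARD_STATIONS)

print(station_completeness({"station_1", "station_2_subcarinal",
                            "station_4", "station_5"}))   # 0.8
```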

A Versatile Foundation Model for AI-enabled Mammogram Interpretation

Fuxiang Huang, Jiayi Zhu, Yunfang Yu, Yu Xie, Yuan Guo, Qingcong Kong, Mingxiang Wu, Xinrui Jiang, Shu Yang, Jiabo Ma, Ziyi Liu, Zhe Xu, Zhixuan Chen, Yujie Tan, Zifan He, Luhui Mao, Xi Wang, Junlin Hou, Lei Zhang, Qiong Luo, Zhenhui Li, Herui Yao, Hao Chen

arXiv preprint · Sep 24, 2025
Breast cancer is the most commonly diagnosed cancer and the leading cause of cancer-related mortality in women globally. Mammography is essential for the early detection and diagnosis of breast lesions. Despite recent progress in foundation models (FMs) for mammogram analysis, their clinical translation remains constrained by several fundamental limitations, including insufficient diversity in training data, limited model generalizability, and a lack of comprehensive evaluation across clinically relevant tasks. Here, we introduce VersaMammo, a versatile foundation model for mammograms designed to overcome these limitations. We curated the largest multi-institutional mammogram dataset to date, comprising 706,239 images from 21 sources. To improve generalization, we propose a two-stage pre-training strategy: first, a teacher model is trained via self-supervised learning to extract transferable features from unlabeled mammograms; then, supervised learning combined with knowledge distillation transfers both features and clinical knowledge into VersaMammo. To ensure a comprehensive evaluation, we established a benchmark comprising 92 specific tasks, including 68 internal tasks and 24 external validation tasks, spanning 5 major clinical task categories: lesion detection, segmentation, classification, image retrieval, and visual question answering. VersaMammo achieves state-of-the-art performance, ranking first in 50 of the 68 internal tasks and 20 of the 24 external validation tasks, with average ranks of 1.5 and 1.2, respectively. These results demonstrate its superior generalization and clinical utility, offering a substantial advancement toward reliable and scalable breast cancer screening and diagnosis.
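
A hedged sketch of a two-stage pipeline of this kind; the backbones, feature size, and distillation weighting are assumptions, not VersaMammo's actual design.

```python
# Stage 1: a self-supervised teacher provides target features.
# Stage 2: the student is trained with a supervised loss plus a distillation term.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models

teacher = models.resnet50(weights=None)   # assume already pre-trained self-supervised
teacher.fc = nn.Identity()
teacher.eval()

student = models.resnet50(weights=None)   # the foundation model being trained
student.fc = nn.Identity()
head = nn.Linear(2048, 2)                 # e.g., a benign-vs-malignant task head

def stage2_loss(x, y, alpha=0.5):
    """Supervised loss on clinical labels plus feature distillation from the teacher."""
    with torch.no_grad():
        t_feat = teacher(x)
    s_feat = student(x)
    return F.cross_entropy(head(s_feat), y) + alpha * F.mse_loss(s_feat, t_feat)
```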

Learning neuroimaging models from health system-scale data

Yiwei Lyu, Samir Harake, Asadur Chowdury, Soumyanil Banerjee, Rachel Gologorsky, Shixuan Liu, Anna-Katharina Meissner, Akshay Rao, Chenhui Zhao, Akhil Kondepudi, Cheng Jiang, Xinhai Hou, Rushikesh S. Joshi, Volker Neuschmelting, Ashok Srinivasan, Dawn Kleindorfer, Brian Athey, Vikas Gulani, Aditya Pandey, Honglak Lee, Todd Hollon

arXiv preprint · Sep 23, 2025
Neuroimaging is a ubiquitous tool for evaluating patients with neurological diseases. The global demand for magnetic resonance imaging (MRI) studies has risen steadily, placing significant strain on health systems, prolonging turnaround times, and intensifying physician burnout [Chen 2017; Rula 2024]. These challenges disproportionately impact patients in low-resource and rural settings. Here, we utilized a large academic health system as a data engine to develop Prima, the first vision language model (VLM) serving as an AI foundation for neuroimaging that supports real-world, clinical MRI studies as input. Trained on over 220,000 MRI studies, Prima uses a hierarchical vision architecture that provides general and transferable MRI features. Prima was tested in a 1-year health system-wide study that included 30K MRI studies. Across 52 radiologic diagnoses from the major neurologic disorders, including neoplastic, inflammatory, infectious, and developmental lesions, Prima achieved a mean diagnostic area under the ROC curve of 92.0, outperforming other state-of-the-art general and medical AI models. Prima offers explainable differential diagnoses, worklist priority for radiologists, and clinical referral recommendations across diverse patient demographics and MRI systems. Prima demonstrates algorithmic fairness across sensitive groups and can help mitigate health system biases, such as prolonged turnaround times for low-resource populations. These findings highlight the transformative potential of health system-scale VLMs and Prima's role in advancing AI-driven healthcare.
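
For reference, a mean per-diagnosis ROC AUC of the kind reported above can be computed over a multilabel test set as sketched below; the array shapes and scores are synthetic placeholders, not Prima outputs.

```python
# Mean per-diagnosis ROC AUC over a multilabel neuroimaging test set (synthetic data).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_studies, n_diagnoses = 1000, 52
y_true = rng.integers(0, 2, size=(n_studies, n_diagnoses))   # one column per diagnosis
y_score = rng.random((n_studies, n_diagnoses))               # model probabilities

per_dx = [roc_auc_score(y_true[:, d], y_score[:, d]) for d in range(n_diagnoses)]
print(f"mean AUC over {n_diagnoses} diagnoses: {np.mean(per_dx):.3f}")
```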

Deep Learning Modeling to Differentiate Multiple Sclerosis From MOG Antibody-Associated Disease.

Cortese R, Sforazzini F, Gentile G, de Mauro A, Luchetti L, Amato MP, Apóstolos-Pereira SL, Arrambide G, Bellenberg B, Bianchi A, Bisecco A, Bodini B, Calabrese M, Camera V, Celius EG, de Medeiros Rimkus C, Duan Y, Durand-Dubief F, Filippi M, Gallo A, Gasperini C, Granziera C, Groppa S, Grothe M, Gueye M, Inglese M, Jacob A, Lapucci C, Lazzarotto A, Liu Y, Llufriu S, Lukas C, Marignier R, Messina S, Müller J, Palace J, Pastó L, Paul F, Prados F, Pröbstel AK, Rovira À, Rocca MA, Ruggieri S, Sastre-Garriga J, Sato DK, Schneider R, Sepulveda M, Sowa P, Stankoff B, Tortorella C, Barkhof F, Ciccarelli O, Battaglini M, De Stefano N

PubMed · Sep 23, 2025
Multiple sclerosis (MS) is common in adults while myelin oligodendrocyte glycoprotein antibody-associated disease (MOGAD) is rare. Our previous machine-learning algorithm, based on clinical variables, ≤6 brain lesions, and the absence of Dawson fingers, achieved 79% accuracy, 78% sensitivity, and 80% specificity in distinguishing MOGAD from MS but lacked validation. The aims of this study were to (1) evaluate the clinical/MRI algorithm for distinguishing MS from MOGAD, (2) develop a deep learning (DL) model, (3) assess the benefit of combining both, and (4) identify key differentiators using probability attention maps (PAMs). This multicenter, retrospective, cross-sectional MAGNIMS study included scans from 19 centers. Inclusion criteria were as follows: adults with non-acute MS and MOGAD, with high-quality T2-fluid-attenuated inversion recovery and T1-weighted scans. Brain scans were scored by 2 readers to assess the performance of the clinical/MRI algorithm on the validation data set. A DL-based classifier using a ResNet-10 convolutional neural network was developed and tested on an independent validation data set. PAMs were generated by averaging correctly classified attention maps from both groups, identifying key differentiating regions. We included 406 MRI scans (218 with relapsing-remitting MS [RRMS], mean age 39 ± 11 years, 69% F; 188 with MOGAD, age 41 ± 14 years, 61% F), split into 2 data sets: a training/testing set (n = 265: 150 with RRMS, age 39 ± 10 years, 72% F; 115 with MOGAD, age 42 ± 13 years, 61% F) and an independent validation set (n = 141: 68 with RRMS, age 40 ± 14 years, 65% F; 73 with MOGAD, age 40 ± 15 years, 63% F). The clinical/MRI algorithm predicted RRMS over MOGAD with 75% accuracy (95% CI 67-82), 96% sensitivity (95% CI 88-99), and 56% specificity (95% CI 44-68) in the validation cohort. The DL model achieved 77% accuracy (95% CI 64-89), 73% sensitivity (95% CI 57-89), and 83% specificity (95% CI 65-96) in the training/testing cohort, and 70% accuracy (95% CI 63-77), 67% sensitivity (95% CI 55-79), and 73% specificity (95% CI 61-83) in the validation cohort without retraining. When combined, the classifiers reached 86% accuracy (95% CI 81-92), 84% sensitivity (95% CI 75-92), and 89% specificity (95% CI 81-96). PAMs identified key region volumes: corpus callosum (1872 mm³), left precentral gyrus (341 mm³), right thalamus (193 mm³), and right cingulate cortex (186 mm³) for identifying RRMS, and brainstem (629 mm³), hippocampus (234 mm³), and parahippocampal gyrus (147 mm³) for identifying MOGAD. Both classifiers effectively distinguished RRMS from MOGAD. The clinical/MRI model showed higher sensitivity while the DL model offered higher specificity, suggesting complementary roles. Their combination improved diagnostic accuracy, and PAMs revealed distinct damage patterns. Future prospective studies should validate these models in diverse, real-world settings. This study provides Class III evidence that both a clinical/MRI algorithm and an MRI-based DL model accurately distinguish RRMS from MOGAD.
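
A small sketch of the probability-attention-map idea, under the assumption that PAMs are built by averaging saliency/attention volumes over correctly classified subjects of each group; the saliency method, label coding, and array shapes are assumptions.

```python
# Group-level probability attention map from correctly classified subjects (sketch).
import numpy as np

def probability_attention_map(att_maps, y_true, y_pred, target_class):
    """Average attention volumes over subjects correctly classified as target_class."""
    att_maps = np.asarray(att_maps)                     # (n_subjects, X, Y, Z)
    correct = (y_true == y_pred) & (y_true == target_class)
    return att_maps[correct].mean(axis=0)

# pam_rrms  = probability_attention_map(maps, y_true, y_pred, target_class=0)
# pam_mogad = probability_attention_map(maps, y_true, y_pred, target_class=1)
# Thresholding each PAM and intersecting it with an anatomical atlas would yield
# regional volumes like those reported above (corpus callosum, brainstem, ...).
```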

Graph-Radiomic Learning (GrRAiL) Descriptor to Characterize Imaging Heterogeneity in Confounding Tumor Pathologies

Dheerendranath Battalapalli, Apoorva Safai, Maria Jaramillo, Hyemin Um, Gustavo Adalfo Pineda Ortiz, Ulas Bagci, Manmeet Singh Ahluwalia, Marwa Ismail, Pallavi Tiwari

arXiv preprint · Sep 23, 2025
A significant challenge in solid tumors is reliably distinguishing confounding pathologies from malignant neoplasms on routine imaging. While radiomics methods seek surrogate markers of lesion heterogeneity on CT/MRI, many aggregate features across the entire region of interest (ROI) and miss complex spatial relationships among varying intensity compositions. We present a new Graph-Radiomic Learning (GrRAiL) descriptor for characterizing intralesional heterogeneity (ILH) on clinical MRI scans. GrRAiL (1) identifies clusters of sub-regions using per-voxel radiomic measurements, then (2) computes graph-theoretic metrics to quantify spatial associations among clusters. The resulting weighted graphs encode higher-order spatial relationships within the ROI, aiming to reliably capture ILH and disambiguate confounding pathologies from malignancy. To assess efficacy and clinical feasibility, GrRAiL was evaluated in n=947 subjects spanning three use cases: differentiating tumor recurrence from radiation effects in glioblastoma (GBM; n=106) and brain metastasis (n=233), and stratifying pancreatic intraductal papillary mucinous neoplasms (IPMNs) into no+low vs high risk (n=608). In a multi-institutional setting, GrRAiL consistently outperformed state-of-the-art baselines, including graph neural networks (GNNs), textural radiomics, and intensity-graph analysis. In GBM, cross-validation (CV) and test accuracies for recurrence vs pseudo-progression were 89% and 78%, with >10% test-accuracy gains over comparators. In brain metastasis, CV and test accuracies for recurrence vs radiation necrosis were 84% and 74% (>13% improvement). For IPMN risk stratification, CV and test accuracies were 84% and 75%, showing >10% improvement.
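
An illustrative sketch of the two GrRAiL steps named above, clustering per-voxel radiomic features and summarizing inter-cluster structure with graph metrics; the feature matrix, cluster count, and inverse-distance edge weighting are assumptions, not the authors' settings.

```python
# Step 1: cluster per-voxel radiomic feature vectors inside the ROI.
# Step 2: build a weighted graph over cluster sub-regions and compute graph metrics.
import numpy as np
import networkx as nx
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
voxel_feats = rng.random((5000, 10))        # per-voxel radiomic measurements (synthetic)

k = 6
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(voxel_feats)
centroids = np.vstack([voxel_feats[labels == c].mean(axis=0) for c in range(k)])

G = nx.Graph()
for i in range(k):
    for j in range(i + 1, k):
        w = 1.0 / (1.0 + np.linalg.norm(centroids[i] - centroids[j]))   # cluster similarity
        G.add_edge(i, j, weight=w)

descriptor = {
    "avg_clustering": nx.average_clustering(G, weight="weight"),
    "weighted_degrees": dict(G.degree(weight="weight")),
}
```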

Dual-Feature Cross-Fusion Network for Precise Brain Tumor Classification: A Neurocomputational Approach.

M M, G S, Bendre M, Nirmal M

PubMed · Sep 23, 2025
Brain tumors represent a significant neurological challenge, affecting individuals across all age groups. Accurate and timely diagnosis of tumor types is critical for effective treatment planning. Magnetic Resonance Imaging (MRI) remains a primary diagnostic modality due to its non-invasive nature and ability to provide detailed brain imaging. However, traditional tumor classification relies on expert interpretation, which is time-consuming and prone to subjectivity. This study proposes a novel deep learning architecture, the Dual-Feature Cross-Fusion Network (DF-CFN), for the automated classification of brain tumors using MRI data. The model integrates ConvNeXt for capturing global contextual features and a shallow CNN combined with Feature Channel Attention Network (FcaNet) for extracting local features. These are fused through a cross-feature fusion mechanism for improved classification. The model is trained and validated using a Kaggle dataset encompassing four tumor classes (glioma, meningioma, pituitary, and non-tumor), achieving an accuracy of 99.33%. Its generalizability is further confirmed using the Figshare dataset, yielding 99.22% accuracy. Comparative analyses with baseline and recent models validate the superiority of DF-CFN in terms of precision and robustness. This approach demonstrates strong potential for assisting clinicians in reliable brain tumor classification, thereby improving diagnostic efficiency and reducing the burden on healthcare professionals.
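
A hedged sketch of a dual-branch model in the spirit of DF-CFN; the layer sizes and the simple sigmoid gate standing in for the paper's cross-feature fusion (and for FcaNet channel attention) are assumptions, not the exact architecture.

```python
# ConvNeXt branch for global context, shallow CNN branch for local detail,
# gated fusion before a 4-class head (glioma, meningioma, pituitary, non-tumor).
import torch
import torch.nn as nn
from torchvision import models

class DualFeatureFusion(nn.Module):
    def __init__(self, num_classes=4, dim=256):
        super().__init__()
        # Global-context branch: ConvNeXt backbone re-headed to a dim-d embedding
        self.global_branch = models.convnext_tiny(weights=None)
        in_feats = self.global_branch.classifier[2].in_features
        self.global_branch.classifier[2] = nn.Linear(in_feats, dim)
        # Local-detail branch: shallow CNN
        self.local_branch = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim),
        )
        # Fusion: concatenated features gate themselves before classification
        self.gate = nn.Linear(2 * dim, 2 * dim)
        self.head = nn.Linear(2 * dim, num_classes)

    def forward(self, x):
        fused = torch.cat([self.global_branch(x), self.local_branch(x)], dim=1)
        fused = fused * torch.sigmoid(self.gate(fused))
        return self.head(fused)

# logits = DualFeatureFusion()(torch.randn(2, 3, 224, 224))   # -> shape (2, 4)
```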

CT-based radiomics deep learning signatures for noninvasive prediction of early recurrence after radical surgery in locally advanced colorectal cancer: A multicenter study.

Zhou Y, Zhao J, Tan Y, Zou F, Fang L, Wei P, Zeng W, Gong L, Liu L, Zhong L

PubMed · Sep 23, 2025
Preoperative identification of high-risk locally advanced colorectal cancer (LACRC) patients is vital for optimizing treatment and minimizing toxicity. This study aims to develop and validate a model combining CT-based imaging with clinical laboratory parameters to noninvasively predict postoperative early recurrence (ER) in LACRC patients. A retrospective cohort of 560 pathologically confirmed LACRC patients, collected from three centers between July 2018 and March 2022, was analyzed together with a Gene Expression Omnibus (GEO) dataset. We extracted radiomics and deep learning signatures (RDs) using eight machine learning techniques, integrated them with clinical-laboratory parameters to construct a preoperative combined model, and validated it in two external datasets. Its predictive performance was compared with postoperative pathological and TNM staging models. Kaplan-Meier analysis was used to evaluate preoperative risk stratification, and molecular correlates of ER were explored using GEO RNA-sequencing data. The model included five independent prognostic factors: RDs, lymphocyte-to-monocyte ratio, neutrophil-to-lymphocyte ratio, lymphocyte-albumin, and prognostic nutritional index. It outperformed the pathological and TNM models in both external datasets (AUC for test set 1: 0.865 vs. 0.766 and 0.665; test set 2: 0.848 vs. 0.754 and 0.694). Preoperative risk stratification identified significantly better disease-free survival in low-risk vs. high-risk patients across all subgroups (p < 0.01). High enrichment scores were associated with upregulated tumor proliferation pathways (epithelial-mesenchymal transition [EMT] and inflammatory response pathways) and altered immune cell infiltration patterns in the tumor microenvironment. The preoperative model enables treatment strategy optimization and reduces unnecessary drug toxicity by noninvasively predicting ER in LACRC.
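
A minimal sketch of how such a combined preoperative model might be assembled and externally evaluated; the modeling choice (logistic regression), file names, and column names are hypothetical stand-ins for the five prognostic factors.

```python
# Combine a radiomics/deep-learning signature with clinical-laboratory parameters
# and report AUC on an external test set (illustrative sketch).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

FEATURES = ["RDs", "LMR", "NLR", "lymphocyte_albumin", "PNI"]   # hypothetical column names

train = pd.read_csv("center1_train.csv")          # hypothetical training center
test = pd.read_csv("center2_test.csv")            # hypothetical external test set

clf = LogisticRegression(max_iter=1000).fit(train[FEATURES], train["early_recurrence"])
auc = roc_auc_score(test["early_recurrence"], clf.predict_proba(test[FEATURES])[:, 1])
print(f"external AUC: {auc:.3f}")
```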

Exploring the role of preprocessing combinations in hyperspectral imaging for deep learning colorectal cancer detection.

Tkachenko M, Huber B, Hamotskyi S, Jansen-Winkeln B, Gockel I, Neumuth T, Köhler H, Maktabi M

PubMed · Sep 23, 2025
This study compares various preprocessing techniques for hyperspectral deep-learning-based cancer diagnostics. We consider different spectrum-scaling and noise-reduction options across the spatial and spectral axes of hyperspectral datacubes, as well as varying levels of blood and light-reflection removal. We also examine how the size of the patches extracted from the hyperspectral data affects model performance, and we explore several strategies to mitigate our dataset's imbalance (cancerous tissues are underrepresented). Our results indicate that: (1) standardization significantly improves both sensitivity and specificity compared with normalization; (2) larger input patch sizes enhance performance by capturing more spatial context; (3) noise reduction unexpectedly degrades performance; and (4) blood filtering is more effective than filtering reflected-light pixels, although neither approach produces significant results. By carefully maintaining consistent testing conditions, we ensure a fair comparison across preprocessing methods and reproducibility. Our findings highlight the need for careful preprocessing selection to maximize deep learning performance in medical imaging applications.
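
A hedged sketch of the two spectrum-scaling options compared above, plus patch extraction; the datacube shape, spectral-axis convention, and patch size are assumptions, not the study's parameters.

```python
# Per-spectrum standardization vs. min-max normalization of a hyperspectral
# datacube, and extraction of square spatial patches for the classifier.
import numpy as np

cube = np.random.rand(256, 320, 100)               # (H, W, bands) hypothetical datacube

def standardize(c):
    """Zero-mean, unit-variance scaling along the spectral axis."""
    return (c - c.mean(axis=-1, keepdims=True)) / (c.std(axis=-1, keepdims=True) + 1e-8)

def normalize(c):
    """Min-max scaling of each spectrum to [0, 1]."""
    mn, mx = c.min(axis=-1, keepdims=True), c.max(axis=-1, keepdims=True)
    return (c - mn) / (mx - mn + 1e-8)

def extract_patches(c, size=32, stride=32):
    """Cut the cube into size x size spatial patches."""
    H, W, _ = c.shape
    return [c[i:i + size, j:j + size]
            for i in range(0, H - size + 1, stride)
            for j in range(0, W - size + 1, stride)]

patches = extract_patches(standardize(cube))        # larger patch sizes performed better above
```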