Deep-learning-based Radiomics on Mitigating Post-treatment Obesity for Pediatric Craniopharyngioma Patients after Surgery and Proton Therapy

Wenjun Yang, Chia-Ho Hua, Tina Davis, Jinsoo Uh, Thomas E. Merchant

arXiv preprint · Sep 25, 2025
Purpose: We developed an artificial neural network (ANN) combining radiomics with clinical and dosimetric features to predict the extent of body mass index (BMI) increase after surgery and proton therapy, offering improved accuracy and integrated key-feature selection. Methods and Materials: A uniform treatment protocol comprising limited surgery and proton radiotherapy was given to 84 pediatric craniopharyngioma patients (aged 1-20 years). Post-treatment obesity was classified into 3 groups (<10%, 10-20%, and >20%) based on the normalized BMI increase during a 5-year follow-up. We developed a densely connected 4-layer ANN that takes as input radiomic features calculated from pre-surgery MRI (T1w, T2w, and FLAIR) combined with clinical and dosimetric features. Accuracy, area under the receiver operating characteristic curve (AUC), and confusion matrices were compared with random forest (RF) models in a 5-fold cross-validation. Group lasso regularization enforced a sparse connection to the input neurons to identify key features from the high-dimensional input. Results: Classification accuracy of the ANN reached above 0.9 for T1w, T2w, and FLAIR MRI. Confusion matrices showed true positive rates above 0.9, while false positive rates remained below 0.2. Approximately 10 key features were selected for each of the T1w, T2w, and FLAIR models. The ANN improved classification accuracy by 10% and 5% compared with RF models without and with radiomic features, respectively. Conclusion: The ANN model improved classification accuracy on post-treatment obesity compared with conventional statistical models. The clinical features selected by group lasso regularization confirmed our practical observations, while the additional radiomic and dosimetric features could serve as imaging markers and inform mitigation strategies for post-treatment obesity in pediatric craniopharyngioma patients.
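The group-lasso idea above translates naturally into code. Below is a minimal sketch, not the authors' implementation: a densely connected 4-layer classifier in PyTorch whose first-layer weight columns (one group per input feature) carry an L2 group penalty, so uninformative radiomic, clinical, or dosimetric inputs are pruned. The feature count, layer widths, and penalty weight are assumptions.

```python
# Minimal sketch of a 4-layer ANN with a group-lasso penalty on the input
# layer (one group per input feature). Sizes and lambda are assumptions.
import torch
import torch.nn as nn

class GroupLassoANN(nn.Module):
    def __init__(self, n_features: int = 120, n_classes: int = 3):
        super().__init__()
        self.input_layer = nn.Linear(n_features, 64)  # groups = weight columns
        self.body = nn.Sequential(
            nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 16), nn.ReLU(),
            nn.Linear(16, n_classes),
        )

    def forward(self, x):
        return self.body(self.input_layer(x))

    def group_lasso_penalty(self) -> torch.Tensor:
        # L2 norm of each input feature's outgoing weight column, summed;
        # drives whole columns (i.e., whole features) toward zero.
        return self.input_layer.weight.norm(p=2, dim=0).sum()

model = GroupLassoANN()
x = torch.randn(8, 120)        # batch of combined feature vectors
y = torch.randint(0, 3, (8,))  # BMI-increase group labels (<10%, 10-20%, >20%)
loss = nn.CrossEntropyLoss()(model(x), y) + 1e-3 * model.group_lasso_penalty()
loss.backward()
```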

Artificial Intelligence for Ischemic Stroke Detection in Non-contrast CT: A Systematic Review and Meta-analysis.

Shen W, Peng J, Lu J

PubMed paper · Sep 25, 2025
We aimed to conduct a systematic review and meta-analysis to objectively assess the diagnostic accuracy of artificial intelligence (AI) models for detecting ischemic stroke (IS) in non-contrast CT (NCCT), and to compare diagnostic performance between AI and clinicians. Systematic searches were conducted through February 2025 in PubMed, Web of Science, Cochrane, IEEE Xplore, and Embase for studies using AI on NCCT images from human subjects for IS detection or classification. Risk of bias was evaluated using the prediction model study risk of bias assessment tool (PROBAST). For meta-analysis, pooled sensitivities, specificities, and hierarchical summary receiver operating characteristic (HSROC) curves were used. A total of 38 studies were included, with 74 trials extracted from 32 of them. For AI performance, the pooled sensitivity and specificity were 91.2% (95% CI: 87.6%-93.8%) and 96.0% (95% CI: 93.6%-97.6%) for internal validation, and 59.8% (95% CI: 39.9%-76.9%) and 97.3% (95% CI: 93.2%-98.9%) for external validation. For clinicians' performance, the pooled sensitivity and specificity were 44.1% (95% CI: 33.8%-55.0%) and 85.5% (95% CI: 68.4%-94.1%) for internal validation, and 46.1% (95% CI: 31.5%-61.3%) and 83.6% (95% CI: 62.8%-93.9%) for external validation. With AI assistance, clinicians' pooled sensitivity and specificity increased to 83.7% (95% CI: 53.0%-95.9%) and 86.7% (95% CI: 77.1%-92.6%). Subgroup analysis indicated that higher model sensitivity was associated with data augmentation (93.9%, 95% CI: 90.2%-96.2%) and transfer learning (94.7%, 95% CI: 92.0%-96.6%). Twenty-two of 38 (58%) studies were judged to have a high risk of bias. Sensitivity and subgroup analyses identified multiple sources of heterogeneity in the data, including risk of bias and AI model type. Our study shows that AI achieves acceptable performance in detecting IS in NCCT on internal validation, although significant heterogeneity was observed in the meta-analysis. However, the generalizability and practical applicability of AI in real-world clinical settings remain limited due to insufficient external validation.
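For readers unfamiliar with the pooling step, here is an illustrative sketch (not the authors' code) of DerSimonian-Laird random-effects pooling of per-study sensitivities on the logit scale; the study counts are invented, and the HSROC model used in the paper is a richer hierarchical generalization of this.

```python
# Random-effects pooling of logit-transformed sensitivities
# (DerSimonian-Laird); tp/fn counts below are hypothetical.
import numpy as np

tp = np.array([45, 88, 30])  # hypothetical true positives per study
fn = np.array([5, 10, 12])   # hypothetical false negatives per study

p = (tp + 0.5) / (tp + fn + 1.0)      # continuity-corrected sensitivity
y = np.log(p / (1 - p))               # logit transform
v = 1 / (tp + 0.5) + 1 / (fn + 0.5)   # approximate within-study variance

w = 1 / v                             # fixed-effect weights
q = np.sum(w * (y - np.sum(w * y) / w.sum()) ** 2)  # Cochran's Q
tau2 = max(0.0, (q - (len(y) - 1)) / (w.sum() - np.sum(w**2) / w.sum()))

w_re = 1 / (v + tau2)                 # random-effects weights
pooled_logit = np.sum(w_re * y) / w_re.sum()
pooled_sens = 1 / (1 + np.exp(-pooled_logit))
print(f"pooled sensitivity: {pooled_sens:.3f}")
```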

Artificial Intelligence-Led Whole Coronary Artery OCT Analysis: Validation and Identification of Drug Efficacy and Higher-Risk Plaques.

Jessney B, Chen X, Gu S, Huang Y, Goddard M, Brown A, Obaid D, Mahmoudi M, Garcia Garcia HM, Hoole SP, Räber L, Prati F, Schönlieb CB, Roberts M, Bennett M

PubMed paper · Sep 25, 2025
Intracoronary optical coherence tomography (OCT) can identify changes following drug/device treatment as well as high-risk plaques, but analysis requires expert clinician or core laboratory interpretation, while artifacts and limited sampling markedly impair reproducibility. Assistive technologies such as artificial intelligence-based analysis may therefore aid both detailed OCT interpretation and patient management. We determined whether artificial intelligence-based OCT analysis (AutoOCT) can rapidly process, optimize, and analyze OCT images, and identify plaque composition changes that predict drug success/failure and high-risk plaques. AutoOCT deep learning modules were designed to correct segmentation errors from poor-quality or artifact-containing OCT images, identify tissue/plaque composition, classify plaque types, measure multiple parameters including lumen area, lipid and calcium arcs, and fibrous cap thickness, and output segmented images and clinically useful parameters. Model development used 36,212 frames (127 whole pullbacks, 106 patients). Internal validation of tissue and plaque classification and measurements used ex vivo OCT pullbacks from autopsy arteries, while external validation for plaque stabilization and identification of high-risk plaques used core laboratory analysis of the IBIS-4 (Integrated Biomarkers and Imaging Study-4) high-intensity statin study (83 patients) and the CLIMA (Relationship Between Coronary Plaque Morphology of Left Anterior Descending Artery and Long-Term Clinical Outcome) study (62 patients), respectively. AutoOCT recovered images containing common artifacts, with measurement and tissue/plaque classification accuracy of 83% versus histology, equivalent to expert clinician readers. AutoOCT replicated core laboratory plaque composition changes after high-intensity statin therapy, including reduced lesion lipid arc (13.3° versus 12.5°) and increased minimum fibrous cap thickness (18.9 µm versus 24.4 µm). AutoOCT also identified high-risk plaque features leading to patient events, including minimal lumen area <3.5 mm², lipid arc >180°, and fibrous cap thickness <75 µm, similar to the CLIMA core laboratory. AutoOCT-based analysis of whole coronary artery OCT identifies tissue and plaque types and measures features correlating with plaque stabilization and high-risk plaques. Artificial intelligence-based OCT analysis may augment clinician or core laboratory analysis of intracoronary OCT images for trials of drug/device efficacy and identification of high-risk lesions.
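The high-risk thresholds reported (minimal lumen area <3.5 mm², lipid arc >180°, fibrous cap thickness <75 µm) lend themselves to a simple rule check. The sketch below is illustrative only; the dataclass and field names are invented and are not the AutoOCT API.

```python
# Rule-based check of CLIMA-style high-risk plaque criteria; the
# PlaqueMetrics type and its fields are hypothetical, for illustration.
from dataclasses import dataclass

@dataclass
class PlaqueMetrics:
    min_lumen_area_mm2: float
    lipid_arc_deg: float
    min_cap_thickness_um: float

def high_risk_features(m: PlaqueMetrics) -> list[str]:
    flags = []
    if m.min_lumen_area_mm2 < 3.5:
        flags.append("MLA < 3.5 mm^2")
    if m.lipid_arc_deg > 180:
        flags.append("lipid arc > 180 deg")
    if m.min_cap_thickness_um < 75:
        flags.append("FCT < 75 um")
    return flags

print(high_risk_features(PlaqueMetrics(3.1, 195.0, 62.0)))
# -> all three high-risk criteria met for this toy plaque
```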

AI demonstrates comparable diagnostic performance to radiologists in MRI detection of anterior cruciate ligament tears: a systematic review and meta-analysis.

Gill SS, Haq T, Zhao Y, Ristic M, Amiras D, Gupte CM

PubMed paper · Sep 25, 2025
Anterior cruciate ligament (ACL) injuries are among the most common knee injuries, affecting 1 in 3500 people annually. With rising rates of ACL tears, particularly in children, timely diagnosis is critical. This study evaluates the effectiveness of artificial intelligence (AI) in diagnosing and classifying ACL tears on MRI through a systematic review and meta-analysis, comparing AI performance with clinicians and assessing radiomic versus non-radiomic models. Major databases were searched for AI models diagnosing ACL tears on MRI; 36 studies, representing 52 models, were included. Accuracy, sensitivity, and specificity metrics were extracted, and pooled estimates were calculated using a random-effects model. Subgroup analyses compared MRI sequences, ground truths, AI versus clinician performance, and radiomic versus non-radiomic models. This study was conducted in line with Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) protocols. AI demonstrated strong diagnostic performance, with pooled accuracy, sensitivity, and specificity of 87.37%, 90.73%, and 91.34%, respectively; classification models achieved pooled metrics of 90.46%, 88.68%, and 94.08%. Radiomic models outperformed non-radiomic models, and AI demonstrated comparable performance to clinicians on key metrics. Three-dimensional (3D) proton density fat suppression (PDFS) sequences with <2 mm slice thickness yielded the most promising results, despite small sample sizes, with arthroscopy favoured as the reference standard. Despite high heterogeneity (I² > 90%), AI models demonstrate diagnostic performance comparable to clinicians and may serve as valuable adjuncts in ACL tear detection, pending prospective validation. However, substantial heterogeneity and limited interpretability remain key challenges, and further research and standardised evaluation frameworks are needed to support clinical integration. Question: Is AI effective and accurate in diagnosing and classifying ACL tears on MRI? Findings: AI demonstrated high accuracy (87.37%), sensitivity (90.73%), and specificity (91.34%) in ACL tear diagnosis, matching or surpassing clinicians; radiomic models outperformed non-radiomic approaches. Clinical relevance: AI can enhance the accuracy of ACL tear diagnosis, reducing misdiagnoses and supporting clinicians, especially in resource-limited settings. Its integration into clinical workflows may streamline MRI interpretation, reduce diagnostic delays, and improve patient outcomes by optimising management.
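As a side note on the heterogeneity figure quoted (I² > 90%), the statistic follows directly from Cochran's Q; the sketch below uses hypothetical per-study effects and variances, not the review's data.

```python
# I^2 heterogeneity from Cochran's Q; effects and variances are invented.
import numpy as np

y = np.array([2.1, 1.4, 2.8, 1.0])       # hypothetical log-odds effects
v = np.array([0.10, 0.08, 0.15, 0.12])   # their within-study variances

w = 1 / v
q = np.sum(w * (y - np.sum(w * y) / w.sum()) ** 2)  # Cochran's Q
df = len(y) - 1
i2 = max(0.0, (q - df) / q) * 100
print(f"I^2 = {i2:.1f}%")  # >90% would indicate substantial heterogeneity
```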

Decipher-MR: A Vision-Language Foundation Model for 3D MRI Representations

Zhijian Yang, Noel DSouza, Istvan Megyeri, Xiaojian Xu, Amin Honarmandi Shandiz, Farzin Haddadpour, Krisztian Koos, Laszlo Rusko, Emanuele Valeriano, Bharadwaj Swaninathan, Lei Wu, Parminder Bhatia, Taha Kass-Hout, Erhan Bas

arXiv preprint · Sep 25, 2025
Magnetic Resonance Imaging (MRI) is a critical medical imaging modality in clinical diagnosis and research, yet its complexity and heterogeneity pose challenges for automated analysis, particularly in scalable and generalizable machine learning applications. While foundation models have revolutionized natural language and vision tasks, their application to MRI remains limited due to data scarcity and narrow anatomical focus. In this work, we present Decipher-MR, a 3D MRI-specific vision-language foundation model trained on a large-scale dataset comprising 200,000 MRI series from over 22,000 studies spanning diverse anatomical regions, sequences, and pathologies. Decipher-MR integrates self-supervised vision learning with report-guided text supervision to build robust, generalizable representations, enabling effective adaptation across broad applications. To support diverse clinical tasks with minimal computational overhead, Decipher-MR adopts a modular design in which lightweight, task-specific decoders are tuned on top of a frozen pretrained encoder. Under this setting, we evaluate Decipher-MR across diverse benchmarks including disease classification, demographic prediction, anatomical localization, and cross-modal retrieval, demonstrating consistent performance gains over existing foundation models and task-specific approaches. Our results establish Decipher-MR as a scalable and versatile foundation for MRI-based AI, facilitating efficient development across clinical and research domains.
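The frozen-encoder-plus-lightweight-decoder pattern the abstract describes is easy to express in PyTorch. The sketch below is a generic illustration under assumed shapes; the placeholder encoder stands in for Decipher-MR's actual architecture, which is not reproduced here.

```python
# Tuning a lightweight task head on top of a frozen pretrained encoder.
# The tiny 3D conv stack is a stand-in, not Decipher-MR itself.
import torch
import torch.nn as nn

encoder = nn.Sequential(                       # placeholder 3D MRI encoder
    nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(),
)
for p in encoder.parameters():
    p.requires_grad = False                    # freeze pretrained weights

decoder = nn.Linear(32, 4)                     # lightweight head (4 classes)
opt = torch.optim.AdamW(decoder.parameters(), lr=1e-3)  # tune head only

x = torch.randn(2, 1, 32, 32, 32)              # toy MRI volumes
y = torch.randint(0, 4, (2,))
with torch.no_grad():
    feats = encoder(x)                         # frozen features
loss = nn.CrossEntropyLoss()(decoder(feats), y)
loss.backward()
opt.step()
```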

Automated and Interpretable Survival Analysis from Multimodal Data

Mafalda Malafaia, Peter A. N. Bosman, Coen Rasch, Tanja Alderliesten

arXiv preprint · Sep 25, 2025
Accurate and interpretable survival analysis remains a core challenge in oncology. With growing multimodal data and the clinical need for transparent models to support validation and trust, this challenge increases in complexity. We propose an interpretable multimodal AI framework to automate survival analysis by integrating clinical variables and computed tomography imaging. Our MultiFIX-based framework uses deep learning to infer survival-relevant features that are further explained: imaging features are interpreted via Grad-CAM, while clinical variables are modeled as symbolic expressions through genetic programming. Risk estimation employs a transparent Cox regression, enabling stratification into groups with distinct survival outcomes. Using the open-source RADCURE dataset for head and neck cancer, MultiFIX achieves a C-index of 0.838 (prediction) and 0.826 (stratification), outperforming the clinical and academic baseline approaches and aligning with known prognostic markers. These results highlight the promise of interpretable multimodal AI for precision oncology with MultiFIX.
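The transparent risk-estimation step, a Cox regression scored by concordance index, can be sketched with the lifelines library as below. The fused-feature columns and values are invented for illustration; this is not the MultiFIX code or the RADCURE schema.

```python
# Cox regression over (hypothetical) fused features, scored with the
# C-index and split at the median hazard for risk stratification.
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

df = pd.DataFrame({
    "feat_imaging": [0.2, 1.1, 0.7, 1.9, 0.4, 0.6],   # toy learned features
    "feat_clinical": [1.0, 0.3, 0.8, 0.1, 1.2, 0.9],
    "time": [12.0, 5.0, 9.0, 3.0, 15.0, 4.0],         # months to event/censor
    "event": [0, 1, 0, 1, 0, 1],                      # 1 = event observed
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
risk = cph.predict_partial_hazard(df)
print(concordance_index(df["time"], -risk, df["event"]))  # C-index

# Median-risk split yields two groups for Kaplan-Meier-style stratification.
groups = (risk > risk.median()).astype(int)
```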

Deep learning powered breast ultrasound to improve characterization of breast masses: a prospective study.

Singla V, Garg D, Negi S, Mehta N, Pallavi T, Choudhary S, Dhiman A

PubMed paper · Sep 25, 2025
Background: The diagnostic performance of ultrasound (US) is heavily reliant on the operator's expertise. Advances in artificial intelligence (AI) have introduced deep learning (DL) tools that detect morphology beyond human perception, providing automated interpretations. Purpose: To evaluate Smart-Detect (S-Detect), a DL tool, for its potential to enhance diagnostic precision and standardize US assessments among radiologists with varying levels of experience. Material and Methods: This prospective observational study was conducted between May and November 2024. US and S-Detect analyses were performed by a breast imaging fellow. Images were independently analyzed by five radiologists with varying experience in breast imaging (<1 to 15 years). Each radiologist assessed the images twice: without and with S-Detect. ROC analyses compared diagnostic performance. True downgrades and upgrades were calculated to determine the biopsy reduction achievable with AI assistance. Kappa statistics assessed inter-radiologist agreement before and after incorporating S-Detect. Results: This study analyzed 230 breast masses from 216 patients. S-Detect demonstrated high specificity (92.7%), PPV (92.9%), NPV (87.9%), and accuracy (90.4%). It enhanced less experienced radiologists' performance, increasing sensitivity (85% to 93.33%), specificity (54.5% to 73.64%), and accuracy (70.43% to 83.91%; P < 0.001). AUC increased significantly for the less experienced radiologists (0.698 to 0.835; P < 0.001), with no significant gains for the expert radiologist. S-Detect also reduced variability in assessment between radiologists, with an increase in kappa agreement (0.459 to 0.696), and enabled significant downgrades, reducing unnecessary biopsies. Conclusion: The DL tool improves diagnostic accuracy, bridges the expertise gap, reduces reliance on invasive procedures, and enhances consistency in clinical decisions among radiologists.
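For reference, the inter-reader agreement measure used here (Cohen's kappa) is a one-liner with scikit-learn; the ratings below are made up.

```python
# Cohen's kappa between two readers' calls on the same masses.
from sklearn.metrics import cohen_kappa_score

# Two radiologists' benign(0)/malignant(1) calls on ten toy masses.
reader_a = [0, 1, 1, 0, 1, 0, 0, 1, 1, 0]
reader_b = [0, 1, 0, 0, 1, 0, 1, 1, 1, 0]
print(cohen_kappa_score(reader_a, reader_b))  # agreement beyond chance
```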

A Versatile Foundation Model for AI-enabled Mammogram Interpretation

Fuxiang Huang, Jiayi Zhu, Yunfang Yu, Yu Xie, Yuan Guo, Qingcong Kong, Mingxiang Wu, Xinrui Jiang, Shu Yang, Jiabo Ma, Ziyi Liu, Zhe Xu, Zhixuan Chen, Yujie Tan, Zifan He, Luhui Mao, Xi Wang, Junlin Hou, Lei Zhang, Qiong Luo, Zhenhui Li, Herui Yao, Hao Chen

arXiv preprint · Sep 24, 2025
Breast cancer is the most commonly diagnosed cancer and the leading cause of cancer-related mortality in women globally. Mammography is essential for the early detection and diagnosis of breast lesions. Despite recent progress in foundation models (FMs) for mammogram analysis, their clinical translation remains constrained by several fundamental limitations, including insufficient diversity in training data, limited model generalizability, and a lack of comprehensive evaluation across clinically relevant tasks. Here, we introduce VersaMammo, a versatile foundation model for mammograms designed to overcome these limitations. We curated the largest multi-institutional mammogram dataset to date, comprising 706,239 images from 21 sources. To improve generalization, we propose a two-stage pre-training strategy. First, a teacher model is trained via self-supervised learning to extract transferable features from unlabeled mammograms. Then, supervised learning combined with knowledge distillation transfers both features and clinical knowledge into VersaMammo. To ensure a comprehensive evaluation, we established a benchmark comprising 92 specific tasks, including 68 internal tasks and 24 external validation tasks, spanning 5 major clinical task categories: lesion detection, segmentation, classification, image retrieval, and visual question answering. VersaMammo achieves state-of-the-art performance, ranking first in 50 of 68 internal tasks and 20 of 24 external validation tasks, with average ranks of 1.5 and 1.2, respectively. These results demonstrate its superior generalization and clinical utility, offering a substantial advancement toward reliable and scalable breast cancer screening and diagnosis.
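The second pre-training stage, supervised learning plus knowledge distillation, commonly combines a soft KL term against teacher logits with a hard cross-entropy term. The sketch below illustrates that standard recipe; the temperature and weighting are assumptions, not VersaMammo's settings.

```python
# Standard distillation objective: KL on temperature-softened logits plus
# supervised cross-entropy. Hyperparameters are illustrative assumptions.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature: float = 4.0, alpha: float = 0.5):
    # Soft targets: KL divergence between temperature-scaled distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    hard = F.cross_entropy(student_logits, labels)  # supervised term
    return alpha * soft + (1 - alpha) * hard

s = torch.randn(8, 2, requires_grad=True)  # student logits (toy)
t = torch.randn(8, 2)                      # frozen teacher logits
y = torch.randint(0, 2, (8,))
distillation_loss(s, t, y).backward()
```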

SHMoAReg: Spark Deformable Image Registration via Spatial Heterogeneous Mixture of Experts and Attention Heads

Yuxi Zheng, Jianhui Feng, Tianran Li, Marius Staring, Yuchuan Qiao

arXiv preprint · Sep 24, 2025
Encoder-decoder architectures are widely used in deep learning-based Deformable Image Registration (DIR), where the encoder extracts multi-scale features and the decoder predicts deformation fields by recovering spatial locations. However, current methods lack specialized extraction of registration-relevant features and predict deformation jointly and homogeneously in all three directions. In this paper, we propose a novel expert-guided DIR network with a Mixture of Experts (MoE) mechanism applied in both the encoder and decoder, named SHMoAReg. Specifically, we incorporate a Mixture of Attention heads (MoA) into the encoder layers and a Spatial Heterogeneous Mixture of Experts (SHMoE) into the decoder layers. The MoA enhances the specialization of feature extraction by dynamically selecting the optimal combination of attention heads for each image token. Meanwhile, the SHMoE predicts deformation fields heterogeneously in three directions for each voxel using experts with varying kernel sizes. Extensive experiments on two publicly available datasets show consistent improvements over various methods, with a notable increase from 60.58% to 65.58% in Dice score for the abdominal CT dataset. Furthermore, SHMoAReg enhances model interpretability by differentiating experts' utilities across and within different resolution layers. To the best of our knowledge, we are the first to introduce the MoE mechanism into DIR tasks. The code will be released soon.
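To make the SHMoE idea concrete, here is a minimal sketch of per-voxel, per-direction gating over 3D convolutional experts with different kernel sizes. Channel counts and the gating design are assumptions, not the released SHMoAReg code.

```python
# Spatial heterogeneous MoE decoder layer: experts with different kernel
# sizes each predict a 3-direction field; a 1x1x1 gate mixes them per voxel
# and per direction. All sizes here are illustrative assumptions.
import torch
import torch.nn as nn

class SHMoELayer(nn.Module):
    def __init__(self, in_ch: int = 16, kernel_sizes=(1, 3, 5)):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Conv3d(in_ch, 3, k, padding=k // 2) for k in kernel_sizes
        )  # each expert outputs a 3-direction deformation field
        self.gate = nn.Conv3d(in_ch, len(kernel_sizes) * 3, 1)

    def forward(self, x):
        b, _, d, h, w = x.shape
        # Per-voxel, per-direction weights over experts (softmax over experts).
        g = self.gate(x).view(b, len(self.experts), 3, d, h, w).softmax(dim=1)
        fields = torch.stack([e(x) for e in self.experts], dim=1)
        return (g * fields).sum(dim=1)  # heterogeneous mix per direction

flow = SHMoELayer()(torch.randn(2, 16, 8, 8, 8))  # -> shape (2, 3, 8, 8, 8)
```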

Deep Learning-based Automated Detection of Pulmonary Embolism: Is It Reliable?

Babacan Ö, Karkaş AY, Durak G, Uysal E, Durak Ü, Shrestha R, Bingöl Z, Okumuş G, Medetalibeyoğlu A, Ertürk ŞM

PubMed paper · Sep 24, 2025
To assess the diagnostic accuracy and clinical applicability of the artificial intelligence (AI) program "Canon Automation Platform" for the automated detection and localization of pulmonary embolisms (PEs) in chest computed tomography pulmonary angiograms (CTPAs). A total of 1474 CTPAs from patients with suspected PE were retrospectively evaluated by 2 senior radiology residents with 5 years of experience. The final diagnosis was verified against radiology reports by 2 thoracic radiologists with 20 and 25 years of experience, along with the patients' clinical records and histories. The images were transferred to the Canon Automation Platform, which integrates with the picture archiving and communication system (PACS), and the diagnostic performance of the platform was evaluated. The study examined all anatomic levels of the pulmonary arteries, including the left pulmonary artery, right pulmonary artery, and the interlobar, segmental, and subsegmental branches. Across all anatomic levels, performance was as follows: AUC-ROC of 0.945 to 0.996, accuracy of 95.4% to 99.7%, sensitivity of 81.4% to 99.1%, specificity of 98.7% to 100%, PPV of 89.1% to 100%, NPV of 95.6% to 99.9%, F1 score of 0.868 to 0.987, and Cohen's kappa of 0.842 to 0.986. Notably, sensitivity in the subsegmental branches was lower (81.4% to 84.7%) than in more central locations, whereas specificity remained consistent (98.7% to 98.9%). The results showed that the chest pain package of the Canon Automation Platform provides rapid, accurate automatic PE detection in chest CTPAs by leveraging deep learning algorithms to facilitate the clinical workflow. This study demonstrates that AI can provide physicians with robust diagnostic support for acute PE, particularly in hospitals without 24/7 access to radiology specialists.
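The reported metrics all follow from a per-level confusion matrix; the sketch below shows the arithmetic with hypothetical counts, not the study's data.

```python
# Diagnostic metrics from a confusion matrix; counts are invented.
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    f1 = 2 * ppv * sens / (ppv + sens)
    acc = (tp + tn) / (tp + fp + fn + tn)
    return {"sensitivity": sens, "specificity": spec, "PPV": ppv,
            "NPV": npv, "F1": f1, "accuracy": acc}

# e.g. a hypothetical subsegmental level with many true negatives:
print(diagnostic_metrics(tp=48, fp=6, fn=11, tn=1409))
```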