Machine learning prediction of effective radiation doses in various computed tomography applications: a virtual human phantom study.

Tanyildizi-Kokkulunk H

PubMed · Aug 26, 2025
This work aimed to employ machine learning (ML) algorithms to accurately forecast radiation doses to phantoms under the most common CT protocols. Cloud-based software was used to calculate the effective doses from the different CT protocols. Eight whole-body mesh-based computational phantom sets simulated adult patients of varying weights. Head, neck, and chest-abdomen-pelvis CT scan characteristics were combined into a dataset of 33 rows per phantom and 792 rows in total. At the ML stage, linear regression (LR), random forest (RF), and support vector regression (SVR) were applied; mean absolute error, mean squared error, and accuracy were used to evaluate performance. Female phantoms received 7.8% higher doses than male phantoms. Furthermore, each lighter phantom received on average 11% more dose than the next heavier one: normal weight versus overweight, overweight versus obese I, and obese I versus obese II. Among the ML algorithms, LR showed zero error and 100% accuracy in predicting CT doses and was therefore the best of the approaches evaluated for ML estimation of CT-induced doses.
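
A minimal sketch of the regressor comparison described above, using scikit-learn; the dataset, feature encoding, and dose formula are invented stand-ins rather than the study's data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.ensemble import RandomForestRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error

rng = np.random.default_rng(0)
# Hypothetical feature encoding: [kVp, mAs, scan length (cm), phantom weight (kg)]
X = rng.uniform([80, 50, 10, 50], [140, 400, 60, 130], size=(792, 4))
# Toy effective dose in mSv -- an invented proxy, not the study's dose model
y = 1e-3 * X[:, 1] * X[:, 2] / X[:, 3] + rng.normal(0, 0.05, 792)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
for name, model in [("LR", LinearRegression()),
                    ("RF", RandomForestRegressor(random_state=0)),
                    ("SVR", SVR(kernel="rbf"))]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: MAE={mean_absolute_error(y_te, pred):.4f}, "
          f"MSE={mean_squared_error(y_te, pred):.4f}")
```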

Application of artificial intelligence in medical imaging for tumor diagnosis and treatment: a comprehensive approach.

Huang J, Xiang Y, Gan S, Wu L, Yan J, Ye D, Zhang J

PubMed · Aug 26, 2025
This narrative review provides a comprehensive and structured overview of recent advances in the application of artificial intelligence (AI) to medical imaging for tumor diagnosis and treatment. By synthesizing evidence from recent literature and clinical reports, we highlight the capabilities, limitations, and translational potential of AI techniques across key imaging modalities such as CT, MRI, and PET. Deep learning (DL) and radiomics have facilitated automated lesion detection, tumor segmentation, and prognostic assessment, improving early cancer detection across various malignancies, including breast, lung, and prostate cancers. AI-driven multi-modal imaging fusion integrates radiomics, genomics, and clinical data, refining precision oncology strategies. Additionally, AI-assisted radiotherapy planning and adaptive dose optimization have enhanced therapeutic efficacy while minimizing toxicity. However, challenges persist regarding data heterogeneity, model generalizability, regulatory constraints, and ethical concerns. The lack of standardized datasets and explainable AI (XAI) frameworks hinders clinical adoption. Future research should focus on improving AI interpretability, fostering multi-center dataset interoperability, and integrating AI with molecular imaging and real-time clinical decision support. Addressing these challenges will ensure AI's seamless integration into clinical oncology, optimizing cancer diagnosis, prognosis, and treatment outcomes.

ESR Essentials: artificial intelligence in breast imaging-practice recommendations by the European Society of Breast Imaging.

Schiaffino S, Bernardi D, Healy N, Marino MA, Romeo V, Sechopoulos I, Mann RM, Pinker K

PubMed · Aug 26, 2025
Artificial intelligence (AI) can enhance the diagnostic performance of breast cancer imaging and improve workflow optimization, potentially mitigating excessive radiologist workload and suboptimal diagnostic accuracy. AI can also boost imaging capabilities through individual risk prediction, molecular subtyping, and prediction of response to neoadjuvant therapy. Evidence demonstrates AI's potential across multiple modalities. The most robust data come from mammographic screening, where AI models improve diagnostic accuracy and optimize workflow, but rigorous post-market surveillance is required before any implementation strategy is adopted in this field. Commercial tools for digital breast tomosynthesis and ultrasound, potentially able to reduce interpretation time and improve accuracy, are also available, but post-implementation evaluation studies are likewise lacking. Beyond basic tools for breast MRI with limited proven clinical benefit, AI applications for other modalities are not yet commercially available. Applications in contrast-enhanced mammography are still in the research stage, especially radiomics-based molecular subtype classification. Applications of large language models (LLMs) are in their infancy, with no clinical applications to date. Consequently, and despite their promise, all commercially available AI tools for breast imaging should currently still be regarded as techniques that, at best, aid radiologists in image evaluation. Their use is therefore optional, and their findings may always be overruled. KEY POINTS: AI systems improve the diagnostic accuracy and efficiency of mammography screening, but long-term outcome data are lacking. Commercial tools for digital breast tomosynthesis and ultrasound are available, but post-implementation evaluation studies are lacking. AI tools for breast imaging should still be regarded as a non-obligatory aid to radiologists for image interpretation.

[Comparison of diagnostic performance between artificial intelligence-assisted automated breast ultrasound and handheld ultrasound in breast cancer screening].

Yi DS, Sun WY, Song HP, Zhao XL, Hu SY, Gu X, Gao Y, Zhao FH

PubMed · Aug 26, 2025
Objective: To compare the diagnostic performance of artificial intelligence-assisted automated breast ultrasound (AI-ABUS) with traditional handheld ultrasound (HHUS) in breast cancer screening. Methods: A total of 36 171 women undergoing breast cancer ultrasound screening in Futian District, Shenzhen, between July 1, 2023 and June 30, 2024 were prospectively recruited and assigned to either the AI-ABUS or HHUS group based on the screening modality used. In the AI-ABUS group, image acquisition was performed on-site by technicians, and two ultrasound physicians conducted remote diagnoses with AI assistance, supported by a follow-up management system. In the HHUS group, one ultrasound physician conducted both image acquisition and diagnosis on-site, and follow-up was led by clinical physicians. Based on the reported malignancy rates of different BI-RADS categories, the number of undiagnosed breast cancer cases in individuals without pathology was estimated, and adjusted detection rates were calculated. Primary outcomes included screening positive rate, biopsy rate, cancer detection rate, loss-to-follow-up rate, specificity, and sensitivity. Results: The median age [interquartile range, M (Q1, Q3)] of the 36 171 women was 43.8 (36.6, 50.8) years. A total of 14 766 women (40.82%) were screened with AI-ABUS and 21 405 (59.18%) with HHUS. Baseline characteristics showed no significant differences between the groups (all P>0.05). The AI-ABUS group had a lower screening positive rate [0.59% (87/14 766) vs 1.94% (416/21 405)], but higher biopsy rate [47.13% (41/87) vs 16.10% (67/416)], higher cancer detection rate [1.69‰ (25/14 766) vs 0.47‰ (10/21 428)], and lower loss-to-follow-up rate (6.90% vs 71.39%) compared to the HHUS group (all P<0.05). There was no statistically significant difference in the distribution of breast cancer pathological stages among those who underwent biopsy between the two groups (P>0.05). The specificity of AI-ABUS was higher than that of HHUS [89.77% (13 231/14 739) vs 74.12% (15 858/21 394), P<0.05], while sensitivity did not differ significantly [92.59% (25/27) vs 90.91% (10/11), P>0.05]. After estimating undiagnosed cancer cases among participants without pathology, the adjusted detection rate was 2.30‰ (34/14 766) in the AI-ABUS group and ranged from 1.17‰ to 2.75‰ [(25-59)/21 428] in the HHUS group. In the minimum estimation scenario, the detection rate in the AI-ABUS group was significantly higher (P<0.05); in the maximum estimation scenario, the difference was not statistically significant (P>0.05). Conclusions: The AI-ABUS model, combined with an intelligent follow-up management system, enables a higher breast cancer detection rate with a lower screening positive rate, improved specificity, and reduced loss to follow-up. This suggests AI-ABUS is a promising alternative model for breast cancer screening.
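
As a quick check on the arithmetic, the reported sensitivity and specificity follow directly from the counts quoted above; a small Python sketch:

```python
# Recompute the screening metrics reported in the abstract from raw counts.
def screening_metrics(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# AI-ABUS: 25 of 27 cancers detected; 13 231 true negatives of 14 739 without cancer
sens, spec = screening_metrics(tp=25, fn=2, tn=13231, fp=14739 - 13231)
print(f"AI-ABUS sensitivity={sens:.2%}, specificity={spec:.2%}")  # ~92.59%, ~89.77%

# HHUS: 10 of 11 cancers detected; 15 858 true negatives of 21 394
sens, spec = screening_metrics(tp=10, fn=1, tn=15858, fp=21394 - 15858)
print(f"HHUS sensitivity={sens:.2%}, specificity={spec:.2%}")  # ~90.91%, ~74.12%
```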

Optimizing meningioma grading with radiomics and deep features integration, attention mechanisms, and reproducibility analysis.

Albadr RJ, Sur D, Yadav A, Rekha MM, Jain B, Jayabalan K, Kubaev A, Taher WM, Alwan M, Jawad MJ, Al-Nuaimi AMA, Mohammadifard M, Farhood B, Akhavan-Sigari R

PubMed · Aug 26, 2025
This study aims to develop a robust and clinically applicable framework for preoperative grading of meningiomas using T1-contrast-enhanced and T2-weighted MRI images. The approach integrates radiomic feature extraction, attention-guided deep learning models, and reproducibility assessment to achieve high diagnostic accuracy, model interpretability, and clinical reliability. We analyzed MRI scans from 2546 patients with histopathologically confirmed meningiomas (1560 low-grade, 986 high-grade). High-quality T1-contrast and T2-weighted images were preprocessed through harmonization, normalization, resizing, and augmentation. Tumor segmentation was performed using ITK-SNAP, and inter-rater reliability of radiomic features was evaluated using the intraclass correlation coefficient (ICC). Radiomic features were extracted via the SERA software, while deep features were derived from pre-trained models (ResNet50 and EfficientNet-B0), with attention mechanisms enhancing focus on tumor-relevant regions. Feature fusion and dimensionality reduction were conducted using PCA and LASSO. Ensemble models employing Random Forest, XGBoost, and LightGBM were implemented to optimize classification performance using both radiomic and deep features. Reproducibility analysis showed that 52% of radiomic features demonstrated excellent reliability (ICC > 0.90). Deep features from EfficientNet-B0 outperformed ResNet50, achieving AUCs of 94.12% (T1) and 93.17% (T2). Hybrid models combining radiomic and deep features further improved performance, with XGBoost reaching AUCs of 95.19% (T2) and 96.87% (T1). Ensemble models incorporating both deep architectures achieved the highest classification performance, with AUCs of 96.12% (T2) and 96.80% (T1), demonstrating superior robustness and accuracy. This work introduces a comprehensive and clinically meaningful AI framework that significantly enhances the preoperative grading of meningiomas. The model's high accuracy, interpretability, and reproducibility support its potential to inform surgical planning, reduce reliance on invasive diagnostics, and facilitate more personalized therapeutic decision-making in routine neuro-oncology practice.
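
A sketch of the kind of fusion pipeline the abstract describes (radiomic plus deep features, PCA- and LASSO-based reduction, gradient-boosted classification); shapes, alpha, and component counts are placeholders, not the paper's settings:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n = 400
radiomic = rng.normal(size=(n, 107))   # stand-in for SERA radiomic features
deep = rng.normal(size=(n, 1280))      # stand-in for an EfficientNet-B0 embedding
y = rng.integers(0, 2, n)              # 0 = low grade, 1 = high grade (synthetic)

fused = np.hstack([radiomic, deep])
fused = PCA(n_components=50, random_state=0).fit_transform(fused)
# LASSO-guided selection; threshold=-inf keeps the top max_features by |coef|
selector = SelectFromModel(Lasso(alpha=0.001), threshold=-np.inf,
                           max_features=20).fit(fused, y)
X = selector.transform(fused)

auc = cross_val_score(XGBClassifier(eval_metric="logloss"), X, y,
                      scoring="roc_auc", cv=5).mean()
print(f"5-fold AUC on synthetic data: {auc:.3f}")  # ~0.5 here, by construction
```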

Validation of an Automated CT Image Analysis in the Prevention of Urinary Stones with Hydration Trial.

Tasian GE, Maalouf NM, Harper JD, Sivalingam S, Logan J, Al-Khalidi HR, Lieske JC, Selman-Fermin A, Desai AC, Lai H, Kirkali Z, Scales CD, Fan Y

PubMed · Aug 26, 2025
Introduction and Objective: Kidney stone growth and new stone formation are common clinical trial endpoints and are associated with future symptomatic events. To date, a manual review of CT scans has been required to assess stone growth and new stone formation, which is laborious. We validated the performance of a software algorithm that automatically identified, registered, and measured stones over longitudinal CT studies. Methods: We validated the performance of a pretrained machine learning algorithm to classify stone outcomes on longitudinal CT scan images at baseline and at the end of the 2-year follow-up period for 62 participants aged >18 years in the Prevention of Urinary Stones with Hydration (PUSH) randomized controlled trial. Stones were defined as an area of voxels with a minimum linear dimension of 2 mm that was higher in density than the mean plus 4 standard deviations of all nonnegative HU values within the kidney. The four outcomes assessed were: (1) growth of at least one existing stone by ≥2 mm, (2) formation of at least one new ≥2 mm stone, (3) no stone growth or new stone formation, and (4) loss of at least one stone. The accuracy of the algorithm was determined by comparing its outcomes to the gold standard of independent review of the CT images by at least two expert clinicians. Results: The algorithm correctly classified outcomes for 61 paired scans (98.4%). One pair that the algorithm incorrectly classified as stone growth was a new renal artery calcification on end-of-study CT. Conclusions: An automated image analysis method validated for the prospective PUSH trial was highly accurate for determining clinical outcomes of new stone formation, stone growth, stable stone size, and stone loss on longitudinal CT images. This method has the potential to improve the accuracy and efficiency of clinical care and endpoint determination for future clinical trials.
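
The Methods' voxel rule is concrete enough to sketch; one possible reading in Python (scipy for connected components; the voxel spacing and the interpretation of "minimum linear dimension" are assumptions):

```python
import numpy as np
from scipy import ndimage

def stone_candidates(hu, kidney_mask, voxel_mm=(1.0, 1.0, 1.0), min_dim_mm=2.0):
    """Flag stone candidates in a CT volume per the rule quoted in the Methods."""
    vals = hu[kidney_mask & (hu >= 0)]          # nonnegative HU inside the kidney
    thresh = vals.mean() + 4 * vals.std()       # mean + 4 SD density cutoff
    labels, n = ndimage.label((hu > thresh) & kidney_mask)
    stones = []
    for i in range(1, n + 1):
        idx = np.argwhere(labels == i)
        extent_mm = (idx.max(0) - idx.min(0) + 1) * np.asarray(voxel_mm)
        # One reading of "minimum linear dimension of 2 mm": the smallest
        # bounding-box extent of the component must reach 2 mm
        if extent_mm.min() >= min_dim_mm:
            stones.append(i)
    return thresh, stones
```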

Random forest-based out-of-distribution detection for robust lung cancer segmentation

Aneesh Rangnekar, Harini Veeraraghavan

arXiv preprint · Aug 26, 2025
Accurate detection and segmentation of cancerous lesions from computed tomography (CT) scans is essential for automated treatment planning and cancer treatment response assessment. Transformer-based models with self-supervised pretraining can produce reliably accurate segmentation from in-distribution (ID) data but degrade when applied to out-of-distribution (OOD) datasets. We address this challenge with RF-Deep, a random forest classifier that utilizes deep features from the pretrained transformer encoder of the segmentation model to detect OOD scans and enhance segmentation reliability. The segmentation model comprises a Swin Transformer encoder, pretrained with masked image modeling (SimMIM) on 10,432 unlabeled 3D CT scans covering cancerous and non-cancerous conditions, and a convolutional decoder trained to segment lung cancers in 317 3D scans. Independent testing was performed on 603 public 3D CT scans comprising one ID dataset and four OOD datasets: chest CTs with pulmonary embolism (PE) and COVID-19, and abdominal CTs with kidney cancers and healthy volunteers. RF-Deep detected OOD cases with an FPR95 of 18.26%, 27.66%, and less than 0.1% on PE, COVID-19, and abdominal CTs, respectively, consistently outperforming established OOD approaches. The RF-Deep classifier provides a simple and effective approach to enhance the reliability of cancer segmentation in ID and OOD scenarios.
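
A toy sketch of the idea: a random forest scores encoder embeddings for OOD-ness, and FPR95 is read off at the threshold that catches 95% of OOD cases. The features here are synthetic stand-ins, and a real evaluation would score held-out scans:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
id_feats = rng.normal(0.0, 1.0, size=(500, 256))   # stand-in ID encoder features
ood_feats = rng.normal(0.8, 1.2, size=(500, 256))  # stand-in OOD encoder features

X = np.vstack([id_feats, ood_feats])
y = np.r_[np.zeros(500), np.ones(500)]             # 1 = OOD
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Illustrative only: scoring the training features back; use held-out data in practice
scores_id = rf.predict_proba(id_feats)[:, 1]
scores_ood = rf.predict_proba(ood_feats)[:, 1]
thresh = np.quantile(scores_ood, 0.05)             # catches 95% of OOD cases
fpr95 = (scores_id >= thresh).mean()               # ID scans wrongly flagged as OOD
print(f"FPR95 = {fpr95:.2%}")
```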

Evaluating the diagnostic accuracy of AI in ischemic and hemorrhagic stroke: A comprehensive meta-analysis.

Gul N, Fatima Y, Shaikh HS, Raheel M, Ali A, Hasan SU

PubMed · Aug 25, 2025
Stroke poses a significant health challenge, with ischemic and hemorrhagic subtypes requiring timely and accurate diagnosis for effective management. Traditional imaging techniques like CT have limitations, particularly in early ischemic stroke detection. Recent advances in artificial intelligence (AI) offer potential improvements in stroke diagnosis by enhancing imaging interpretation. This meta-analysis aims to evaluate the diagnostic accuracy of AI systems compared with human experts in detecting ischemic and hemorrhagic strokes. The review was conducted following PRISMA-DTA guidelines. Included studies evaluated stroke patients in emergency settings using AI-based models on CT or MRI imaging, with human radiologists as the reference standard. MEDLINE, Scopus, and Cochrane Central were searched up to January 1, 2024. The primary outcome was diagnostic accuracy, including sensitivity, specificity, and AUROC; methodological quality was assessed using QUADAS-2. Nine studies met the inclusion criteria. The pooled analysis for ischemic stroke revealed a mean sensitivity of 86.9% (95% CI: 69.9%-95%) and specificity of 88.6% (95% CI: 77.8%-94.5%). For hemorrhagic stroke, the pooled sensitivity and specificity were 90.6% (95% CI: 86.2%-93.6%) and 93.9% (95% CI: 87.6%-97.2%), respectively. The diagnostic odds ratios indicated strong diagnostic efficacy, particularly for hemorrhagic stroke (DOR: 148.8, 95% CI: 79.9-277.2). AI-based systems exhibit high diagnostic accuracy for both ischemic and hemorrhagic strokes, closely approaching that of human radiologists. These findings underscore the potential of AI to improve diagnostic precision and expedite clinical decision-making in acute stroke settings.
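
For intuition, a diagnostic odds ratio can be back-computed from the pooled point estimates; this is a rough consistency check only, since proper meta-analytic pooling does not combine estimates this way:

```python
# DOR as the ratio of positive to negative likelihood ratios.
def diagnostic_odds_ratio(sensitivity, specificity):
    lr_pos = sensitivity / (1 - specificity)   # positive likelihood ratio
    lr_neg = (1 - sensitivity) / specificity   # negative likelihood ratio
    return lr_pos / lr_neg

print(f"Ischemic:    DOR ~ {diagnostic_odds_ratio(0.869, 0.886):.1f}")
print(f"Hemorrhagic: DOR ~ {diagnostic_odds_ratio(0.906, 0.939):.1f}")  # near the reported 148.8
```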

Radiomics-Driven Diffusion Model and Monte Carlo Compression Sampling for Reliable Medical Image Synthesis.

Zhao J, Li S

PubMed · Aug 25, 2025
Reliable medical image synthesis is crucial for clinical applications and downstream tasks, where high-quality anatomical structure and predictive confidence are essential. Existing studies have made significant progress by embedding prior conditional knowledge, such as conditional images or textual information, to synthesize natural images. However, medical image synthesis remains challenging due to: 1) data scarcity: high-quality medical text prompts are extremely rare and require specialized expertise; and 2) insufficient uncertainty estimation: uncertainty estimation is critical for evaluating the confidence of synthesized medical images. This paper presents a novel approach for medical image synthesis, driven by radiomics prompts and combined with Monte Carlo Compression Sampling (MCCS) to ensure reliability. For the first time, our method leverages clinically focused radiomics prompts to condition the generation process, guiding the model to produce reliable medical images. Furthermore, the MCCS algorithm employs Monte Carlo methods to randomly select and compress sampling steps within denoising diffusion implicit models (DDIM), enabling efficient uncertainty quantification. Additionally, we introduce a MambaTrans architecture to model long-range dependencies in medical images and embed prior conditions (e.g., radiomics prompts). Extensive experiments on benchmark medical imaging datasets demonstrate that our approach significantly improves image quality and reliability, outperforming state-of-the-art methods in both qualitative and quantitative evaluations.
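
A speculative sketch of the MCCS idea as the abstract states it: repeat DDIM sampling over randomly drawn, compressed timestep subsets, then use the spread across runs as an uncertainty map. The denoise_step stub is a hypothetical stand-in for a trained model's DDIM update, not the paper's implementation:

```python
import numpy as np

def denoise_step(x, t):                       # placeholder for a DDIM update
    return x * (1 - 0.001 * t)

def mccs_sample(x_T, total_steps=1000, kept_steps=50, runs=8, rng=None):
    rng = rng or np.random.default_rng(0)
    samples = []
    for _ in range(runs):
        # Monte Carlo draw: keep a random, descending subset of timesteps
        ts = np.sort(rng.choice(total_steps, size=kept_steps, replace=False))[::-1]
        x = x_T.copy()
        for t in ts:
            x = denoise_step(x, t)
        samples.append(x)
    samples = np.stack(samples)
    return samples.mean(0), samples.std(0)    # image estimate, voxelwise uncertainty

mean_img, uncertainty = mccs_sample(np.random.default_rng(1).normal(size=(64, 64)))
print(uncertainty.mean())
```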

TransSeg: Leveraging Transformer with Channel-Wise Attention and Semantic Memory for Semi-Supervised Ultrasound Segmentation.

Lyu J, Li L, Al-Hazzaa SAF, Wang C, Hossain MS

PubMed · Aug 25, 2025
During labor, transperineal ultrasound imaging can acquire real-time midsagittal images, through which the pubic symphysis and fetal head can be accurately identified, and the angle of progression (AoP) between them can be calculated, thereby quantitatively evaluating the descent and position of the fetal head in the birth canal in real time. However, current segmentation methods based on convolutional neural networks (CNNs) and Transformers generally depend heavily on large-scale manually annotated data, which limits their adoption in practical applications. In light of this limitation, this paper develops a new Transformer-based Semi-supervised Segmentation Network (TransSeg). This method employs a Vision Transformer as the backbone network and introduces a Channel-wise Cross Attention (CCA) mechanism to effectively reconstruct the features of unlabeled samples into the labeled feature space, promoting architectural innovation in semi-supervised segmentation and eliminating the need for complex training strategies. In addition, we design a Semantic Information Storage (S-InfoStore) module and a Channel Semantic Update (CSU) strategy to dynamically store and update feature representations of unlabeled samples, thereby continuously enhancing their expressiveness in the feature space and significantly improving the model's utilization of unlabeled data. We conduct a systematic evaluation of the proposed method on the FH-PS-AoP dataset. Experimental results demonstrate that TransSeg outperforms existing mainstream methods across all evaluation metrics, verifying its effectiveness and advancement in semi-supervised semantic segmentation tasks.
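
The abstract does not spell out the CCA design, but a channel-wise attention block in which labeled-image features gate unlabeled-image features can be sketched in PyTorch; the module name and internals are illustrative assumptions:

```python
import torch
import torch.nn as nn

class ChannelCrossAttention(nn.Module):
    """Labeled features produce per-channel weights that re-weight unlabeled
    features, nudging them toward the labeled feature space (illustrative)."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.gate = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, labeled_feats, unlabeled_feats):
        # Global-average-pool labeled features to a channel descriptor (B, C)
        desc = labeled_feats.mean(dim=(2, 3))
        weights = self.gate(desc).unsqueeze(-1).unsqueeze(-1)   # (B, C, 1, 1)
        return unlabeled_feats * weights                        # re-weighted channels

cca = ChannelCrossAttention(channels=64)
out = cca(torch.randn(2, 64, 32, 32), torch.randn(2, 64, 32, 32))
print(out.shape)  # torch.Size([2, 64, 32, 32])
```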