
Rosen, K. L., Mandl, K. D.

medRxiv preprint · Aug 27, 2025
Importance: Predetermined Change Control Plans (PCCPs) are a recent regulatory innovation introduced by the U.S. Food and Drug Administration (FDA) to enable dynamic oversight of artificial intelligence and machine learning (AI/ML)-enabled medical devices. Objective: To characterize the FDA's PCCP program among AI/ML-enabled medical devices, including device characteristics, preapproval testing, planned modifications, and post-clearance update mechanisms. Design: This cross-sectional study reviewed FDA-cleared or -approved AI/ML-enabled medical devices with authorized PCCPs. Setting: AI/ML-enabled devices approved or cleared prior to May 30, 2025 were identified from an FDA-maintained public list, and their characteristics were extracted from FDA approval databases. Participants: N/A. Main Outcome(s) and Measure(s): Primary outcomes included (1) prevalence and characteristics of devices with authorized PCCPs, (2) types of FDA-authorized modifications, (3) presence and nature of preapproval testing, such as study design and subgroup testing, and (4) postmarket device update mechanisms and transparency. Results: Among 26 identified AI/ML-enabled medical devices with authorized PCCPs, 92% were cleared via the 510(k) pathway, and all were classified as moderate risk. Devices were primarily intended for use in diagnosis or clinical assessment, and six had consumer-facing components. Authorized modifications spanned the product lifecycle, most commonly allowing model retraining (69% of devices), logic updates (42%), and expansion of input sources (35%). Preapproval testing was limited: seven devices were prospectively evaluated and thirteen underwent human factors testing. Subgroup analyses were reported for eleven devices, and none included patient outcomes data. No postmarket studies or recalls were identified. User manuals could be found online for 54% of devices, though many lacked performance details or any mention of PCCPs. Conclusions and Relevance: FDA authorization of PCCPs grants manufacturers substantial flexibility to modify AI/ML-enabled devices postmarket, while preapproval testing and postmarket transparency remain limited. These findings highlight the need for strengthened oversight mechanisms to ensure the ongoing safety and effectiveness of rapidly evolving AI/ML-enabled technologies in clinical care.

Bergsland, N., Burnham, A., Dwyer, M. G., Bartnik, A., Schweser, F., Kennedy, C., Tranquille, A., Semy, M., Schnee, E., Young-Hong, D., Eckert, S., Hojnacki, D., Reilly, C., Benedict, R. H., Weinstock-Guttman, B., Zivadinov, R.

medRxiv preprint · Aug 27, 2025
Background: Severe multiple sclerosis (MS) presents challenges for clinical research due to mobility constraints and specialized care needs. Traditional MRI studies often exclude this population, limiting understanding of severe MS progression. Portable, ultra-low-field MRI enables bedside imaging. Objectives: To (i) assess the feasibility of portable MRI in severe MS and (ii) compare measurement approaches for automated tissue volumetry from ultra-low-field MRI. Methods: This prospective study enrolled 40 progressive MS patients (24 severely disabled, 16 less severe) from academic and skilled nursing settings. Participants underwent 0.064T MRI for tissue volumetry using conventional and artificial intelligence (AI)-driven segmentation. Clinical assessments included physical disability and cognition. Group comparisons and MRI-clinical associations were assessed. Results: MRI passed rigorous quality control, reflecting complete brain coverage and absence of motion artifact, in 38/40 participants. For severe versus less severe disease, the largest effect sizes were obtained with conventionally calculated gray matter (GM) volume (partial η² = 0.360), cortical GM volume (partial η² = 0.349), and whole brain volume (partial η² = 0.290), while an AI-based approach yielded the highest effect size for white matter volume (partial η² = 0.209). For clinical outcomes, the most consistent associations were found using conventional processing, while AI-based methods were dependent on algorithm and input image, especially for cortical GM volume. Conclusion: Portable, ultra-low-field MRI is a feasible bedside tool that can provide insights into late-stage neurodegeneration in individuals living with severe MS. However, careful consideration is required in implementing tissue volumetry pipelines, as findings are heavily dependent on the choice of algorithm and input.
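
For reference, partial η² is the standard ANOVA effect size, defined by general convention (not a study-specific formula) as the share of effect-plus-error variance attributable to the effect:

```latex
% Standard definition of partial eta squared, the effect size reported above
% (general ANOVA convention, not a formula specific to this study):
\[
  \eta_p^2 \;=\; \frac{SS_{\text{effect}}}{SS_{\text{effect}} + SS_{\text{error}}}
\]
```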

Tanyildizi-Kokkulunk H

PubMed paper · Aug 26, 2025
This work aimed to employ machine learning (ML) algorithms to accurately forecast radiation doses to phantoms across the most popular CT protocols. Cloud-based software was used to calculate the effective doses from the different CT protocols. To simulate a range of adult patients with different weights, eight whole-body mesh-based computational phantom sets were used. The head, neck, and chest-abdomen-pelvis CT scan characteristics were combined to create a dataset with 33 rows per phantom and 792 rows in total. At the ML stage, linear regression (LR), random forest (RF), and support vector regression (SVR) were used; mean absolute error, mean squared error, and accuracy were used to evaluate their performance. The female phantoms received higher doses (by 7.8%) than the male phantoms. Furthermore, the normal-weight phantom received on average 11% more dose than the overweight phantom, the overweight more than obese I, and obese I more than obese II. Among the ML algorithms, LR showed a zero error rate and 100% accuracy in predicting CT doses, making it the best of the approaches evaluated for ML estimation of CT-induced doses.
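
A minimal sketch of the regressor comparison described above, assuming a tabular dataset of CT protocol/phantom parameters (X) and effective doses (y); the synthetic data, feature count, and split are illustrative, not taken from the study:

```python
# Compare LR, RF, and SVR on a toy protocol-parameters -> dose regression task.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(size=(792, 6))   # 792 rows, as in the dataset described above
y = X @ rng.uniform(size=6)      # stand-in for the cloud-computed dose (mSv)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, model in {"LR": LinearRegression(),
                    "RF": RandomForestRegressor(random_state=0),
                    "SVR": SVR()}.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: MAE={mean_absolute_error(y_te, pred):.4f} "
          f"MSE={mean_squared_error(y_te, pred):.4f}")
```

When dose is (near-)linear in the protocol parameters, as in this toy setup, LR drives both errors to essentially zero, which is one plausible reading of the reported zero-error result.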

Huang J, Xiang Y, Gan S, Wu L, Yan J, Ye D, Zhang J

PubMed paper · Aug 26, 2025
This narrative review provides a comprehensive and structured overview of recent advances in the application of artificial intelligence (AI) to medical imaging for tumour diagnosis and treatment. By synthesizing evidence from recent literature and clinical reports, we highlight the capabilities, limitations, and translational potential of AI techniques across key imaging modalities such as CT, MRI, and PET. Deep learning (DL) and radiomics have facilitated automated lesion detection, tumour segmentation, and prognostic assessments, improving early cancer detection across various malignancies, including breast, lung, and prostate cancers. AI-driven multi-modal imaging fusion integrates radiomics, genomics, and clinical data, refining precision oncology strategies. Additionally, AI-assisted radiotherapy planning and adaptive dose optimisation have enhanced therapeutic efficacy while minimising toxicity. However, challenges persist regarding data heterogeneity, model generalisability, regulatory constraints, and ethical concerns. The lack of standardised datasets and explainable AI (XAI) frameworks hinders clinical adoption. Future research should focus on improving AI interpretability, fostering multi-centre dataset interoperability, and integrating AI with molecular imaging and real-time clinical decision support. Addressing these challenges will ensure AI's seamless integration into clinical oncology, optimising cancer diagnosis, prognosis, and treatment outcomes.

Schiaffino S, Bernardi D, Healy N, Marino MA, Romeo V, Sechopoulos I, Mann RM, Pinker K

PubMed paper · Aug 26, 2025
Artificial intelligence (AI) can enhance the diagnostic performance of breast cancer imaging and improve workflow optimization, potentially mitigating excessive radiologist workload and suboptimal diagnostic accuracy. AI can also boost imaging capabilities through individual risk prediction, molecular subtyping, and neoadjuvant therapy response predictions. Evidence demonstrates AI's potential across multiple modalities. The most robust data come from mammographic screening, where AI models improve diagnostic accuracy and optimize workflow, but rigorous post-market surveillance is required before any implementation strategy in this field. Commercial tools for digital breast tomosynthesis and ultrasound, potentially able to reduce interpretation time and improve accuracy, are also available, but post-implementation evaluation studies are likewise lacking. Besides basic tools for breast MRI with limited proven clinical benefit, AI applications for other modalities are not yet commercially available. Applications in contrast-enhanced mammography are still in the research stage, especially for radiomics-based molecular subtype classification. Applications of Large Language Models (LLMs) are in their infancy, and there are currently no clinical applications. Consequently, and despite their promise, all commercially available AI tools for breast imaging should currently still be regarded as techniques that, at best, aid radiologists in image evaluation. Their use is therefore optional, and the findings may always be overruled.
KEY POINTS:
- AI systems improve diagnostic accuracy and efficiency of mammography screening, but long-term outcomes data are lacking.
- Commercial tools for digital breast tomosynthesis and ultrasound are available, but post-implementation evaluation studies are lacking.
- AI tools for breast imaging should still be regarded as a non-obligatory aid to radiologists for image interpretation.

Yi DS, Sun WY, Song HP, Zhao XL, Hu SY, Gu X, Gao Y, Zhao FH

PubMed paper · Aug 26, 2025
Objective: To compare the diagnostic performance of artificial intelligence-assisted automated breast ultrasound (AI-ABUS) with traditional handheld ultrasound (HHUS) in breast cancer screening. Methods: A total of 36 171 women undergoing breast cancer ultrasound screening in Futian District, Shenzhen, between July 1, 2023 and June 30, 2024 were prospectively recruited and assigned to either the AI-ABUS or HHUS group based on the screening modality used. In the AI-ABUS group, image acquisition was performed on-site by technicians, and two ultrasound physicians conducted remote diagnoses with AI assistance, supported by a follow-up management system. In the HHUS group, one ultrasound physician conducted both image acquisition and diagnosis on-site, and follow-up was led by clinical physicians. Based on the reported malignancy rates of different BI-RADS categories, the number of undiagnosed breast cancer cases in individuals without pathology was estimated, and adjusted detection rates were calculated. Primary outcomes included screening positive rate, biopsy rate, cancer detection rate, loss-to-follow-up rate, specificity, and sensitivity. Results: The median age [interquartile range, M (Q1, Q3)] of the 36 171 women was 43.8 (36.6, 50.8) years. A total of 14 766 women (40.82%) were screened with AI-ABUS and 21 405 (59.18%) with HHUS. Baseline characteristics showed no significant differences between the groups (all P>0.05). The AI-ABUS group had a lower screening positive rate [0.59% (87/14 766) vs 1.94% (416/21 405)] but a higher biopsy rate [47.13% (41/87) vs 16.10% (67/416)], a higher cancer detection rate [1.69‰ (25/14 766) vs 0.47‰ (10/21 428)], and a lower loss-to-follow-up rate (6.90% vs 71.39%) compared with the HHUS group (all P<0.05). There was no statistically significant difference in the distribution of breast cancer pathological stages among those who underwent biopsy between the two groups (P>0.05). The specificity of AI-ABUS was higher than that of HHUS [89.77% (13 231/14 739) vs 74.12% (15 858/21 394), P<0.05], while sensitivity did not differ significantly [92.59% (25/27) vs 90.91% (10/11), P>0.05]. After estimating undiagnosed cancer cases among participants without pathology, the adjusted detection rate was 2.30‰ (34/14 766) in the AI-ABUS group and ranged from 1.17‰ to 2.75‰ [(25-59)/21 428] in the HHUS group. In the minimum estimation scenario, the detection rate in the AI-ABUS group was significantly higher (P<0.05); in the maximum estimation scenario, the difference was not statistically significant (P>0.05). Conclusions: The AI-ABUS model, combined with an intelligent follow-up management system, achieves a higher breast cancer detection rate with a lower screening positive rate, improved specificity, and reduced loss to follow-up. This suggests AI-ABUS is a promising alternative model for breast cancer screening.
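
As a worked check, the headline metrics follow directly from the counts quoted in the abstract; the sketch below recomputes them (illustrative arithmetic, not the study's analysis code):

```python
# Recompute specificity, sensitivity, and per-mille detection rates from
# the abstract's reported counts.
def pct(numer: int, denom: int) -> str:
    return f"{numer / denom:.2%}"

print("Specificity:", pct(13_231, 14_739), "(AI-ABUS) vs", pct(15_858, 21_394), "(HHUS)")
print("Sensitivity:", pct(25, 27), "(AI-ABUS) vs", pct(10, 11), "(HHUS)")

# Cancer detection rate, expressed per mille
print(f"Detection: {25 / 14_766 * 1000:.2f} per mille (AI-ABUS) vs "
      f"{10 / 21_428 * 1000:.2f} per mille (HHUS)")
```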

Albadr RJ, Sur D, Yadav A, Rekha MM, Jain B, Jayabalan K, Kubaev A, Taher WM, Alwan M, Jawad MJ, Al-Nuaimi AMA, Mohammadifard M, Farhood B, Akhavan-Sigari R

PubMed paper · Aug 26, 2025
This study aims to develop a robust and clinically applicable framework for preoperative grading of meningiomas using T1-contrast-enhanced and T2-weighted MRI images. The approach integrates radiomic feature extraction, attention-guided deep learning models, and reproducibility assessment to achieve high diagnostic accuracy, model interpretability, and clinical reliability. We analyzed MRI scans from 2546 patients with histopathologically confirmed meningiomas (1560 low-grade, 986 high-grade). High-quality T1-contrast and T2-weighted images were preprocessed through harmonization, normalization, resizing, and augmentation. Tumor segmentation was performed using ITK-SNAP, and inter-rater reliability of radiomic features was evaluated using the intraclass correlation coefficient (ICC). Radiomic features were extracted via the SERA software, while deep features were derived from pre-trained models (ResNet50 and EfficientNet-B0), with attention mechanisms enhancing focus on tumor-relevant regions. Feature fusion and dimensionality reduction were conducted using PCA and LASSO. Ensemble models employing Random Forest, XGBoost, and LightGBM were implemented to optimize classification performance using both radiomic and deep features. Reproducibility analysis showed that 52% of radiomic features demonstrated excellent reliability (ICC > 0.90). Deep features from EfficientNet-B0 outperformed ResNet50, achieving AUCs of 94.12% (T1) and 93.17% (T2). Hybrid models combining radiomic and deep features further improved performance, with XGBoost reaching AUCs of 95.19% (T2) and 96.87% (T1). Ensemble models incorporating both deep architectures achieved the highest classification performance, with AUCs of 96.12% (T2) and 96.80% (T1), demonstrating superior robustness and accuracy. This work introduces a comprehensive and clinically meaningful AI framework that significantly enhances the preoperative grading of meningiomas. The model's high accuracy, interpretability, and reproducibility support its potential to inform surgical planning, reduce reliance on invasive diagnostics, and facilitate more personalized therapeutic decision-making in routine neuro-oncology practice. Trial registration: not applicable.
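
A minimal sketch of the hybrid pipeline this abstract describes, under stated assumptions: synthetic arrays stand in for the SERA radiomic features and EfficientNet-B0/ResNet50 embeddings, and scikit-learn's GradientBoostingClassifier stands in for the XGBoost/LightGBM ensembles to keep the sketch dependency-light.

```python
# ICC-filter radiomic features, fuse with deep embeddings, reduce with PCA,
# select via LASSO, then classify with a gradient-boosted ensemble.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LassoCV
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 400
radiomic = rng.normal(size=(n, 120))   # stand-in radiomic feature matrix
icc = rng.uniform(0.5, 1.0, size=120)  # precomputed inter-rater ICC per feature
radiomic = radiomic[:, icc > 0.90]     # keep only highly reproducible features

deep = rng.normal(size=(n, 256))       # stand-in deep embeddings
y = (deep[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)  # toy grade label

# Fuse, reduce with PCA, then keep components with nonzero LASSO coefficients
X = StandardScaler().fit_transform(np.hstack([radiomic, deep]))
X = PCA(n_components=50, random_state=0).fit_transform(X)
keep = LassoCV(cv=5, random_state=0).fit(X, y).coef_ != 0
X_sel = X[:, keep] if keep.any() else X  # fall back if LASSO zeroes everything

clf = GradientBoostingClassifier(random_state=0)
print("CV AUC:", cross_val_score(clf, X_sel, y, cv=5, scoring="roc_auc").mean())
```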

Tasian GE, Maalouf NM, Harper JD, Sivalingam S, Logan J, Al-Khalidi HR, Lieske JC, Selman-Fermin A, Desai AC, Lai H, Kirkali Z, Scales CD, Fan Y

PubMed paper · Aug 26, 2025
Introduction and Objective: Kidney stone growth and new stone formation are common clinical trial endpoints and are associated with future symptomatic events. To date, a manual review of CT scans has been required to assess stone growth and new stone formation, which is laborious. We validated the performance of a software algorithm that automatically identified, registered, and measured stones over longitudinal CT studies. Methods: We validated the performance of a pretrained machine learning algorithm to classify stone outcomes on longitudinal CT scan images at baseline and at the end of the 2-year follow-up period for 62 participants aged >18 years in the Prevention of Urinary Stones with Hydration (PUSH) randomized controlled trial. Stones were defined as an area of voxels with a minimum linear dimension of 2 mm that was higher in density than the mean plus 4 standard deviations of all nonnegative HU values within the kidney. The four outcomes assessed were: (1) growth of at least one existing stone by ≥2 mm, (2) formation of at least one new ≥2 mm stone, (3) no stone growth or new stone formation, and (4) loss of at least one stone. The accuracy of the algorithm was determined by comparing its outcomes to the gold standard of independent review of the CT images by at least two expert clinicians. Results: The algorithm correctly classified outcomes for 61 paired scans (98.4%). The one pair the algorithm incorrectly classified as stone growth was a new renal artery calcification on the end-of-study CT. Conclusions: An automated image analysis method validated for the prospective PUSH trial was highly accurate in determining the clinical outcomes of new stone formation, stone growth, stable stone size, and stone loss on longitudinal CT images. This method has the potential to improve the accuracy and efficiency of clinical care and endpoint determination for future clinical trials.
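
The stone definition above is concrete enough to sketch. The snippet below is one plausible reading, not the validated PUSH algorithm: the voxel spacing, array names, and the bounding-box test for the 2 mm minimum linear dimension are assumptions for illustration.

```python
# Threshold kidney voxels at mean + 4 SD of nonnegative HU, then keep 3D
# connected components whose smallest bounding-box extent reaches 2 mm.
import numpy as np
from scipy import ndimage

def detect_stones(hu: np.ndarray, kidney_mask: np.ndarray,
                  spacing_mm=(1.0, 1.0, 1.0), min_dim_mm=2.0):
    vals = hu[kidney_mask & (hu >= 0)]        # nonnegative HU within the kidney
    threshold = vals.mean() + 4 * vals.std()  # mean + 4 standard deviations
    labels, _ = ndimage.label(kidney_mask & (hu > threshold))

    stones = []
    for box in ndimage.find_objects(labels):  # bounding box per component
        extents = [(s.stop - s.start) * sp for s, sp in zip(box, spacing_mm)]
        if min(extents) >= min_dim_mm:        # minimum linear dimension >= 2 mm
            stones.append(box)
    return threshold, stones
```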

Aneesh Rangnekar, Harini Veeraraghavan

arXiv preprint · Aug 26, 2025
Accurate detection and segmentation of cancerous lesions from computed tomography (CT) scans is essential for automated treatment planning and cancer treatment response assessment. Transformer-based models with self-supervised pretraining can produce reliably accurate segmentation on in-distribution (ID) data but degrade when applied to out-of-distribution (OOD) datasets. We address this challenge with RF-Deep, a random forest classifier that uses deep features from the pretrained transformer encoder of the segmentation model to detect OOD scans and enhance segmentation reliability. The segmentation model comprises a Swin Transformer encoder, pretrained with masked image modeling (SimMIM) on 10,432 unlabeled 3D CT scans covering cancerous and non-cancerous conditions, and a convolutional decoder trained to segment lung cancers in 317 3D scans. Independent testing was performed on 603 3D CT scans from public datasets, comprising one ID dataset and four OOD datasets of chest CTs with pulmonary embolism (PE) and COVID-19 and abdominal CTs with kidney cancers and healthy volunteers. RF-Deep detected OOD cases with an FPR95 of 18.26%, 27.66%, and less than 0.1% on PE, COVID-19, and abdominal CTs, respectively, consistently outperforming established OOD approaches. The RF-Deep classifier provides a simple and effective approach to enhancing the reliability of cancer segmentation in ID and OOD scenarios.
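
FPR95 is quoted without definition; under one common convention it is the false-positive rate on ID scans at the score threshold that detects 95% of OOD scans. A small sketch with simulated scores (in RF-Deep the scores would come from the random forest applied to the pretrained encoder's deep features):

```python
# Compute FPR at 95% TPR for an OOD detector from its scores on ID/OOD cases.
import numpy as np

def fpr_at_95_tpr(ood_scores: np.ndarray, id_scores: np.ndarray) -> float:
    threshold = np.quantile(ood_scores, 0.05)      # 95% of OOD scores lie above
    return float((id_scores >= threshold).mean())  # ID scans wrongly flagged

rng = np.random.default_rng(0)
id_scores = rng.normal(0.2, 0.10, size=500)    # ID scans: low OOD score
ood_scores = rng.normal(0.7, 0.15, size=300)   # OOD scans: high OOD score
print(f"FPR95 = {fpr_at_95_tpr(ood_scores, id_scores):.2%}")
```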

Shahriari A, Ghazanafar Ahari S, Mousavi A, Sadeghi M, Abbasi M, Hosseinpour M, Mir A, Zohouri Zanganeh D, Gharedaghi H, Ezati S, Sareminia A, Seyedi D, Shokouhfar M, Darzi A, Ghaedamini A, Zamani S, Khosravi F, Asadi Anar M

PubMed paper · Aug 26, 2025
Machine learning (ML) applied to radiomics has revolutionized neuro-oncological imaging, yet the diagnostic performance of ML models based specifically on ¹⁸F-FDG PET features in glioma remains poorly characterized. To systematically evaluate and quantitatively synthesize the diagnostic accuracy of ML models trained on ¹⁸F-FDG PET radiomics for glioma classification, we conducted a PRISMA-compliant systematic review and meta-analysis registered on OSF (https://doi.org/10.17605/OSF.IO/XJG6P). PubMed, Scopus, and Web of Science were searched up to January 2025. Studies were included if they applied ML algorithms to ¹⁸F-FDG PET radiomic features for glioma classification and reported at least one performance metric. Data extraction covered demographics, imaging protocols, feature types, ML models, and validation design. Meta-analysis was performed using random-effects models with pooled estimates of accuracy, sensitivity, specificity, AUC, F1 score, and precision. Heterogeneity was explored via meta-regression and Galbraith plots. Twelve studies comprising 2,321 patients were included. Pooled diagnostic metrics were: accuracy 92.6% (95% CI: 91.3-93.9%), AUC 0.95 (95% CI: 0.94-0.95), sensitivity 85.4%, specificity 89.7%, F1 score 0.78, and precision 0.90. Heterogeneity was high across all domains (I² > 75%). Meta-regression identified ML model type and validation strategy as partial moderators. Models using CNNs or PET/MRI integration achieved superior performance. ML models based on ¹⁸F-FDG PET radiomics demonstrate strong and balanced diagnostic performance for glioma classification. However, methodological heterogeneity underscores the need for standardized pipelines, external validation, and transparent reporting before clinical integration.
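
For readers unfamiliar with the pooling step, below is a minimal DerSimonian-Laird random-effects sketch of the kind of model named above; the per-study estimates and variances are illustrative, and the published analysis may differ in transformation (e.g., logit) and software.

```python
# DerSimonian-Laird random-effects pooling with Cochran's Q and I^2.
import numpy as np

def dersimonian_laird(effects, variances):
    w = 1.0 / variances                        # inverse-variance (fixed) weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)     # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)              # between-study variance
    w_re = 1.0 / (variances + tau2)            # random-effects weights
    pooled = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0  # I^2 heterogeneity
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

effects = np.array([0.90, 0.94, 0.92, 0.95, 0.89])    # toy per-study accuracies
variances = np.array([4e-4, 3e-4, 5e-4, 2e-4, 6e-4])  # toy within-study variances
pooled, ci, i2 = dersimonian_laird(effects, variances)
print(f"pooled={pooled:.3f}, 95% CI=({ci[0]:.3f}, {ci[1]:.3f}), I2={i2:.0f}%")
```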