
Impact of three-dimensional prostate models during robot-assisted radical prostatectomy on surgical margins and functional outcomes.

Khan N, Prezzi D, Raison N, Shepherd A, Antonelli M, Byrne N, Heath M, Bunton C, Seneci C, Hyde E, Diaz-Pinto A, Macaskill F, Challacombe B, Noel J, Brown C, Jaffer A, Cathcart P, Ciabattini M, Stabile A, Briganti A, Gandaglia G, Montorsi F, Ourselin S, Dasgupta P, Granados A

PubMed · Jul 13 2025
Robot-assisted radical prostatectomy (RARP) is the standard surgical procedure for the treatment of prostate cancer. RARP requires a trade-off between performing a wider resection in order to reduce the risk of positive surgical margins (PSMs) and performing minimal resection of the nerve bundles that determine functional outcomes, such as incontinence and potency, which affect patients' quality of life. In order to achieve favourable outcomes, a precise understanding of the three-dimensional (3D) anatomy of the prostate, nerve bundles and tumour lesion is needed. This is the protocol for a single-centre feasibility study including a prospective two-arm interventional group (a 3D virtual and a 3D printed prostate model), and a prospective control group. The primary endpoint will be PSM status and the secondary endpoint will be functional outcomes, including incontinence and sexual function. The study will consist of a total of 270 patients: 54 patients will be included in each of the interventional groups (3D virtual, 3D printed models), 54 in the retrospective control group and 108 in the prospective control group. Automated segmentation of prostate gland and lesions will be conducted on multiparametric magnetic resonance imaging (mpMRI) using 'AutoProstate' and 'AutoLesion' deep learning approaches, while manual annotation of the neurovascular bundles, urethra and external sphincter will be conducted on mpMRI by a radiologist. This will result in masks that will be post-processed to generate 3D printed/virtual models. Patients will be allocated to either interventional arm and the surgeon will be given either a 3D printed or a 3D virtual model at the start of the RARP procedure. At the 6-week follow-up, the surgeon will meet with the patient to present PSM status and capture functional outcomes from the patient via questionnaires. We will capture these measures as endpoints for analysis. These questionnaires will be re-administered at 3, 6 and 12 months postoperatively.
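
As an illustration of the kind of post-processing the protocol describes (turning segmentation masks into printable/virtual models), the sketch below converts a binary mask into a surface mesh and writes it as an STL file. It is a minimal example using scikit-image and numpy-stl on a synthetic mask; the function name `mask_to_stl` and all parameters are illustrative assumptions, not the study's actual pipeline.

```python
# Minimal sketch: binary segmentation mask -> STL surface mesh (not the study's code).
import numpy as np
from skimage import measure
from stl import mesh as stl_mesh  # numpy-stl package

def mask_to_stl(mask: np.ndarray, spacing=(1.0, 1.0, 1.0), out_path="prostate_model.stl"):
    """Extract an isosurface from a binary mask and save it as an STL file."""
    # Marching cubes returns vertices (already scaled by voxel spacing) and triangular faces.
    verts, faces, _, _ = measure.marching_cubes(mask.astype(np.uint8), level=0.5, spacing=spacing)
    solid = stl_mesh.Mesh(np.zeros(faces.shape[0], dtype=stl_mesh.Mesh.dtype))
    for i, tri in enumerate(faces):
        solid.vectors[i] = verts[tri]  # three (x, y, z) vertices per triangle
    solid.save(out_path)

# Example: a crude 40 mm sphere standing in for a prostate mask.
zz, yy, xx = np.mgrid[:64, :64, :64]
sphere = ((xx - 32) ** 2 + (yy - 32) ** 2 + (zz - 32) ** 2) < 20 ** 2
mask_to_stl(sphere, spacing=(1.0, 1.0, 1.0))
```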

Diabetic Tibial Neuropathy Prediction: Improving Interpretability of Various Machine-Learning Models Based on Multimodal-Ultrasound Features Using SHAP Methodology.

Chen Y, Sun Z, Zhong H, Chen Y, Wu X, Su L, Lai Z, Zheng T, Lyu G, Su Q

PubMed · Jul 12 2025
This study aimed to develop and evaluate eight machine learning models based on multimodal ultrasound to precisely predict diabetic tibial neuropathy (DTN) in patients. Additionally, the SHapley Additive exPlanations (SHAP) framework was introduced to quantify the importance of each feature variable, providing a precise and noninvasive assessment tool for DTN patients, optimizing clinical management strategies, and enhancing patient prognosis. A prospective analysis was conducted using multimodal ultrasound and clinical data from 255 suspected DTN patients who visited the Second Affiliated Hospital of Fujian Medical University between January 2024 and November 2024. Key features were selected using Least Absolute Shrinkage and Selection Operator (LASSO) regression. Predictive models were constructed using Extreme Gradient Boosting (XGB), Logistic Regression, Support Vector Machines, k-Nearest Neighbors, Random Forest, Decision Tree, Naïve Bayes, and Neural Network. The SHAP method was employed to refine model interpretability. Furthermore, to verify the generalizability of the model, this study also collected 135 patients from three other tertiary hospitals for external testing. LASSO regression identified echo intensity (EI), cross-sectional area (CSA), mean elasticity value (Emean), superb microvascular imaging (SMI), and history of smoking as key features for DTN prediction. The XGB model achieved an Area Under the Curve (AUC) of 0.94, 0.83 and 0.79 in the training, internal test and external test sets, respectively. SHAP analysis highlighted the importance ranking of EI, CSA, Emean, SMI, and history of smoking. Personalized prediction explanations provided by the SHAP values demonstrated the contribution of each feature to the final prediction, enhancing model interpretability. Furthermore, decision plots depicted how different features influenced mispredictions, thereby facilitating further model optimization or feature adjustment. This study proposed a DTN prediction model based on machine-learning algorithms applied to multimodal ultrasound data. The results indicated the superior performance of the XGB model, and its interpretability was enhanced using SHAP analysis. This cost-effective and user-friendly approach provides potential support for personalized treatment and precision medicine for DTN.
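
The modelling steps described above (LASSO feature selection, an XGBoost classifier, SHAP interpretation) can be sketched as follows. This is a hedged illustration on synthetic data, not the authors' code; the generated feature columns stand in for the ultrasound variables (EI, CSA, Emean, SMI, smoking history).

```python
# Hedged sketch of the LASSO -> XGBoost -> SHAP pipeline on synthetic data.
import numpy as np
import pandas as pd
import shap
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.linear_model import LassoCV
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=255, n_features=12, n_informative=5, random_state=0)
cols = [f"feat_{i}" for i in range(X.shape[1])]  # placeholders for EI, CSA, Emean, SMI, smoking, ...
X = pd.DataFrame(X, columns=cols)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

# 1) LASSO feature screening: keep features with non-zero coefficients.
lasso = LassoCV(cv=5, random_state=0).fit(X_tr, y_tr)
selected = [c for c, w in zip(cols, lasso.coef_) if abs(w) > 1e-6]

# 2) XGBoost classifier on the selected features.
clf = xgb.XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.1, eval_metric="logloss")
clf.fit(X_tr[selected], y_tr)
print("test AUC:", roc_auc_score(y_te, clf.predict_proba(X_te[selected])[:, 1]))

# 3) SHAP values quantify each feature's contribution to individual predictions.
explainer = shap.TreeExplainer(clf)
shap_values = explainer.shap_values(X_te[selected])
shap.summary_plot(shap_values, X_te[selected], show=False)
```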

Accelerated brain magnetic resonance imaging with deep learning reconstruction: a comparative study on image quality in pediatric neuroimaging.

Choi JW, Cho YJ, Lee SB, Lee S, Hwang JY, Choi YH, Cheon JE, Lee J

PubMed · Jul 12 2025
Magnetic resonance imaging (MRI) is crucial in pediatric radiology; however, the prolonged scan time is a major drawback that often requires sedation. Deep learning reconstruction (DLR) is a promising method for accelerating MRI acquisition. To evaluate the clinical feasibility of accelerated brain MRI with DLR in pediatric neuroimaging, focusing on image quality compared to conventional MRI. In this retrospective study, 116 pediatric participants (mean age 7.9 ± 5.4 years) underwent routine brain MRI with three reconstruction methods: conventional MRI without DLR (C-MRI), conventional MRI with DLR (DLC-MRI), and accelerated MRI with DLR (DLA-MRI). Two pediatric radiologists independently assessed the overall image quality, sharpness, artifacts, noise, and lesion conspicuity. Quantitative image analysis included the measurement of image noise and coefficient of variation (CoV). DLA-MRI reduced the scan time by 43% compared with C-MRI. Compared with C-MRI, DLA-MRI demonstrated higher scores for overall image quality, noise, and artifacts, as well as similar or higher scores for lesion conspicuity, but similar or lower scores for sharpness. DLC-MRI demonstrated the highest scores for all the parameters. Despite variations in image quality and lesion conspicuity, the lesion detection rates were 100% across all three reconstructions. Quantitative analysis revealed lower noise and CoV for DLA-MRI than those for C-MRI. Interobserver agreement was substantial to almost perfect (weighted Cohen's kappa = 0.72-0.97). DLR enabled faster MRI with improved image quality compared with conventional MRI, highlighting its potential to address prolonged MRI scan times in pediatric neuroimaging and optimize clinical workflows.
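
For reference, the quantitative measures named above (region-of-interest noise, coefficient of variation, weighted Cohen's kappa for interobserver agreement) can be computed as in this small sketch. The ROI values and reader scores are illustrative, and quadratic weighting of the kappa is an assumption.

```python
# Illustrative sketch of ROI noise, CoV, and weighted Cohen's kappa (not study data).
import numpy as np
from sklearn.metrics import cohen_kappa_score

roi = np.array([612.0, 598.5, 605.2, 621.8, 590.4])  # signal intensities inside one ROI
noise = roi.std(ddof=1)                               # image noise = SD within the ROI
cov = noise / roi.mean()                              # coefficient of variation
print(f"noise={noise:.2f}, CoV={cov:.4f}")

# Weighted kappa between two readers scoring image quality on a 1-5 Likert scale.
reader1 = [4, 5, 3, 4, 4, 5, 2, 4]
reader2 = [4, 5, 3, 5, 4, 4, 2, 4]
print("weighted kappa:", cohen_kappa_score(reader1, reader2, weights="quadratic"))
```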

A View-Agnostic Deep Learning Framework for Comprehensive Analysis of 2D-Echocardiography

Anisuzzaman, D. M., Malins, J. G., Jackson, J. I., Lee, E., Naser, J. A., Rostami, B., Bird, J. G., Spiegelstein, D., Amar, T., Ngo, C. C., Oh, J. K., Pellikka, P. A., Thaden, J. J., Lopez-Jimenez, F., Poterucha, T. J., Friedman, P. A., Pislaru, S., Kane, G. C., Attia, Z. I.

medRxiv preprint · Jul 11 2025
Echocardiography traditionally requires experienced operators to select and interpret clips from specific viewing angles. Clinical decision-making is therefore limited for handheld cardiac ultrasound (HCU), which is often collected by novice users. In this study, we developed a view-agnostic deep learning framework to estimate left ventricular ejection fraction (LVEF), patient age, and patient sex from any of several views containing the left ventricle. Model performance was: (1) consistently strong across retrospective transthoracic echocardiography (TTE) datasets; (2) comparable between prospective HCU versus TTE (625 patients; LVEF r² 0.80 vs. 0.86, LVEF (>40% vs. ≤40%) AUC 0.981 vs. 0.993, age r² 0.85 vs. 0.87, sex classification AUC 0.985 vs. 0.996); (3) comparable between prospective HCU data collected by experts versus novice users (100 patients; LVEF r² 0.78 vs. 0.66, LVEF AUC 0.982 vs. 0.966). This approach may broaden the clinical utility of echocardiography by lessening the need for user expertise in image acquisition.
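
A brief sketch of how the headline metrics are typically computed: r² for the continuous LVEF estimate and AUC for the reduced-EF (≤40%) classification. The arrays below are illustrative placeholders, not study data.

```python
# Illustrative computation of r^2 and AUC for LVEF estimation.
import numpy as np
from sklearn.metrics import r2_score, roc_auc_score

lvef_true = np.array([62, 55, 35, 48, 28, 60, 41, 33])
lvef_pred = np.array([60, 52, 38, 50, 31, 58, 44, 30])

print("LVEF r2:", r2_score(lvef_true, lvef_pred))

# Reduced-EF classification: positive class = LVEF <= 40%.
y_true = (lvef_true <= 40).astype(int)
score = 40 - lvef_pred  # higher score -> more likely reduced EF
print("LVEF <=40% AUC:", roc_auc_score(y_true, score))
```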

Enhanced Detection of Prostate Cancer Lesions on Biparametric MRI Using Artificial Intelligence: A Multicenter, Fully-crossed, Multi-reader Multi-case Trial.

Xing Z, Chen J, Pan L, Huang D, Qiu Y, Sheng C, Zhang Y, Wang Q, Cheng R, Xing W, Ding J

PubMed · Jul 11 2025
To assess the added value of artificial intelligence (AI) in detecting prostate cancer lesions on MRI by comparing radiologists' performance with and without AI assistance. A fully-crossed multi-reader multi-case clinical trial was conducted across three institutions with 10 non-expert radiologists. Biparametric MRI cases comprising T2WI, diffusion-weighted images, and apparent diffusion coefficient maps were retrospectively collected. Three reading modes were evaluated: AI alone, radiologists alone (unaided), and radiologists with AI (aided). Aided and unaided readings were compared using the Dorfman-Berbaum-Metz method. Reference standards were established by senior radiologists based on pathological reports. Performance was quantified via sensitivity, specificity, and area under the alternative free-response receiver operating characteristic curve (AFROC-AUC). Among 407 eligible male patients (69.5 ± 9.3 years), aided reading significantly improved lesion-level sensitivity from 67.3% (95% confidence interval [CI]: 58.8%, 75.8%) to 85.5% (95% CI: 81.3%, 89.7%), a substantial difference of 18.2% (95% CI: 10.7%, 25.7%, p<0.001). Case-level specificity increased from 75.9% (95% CI: 68.7%, 83.1%) to 79.5% (95% CI: 74.1%, 84.8%), demonstrating non-inferiority (p<0.001). AFROC-AUC was also higher for aided than unaided reading (86.9% vs 76.1%, p<0.001). AI alone achieved robust performance (AFROC-AUC=83.1%, 95% CI: 79.7%, 86.6%), with lesion-level sensitivity of 88.4% (95% CI: 84.0%, 92.0%) and case-level specificity of 77.8% (95% CI: 71.5%, 83.3%). Subgroup analysis revealed improved detection for lesions of smaller size and lower Prostate Imaging Reporting and Data System (PI-RADS) scores. AI-aided reading significantly enhances lesion detection compared to unaided reading, while AI alone also demonstrates high diagnostic accuracy.
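
For context, lesion-level sensitivity and case-level specificity figures like those above are often summarized with Wilson 95% confidence intervals, as sketched below. The Dorfman-Berbaum-Metz MRMC significance testing itself is more involved and is not reproduced here; the counts are hypothetical.

```python
# Hedged sketch: proportions with Wilson 95% CIs for reader-performance summaries.
from statsmodels.stats.proportion import proportion_confint

def rate_with_ci(events: int, total: int):
    lo, hi = proportion_confint(events, total, alpha=0.05, method="wilson")
    return events / total, lo, hi

# Hypothetical counts: detected lesions / reference lesions (sensitivity),
# correctly negative cases / all negative cases (specificity).
for label, events, total in [("unaided sensitivity", 101, 150),
                             ("aided sensitivity", 128, 150),
                             ("aided specificity", 97, 122)]:
    p, lo, hi = rate_with_ci(events, total)
    print(f"{label}: {p:.3f} (95% CI {lo:.3f}-{hi:.3f})")
```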

Multiparametric ultrasound techniques are superior to AI-assisted ultrasound for assessment of solid thyroid nodules: a prospective study.

Li Y, Li X, Yan L, Xiao J, Yang Z, Zhang M, Luo Y

PubMed · Jul 10 2025
To evaluate the diagnostic performance of multiparametric ultrasound (mpUS) and AI-assisted B-mode ultrasound (AI-US) relative to B-mode ultrasound, and their potential to reduce unnecessary biopsies for solid thyroid nodules. This prospective study enrolled 226 solid thyroid nodules with 145 malignant and 81 benign pathological results from 189 patients (35 men and 154 women; age range, 19-73 years; mean age, 45 years). Each nodule was examined using B-mode, microvascular flow imaging (MVFI), elastography with elasticity contrast index (ECI), and an AI system. Image data were recorded for each modality. Ten readers with different experience levels independently evaluated the B-mode images of each nodule to make a "benign" or "malignant" diagnosis, both blinded and unblinded to the AI reports. The most accurate ECI value and MVFI mode were selected and combined with the dichotomous predictions of all readers. Descriptive statistics and AUCs were used to evaluate the diagnostic performances of mpUS and AI-US. Triple mpUS with B-mode, MVFI, and ECI exhibited the highest diagnostic performance (average AUC = 0.811 vs. 0.677 for B-mode, p = 0.001), followed by AI-US (average AUC = 0.718, p = 0.315). Triple mpUS significantly reduced the unnecessary biopsy rate by up to 12% (p = 0.007). AUC and specificity were significantly higher for triple mpUS than for AI-US (both p < 0.05). Compared to AI-US, triple mpUS (B-mode, MVFI, and ECI) exhibited better diagnostic performance for thyroid cancer diagnosis and resulted in a significant reduction in the unnecessary biopsy rate. AI systems are expected to take advantage of multi-modal information to facilitate diagnoses.
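
One way to frame the unnecessary-biopsy comparison is as paired decisions on pathologically benign nodules. The sketch below uses McNemar's test for that pairing, which is an assumption on my part rather than the test reported in the study, and the decision arrays are simulated.

```python
# Hedged sketch: comparing unnecessary-biopsy decisions between two strategies
# on the same benign nodules with McNemar's test (simulated decisions).
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

rng = np.random.default_rng(0)
n_benign = 81
biopsy_ai_us = rng.random(n_benign) < 0.35                   # AI-US recommends biopsy
biopsy_mpus = biopsy_ai_us & (rng.random(n_benign) < 0.65)   # mpUS avoids some of these

table = np.array([
    [np.sum(biopsy_ai_us & biopsy_mpus), np.sum(biopsy_ai_us & ~biopsy_mpus)],
    [np.sum(~biopsy_ai_us & biopsy_mpus), np.sum(~biopsy_ai_us & ~biopsy_mpus)],
])
print("unnecessary biopsy rate, AI-US:", biopsy_ai_us.mean())
print("unnecessary biopsy rate, mpUS :", biopsy_mpus.mean())
print("McNemar p-value:", mcnemar(table, exact=True).pvalue)
```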

Feasibility study of "double-low" scanning protocol combined with artificial intelligence iterative reconstruction algorithm for abdominal computed tomography enhancement in patients with obesity.

Ji MT, Wang RR, Wang Q, Li HS, Zhao YX

PubMed · Jul 9 2025
To evaluate the efficacy of the "double-low" scanning protocol combined with the artificial intelligence iterative reconstruction (AIIR) algorithm for abdominal computed tomography (CT) enhancement in obese patients and to identify the optimal AIIR algorithm level. Patients with a body mass index ≥ 30.00 kg/m² who underwent abdominal CT enhancement were randomly assigned to group A or B. Group A underwent the conventional protocol with the Karl 3D iterative reconstruction algorithm at levels 3-5. Group B underwent the "double-low" protocol with the AIIR algorithm at levels 1-5. Radiation dose, total iodine intake, and subjective and objective image quality were recorded. The optimal reconstruction levels for arterial-phase and portal-venous-phase images were identified. Comparisons were made in terms of radiation dose, iodine intake, and image quality. Overall, 150 patients with obesity were enrolled, with 75 in each group. Karl 3D level 5 was the optimal algorithm level for group A, while AIIR level 4 was the optimal algorithm level for group B. AIIR level 4 images in group B exhibited significantly superior subjective and objective image quality compared with Karl 3D level 5 images in group A (P < 0.001). Group B showed reductions in mean CT dose index values, dose-length product, size-specific dose estimate based on water-equivalent diameter, and total iodine intake, compared with group A (P < 0.001). The "double-low" scanning protocol combined with the AIIR algorithm significantly reduces radiation dose and iodine intake during abdominal CT enhancement in obese patients. AIIR level 4 is the optimal reconstruction level for arterial-phase and portal-venous-phase imaging in this patient population.
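
The size-specific dose estimate mentioned above is derived from the water-equivalent diameter. A hedged sketch of that calculation is shown below, following the AAPM Report 220 definition of Dw, with the AAPM Report 204 conversion factor supplied as an input rather than hard-coded; the pixel data are fabricated.

```python
# Hedged sketch of water-equivalent diameter (Dw) and SSDE (illustrative values).
import numpy as np

def water_equivalent_diameter(hu_values: np.ndarray, pixel_area_mm2: float) -> float:
    """Dw in cm from the HU values inside the patient contour on one axial slice."""
    area_mm2 = hu_values.size * pixel_area_mm2
    a_w = (hu_values.mean() / 1000.0 + 1.0) * area_mm2   # water-equivalent area (AAPM 220)
    return 2.0 * np.sqrt(a_w / np.pi) / 10.0             # mm -> cm

def ssde(ctdi_vol_mgy: float, conversion_factor: float) -> float:
    """Size-specific dose estimate = CTDIvol x size-dependent conversion factor (AAPM 204)."""
    return ctdi_vol_mgy * conversion_factor

hu = np.random.normal(loc=20.0, scale=120.0, size=100_000)   # fake abdominal slice pixels
dw = water_equivalent_diameter(hu, pixel_area_mm2=0.6)
print(f"Dw = {dw:.1f} cm, SSDE = {ssde(9.8, conversion_factor=1.1):.1f} mGy")
```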

Population-scale cross-sectional observational study for AI-powered TB screening on one million CXRs.

Munjal P, Mahrooqi AA, Rajan R, Jeremijenko A, Ahmad I, Akhtar MI, Pimentel MAF, Khan S

PubMed · Jul 9 2025
Traditional tuberculosis (TB) screening involves radiologists manually reviewing chest X-rays (CXR), which is time-consuming, error-prone, and limited by workforce shortages. Our AI model, AIRIS-TB (AI Radiology In Screening TB), aims to address these challenges by automating the reporting of all X-rays without any findings. AIRIS-TB was evaluated on over one million CXRs, achieving an AUC of 98.51% and overall false negative rate (FNR) of 1.57%, outperforming radiologists (1.85%) while maintaining a 0% TB-FNR. By selectively deferring only cases with findings to radiologists, the model has the potential to automate up to 80% of routine CXR reporting. Subgroup analysis revealed insignificant performance disparities across age, sex, HIV status, and region of origin, with sputum tests for suspected TB showing a strong correlation with model predictions. This large-scale validation demonstrates AIRIS-TB's safety and efficiency in high-volume TB screening programs, reducing radiologist workload without compromising diagnostic accuracy.
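
The deferral behaviour described above can be sketched as a simple operating-threshold rule: on a validation set, choose the largest threshold that misses no TB case, then auto-report new studies scoring below it and defer the rest to radiologists. The scores and labels below are synthetic, and this is an illustration of the idea rather than AIRIS-TB's actual decision logic.

```python
# Hedged sketch: threshold-based auto-reporting with zero TB misses on validation.
import numpy as np

rng = np.random.default_rng(0)
val_scores = np.concatenate([rng.beta(2, 8, 900), rng.beta(8, 2, 100)])        # abnormality scores
val_tb = np.concatenate([np.zeros(900, dtype=bool), rng.random(100) < 0.2])    # TB-positive flags

threshold = val_scores[val_tb].min()   # largest threshold with zero TB misses on validation

new_scores = np.concatenate([rng.beta(2, 8, 9000), rng.beta(8, 2, 1000)])
auto_reported = new_scores < threshold  # reported as "no findings" without radiologist review
print(f"threshold={threshold:.3f}, automated fraction={auto_reported.mean():.1%}")
```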

An Institutional Large Language Model for Musculoskeletal MRI Improves Protocol Adherence and Accuracy.

Patrick Decourcy Hallinan JT, Leow NW, Low YX, Lee A, Ong W, Zhou Chan MD, Devi GK, He SS, De-Liang Loh D, Wei Lim DS, Low XZ, Teo EC, Furqan SM, Yang Tham WW, Tan JH, Kumar N, Makmur A, Yonghan T

PubMed · Jul 8 2025
Privacy-preserving large language models (PP-LLMs) hold potential for assisting clinicians with documentation. We evaluated a PP-LLM to improve the clinical information on radiology request forms for musculoskeletal magnetic resonance imaging (MRI) and to automate protocoling, which ensures that the most appropriate imaging is performed. The present retrospective study included musculoskeletal MRI radiology request forms that had been randomly collected from June to December 2023. Studies without electronic medical record (EMR) entries were excluded. An institutional PP-LLM (Claude Sonnet 3.5) augmented the original radiology request forms by mining EMRs, and, in combination with rule-based processing of the LLM outputs, suggested appropriate protocols using institutional guidelines. Clinical information on the original and PP-LLM radiology request forms was compared with use of the RI-RADS (Reason for exam Imaging Reporting and Data System) grading by 2 musculoskeletal (MSK) radiologists independently (MSK1, with 13 years of experience, and MSK2, with 11 years of experience). These radiologists established a consensus reference standard for protocoling, against which the PP-LLM and 2 second-year board-certified radiologists (RAD1 and RAD2) were compared. Inter-rater reliability was assessed with use of the Gwet AC1, and the percentage agreement with the reference standard was calculated. Overall, 500 musculoskeletal MRI radiology request forms were analyzed for 407 patients (202 women and 205 men with a mean age [and standard deviation] of 50.3 ± 19.5 years) across a range of anatomical regions, including the spine/pelvis (143 MRI scans; 28.6%), upper extremity (169 scans; 33.8%) and lower extremity (188 scans; 37.6%). Two hundred and twenty-two (44.4%) of the 500 MRI scans required contrast. The clinical information provided in the PP-LLM-augmented radiology request forms was rated as superior to that in the original requests. Only 0.4% to 0.6% of PP-LLM radiology request forms were rated as limited/deficient, compared with 12.4% to 22.6% of the original requests (p < 0.001). Almost-perfect inter-rater reliability was observed for LLM-enhanced requests (AC1 = 0.99; 95% confidence interval [CI], 0.99 to 1.0), compared with substantial agreement for the original forms (AC1 = 0.62; 95% CI, 0.56 to 0.67). For protocoling, MSK1 and MSK2 showed almost-perfect agreement on the region/coverage (AC1 = 0.96; 95% CI, 0.95 to 0.98) and contrast requirement (AC1 = 0.98; 95% CI, 0.97 to 0.99). Compared with the consensus reference standard, protocoling accuracy for the PP-LLM was 95.8% (95% CI, 94.0% to 97.6%), which was significantly higher than that for both RAD1 (88.6%; 95% CI, 85.8% to 91.4%) and RAD2 (88.2%; 95% CI, 85.4% to 91.0%) (p < 0.001 for both). Musculoskeletal MRI request form augmentation with an institutional LLM provided superior clinical information and improved protocoling accuracy compared with clinician requests and non-MSK-trained radiologists. Institutional adoption of such LLMs could enhance the appropriateness of MRI utilization and patient care. Diagnostic Level III. See Instructions for Authors for a complete description of levels of evidence.
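
Gwet's AC1, used above for inter-rater reliability, is less sensitive to skewed category prevalence than Cohen's kappa. A hedged two-rater implementation of the standard AC1 formula is sketched below; the ratings are illustrative grades, not study data.

```python
# Hedged sketch of Gwet's AC1 agreement coefficient for two raters.
import numpy as np

def gwet_ac1(r1, r2):
    r1, r2 = np.asarray(r1), np.asarray(r2)
    cats = np.union1d(r1, r2)
    q, n = len(cats), len(r1)
    pa = np.mean(r1 == r2)                                                   # observed agreement
    pi_k = np.array([(np.sum(r1 == c) + np.sum(r2 == c)) / (2 * n) for c in cats])
    pe = np.sum(pi_k * (1 - pi_k)) / (q - 1)                                 # chance agreement (Gwet)
    return (pa - pe) / (1 - pe)

rater1 = ["A", "A", "B", "C", "A", "B", "A", "D", "A", "A"]
rater2 = ["A", "A", "B", "C", "A", "A", "A", "D", "A", "A"]
print("Gwet AC1:", round(gwet_ac1(rater1, rater2), 3))
```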

Impact of a computed tomography-based artificial intelligence software on radiologists' workflow for detecting acute intracranial hemorrhage.

Kim J, Jang J, Oh SW, Lee HY, Min EJ, Choi JW, Ahn KJ

PubMed · Jul 7 2025
To assess the impact of a commercially available computed tomography (CT)-based artificial intelligence (AI) software for detecting acute intracranial hemorrhage (AIH) on radiologists' diagnostic performance and workflow in a real-world clinical setting. This retrospective study included a total of 956 non-contrast brain CT scans obtained over a 70-day period, interpreted independently by 2 board-certified general radiologists. Of these, 541 scans were interpreted during the initial 35 days before the implementation of AI software, and the remaining 415 scans were interpreted during the subsequent 35 days, with reference to AIH probability scores generated by the software. To assess the software's impact on radiologists' performance in detecting AIH, performance before and after implementation was compared. Additionally, to evaluate the software's effect on radiologists' workflow, Kendall's Tau was used to assess the correlation between the daily chronological order of CT scans and the radiologists' reading order before and after implementation. The early diagnosis rate for AIH (defined as the proportion of AIH cases read within the first quartile by radiologists) and the median reading order of AIH cases were also compared before and after implementation. A total of 956 initial CT scans from 956 patients [mean age: 63.14 ± 18.41 years; male patients: 447 (47%)] were included. There were no significant differences in accuracy [from 0.99 (95% confidence interval: 0.99-1.00) to 0.99 (0.98-1.00), P = 0.343], sensitivity [from 1.00 (0.99-1.00) to 1.00 (0.99-1.00), P = 0.859], or specificity [from 1.00 (0.99-1.00) to 0.99 (0.97-1.00), P = 0.252] following the implementation of the AI software. However, the daily correlation between the chronological order of CT scans and the radiologists' reading order significantly decreased [Kendall's Tau, from 0.61 (0.48-0.73) to 0.01 (0.00-0.26), P < 0.001]. Additionally, the early diagnosis rate significantly increased [from 0.49 (0.34-0.63) to 0.76 (0.60-0.93), P = 0.013], and the daily median reading order of AIH cases significantly decreased [from 7.25 (Q1-Q3: 3-10.75) to 1.5 (1-3), P < 0.001] after the implementation. After the implementation of CT-based AI software for detecting AIH, the radiologists' daily reading order was considerably reprioritized to allow more rapid interpretation of AIH cases without compromising diagnostic performance in a real-world clinical setting. With the increasing number of CT scans and the growing burden on radiologists, optimizing the workflow for diagnosing AIH through CT-based AI software integration may enhance the prompt and efficient treatment of patients with AIH.
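
The workflow metric described above, Kendall's Tau between the chronological order of scans and the radiologists' reading order, can be computed directly with SciPy, as in this illustrative single-day example (the orders shown are made up).

```python
# Illustrative Kendall's Tau between scan arrival order and reading order.
from scipy.stats import kendalltau

arrival_order = [1, 2, 3, 4, 5, 6, 7, 8]
# AI flags scans 5 and 7 as likely AIH, so they are read first.
reading_order = [3, 4, 5, 6, 1, 7, 2, 8]

tau, p_value = kendalltau(arrival_order, reading_order)
print(f"Kendall's tau = {tau:.2f} (p = {p_value:.3f})")
```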