
Leveraging GPT-4 enables patient comprehension of radiology reports.

van Driel MHE, Blok N, van den Brand JAJG, van de Sande D, de Vries M, Eijlers B, Smits F, Visser JJ, Gommers D, Verhoef C, van Genderen ME, Grünhagen DJ, Hilling DE

PubMed · Jun 1 2025
To assess the feasibility of using GPT-4 to simplify radiology reports into B1-level Dutch for enhanced patient comprehension. This study utilised GPT-4, optimised through prompt engineering in Microsoft Azure. The researchers iteratively refined prompts to ensure accurate and comprehensive translations of radiology reports. Two radiologists assessed the simplified outputs for accuracy, completeness, and patient suitability. A third radiologist independently validated the final versions. Twelve colorectal cancer patients were recruited from two hospitals in the Netherlands. Semi-structured interviews were conducted to evaluate patients' comprehension of and satisfaction with the AI-generated reports. The optimised GPT-4 tool (RADiANT) produced simplified reports with high accuracy (mean score 3.33/4). Patient comprehension improved significantly from 2.00 (original reports) to 3.28 (simplified reports) and 3.50 (summaries). Correct classification of report outcomes increased from 63.9% to 83.3%. Patient satisfaction was high (mean 8.30/10), with most patients preferring the long simplified report. RADiANT successfully enhances patient understanding and satisfaction through automated AI-driven report simplification, offering a scalable solution for patient-centred communication in clinical practice. This tool reduces clinician workload and supports informed patient decision-making, demonstrating the potential of LLMs beyond English-based healthcare contexts.
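For readers unfamiliar with the setup, a minimal sketch of what such a prompt-engineered simplification call can look like is shown below, assuming the openai Python SDK's AzureOpenAI client; the endpoint, deployment name, and system prompt are hypothetical stand-ins, not the study's actual prompts.

```python
# Minimal sketch of a prompt-engineered report simplification call.
# Endpoint, deployment name, and prompt wording are hypothetical.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",
    api_key="<api-key>",
    api_version="2024-02-01",
)

SYSTEM_PROMPT = (
    "Rewrite the following radiology report in simple B1-level Dutch, "
    "keeping all medical findings and avoiding jargon."
)

def simplify_report(report_text: str) -> str:
    """Return a B1-level Dutch simplification of a radiology report."""
    response = client.chat.completions.create(
        model="gpt-4",  # name of the Azure deployment, assumed here
        temperature=0.2,  # low temperature to limit paraphrasing drift
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": report_text},
        ],
    )
    return response.choices[0].message.content
```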

Healthcare resource utilization for the management of neonatal head shape deformities: a propensity-matched analysis of AI-assisted and conventional approaches.

Shin J, Caron G, Stoltz P, Martin JE, Hersh DS, Bookland MJ

PubMed · Jun 1 2025
Overuse of radiography studies and underuse of conservative therapies for cranial deformities in neonates is a known inefficiency in pediatric craniofacial healthcare. This study sought to establish whether the introduction of artificial intelligence (AI)-generated craniometrics and craniometric interpretations into craniofacial clinical workflow improved resource utilization patterns in the initial evaluation and management of neonatal cranial deformities. A retrospective chart review of pediatric patients referred for head shape concerns between January 2019 and June 2023 was conducted. Patient demographics, final encounter diagnosis, review of an AI analysis, and provider orders were documented. Patients were divided based on whether an AI cranial deformity analysis was documented as reviewed during the index evaluation, then both groups were propensity matched. Rates of index-encounter radiology studies, physical therapy (PT), orthotic therapy, and craniofacial specialist follow-up evaluations were compared using logistic regression and ANOVA analyses. One thousand patient charts were reviewed (663 conventional encounters, 337 AI-assisted encounters). One-to-one propensity matching was performed between these groups. AI models were significantly more likely to be reviewed during telemedicine encounters and advanced practice provider (APP) visits (54.8% telemedicine vs 11.4% in-person, p < 0.0001; 12.3% physician vs 44.4% APP, p < 0.0001). All AI diagnoses of craniosynostosis versus benign deformities were congruent with final diagnoses. AI model review was associated with a significant increase in the use of orthotic therapies for neonatal cranial deformities (31.5% vs 38.6%, p = 0.0132) but not PT or specialist follow-up evaluations. Radiology ordering rates did not correlate with AI-interpreted data review. As neurosurgeons and pediatricians continue to work to limit neonatal radiation exposure and contain healthcare costs, AI-assisted clinical care could be a cheap and easily scalable diagnostic adjunct for reducing reliance on radiography and encouraging adherence to established clinical guidelines. In practice, however, providers appear to default to preexisting diagnostic biases and underweight AI-generated data and interpretations, ultimately negating any potential advantages offered by AI. AI engineers and specialty leadership should prioritize provider education and user interface optimization to improve future adoption of validated AI diagnostic tools.
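The matching step described above can be sketched as follows, assuming scikit-learn and pandas; the covariate and column names are hypothetical, and this simple version matches with replacement rather than reproducing the study's exact procedure.

```python
# Illustrative 1:1 propensity matching of AI-assisted to conventional
# encounters; column names ("ai_reviewed", covariates) are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

def match_one_to_one(df: pd.DataFrame, covariates: list[str]) -> pd.DataFrame:
    """Pair each AI-assisted encounter with its nearest conventional one."""
    # Propensity score: probability of AI review given baseline covariates.
    ps_model = LogisticRegression(max_iter=1000)
    ps_model.fit(df[covariates], df["ai_reviewed"])
    df = df.assign(ps=ps_model.predict_proba(df[covariates])[:, 1])

    treated = df[df["ai_reviewed"] == 1]
    control = df[df["ai_reviewed"] == 0]

    # Nearest neighbour on the propensity score (with replacement).
    nn = NearestNeighbors(n_neighbors=1).fit(control[["ps"]])
    _, idx = nn.kneighbors(treated[["ps"]])
    return pd.concat([treated, control.iloc[idx.ravel()]])
```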

Deep Learning in Knee MRI: A Prospective Study to Enhance Efficiency, Diagnostic Confidence and Sustainability.

Reschke P, Gotta J, Gruenewald LD, Bachir AA, Strecker R, Nickel D, Booz C, Martin SS, Scholtz JE, D'Angelo T, Dahm D, Solim LA, Konrad P, Mahmoudi S, Bernatz S, Al-Saleh S, Hong QAL, Sommer CM, Eichler K, Vogl TJ, Haberkorn SM, Koch V

PubMed · Jun 1 2025
The objective of this study was to evaluate a combination of deep learning (DL)-reconstructed parallel acquisition technique (PAT) and simultaneous multislice (SMS) acceleration imaging in comparison to conventional knee imaging. Adults undergoing knee magnetic resonance imaging (MRI) with DL-enhanced acquisitions were prospectively analyzed from December 2023 to April 2024. The participants received T1-weighted (without fat saturation) and fat-suppressed PD-weighted TSE pulse sequences using conventional two-fold PAT (P2) and either DL-enhanced four-fold PAT (P4) or a combination of DL-enhanced four-fold PAT with two-fold SMS acceleration (P4S2). Three independent readers assessed image quality, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and radiomics features. Thirty-four participants (mean age 45±17 years; 14 women) who underwent P4S2, P4, and P2 imaging were included. Both P4S2 and P4 demonstrated higher CNR and SNR values compared to P2 (P<.001). P4 was diagnostically inferior to P2 only in the visualization of cartilage damage (P<.005), while P4S2 consistently outperformed P2 in anatomical delineation across all evaluated structures and raters (P<.05). Radiomics analysis revealed significant differences in contrast and gray-level characteristics among P2, P4, and P4S2 (P<.05). P4 reduced acquisition time by 31% and P4S2 by 41% compared to P2 (P<.05). P4S2 DL acceleration offers significant advancements over P4 and P2 in knee MRI, combining superior image quality and improved anatomical delineation with a significant reduction in acquisition time. Its improvements in anatomical delineation, energy consumption, and workforce optimization make P4S2 a significant step forward.
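The abstract does not spell out how SNR and CNR were computed; a common ROI-based definition is sketched below as an assumption, not the authors' exact formula.

```python
# Common ROI-based SNR/CNR definitions (an assumption; the study's exact
# ROI placement and formulas are not given in the abstract).
import numpy as np

def snr(signal_roi: np.ndarray, noise_roi: np.ndarray) -> float:
    """SNR = mean signal intensity / SD of background noise."""
    return float(signal_roi.mean() / noise_roi.std())

def cnr(tissue_a: np.ndarray, tissue_b: np.ndarray, noise_roi: np.ndarray) -> float:
    """CNR = |mean(A) - mean(B)| / SD of background noise."""
    return float(abs(tissue_a.mean() - tissue_b.mean()) / noise_roi.std())
```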

Accelerated High-resolution T1- and T2-weighted Breast MRI with Deep Learning Super-resolution Reconstruction.

Mesropyan N, Katemann C, Leutner C, Sommer A, Isaak A, Weber OM, Peeters JM, Dell T, Bischoff L, Kuetting D, Pieper CC, Lakghomi A, Luetkens JA

PubMed · Jun 1 2025
To assess the performance of an industry-developed deep learning (DL) algorithm to reconstruct low-resolution Cartesian T1-weighted dynamic contrast-enhanced (T1w) and T2-weighted turbo-spin-echo (T2w) sequences and compare them to standard sequences. Female patients with indications for breast MRI were included in this prospective study. The study protocol at 1.5 Tesla MRI included T1w and T2w sequences. Both sequences were acquired at standard resolution (T1S and T2S) and at low resolution with subsequent DL reconstruction (T1DL and T2DL). For DL reconstruction, two convolutional networks were used: (1) Adaptive-CS-Net for denoising with compressed sensing, and (2) Precise-Image-Net for resolution upscaling of previously downscaled images. Overall image quality was assessed using a 5-point Likert scale (from 1 = non-diagnostic to 5 = excellent). Apparent signal-to-noise (aSNR) and contrast-to-noise (aCNR) ratios were calculated. Breast Imaging Reporting and Data System (BI-RADS) agreement between the sequence types was assessed. A total of 47 patients were included (mean age, 58±11 years). Acquisition times for T1DL and T2DL were reduced by 51% (44 vs. 90 s per dynamic phase) and 46% (102 vs. 192 s), respectively. T1DL and T2DL showed higher overall image quality (e.g., 4 [IQR, 4-4] for T1S vs. 5 [IQR, 5-5] for T1DL, P<0.001). Both T1DL and T2DL revealed higher aSNR and aCNR than T1S and T2S (e.g., aSNR: 32.35±10.23 for T2S vs. 27.88±6.86 for T2DL, P=0.014). Cohen κ agreement for BI-RADS assessment was excellent (0.962, P<0.001). DL-based denoising and resolution upscaling reduce acquisition time and improve image quality for T1w and T2w breast MRI.
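The reported BI-RADS agreement is a Cohen kappa; a minimal sketch of that computation with scikit-learn follows, using made-up example ratings rather than study data.

```python
# Sketch of the BI-RADS agreement analysis; the ratings below are
# illustrative examples, not study data.
from sklearn.metrics import cohen_kappa_score

birads_standard = [2, 3, 4, 2, 5, 1, 4, 3]  # hypothetical per-patient ratings
birads_dl       = [2, 3, 4, 2, 5, 1, 4, 2]

kappa = cohen_kappa_score(birads_standard, birads_dl)
print(f"Cohen kappa: {kappa:.3f}")  # values near 1 indicate excellent agreement
```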

A new method for placental volume measurements using tracked 2D ultrasound and automatic image segmentation.

Sagberg K, Lie T, F Peterson H, Hillestad V, Eskild A, Bø LE

PubMed · Jun 1 2025
Placental volume measurements can potentially identify high-risk pregnancies. We aimed to develop and validate a new method for placental volume measurements using tracked 2D ultrasound and automatic image segmentation. We included 43 pregnancies at gestational week 27 and acquired placental images using a 2D ultrasound probe with position tracking, and trained a convolutional neural network (CNN) for automatic image segmentation. The automatically segmented 2D images were combined with tracking data to calculate placental volume. For 15 of the included pregnancies, placental volume was also estimated based on MRI examinations, 3D ultrasound and manually segmented 2D ultrasound images. The ultrasound methods were compared to MRI (gold standard). The CNN demonstrated good performance in automatic image segmentation (F1-score 0.84). The correlation with MRI-based placental volume was similar for tracked 2D ultrasound using automatically segmented images (absolute agreement intraclass correlation coefficient [ICC] 0.58, 95% CI 0.13-0.84) and manually segmented images (ICC 0.59, 95% CI 0.13-0.84). The 3D ultrasound method showed lower ICC (0.35, 95% CI -0.11 to 0.74) than the methods based on tracked 2D ultrasound. Tracked 2D ultrasound with automatic image segmentation is a promising new method for placental volume measurements and has potential for further improvement.
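A simplified sketch of how segmented 2D slices and tracking data can be combined into a volume is given below; the parallel-sweep approximation (mask area times local slice spacing) is an assumption for illustration, since the actual reconstruction must handle non-parallel tracked slices.

```python
# Approximate volume from tracked, segmented 2D sweeps: each binary mask
# contributes its area times the local inter-slice spacing derived from
# probe positions. A deliberately simplified, parallel-sweep assumption.
import numpy as np

def placental_volume(masks: list[np.ndarray], positions: np.ndarray,
                     pixel_area_mm2: float) -> float:
    """Approximate volume in mL from binary masks and 3D probe positions."""
    spacings = np.linalg.norm(np.diff(positions, axis=0), axis=1)  # mm
    spacings = np.append(spacings, spacings[-1])  # reuse last spacing
    areas_mm2 = np.array([m.sum() * pixel_area_mm2 for m in masks])
    return float((areas_mm2 * spacings).sum() / 1000.0)  # mm^3 -> mL
```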

CCTA-derived coronary plaque burden offers enhanced prognostic value over CAC scoring in suspected CAD patients.

Dahdal J, Jukema RA, Maaniitty T, Nurmohamed NS, Raijmakers PG, Hoek R, Driessen RS, Twisk JWR, Bär S, Planken RN, van Royen N, Nijveldt R, Bax JJ, Saraste A, van Rosendael AR, Knaapen P, Knuuti J, Danad I

PubMed · May 30 2025
To assess the prognostic utility of coronary artery calcium (CAC) scoring and coronary computed tomography angiography (CCTA)-derived quantitative plaque metrics for predicting adverse cardiovascular outcomes. The study enrolled 2404 patients with suspected coronary artery disease (CAD) but without a prior history of CAD. All participants underwent CAC scoring and CCTA, with plaque metrics quantified using an artificial intelligence (AI)-based tool (Cleerly, Inc). Percent atheroma volume (PAV) and non-calcified plaque volume percentage (NCPV%), reflecting total plaque burden and the proportion of non-calcified plaque volume normalized to vessel volume, were evaluated. The primary endpoint was a composite of all-cause mortality and non-fatal myocardial infarction (MI). Cox proportional hazard models, adjusted for clinical risk factors and early revascularization, were employed for analysis. During a median follow-up of 7.0 years, 208 patients (8.7%) experienced the primary endpoint, including 73 cases of MI (3%). The model incorporating PAV demonstrated superior discriminatory power for the composite endpoint (AUC = 0.729) compared to CAC scoring (AUC = 0.706, P = 0.016). In MI prediction, PAV (AUC = 0.791) significantly outperformed CAC (AUC = 0.699, P < 0.001), with NCPV% showing the highest prognostic accuracy (AUC = 0.814, P < 0.001). AI-driven assessment of coronary plaque burden enhances prognostic accuracy for future adverse cardiovascular events, highlighting the critical role of comprehensive plaque characterization in refining risk stratification strategies.
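The Cox model described above can be sketched with the lifelines package; the package choice and all column names are assumptions, not the authors' code.

```python
# Sketch of a Cox proportional hazards model for the composite endpoint,
# assuming the lifelines package; file and column names are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("cohort.csv")  # one row per patient (hypothetical file)

cph = CoxPHFitter()
cph.fit(
    df[["follow_up_years", "death_or_mi", "pav", "age", "sex", "early_revasc"]],
    duration_col="follow_up_years",
    event_col="death_or_mi",
)
cph.print_summary()  # hazard ratio for PAV adjusted for clinical covariates
```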

Real-time brain tumor detection in intraoperative ultrasound: From model training to deployment in the operating room.

Cepeda S, Esteban-Sinovas O, Romero R, Singh V, Shetty P, Moiyadi A, Zemmoura I, Giammalva GR, Del Bene M, Barbotti A, DiMeco F, West TR, Nahed BV, Arrese I, Hornero R, Sarabia R

PubMed · May 30 2025
Intraoperative ultrasound (ioUS) is a valuable tool in brain tumor surgery due to its versatility, affordability, and seamless integration into the surgical workflow. However, its adoption remains limited, primarily because of the challenges associated with image interpretation and the steep learning curve required for effective use. This study aimed to enhance the interpretability of ioUS images by developing a real-time brain tumor detection system deployable in the operating room. We collected 2D ioUS images from the BraTioUS and ReMIND datasets, annotated with expert-refined tumor labels. Using the YOLO11 architecture and its variants, we trained object detection models to identify brain tumors. The dataset included 1732 images from 192 patients, divided into training, validation, and test sets. Data augmentation expanded the training set to 11,570 images. In the test dataset, YOLO11s achieved the best balance of precision and computational efficiency, with a mAP@50 of 0.95, mAP@50-95 of 0.65, and a processing speed of 34.16 frames per second. The proposed solution was prospectively validated in a cohort of 20 consecutively operated patients diagnosed with brain tumors. Neurosurgeons confirmed its seamless integration into the surgical workflow, with real-time predictions accurately delineating tumor regions. These findings highlight the potential of real-time object detection algorithms to enhance ioUS-guided brain tumor surgery, addressing key challenges in interpretation and providing a foundation for future development of computer vision-based tools for neuro-oncological surgery.
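Since the abstract names the Ultralytics YOLO11 family, a minimal training-and-inference sketch with that library is shown below; the dataset YAML, hyperparameters, and source paths are placeholders, not the study's configuration.

```python
# Minimal YOLO11 training and streaming inference sketch (Ultralytics API);
# dataset YAML, epochs, and source path are placeholders.
from ultralytics import YOLO

model = YOLO("yolo11s.pt")  # the "s" variant balanced accuracy and speed here
model.train(data="ious_tumors.yaml", epochs=100, imgsz=640)

# Streaming inference over intraoperative frames.
for result in model.predict(source="ious_frames/", stream=True, conf=0.25):
    boxes = result.boxes.xyxy  # predicted tumor bounding boxes per frame
```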

Deep learning reconstruction improves computer-aided pulmonary nodule detection and measurement accuracy for ultra-low-dose chest CT.

Wang J, Zhu Z, Pan Z, Tan W, Han W, Zhou Z, Hu G, Ma Z, Xu Y, Ying Z, Sui X, Jin Z, Song L, Song W

PubMed · May 30 2025
To compare image quality, pulmonary nodule detectability, and measurement accuracy between deep learning reconstruction (DLR) and hybrid iterative reconstruction (HIR) of chest ultra-low-dose CT (ULDCT). Participants who underwent chest standard-dose CT (SDCT) followed by ULDCT from October 2020 to January 2022 were prospectively included. ULDCT images reconstructed with HIR and DLR were compared with SDCT images to evaluate image quality, nodule detection rate, and measurement accuracy using a commercially available deep learning-based nodule evaluation system. The Wilcoxon signed-rank test was used to evaluate the percentage errors of nodule size and nodule volume between HIR and DLR images. Eighty-four participants (54 ± 13 years; 26 men) were finally enrolled. The effective radiation doses of ULDCT and SDCT were 0.16 ± 0.02 mSv and 1.77 ± 0.67 mSv, respectively (P < 0.001). The mean ± standard deviation lung tissue noise was 61.4 ± 3.0 HU for SDCT, and 61.5 ± 2.8 HU and 55.1 ± 3.4 HU for ULDCT reconstructed with the HIR-Strong setting (HIR-Str) and the DLR-Strong setting (DLR-Str), respectively (P < 0.001). A total of 535 nodules were detected. The nodule detection rates of ULDCT HIR-Str and ULDCT DLR-Str were 74.0% and 83.4%, respectively (P < 0.001). The absolute percentage error in nodule volume from that of SDCT was 19.5% in ULDCT HIR-Str versus 17.9% in ULDCT DLR-Str (P < 0.001). Compared with HIR, DLR reduced image noise, increased the nodule detection rate, and improved measurement accuracy of nodule volume at chest ULDCT.
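The paired comparison of percentage errors uses a Wilcoxon signed-rank test, which can be sketched with SciPy; the per-nodule error values below are illustrative, not study data.

```python
# Wilcoxon signed-rank test on paired per-nodule volume percentage errors;
# the numbers are illustrative examples only.
from scipy.stats import wilcoxon

err_hir = [21.0, 18.5, 22.3, 17.9, 20.4]  # HIR error vs. SDCT reference
err_dlr = [18.2, 17.0, 19.5, 16.8, 18.9]  # DLR error vs. SDCT reference

stat, p_value = wilcoxon(err_hir, err_dlr)
print(f"Wilcoxon statistic = {stat}, p = {p_value:.4f}")
```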

Diagnostic Efficiency of an Artificial Intelligence-Based Technology in Dental Radiography.

Obrubov AA, Solovykh EA, Nadtochiy AG

PubMed · May 30 2025
We present the results of developing the Dentomo artificial intelligence model, which is based on two neural networks. The model includes a database and a knowledge base harmonized with SNOMED CT, enabling it to process and interpret cone beam computed tomography (CBCT) scans of the dental system: identifying and classifying teeth, and detecting CT signs of pathology and previous treatment. On the basis of these data, the artificial intelligence can draw conclusions, generate medical reports, systematize the data, and learn from the results. The diagnostic effectiveness of Dentomo was evaluated. The first results of the study demonstrate that this neural network-based artificial intelligence model is a valuable tool for analyzing CBCT scans in clinical practice and optimizing the dentist's workflow.

Deep learning enables fast and accurate quantification of MRI-guided near-infrared spectral tomography for breast cancer diagnosis.

Feng J, Tang Y, Lin S, Jiang S, Xu J, Zhang W, Geng M, Dang Y, Wei C, Li Z, Sun Z, Jia K, Pogue BW, Paulsen KD

PubMed · May 29 2025
The utilization of magnetic resonance (MR) imaging to guide near-infrared spectral tomography (NIRST) shows significant potential for improving the specificity and sensitivity of breast cancer diagnosis. However, the efficiency and accuracy of NIRST image reconstruction have been limited by the complexities of light propagation modeling and MRI image segmentation. To address these challenges, we developed and evaluated a deep learning-based approach for MR-guided 3D NIRST image reconstruction (DL-MRg-NIRST). Using a network trained on synthetic data, the DL-MRg-NIRST system reconstructed images from data acquired during 38 clinical imaging exams of patients with breast abnormalities. Statistical analysis of the results demonstrated a sensitivity of 87.5%, a specificity of 92.9%, and a diagnostic accuracy of 89.5% in distinguishing pathologically defined benign from malignant lesions. Additionally, the combined use of MRI and DL-MRg-NIRST diagnoses achieved an area under the receiver operating characteristic (ROC) curve of 0.98. Remarkably, the DL-MRg-NIRST image reconstruction process required only 1.4 seconds, significantly faster than state-of-the-art MR-guided NIRST methods.
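The reported diagnostic metrics follow from a binary confusion matrix plus a ROC analysis; a minimal sketch with scikit-learn is below, using illustrative labels rather than the 38-exam cohort.

```python
# Sensitivity, specificity, accuracy, and ROC AUC from binary predictions;
# label and score vectors are illustrative only.
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true  = [1, 0, 1, 1, 0, 0, 1, 0]  # 1 = malignant, 0 = benign (pathology)
y_pred  = [1, 0, 1, 0, 0, 0, 1, 0]  # model classification
y_score = [0.9, 0.2, 0.8, 0.4, 0.1, 0.3, 0.95, 0.15]  # continuous outputs

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / len(y_true)
auc = roc_auc_score(y_true, y_score)
```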
