
Artificial Intelligence in Prostate Cancer Diagnosis on Magnetic Resonance Imaging: Time for a New PARADIGM.

Ng AB, Giganti F, Kasivisvanathan V

PubMed · Jul 1, 2025
Artificial intelligence (AI) may provide a solution for improving access to expert, timely, and accurate magnetic resonance imaging (MRI) interpretation. The PARADIGM trial will provide level 1 evidence on the role of AI in the diagnosis of prostate cancer on MRI.

Assessment of AI-accelerated T2-weighted brain MRI, based on clinical ratings and image quality evaluation.

Nonninger JN, Kienast P, Pogledic I, Mallouhi A, Barkhof F, Trattnig S, Robinson SD, Kasprian G, Haider L

PubMed · Jul 1, 2025
To compare clinical ratings and signal-to-noise ratio (SNR) measures of a commercially available deep learning-based MRI reconstruction method (T2<sub>(DR)</sub>) against conventional T2 turbo spin echo brain MRI (T2<sub>(CN)</sub>). One hundred consecutive patients with various neurological conditions underwent both T2<sub>(DR)</sub> and T2<sub>(CN)</sub> on a Siemens Vida 3 T scanner with a 64-channel head coil in the same examination. Acquisition times were 3.33 min for T2<sub>(CN)</sub> and 1.04 min for T2<sub>(DR)</sub>. Four neuroradiologists evaluated overall image quality (OIQ), diagnostic safety (DS), and image artifacts (IA), blinded to the acquisition mode. SNR and SNR<sub>eff</sub> (adjusted for acquisition time) were calculated for air, grey and white matter, and cerebrospinal fluid. The mean patient age was 43.6 years (SD 20.3), with 54 females. The distribution of non-diagnostic ratings did not differ significantly between T2<sub>(CN)</sub> and T2<sub>(DR)</sub> (IA: p = 0.108; OIQ: p = 0.700; DS: p = 0.652). However, when considering the full spectrum of ratings, significant differences favouring T2<sub>(CN)</sub> emerged in OIQ (p = 0.003) and IA (p < 0.001). T2<sub>(CN)</sub> had higher SNR (157.9, SD 123.4) than T2<sub>(DR)</sub> (112.8, SD 82.7), p < 0.001, but T2<sub>(DR)</sub> demonstrated superior SNR<sub>eff</sub> (14.1, SD 10.3) compared to T2<sub>(CN)</sub> (10.8, SD 8.5), p < 0.001. Our results suggest that while T2<sub>(DR)</sub> may be clinically applicable in a diagnostic setting, it does not fully match the quality of high-standard conventional T2<sub>(CN)</sub> MRI acquisitions.
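The abstract does not spell out how SNR<sub>eff</sub> is normalized for acquisition time; a common convention is SNR divided by the square root of the scan duration, which is roughly consistent with the reported group means. A minimal Python sketch under that assumption:

```python
import math

def snr_eff(snr: float, acq_time_s: float) -> float:
    """Time-normalized SNR: SNR divided by the square root of acquisition time (assumed convention)."""
    return snr / math.sqrt(acq_time_s)

# Group-mean SNR values and scan times reported in the abstract.
scans = {
    "T2_CN (conventional TSE)": (157.9, 3.33 * 60),   # 3.33 min
    "T2_DR (DL reconstruction)": (112.8, 1.04 * 60),  # 1.04 min
}

for name, (snr, t) in scans.items():
    print(f"{name}: SNR={snr:.1f}, SNR_eff≈{snr_eff(snr, t):.1f}")
```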

Adoption of artificial intelligence in healthcare: survey of health system priorities, successes, and challenges.

Poon EG, Lemak CH, Rojas JC, Guptill J, Classen D

PubMed · Jul 1, 2025
The US healthcare system faces significant challenges, including clinician burnout, operational inefficiencies, and concerns about patient safety. Artificial intelligence (AI), particularly generative AI, has the potential to address these challenges, but its adoption, effectiveness, and barriers to implementation are not well understood. To evaluate the current state of AI adoption in US healthcare systems and assess successes and barriers to implementation during the early generative AI era. This cross-sectional survey was conducted in Fall 2024 and included 67 health systems that are members of the Scottsdale Institute, a collaborative of US non-profit healthcare organizations. Forty-three health systems completed the survey (64% response rate). Respondents provided data on the deployment status and perceived success of 37 AI use cases across 10 categories. The primary outcomes were the extent of AI use case development, piloting, or deployment, the degree of reported success for AI use cases, and the most significant barriers to adoption. Across the 43 responding health systems, AI adoption and perceptions of success varied significantly. Ambient Notes, a generative AI tool for clinical documentation, was the only use case for which 100% of respondents reported adoption activities, and 53% reported a high degree of success with using AI for clinical documentation. Imaging and radiology emerged as the most widely deployed clinical AI use case, with 90% of organizations reporting at least partial deployment, although successes with diagnostic use cases were limited. Similarly, many organizations have deployed AI for clinical risk stratification such as early sepsis detection, but only 38% report high success in this area. Immature AI tools were identified as a significant barrier to adoption, cited by 77% of respondents, followed by financial concerns (47%) and regulatory uncertainty (40%). Ambient Notes is rapidly advancing in US healthcare systems and demonstrating early success. Other AI use cases show varying degrees of adoption and success, constrained by barriers such as immature AI tools, financial concerns, and regulatory uncertainty. Addressing these challenges through robust evaluations, shared strategies, and governance models will be essential to ensure effective integration and adoption of AI into healthcare practice.

Deep learning-assisted detection of meniscus and anterior cruciate ligament combined tears in adult knee magnetic resonance imaging: a crossover study with arthroscopy correlation.

Behr J, Nich C, D'Assignies G, Zavastin C, Zille P, Herpe G, Triki R, Grob C, Pujol N

PubMed · Jul 1, 2025
We aimed to compare the diagnostic performance of physicians in the detection of arthroscopically confirmed meniscus and anterior cruciate ligament (ACL) tears on knee magnetic resonance imaging (MRI), with and without assistance from a deep learning (DL) model. We obtained preoperative MR images from 88 knees of patients who underwent arthroscopic meniscal repair, with or without ACL reconstruction. Ninety-eight MR images of knees without signs of meniscus or ACL tears were obtained from a publicly available database after matching on age and ACL status (normal or torn), resulting in a global dataset of 186 MRI examinations. The Keros<sup>®</sup> (Incepto, Paris) DL algorithm, previously trained for the detection and characterization of meniscus and ACL tears, was used for MRI assessment. Magnetic resonance images were individually and blindly annotated by three physicians and the DL algorithm. After three weeks, the three human raters repeated the image assessment with model assistance, with the examinations reviewed in a different order. The Keros<sup>®</sup> algorithm achieved an area under the curve (AUC) of 0.96 (95% CI 0.93, 0.99), 0.91 (95% CI 0.85, 0.96), and 0.99 (95% CI 0.98, 0.997) in the detection of medial meniscus, lateral meniscus, and ACL tears, respectively. With model assistance, physicians achieved higher sensitivity (91% vs. 83%, p = 0.04) and similar specificity (91% vs. 87%, p = 0.09) in the detection of medial meniscus tears. Regarding lateral meniscus tears, sensitivity and specificity were similar with and without model assistance. Regarding ACL tears, physicians achieved higher specificity when assisted by the algorithm (70% vs. 51%, p = 0.01) but similar sensitivity with and without model assistance (93% vs. 96%, p = 0.13). The current model consistently helped physicians in the detection of medial meniscus and ACL tears, notably when they were combined. Diagnostic study, Level III.
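For readers less familiar with the reported metrics, the sketch below shows how per-reader sensitivity and specificity against the arthroscopic reference standard are computed; the labels are hypothetical toy data, not the study's, and the study's exact statistical procedure is not detailed in the abstract.

```python
import numpy as np

def sens_spec(truth, pred):
    """Sensitivity and specificity of binary predictions against a reference standard."""
    truth, pred = np.asarray(truth, bool), np.asarray(pred, bool)
    tp = np.sum(truth & pred)
    tn = np.sum(~truth & ~pred)
    fn = np.sum(truth & ~pred)
    fp = np.sum(~truth & pred)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical toy labels: 1 = medial meniscus tear on arthroscopy / reader call.
truth   = [1, 1, 1, 0, 0, 1, 0, 0, 1, 0]
unaided = [1, 0, 1, 0, 1, 1, 0, 0, 1, 0]
aided   = [1, 1, 1, 0, 1, 1, 0, 0, 1, 0]

for name, pred in [("unaided", unaided), ("AI-assisted", aided)]:
    se, sp = sens_spec(truth, pred)
    print(f"{name}: sensitivity={se:.2f}, specificity={sp:.2f}")
```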

Dual-type deep learning-based image reconstruction for advanced denoising and super-resolution processing in head and neck T2-weighted imaging.

Fujima N, Shimizu Y, Ikebe Y, Kameda H, Harada T, Tsushima N, Kano S, Homma A, Kwon J, Yoneyama M, Kudo K

PubMed · Jul 1, 2025
To assess the utility of dual-type deep learning (DL)-based image reconstruction, combining DL-based image denoising and super-resolution processing, by comparison with images reconstructed with the conventional method in head and neck fat-suppressed (Fs) T2-weighted imaging (T2WI). We retrospectively analyzed the cases of 43 patients who underwent head and neck Fs-T2WI for the assessment of head and neck lesions. All patients underwent two sets of Fs-T2WI scans with conventional- and DL-based reconstruction. The Fs-T2WI with DL-based reconstruction was acquired with a 30% reduction in spatial resolution along both the x- and y-axes, shortening the scan time. Qualitative and quantitative assessments were performed for both the conventional and DL-based reconstructions. For the qualitative assessment, we visually evaluated the overall image quality, visibility of anatomical structures, degree of artifact(s), lesion conspicuity, and lesion edge sharpness using five-point grading. For the quantitative assessment, we measured the signal-to-noise ratio (SNR) of the lesion and the contrast-to-noise ratio (CNR) between the lesion and the adjacent or nearest muscle. In the qualitative analysis, significant differences were observed between the conventional- and DL-based reconstructions in all evaluation items except the degree of artifact(s) (p < 0.001). In the quantitative analysis, a significant difference was observed in the SNR between the conventional (21.4 ± 14.7) and DL-based reconstructions (26.2 ± 13.5) (p < 0.001). In the CNR assessment, the CNR between the lesion and the adjacent or nearest muscle in the DL-based Fs-T2WI (16.8 ± 11.6) was significantly higher than that in the conventional Fs-T2WI (14.2 ± 12.9) (p < 0.001). Dual-type DL-based image reconstruction, through effective denoising and super-resolution processing, provided high image quality in head and neck Fs-T2WI with a shortened scan time compared to the conventional imaging method.
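The abstract does not give the exact SNR and CNR definitions; a common ROI-based convention divides the mean lesion signal (or the lesion-muscle signal difference) by a noise standard deviation. A hedged sketch under that assumption, with made-up ROI values:

```python
import numpy as np

def roi_snr(lesion_roi, noise_sd):
    """SNR as the mean lesion ROI signal divided by the noise standard deviation."""
    return np.mean(lesion_roi) / noise_sd

def roi_cnr(lesion_roi, muscle_roi, noise_sd):
    """CNR as the lesion-muscle mean signal difference divided by the noise SD."""
    return (np.mean(lesion_roi) - np.mean(muscle_roi)) / noise_sd

# Hypothetical ROI pixel values (arbitrary units), not taken from the paper.
lesion = np.array([410.0, 395.0, 420.0, 405.0])
muscle = np.array([210.0, 205.0, 215.0, 208.0])
noise_sd = 12.5  # e.g., SD of a background or muscle ROI

print(f"SNR ≈ {roi_snr(lesion, noise_sd):.1f}, CNR ≈ {roi_cnr(lesion, muscle, noise_sd):.1f}")
```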

Added value of artificial intelligence for the detection of pelvic and hip fractures.

Jaillat A, Cyteval C, Baron Sarrabere MP, Ghomrani H, Maman Y, Thouvenin Y, Pastor M

PubMed · Jul 1, 2025
To assess the added value of artificial intelligence (AI) for radiologists and emergency physicians in the radiographic detection of pelvic fractures. In this retrospective study, one junior radiologist reviewed 940 X-rays of patients admitted to the emergency department after a fall with suspected pelvic fracture between March 2020 and June 2021. The radiologist analyzed the X-rays alone and then using an AI system (BoneView). In a random sample of 100 exams, the same procedure was repeated alongside five other readers (three radiologists and two emergency physicians with 3-30 years of experience). The reference diagnosis was based on the patient's full set of medical imaging exams and medical records in the months following emergency admission. A total of 633 confirmed pelvic fractures (64.8% from the hip and 35.2% from the pelvic ring) in 940 patients and 68 pelvic fractures (60% from the hip and 40% from the pelvic ring) in the 100-patient sample were included. In the whole dataset, the junior radiologist achieved a significant sensitivity improvement with AI assistance (Se<sub>-PELVIC</sub> 77.25% to 83.73%, p < 0.001; Se<sub>-HIP</sub> 93.24% to 96.49%, p < 0.001; Se<sub>-PELVIC RING</sub> 54.60% to 64.50%, p < 0.001). However, there was a significant decrease in specificity with AI assistance (Spe<sub>-PELVIC</sub> 95.24% to 93.25%, p = 0.005; Spe<sub>-HIP</sub> 98.30% to 96.90%, p = 0.005). In the 100-patient sample, the two emergency physicians obtained improvements in fracture detection sensitivity across the pelvic area of +14.70% (p = 0.0011) and +10.29% (p < 0.007), respectively, without a significant decrease in specificity. For hip fractures, the sensitivity of the first emergency physician (E1) increased from 59.46% to 70.27% (p = 0.04), and that of the second (E2) from 78.38% to 86.49% (p = 0.08). For pelvic ring fractures, E1's sensitivity increased from 12.90% to 32.26% (p = 0.012), and E2's from 19.35% to 32.26% (p = 0.043). AI improved diagnostic performance for emergency physicians and for radiologists with limited experience in pelvic fracture screening.
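The abstract does not name the statistical test behind the paired with/without-AI comparisons; for the same reader reading the same cases twice, an exact McNemar test on the discordant pairs is a usual choice. A sketch under that assumption, with hypothetical counts:

```python
from scipy.stats import binomtest

def mcnemar_exact(only_unaided: int, only_aided: int) -> float:
    """Exact McNemar p-value from discordant pair counts:
    only_unaided = fractures detected without AI but missed with AI,
    only_aided   = fractures detected with AI but missed without AI."""
    n = only_unaided + only_aided
    if n == 0:
        return 1.0
    return binomtest(min(only_unaided, only_aided), n, 0.5, alternative="two-sided").pvalue

# Hypothetical discordant counts for one reader, not taken from the paper.
print(f"p = {mcnemar_exact(only_unaided=5, only_aided=46):.4f}")
```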

Radiation and contrast dose reduction in coronary CT angiography for slender patients with 70 kV tube voltage and deep learning image reconstruction.

Ren Z, Shen L, Zhang X, He T, Yu N, Zhang M

PubMed · Jul 1, 2025
To evaluate the radiation and contrast dose reduction potential of combining 70 kV with deep learning image reconstruction (DLIR) in coronary computed tomography angiography (CCTA) for slender patients with a body mass index (BMI) ≤ 25 kg/m². Sixty patients undergoing CCTA were randomly divided into two groups: group A with 120 kV and a contrast agent dose of 0.8 mL/kg, and group B with 70 kV and a contrast agent dose of 0.5 mL/kg. Group A used adaptive statistical iterative reconstruction-V (ASIR-V) at a 50% strength level (50% ASIR-V), while group B used 50% ASIR-V, DLIR at low level (DLIR-L), DLIR at medium level (DLIR-M), and DLIR at high level (DLIR-H) for image reconstruction. The CT values and SD values of the coronary arteries and pericardial fat were measured, and the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were calculated. Image quality was subjectively evaluated by two radiologists using a five-point scoring system. The effective radiation dose (ED) and contrast dose were calculated and compared. Group B significantly reduced the radiation dose by 75.6% and the contrast dose by 32.9% compared to group A. Group B exhibited higher CT values of the coronary arteries than group A, and DLIR-L, DLIR-M, and DLIR-H in group B provided higher SNR values, CNR values, and subjective scores, with DLIR-H having the lowest noise and the highest subjective scores. Using 70 kV combined with DLIR significantly reduces radiation and contrast dose while improving image quality in CCTA for slender patients, with DLIR-H having the greatest effect on image quality. The combination of 70 kV and DLIR-H may be used in CCTA for slender patients to significantly reduce radiation and contrast dose while improving image quality.
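The abstract reports effective dose (ED) but not the conversion used; ED is conventionally estimated as the dose-length product (DLP) multiplied by a region-specific coefficient k. The sketch below assumes the commonly cited adult chest value k ≈ 0.014 mSv/(mGy·cm) and hypothetical DLP values chosen only to mirror the reported ~75.6% reduction; neither is taken from the paper.

```python
def effective_dose_msv(dlp_mgy_cm: float, k: float = 0.014) -> float:
    """Effective dose estimated as DLP times a region-specific conversion
    coefficient k in mSv/(mGy*cm); 0.014 is an illustrative adult-chest default."""
    return dlp_mgy_cm * k

# Hypothetical DLP values for the two protocols (not from the paper).
for group, dlp in {"120 kV (group A)": 250.0, "70 kV + DLIR (group B)": 61.0}.items():
    print(f"{group}: DLP={dlp} mGy*cm -> ED≈{effective_dose_msv(dlp):.2f} mSv")
```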

Assessment of biventricular cardiac function using free-breathing artificial intelligence cine with motion correction: Comparison with standard multiple breath-holding cine.

Ran L, Yan X, Zhao Y, Yang Z, Chen Z, Jia F, Song X, Huang L, Xia L

PubMed · Jul 1, 2025
To assess the image quality and biventricular function of a free-breathing artificial intelligence cine method with motion correction (FB AI MOCO). A total of 72 participants (mean age 38.3 ± 15.4 years, 40 males) prospectively enrolled in this single-center, cross-sectional study underwent cine scans using standard breath-holding (BH) cine and FB AI MOCO cine at 3.0 Tesla. The image quality of the cine images was evaluated with a 5-point ordinal Likert scale based on blood-pool to myocardium contrast, endocardial edge definition, and artifacts, and the overall quality score was calculated as the equally weighted average of the three criteria; apparent signal-to-noise ratio (aSNR) and estimated contrast-to-noise ratio (eCNR) were also assessed. Biventricular functional parameters, including left ventricular (LV) and right ventricular (RV) end-diastolic volume (EDV), end-systolic volume (ESV), stroke volume (SV), and ejection fraction (EF), as well as LV end-diastolic mass (LVEDM), were also assessed. Comparisons between the two sequences were made using the paired t-test and the Wilcoxon signed-rank test, and correlation was assessed using the Pearson correlation coefficient. The agreement of quantitative parameters was assessed using the intraclass correlation coefficient (ICC) and Bland-Altman analysis. P < 0.05 was considered statistically significant. The total acquisition time of the entire stack for FB AI MOCO cine (14.7 s ± 1.9 s) was notably shorter than that for standard BH cine (82.6 s ± 11.9 s, P < 0.001). The aSNR did not differ significantly between FB AI MOCO cine and standard BH cine (76.7 ± 20.7 vs. 79.8 ± 20.7, P = 0.193). The eCNR of FB AI MOCO cine was higher than that of standard BH cine (191.6 ± 54.0 vs. 155.8 ± 68.4, P < 0.001), as were the scores for blood-pool to myocardium contrast (4.6 ± 0.5 vs. 4.4 ± 0.6, P = 0.003). Qualitative scores for endocardial edge definition (4.2 ± 0.5 vs. 4.3 ± 0.7, P = 0.123), artifact presence (4.3 ± 0.6 vs. 4.1 ± 0.8, P = 0.085), and overall image quality (4.4 ± 0.4 vs. 4.3 ± 0.6, P = 0.448) showed no significant differences between the two methods. Representative RV and LV functional parameters, including RVEDV (102.2 (86.4, 120.4) ml vs. 104.0 (88.5, 120.3) ml, P = 0.294), RVEF (31.0 ± 11.1 % vs. 31.2 ± 11.0 %, P = 0.570), and LVEDV (106.2 (86.7, 131.3) ml vs. 105.8 (84.4, 130.3) ml, P = 0.450), also did not differ significantly between the two methods. Strong correlations (r > 0.900) and excellent agreement (ICC > 0.900) were found for all biventricular functional parameters between the two sequences. In subgroups with reduced LVEF (<50 %, n = 24) or elevated heart rate (≥80 bpm, n = 17), no significant differences were observed in any biventricular functional metric (P > 0.05 for all) between the two sequences. In comparison to multiple BH cine, FB AI MOCO cine achieved comparable image quality and biventricular functional parameters with shorter scan times, suggesting promising potential for clinical applications.
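As a reference for the agreement statistics used here, the sketch below computes the Bland-Altman bias and 95% limits of agreement plus a Pearson correlation for paired volume measurements; the values are hypothetical, not the study's data.

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman bias and 95% limits of agreement for paired measurements."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

# Hypothetical paired LVEDV measurements (ml): FB AI MOCO cine vs standard BH cine.
fb = [103, 99, 128, 88, 118, 113, 90, 142]
bh = [105, 98, 130, 86, 120, 111, 92, 140]

bias, lo, hi = bland_altman(fb, bh)
r = np.corrcoef(fb, bh)[0, 1]
print(f"bias={bias:.1f} ml, LoA=[{lo:.1f}, {hi:.1f}] ml, Pearson r={r:.3f}")
```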

Artificial Intelligence Iterative Reconstruction for Dose Reduction in Pediatric Chest CT: A Clinical Assessment via Below 3 Years Patients With Congenital Heart Disease.

Zhang F, Peng L, Zhang G, Xie R, Sun M, Su T, Ge Y

PubMed · Jul 1, 2025
To assess the performance of a newly introduced deep learning-based reconstruction algorithm, the artificial intelligence iterative reconstruction (AIIR), in reducing the dose of pediatric chest CT, using image data from patients under 3 years of age with congenital heart disease (CHD). The lung images available from routine-dose cardiac CT angiography (CTA) in patients under 3 years of age with CHD were employed as a reference for evaluating the paired low-dose chest CT. A total of 191 subjects were prospectively enrolled; the dose for chest CT was reduced to ~0.1 mSv while the cardiac CTA protocol was kept unchanged. The low-dose chest CT images, obtained with AIIR and with hybrid iterative reconstruction (HIR), were compared in terms of image quality, i.e., overall image quality and lung structure depiction, and diagnostic performance, i.e., severity assessment of pneumonia and airway stenosis. Compared with the reference, lung image quality was not significantly different on low-dose AIIR images (all P > 0.05) but was clearly inferior with HIR (all P < 0.05). Compared with HIR, low-dose AIIR images also achieved a pneumonia severity index closer to the reference (AIIR 4.32 ± 3.82 vs. Ref 4.37 ± 3.84, P > 0.05; HIR 5.12 ± 4.06 vs. Ref 4.37 ± 3.84, P < 0.05) and more consistent airway stenosis grading (consistently graded: AIIR 88.5% vs. HIR 56.5%). AIIR has the potential for large dose reduction in chest CT of patients under 3 years of age while preserving image quality and achieving diagnostic results nearly equivalent to routine-dose scans.
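The "consistently graded" percentages read as simple per-case agreement rates between the low-dose grading and the reference grading; a minimal sketch of that computation with hypothetical grades, not the study's data:

```python
import numpy as np

def percent_agreement(grades_a, grades_b) -> float:
    """Fraction of cases receiving the same ordinal grade under both readings."""
    a, b = np.asarray(grades_a), np.asarray(grades_b)
    return float(np.mean(a == b))

# Hypothetical airway stenosis grades (0-3) for a handful of cases.
reference = [0, 1, 2, 0, 3, 1, 0, 2]
aiir      = [0, 1, 2, 0, 3, 1, 0, 1]
hir       = [0, 2, 1, 0, 3, 2, 0, 1]

print(f"AIIR vs reference: {percent_agreement(aiir, reference):.1%}")
print(f"HIR vs reference:  {percent_agreement(hir, reference):.1%}")
```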

Deep learning algorithm enables automated Cobb angle measurements with high accuracy.

Hayashi D, Regnard NE, Ventre J, Marty V, Clovis L, Lim L, Nitche N, Zhang Z, Tournier A, Ducarouge A, Kompel AJ, Tannoury C, Guermazi A

PubMed · Jul 1, 2025
To determine the accuracy of automatic Cobb angle measurements by deep learning (DL) on full spine radiographs. Full spine radiographs of patients aged > 2 years were screened using the radiology reports to identify radiographs suitable for Cobb angle measurement. Two senior musculoskeletal radiologists and one senior orthopedic surgeon independently annotated Cobb angles exceeding 7°, labelling the angle location as proximal thoracic (apices between T3 and T5), main thoracic (apices between T6 and T11), or thoraco-lumbar (apices between T12 and L4). If at least two readers agreed on the number of angles and their locations, and the difference between comparable angles was < 8°, the ground truth was defined as the mean of their measurements. Otherwise, the radiographs were reviewed by the three annotators in consensus. The DL software (BoneMetrics, Gleamer) was evaluated against the manual annotation in terms of mean absolute error (MAE). A total of 345 patients were included in the study (age 33 ± 24 years, 221 women): 179 pediatric patients (< 22 years old) and 166 adult patients (22 to 85 years old). Fifty-three cases were reviewed in consensus. The MAE of the DL algorithm for the main curvature was 2.6° (95% CI [2.0; 3.3]). For the subgroup of pediatric patients, the MAE was 1.9° (95% CI [1.6; 2.2]) versus 3.3° (95% CI [2.2; 4.8]) for adults. The DL algorithm predicted the Cobb angle of scoliotic patients with high accuracy.
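The abstract does not state how the 95% CI for the MAE was obtained; a percentile bootstrap over per-case absolute errors is one common approach. A sketch under that assumption, with hypothetical angles rather than the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)

def mae_with_ci(truth, pred, n_boot=2000):
    """Mean absolute error with a percentile bootstrap 95% CI."""
    truth, pred = np.asarray(truth, float), np.asarray(pred, float)
    err = np.abs(pred - truth)
    boot = [rng.choice(err, size=err.size, replace=True).mean() for _ in range(n_boot)]
    return err.mean(), np.percentile(boot, [2.5, 97.5])

# Hypothetical Cobb angles (degrees): reader ground truth vs algorithm output.
truth = [12.0, 25.5, 40.2, 8.3, 55.0, 18.7, 33.1, 22.4]
pred  = [13.5, 24.0, 42.8, 9.0, 52.5, 20.1, 31.0, 24.2]

mae, (lo, hi) = mae_with_ci(truth, pred)
print(f"MAE = {mae:.1f}° (95% CI [{lo:.1f}; {hi:.1f}])")
```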
