Assessment of biventricular cardiac function using free-breathing artificial intelligence cine with motion correction: Comparison with standard multiple breath-holding cine.

Ran L, Yan X, Zhao Y, Yang Z, Chen Z, Jia F, Song X, Huang L, Xia L

PubMed · Jul 1, 2025
To assess the image quality and biventricular function obtained with a free-breathing artificial intelligence cine method with motion correction (FB AI MOCO). A total of 72 participants (mean age 38.3 ± 15.4 years, 40 males) prospectively enrolled in this single-center, cross-sectional study underwent cine scans using standard breath-holding (BH) cine and FB AI MOCO cine at 3.0 Tesla. Image quality was evaluated on a 5-point ordinal Likert scale based on blood-pool to myocardium contrast, endocardial edge definition, and artifacts, and the overall quality score was calculated as the equally weighted average of the three criteria; apparent signal-to-noise ratio (aSNR) and estimated contrast-to-noise ratio (eCNR) were also assessed. Biventricular functional parameters, including left ventricular (LV) and right ventricular (RV) end-diastolic volume (EDV), end-systolic volume (ESV), stroke volume (SV), ejection fraction (EF), and LV end-diastolic mass (LVEDM), were also assessed. Comparisons between the two sequences were made using the paired t-test and the Wilcoxon signed-rank test, and correlation was assessed using Pearson correlation. The agreement of quantitative parameters was assessed using the intraclass correlation coefficient (ICC) and Bland-Altman analysis. P < 0.05 was considered statistically significant. The total acquisition time of the entire stack for FB AI MOCO cine (14.7 s ± 1.9 s) was notably shorter than that for standard BH cine (82.6 s ± 11.9 s, P < 0.001). The aSNR did not differ significantly between FB AI MOCO cine and standard BH cine (76.7 ± 20.7 vs. 79.8 ± 20.7, P = 0.193). The eCNR of FB AI MOCO cine was higher than that of standard BH cine (191.6 ± 54.0 vs. 155.8 ± 68.4, P < 0.001), as were the scores for blood-pool to myocardium contrast (4.6 ± 0.5 vs. 4.4 ± 0.6, P = 0.003). Qualitative scores for endocardial edge definition (4.2 ± 0.5 vs. 4.3 ± 0.7, P = 0.123), artifact presence (4.3 ± 0.6 vs. 4.1 ± 0.8, P = 0.085), and overall image quality (4.4 ± 0.4 vs. 4.3 ± 0.6, P = 0.448) showed no significant differences between the two methods. Representative RV and LV functional parameters - including RVEDV (102.2 (86.4, 120.4) ml vs. 104.0 (88.5, 120.3) ml, P = 0.294), RVEF (31.0 ± 11.1 % vs. 31.2 ± 11.0 %, P = 0.570), and LVEDV (106.2 (86.7, 131.3) ml vs. 105.8 (84.4, 130.3) ml, P = 0.450) - also did not differ significantly between the two methods. Strong correlations (r > 0.900) and excellent agreement (ICC > 0.900) were found for all biventricular functional parameters between the two sequences. In subgroups with reduced LVEF (<50 %, n = 24) or elevated heart rate (≥80 bpm, n = 17), no significant differences were observed in any biventricular functional metric (P > 0.05 for all) between the two sequences. Compared with multiple BH cine, FB AI MOCO cine achieved comparable image quality and biventricular functional parameters with shorter scan times, suggesting promising potential for clinical application.
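
The agreement analysis described here (paired t-test, Wilcoxon signed-rank test, Pearson correlation, Bland-Altman) can be reproduced on any pair of per-patient measurements. Below is a minimal Python sketch on hypothetical arrays `bh` and `fb_ai_moco` standing in for, e.g., LVEDV values; the numbers are illustrative, not the study's data. The ICC reported in the abstract would typically come from a dedicated routine (e.g., pingouin's intraclass_corr) and is omitted here.

```python
import numpy as np
from scipy import stats

# Hypothetical paired per-patient measurements (e.g., LVEDV in mL)
bh = np.array([106.2, 95.4, 131.3, 86.7, 120.0])          # standard breath-hold cine
fb_ai_moco = np.array([105.8, 96.1, 130.3, 84.4, 121.2])  # free-breathing AI MOCO cine

# Paired comparisons and correlation, as described in the abstract
t_stat, p_t = stats.ttest_rel(bh, fb_ai_moco)   # paired t-test
w_stat, p_w = stats.wilcoxon(bh, fb_ai_moco)    # Wilcoxon signed-rank test
r, p_r = stats.pearsonr(bh, fb_ai_moco)         # Pearson correlation

# Bland-Altman statistics: bias and 95% limits of agreement
diff = fb_ai_moco - bh
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))

print(f"paired t-test p={p_t:.3f}, Wilcoxon p={p_w:.3f}, Pearson r={r:.3f}")
print(f"Bland-Altman bias={bias:.2f} mL, 95% LoA=({loa[0]:.2f}, {loa[1]:.2f}) mL")
```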

Adoption of artificial intelligence in healthcare: survey of health system priorities, successes, and challenges.

Poon EG, Lemak CH, Rojas JC, Guptill J, Classen D

PubMed · Jul 1, 2025
The US healthcare system faces significant challenges, including clinician burnout, operational inefficiencies, and concerns about patient safety. Artificial intelligence (AI), particularly generative AI, has the potential to address these challenges, but its adoption, effectiveness, and barriers to implementation are not well understood. To evaluate the current state of AI adoption in US healthcare systems and to assess successes and barriers to implementation during the early generative AI era. This cross-sectional survey was conducted in Fall 2024 and included 67 health system members of the Scottsdale Institute, a collaborative of US non-profit healthcare organizations. Forty-three health systems completed the survey (64% response rate). Respondents provided data on the deployment status and perceived success of 37 AI use cases across 10 categories. The primary outcomes were the extent of AI use case development, piloting, or deployment; the degree of reported success for AI use cases; and the most significant barriers to adoption. Across the 43 responding health systems, AI adoption and perceptions of success varied significantly. Ambient Notes, a generative AI tool for clinical documentation, was the only use case for which 100% of respondents reported adoption activities, and 53% reported a high degree of success with using AI for clinical documentation. Imaging and radiology emerged as the most widely deployed clinical AI use case, with 90% of organizations reporting at least partial deployment, although successes with diagnostic use cases were limited. Similarly, many organizations have deployed AI for clinical risk stratification such as early sepsis detection, but only 38% report high success in this area. Immature AI tools were identified as a significant barrier to adoption, cited by 77% of respondents, followed by financial concerns (47%) and regulatory uncertainty (40%). Ambient Notes is rapidly advancing in US healthcare systems and demonstrating early success. Other AI use cases show varying degrees of adoption and success, constrained by barriers such as immature AI tools, financial concerns, and regulatory uncertainty. Addressing these challenges through robust evaluations, shared strategies, and governance models will be essential to ensure effective integration and adoption of AI into healthcare practice.

Assessment of AI-accelerated T2-weighted brain MRI, based on clinical ratings and image quality evaluation.

Nonninger JN, Kienast P, Pogledic I, Mallouhi A, Barkhof F, Trattnig S, Robinson SD, Kasprian G, Haider L

PubMed · Jul 1, 2025
To compare clinical ratings and signal-to-noise ratio (SNR) measures of a commercially available deep learning-based MRI reconstruction method (T2<sub>(DR)</sub>) against conventional T2-weighted turbo spin echo brain MRI (T2<sub>(CN)</sub>). 100 consecutive patients with various neurological conditions underwent both T2<sub>(DR)</sub> and T2<sub>(CN)</sub> on a Siemens Vida 3 T scanner with a 64-channel head coil in the same examination. Acquisition times were 3.33 min for T2<sub>(CN)</sub> and 1.04 min for T2<sub>(DR)</sub>. Four neuroradiologists evaluated overall image quality (OIQ), diagnostic safety (DS), and image artifacts (IA), blinded to the acquisition mode. SNR and SNR<sub>eff</sub> (adjusted for acquisition time) were calculated for air, grey and white matter, and cerebrospinal fluid. The mean patient age was 43.6 years (SD 20.3), with 54 females. The distribution of non-diagnostic ratings did not differ significantly between T2<sub>(CN)</sub> and T2<sub>(DR)</sub> (IA: p = 0.108; OIQ: p = 0.700; DS: p = 0.652). However, when considering the full spectrum of ratings, significant differences favouring T2<sub>(CN)</sub> emerged in OIQ (p = 0.003) and IA (p < 0.001). T2<sub>(CN)</sub> had higher SNR (157.9, SD 123.4) than T2<sub>(DR)</sub> (112.8, SD 82.7), p < 0.001, but T2<sub>(DR)</sub> demonstrated superior SNR<sub>eff</sub> (14.1, SD 10.3) compared to T2<sub>(CN)</sub> (10.8, SD 8.5), p < 0.001. Our results suggest that while T2<sub>(DR)</sub> may be clinically applicable in a diagnostic setting, it does not fully match the quality of high-standard conventional T2<sub>(CN)</sub> MRI acquisitions.
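
The abstract does not spell out how SNR<sub>eff</sub> is derived, but a common convention (an assumption here) normalizes SNR by the square root of the acquisition time. The sketch below applies that convention, with time in seconds, to the reported group means and acquisition times; it lands close to, though not exactly at, the reported SNR<sub>eff</sub> values.

```python
import numpy as np

def snr_eff(snr_value: float, acq_time_s: float) -> float:
    """Time-efficiency-adjusted SNR, assuming SNR_eff = SNR / sqrt(acquisition time in seconds)."""
    return snr_value / np.sqrt(acq_time_s)

# Reported whole-cohort mean SNRs and acquisition times (3.33 min and 1.04 min)
snr_cn, snr_dr = 157.9, 112.8
t_cn_s, t_dr_s = 3.33 * 60, 1.04 * 60

print(f"T2(CN): SNR_eff ~ {snr_eff(snr_cn, t_cn_s):.1f}")  # ~11, close to the reported 10.8
print(f"T2(DR): SNR_eff ~ {snr_eff(snr_dr, t_dr_s):.1f}")  # ~14, close to the reported 14.1
```

Under this assumed normalization, the faster deep-learning acquisition overtakes the conventional sequence once scan time is accounted for, which matches the direction of the reported result.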

Artificial Intelligence in Prostate Cancer Diagnosis on Magnetic Resonance Imaging: Time for a New PARADIGM.

Ng AB, Giganti F, Kasivisvanathan V

PubMed · Jul 1, 2025
Artificial intelligence (AI) may provide a solution for improving access to expert, timely, and accurate magnetic resonance imaging (MRI) interpretation. The PARADIGM trial will provide level 1 evidence on the role of AI in the diagnosis of prostate cancer on MRI.

Deep learning-assisted detection of meniscus and anterior cruciate ligament combined tears in adult knee magnetic resonance imaging: a crossover study with arthroscopy correlation.

Behr J, Nich C, D'Assignies G, Zavastin C, Zille P, Herpe G, Triki R, Grob C, Pujol N

PubMed · Jul 1, 2025
We aimed to compare the diagnostic performance of physicians in the detection of arthroscopically confirmed meniscus and anterior cruciate ligament (ACL) tears on knee magnetic resonance imaging (MRI), with and without assistance from a deep learning (DL) model. We obtained preoperative MR images from 88 knees of patients who underwent arthroscopic meniscal repair, with or without ACL reconstruction. Ninety-eight MR images of knees without signs of meniscus or ACL tears were obtained from a publicly available database after matching on age and ACL status (normal or torn), resulting in a global dataset of 186 MRI examinations. The Keros<sup>®</sup> (Incepto, Paris) DL algorithm, previously trained for the detection and characterization of meniscus and ACL tears, was used for MRI assessment. Magnetic resonance images were individually and blindly annotated by three physicians and by the DL algorithm. After three weeks, the three human raters repeated the image assessment with model assistance, performed in a different order. The Keros<sup>®</sup> algorithm achieved an area under the curve (AUC) of 0.96 (95% CI 0.93, 0.99), 0.91 (95% CI 0.85, 0.96), and 0.99 (95% CI 0.98, 0.997) for the detection of medial meniscus, lateral meniscus, and ACL tears, respectively. With model assistance, physicians achieved higher sensitivity (91% vs. 83%, p = 0.04) and similar specificity (91% vs. 87%, p = 0.09) in the detection of medial meniscus tears. For lateral meniscus tears, sensitivity and specificity were similar with and without model assistance. For ACL tears, physicians achieved higher specificity when assisted by the algorithm (70% vs. 51%, p = 0.01) but similar sensitivity with and without model assistance (93% vs. 96%, p = 0.13). The current model consistently helped physicians in the detection of medial meniscus and ACL tears, notably when they were combined. Diagnostic study, Level III.
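
To illustrate how standalone model performance and the paired change in reader sensitivity can be quantified, the sketch below computes an AUC from continuous model scores and applies McNemar's test to unaided versus AI-assisted reader calls on the same cases. All arrays are hypothetical placeholders, not the study's data, and the abstract does not state which paired test its p-values come from.

```python
import numpy as np
from sklearn.metrics import roc_auc_score
from statsmodels.stats.contingency_tables import mcnemar

# Hypothetical ground truth (1 = arthroscopically confirmed tear) and model probabilities
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
model_prob = np.array([0.92, 0.10, 0.85, 0.60, 0.35, 0.05, 0.78, 0.40, 0.66, 0.22])
print("model AUC:", roc_auc_score(y_true, model_prob))

# Paired reader calls on tear-positive cases only (1 = tear detected), unaided vs. AI-assisted
unaided = np.array([1, 1, 0, 1, 0])
assisted = np.array([1, 1, 1, 1, 1])

# McNemar's test on the 2x2 table of concordant/discordant paired decisions
table = np.array([
    [np.sum((unaided == 1) & (assisted == 1)), np.sum((unaided == 1) & (assisted == 0))],
    [np.sum((unaided == 0) & (assisted == 1)), np.sum((unaided == 0) & (assisted == 0))],
])
print("McNemar p-value:", mcnemar(table, exact=True).pvalue)
```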

Association between muscle mass assessed by an artificial intelligence-based ultrasound imaging system and quality of life in patients with cancer-related malnutrition.

de Luis D, Cebria A, Primo D, Izaola O, Godoy EJ, Gomez JJL

PubMed · Jul 1, 2025
Emerging evidence suggests that diminished skeletal muscle mass is associated with lower health-related quality of life (HRQOL) in individuals with cancer. To our knowledge, no studies in the literature have used an ultrasound system to evaluate muscle mass and its relationship with HRQOL. The aim of our study was to evaluate the relationship between HRQOL, determined by the EuroQol-5D tool, and muscle mass, determined by an artificial intelligence-based ultrasound system at the rectus femoris (RF) level, in outpatients with cancer. Anthropometric data by bioimpedance (BIA), muscle mass by the artificial intelligence-based ultrasound system at the RF level, biochemical parameters, dynamometry, and HRQOL were measured. A total of 158 patients with cancer were included, with a mean age of 70.6 ± 9.8 years. The mean body mass index was 24.4 ± 4.1 kg/m<sup>2</sup>, with a mean body weight of 63.9 ± 11.7 kg (38% females and 62% males). A total of 57 patients (36.1%) had a severe degree of malnutrition. The tumor locations were colon-rectum in 66 patients (41.7%), esophagus-stomach in 56 (35.4%), pancreas in 16 (10.1%), and other locations in 20.2%. A positive correlation between cross-sectional area (CSA), muscle thickness (MT), pennation angle, BIA parameters, and muscle strength was detected. Patients in the groups below the median of the visual scale and of the EuroQol-5D index had lower CSA, MT, BIA, and muscle strength values. In the multivariate models, CSA (beta 4.25, 95% CI 2.03-6.47) remained associated with the visual scale as the dependent variable, and muscle strength (beta 0.008, 95% CI 0.003-0.14) with the EuroQol-5D index. Muscle strength and pennation angle by ultrasound were associated with better scores in the mobility, self-care, and daily activities dimensions. CSA, MT, and pennation angle of the RF determined by an artificial intelligence-based muscle ultrasound system in outpatients with cancer were related to HRQOL determined by EuroQol-5D.
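
A minimal sketch of the kind of multivariate linear model behind the reported beta coefficients and 95% confidence intervals, using statsmodels OLS. The DataFrame columns (a quality-of-life visual analogue scale, RF cross-sectional area, age) are hypothetical stand-ins generated at random, not the study's variables or covariate set.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 158
# Hypothetical stand-in data for illustration only
df = pd.DataFrame({
    "vas": rng.normal(60, 15, n),       # EuroQol visual analogue scale
    "csa_cm2": rng.normal(4.0, 1.0, n), # rectus femoris cross-sectional area
    "age": rng.normal(70, 10, n),
})

# Multivariate linear regression: VAS as dependent variable, CSA adjusted for age
model = smf.ols("vas ~ csa_cm2 + age", data=df).fit()
print(model.params["csa_cm2"])          # beta coefficient for CSA
print(model.conf_int().loc["csa_cm2"])  # 95% CI for the CSA beta
```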

Radiation and contrast dose reduction in coronary CT angiography for slender patients with 70 kV tube voltage and deep learning image reconstruction.

Ren Z, Shen L, Zhang X, He T, Yu N, Zhang M

PubMed · Jul 1, 2025
To evaluate the radiation and contrast dose reduction potential of combining 70 kV tube voltage with deep learning image reconstruction (DLIR) in coronary computed tomography angiography (CCTA) for slender patients with a body mass index (BMI) ≤25 kg/m<sup>2</sup>. Sixty patients referred for CCTA were randomly divided into two groups: group A with 120 kV and a contrast agent dose of 0.8 mL/kg, and group B with 70 kV and a contrast agent dose of 0.5 mL/kg. Group A used adaptive statistical iterative reconstruction-V (ASIR-V) at a 50% strength level (50% ASIR-V), while group B used 50% ASIR-V, DLIR at low level (DLIR-L), DLIR at medium level (DLIR-M), and DLIR at high level (DLIR-H) for image reconstruction. The CT values and SD values of the coronary arteries and pericardial fat were measured, and the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were calculated. Image quality was subjectively evaluated by two radiologists using a five-point scoring system. The effective radiation dose (ED) and contrast dose were calculated and compared. Group B showed a 75.6% reduction in radiation dose and a 32.9% reduction in contrast dose compared with group A. Group B exhibited higher CT values of the coronary arteries than group A, and DLIR-L, DLIR-M, and DLIR-H in group B provided higher SNR values, CNR values, and subjective scores, with DLIR-H having the lowest noise and the highest subjective scores. Combining 70 kV with DLIR significantly reduces radiation and contrast dose while improving image quality in CCTA for slender patients, with DLIR-H having the greatest effect on image quality. The combination of 70 kV and DLIR-H may therefore be used in CCTA for slender patients to significantly reduce radiation and contrast dose while improving image quality.
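
The SNR and CNR definitions typically used in CCTA image-quality studies (an assumption here, since the abstract does not spell out its formulas) divide vessel attenuation, or vessel-minus-pericardial-fat attenuation, by the image noise SD. A short sketch with illustrative ROI values:

```python
def snr(ct_vessel_hu: float, sd_noise_hu: float) -> float:
    """Signal-to-noise ratio: vessel attenuation divided by image noise (SD), one common definition."""
    return ct_vessel_hu / sd_noise_hu

def cnr(ct_vessel_hu: float, ct_fat_hu: float, sd_noise_hu: float) -> float:
    """Contrast-to-noise ratio: vessel-to-pericardial-fat contrast divided by image noise."""
    return (ct_vessel_hu - ct_fat_hu) / sd_noise_hu

# Illustrative ROI measurements in Hounsfield units, not the study's data
print(snr(550.0, 25.0))         # e.g., aortic root ROI
print(cnr(550.0, -90.0, 25.0))  # vessel vs. pericardial fat
```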

CZT-based photon-counting-detector CT with deep-learning reconstruction: image quality and diagnostic confidence for lung tumor assessment.

Sasaki T, Kuno H, Nomura K, Muramatsu Y, Aokage K, Samejima J, Taki T, Goto E, Wakabayashi M, Furuya H, Taguchi H, Kobayashi T

PubMed · Jul 1, 2025
This is a preliminary analysis of one of the secondary endpoints in the prospective study cohort. The aim of this study is to assess the image quality and diagnostic confidence for lung cancer of CT images generated using cadmium-zinc-telluride (CZT)-based photon-counting-detector CT (PCD-CT), comparing super-high-resolution (SHR) images with conventional normal-resolution (NR) CT images. Twenty-five patients (median age 75 years, interquartile range 66-78 years; 18 men and 7 women) with 29 lung nodules overall (including two patients with 4 and 2 nodules, respectively) were enrolled to undergo PCD-CT. Three types of images were reconstructed: a 512 × 512 matrix with adaptive iterative dose reduction 3D (AIDR 3D) as the NR<sub>AIDR3D</sub> image, a 1024 × 1024 matrix with AIDR 3D as the SHR<sub>AIDR3D</sub> image, and a 1024 × 1024 matrix with deep-learning reconstruction (DLR) as the SHR<sub>DLR</sub> image. For qualitative analysis, two radiologists evaluated the matched reconstructed series twice (NR<sub>AIDR3D</sub> vs. SHR<sub>AIDR3D</sub> and SHR<sub>AIDR3D</sub> vs. SHR<sub>DLR</sub>) and scored the presence of imaging findings, such as spiculation, lobulation, and appearance of ground-glass opacity or air bronchiologram, as well as image quality and diagnostic confidence, using a 5-point Likert scale. For quantitative analysis, contrast-to-noise ratios (CNRs) of the three image types were compared. In the qualitative analysis, compared to NR<sub>AIDR3D</sub>, SHR<sub>AIDR3D</sub> yielded higher image quality and diagnostic confidence, except for image noise (all P < 0.01). In comparison with SHR<sub>AIDR3D</sub>, SHR<sub>DLR</sub> yielded higher image quality and diagnostic confidence (all P < 0.01). In the quantitative analysis, CNRs in the modified NR<sub>AIDR3D</sub> and SHR<sub>DLR</sub> groups were higher than those in the SHR<sub>AIDR3D</sub> group (P = 0.003 and P < 0.001, respectively). In PCD-CT, SHR<sub>DLR</sub> images provided the highest image quality and diagnostic confidence for lung tumor evaluation, followed by SHR<sub>AIDR3D</sub> and NR<sub>AIDR3D</sub> images. DLR demonstrated superior noise reduction compared to the other reconstruction methods.
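
When the same nodules are reconstructed three ways (NR<sub>AIDR3D</sub>, SHR<sub>AIDR3D</sub>, SHR<sub>DLR</sub>), a repeated-measures comparison such as the Friedman test with pairwise Wilcoxon follow-ups is one reasonable way to compare CNR. The abstract does not state its exact tests, so the sketch below is only an assumed analysis on placeholder numbers.

```python
import numpy as np
from scipy import stats

# Hypothetical per-nodule CNR values for the three reconstructions (placeholder numbers)
cnr_nr = np.array([6.1, 5.8, 7.0, 6.4, 5.9, 6.7])
cnr_shr_aidr = np.array([5.2, 5.0, 6.1, 5.6, 5.1, 5.8])
cnr_shr_dlr = np.array([7.4, 7.1, 8.2, 7.6, 7.0, 7.9])

# Omnibus repeated-measures test across the three matched reconstructions
stat, p = stats.friedmanchisquare(cnr_nr, cnr_shr_aidr, cnr_shr_dlr)
print(f"Friedman chi2={stat:.2f}, p={p:.4f}")

# Pairwise follow-ups (in practice these would be corrected for multiple comparisons)
print(stats.wilcoxon(cnr_shr_dlr, cnr_shr_aidr).pvalue)
print(stats.wilcoxon(cnr_nr, cnr_shr_aidr).pvalue)
```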

Added value of artificial intelligence for the detection of pelvic and hip fractures.

Jaillat A, Cyteval C, Baron Sarrabere MP, Ghomrani H, Maman Y, Thouvenin Y, Pastor M

PubMed · Jul 1, 2025
To assess the added value of artificial intelligence (AI) for radiologists and emergency physicians in the radiographic detection of pelvic fractures. In this retrospective study, one junior radiologist reviewed 940 X-rays of patients admitted to the emergency department for a fall with suspected pelvic fracture between March 2020 and June 2021. The radiologist analyzed the X-rays first alone and then with an AI system (BoneView). In a random sample of 100 exams, the same procedure was repeated by five other readers (three radiologists and two emergency physicians with 3-30 years of experience). The reference diagnosis was based on the patient's full set of medical imaging exams and medical records in the months following emergency admission. A total of 633 confirmed pelvic fractures (64.8% hip and 35.2% pelvic ring) in 940 patients and 68 pelvic fractures (60% hip and 40% pelvic ring) in the 100-patient sample were included. In the whole dataset, the junior radiologist achieved a significant sensitivity improvement with AI assistance (Se<sub>-PELVIC</sub> = 77.25% to 83.73%, P < 0.001; Se<sub>-HIP</sub> = 93.24% to 96.49%, P < 0.001; Se<sub>-PELVIC RING</sub> = 54.60% to 64.50%, P < 0.001). However, there was a significant decrease in specificity with AI assistance (Spe<sub>-PELVIC</sub> = 95.24% to 93.25%, P = 0.005; Spe<sub>-HIP</sub> = 98.30% to 96.90%, P = 0.005). In the 100-patient sample, the two emergency physicians obtained improvements in fracture detection sensitivity across the pelvic area of +14.70% (P = 0.0011) and +10.29% (P < 0.007), respectively, without a significant decrease in specificity. For hip fractures, E1's sensitivity increased from 59.46% to 70.27% (P = 0.04) and E2's sensitivity increased from 78.38% to 86.49% (P = 0.08). For pelvic ring fractures, E1's sensitivity increased from 12.90% to 32.26% (P = 0.012) and E2's sensitivity increased from 19.35% to 32.26% (P = 0.043). AI improved the diagnostic performance of emergency physicians and of radiologists with limited experience in pelvic fracture screening.
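
The sensitivity and specificity deltas reported above come from two confusion matrices built against the same reference standard, once without and once with AI assistance. A minimal sketch with hypothetical labels is below; the paired significance test behind the abstract's p-values is not specified, so only the descriptive metrics are computed.

```python
import numpy as np

def sensitivity_specificity(y_true: np.ndarray, y_pred: np.ndarray) -> tuple[float, float]:
    """Sensitivity and specificity from binary labels (1 = fracture present / detected)."""
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical reference standard and one reader's calls, unaided vs. AI-assisted
y_true = np.array([1, 1, 1, 1, 0, 0, 0, 1, 0, 0])
unaided = np.array([1, 0, 1, 1, 0, 0, 1, 0, 0, 0])
assisted = np.array([1, 1, 1, 1, 0, 0, 1, 1, 0, 0])

se_u, sp_u = sensitivity_specificity(y_true, unaided)
se_a, sp_a = sensitivity_specificity(y_true, assisted)
print(f"sensitivity: {se_u:.0%} -> {se_a:.0%}; specificity: {sp_u:.0%} -> {sp_a:.0%}")
```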

Dynamic glucose enhanced imaging using direct water saturation.

Knutsson L, Yadav NN, Mohammed Ali S, Kamson DO, Demetriou E, Seidemo A, Blair L, Lin DD, Laterra J, van Zijl PCM

PubMed · Jul 1, 2025
Dynamic glucose enhanced (DGE) MRI studies employ CEST or spin lock (CESL) to study glucose uptake. Currently, these methods are hampered by low effect size and sensitivity to motion. To overcome this, we propose to utilize exchange-based linewidth (LW) broadening of the direct water saturation (DS) curve of the water saturation spectrum (Z-spectrum) during and after glucose infusion (DS-DGE MRI). To estimate the glucose-infusion-induced LW changes (ΔLW), Bloch-McConnell simulations were performed for normoglycemia and hyperglycemia in blood, gray matter (GM), white matter (WM), CSF, and malignant tumor tissue. Whole-brain DS-DGE imaging was implemented at 3 T using dynamic Z-spectral acquisitions (1.2 s per offset frequency, 38 s per spectrum) and assessed on four brain tumor patients using infusion of 35 g of D-glucose. To assess ΔLW, a deep learning-based Lorentzian fitting approach was used on voxel-based DS spectra acquired before, during, and post-infusion. Area-under-the-curve (AUC) images, obtained from the dynamic ΔLW time curves, were compared qualitatively to perfusion-weighted imaging parametric maps. In simulations, ΔLW was 1.3%, 0.30%, 0.29/0.34%, 7.5%, and 13% in arterial blood, venous blood, GM/WM, malignant tumor tissue, and CSF, respectively. In vivo, ΔLW was approximately 1% in GM/WM, 5% to 20% for different tumor types, and 40% in CSF. The resulting DS-DGE AUC maps clearly outlined lesion areas. DS-DGE MRI is highly promising for assessing D-glucose uptake. Initial results in brain tumor patients show high-quality AUC maps of glucose-induced line broadening and DGE-based lesion enhancement similar and/or complementary to perfusion-weighted imaging.
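
The study's deep learning-based Lorentzian fitting is specific to the authors' pipeline, but the underlying idea — fitting a Lorentzian line to the direct-water-saturation portion of the Z-spectrum and tracking its linewidth over the infusion — can be sketched with an ordinary least-squares fit. The synthetic spectrum below is illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian_z(offset_ppm, amplitude, center_ppm, fwhm_ppm, baseline):
    """Single-pool Lorentzian Z-spectrum model: baseline minus a Lorentzian dip at the water resonance."""
    return baseline - amplitude * (fwhm_ppm / 2) ** 2 / ((offset_ppm - center_ppm) ** 2 + (fwhm_ppm / 2) ** 2)

# Synthetic DS spectrum with a true FWHM of 1.8 ppm plus a little noise (illustrative only)
offsets = np.linspace(-3, 3, 41)  # saturation offsets in ppm
rng = np.random.default_rng(1)
z = lorentzian_z(offsets, amplitude=0.9, center_ppm=0.0, fwhm_ppm=1.8, baseline=1.0)
z += rng.normal(0, 0.005, offsets.size)

# Fit and report the linewidth; repeating this for each dynamic yields a deltaLW time curve
p0 = [0.8, 0.0, 1.5, 1.0]  # initial guesses: amplitude, center, FWHM, baseline
popt, _ = curve_fit(lorentzian_z, offsets, z, p0=p0)
print(f"fitted FWHM = {popt[2]:.2f} ppm")
```

Integrating the fitted linewidth change over the infusion window per voxel (for example with numpy.trapz) would then yield AUC maps analogous to those described above.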