Assessment of AI-accelerated T2-weighted brain MRI, based on clinical ratings and image quality evaluation.

Nonninger JN, Kienast P, Pogledic I, Mallouhi A, Barkhof F, Trattnig S, Robinson SD, Kasprian G, Haider L

PubMed · Jul 1 2025
To compare clinical ratings and signal-to-noise ratio (SNR) measures of a commercially available deep learning-based MRI reconstruction method (T2(DR)) against conventional T2 turbo spin echo brain MRI (T2(CN)). A total of 100 consecutive patients with various neurological conditions underwent both T2(DR) and T2(CN) on a Siemens Vida 3 T scanner with a 64-channel head coil in the same examination. Acquisition times were 3.33 min for T2(CN) and 1.04 min for T2(DR). Four neuroradiologists evaluated overall image quality (OIQ), diagnostic safety (DS), and image artifacts (IA), blinded to the acquisition mode. SNR and SNReff (adjusted for acquisition time) were calculated for air, grey and white matter, and cerebrospinal fluid. The mean patient age was 43.6 years (SD 20.3), with 54 females. The distribution of non-diagnostic ratings did not differ significantly between T2(CN) and T2(DR) (IA: p = 0.108; OIQ: p = 0.700; DS: p = 0.652). However, when considering the full spectrum of ratings, significant differences favouring T2(CN) emerged in OIQ (p = 0.003) and IA (p < 0.001). T2(CN) had higher SNR (157.9, SD 123.4) than T2(DR) (112.8, SD 82.7), p < 0.001, but T2(DR) demonstrated superior SNReff (14.1, SD 10.3) compared to T2(CN) (10.8, SD 8.5), p < 0.001. Our results suggest that while T2(DR) may be clinically applicable in a diagnostic setting, it does not fully match the quality of high-standard conventional T2(CN) MRI acquisitions.
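
The abstract does not spell out how SNReff is adjusted for acquisition time. A common convention, broadly consistent with the reported means (157.9/√(3.33·60) ≈ 11.2 and 112.8/√(1.04·60) ≈ 14.3 versus the per-patient means of 10.8 and 14.1), is to divide SNR by the square root of the scan time. A minimal sketch under that assumption:

```python
import numpy as np

def snr_efficiency(snr: float, acquisition_time_min: float) -> float:
    """Time-normalised SNR, assuming SNR_eff = SNR / sqrt(scan time in seconds)."""
    return snr / np.sqrt(acquisition_time_min * 60.0)

# Mean values reported in the abstract, for illustration only.
print(snr_efficiency(157.9, 3.33))  # conventional T2 TSE, ~11.2
print(snr_efficiency(112.8, 1.04))  # deep learning-reconstructed T2, ~14.3
```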

Feasibility/clinical utility of half-Fourier single-shot turbo spin echo imaging combined with deep learning reconstruction in gynecologic magnetic resonance imaging.

Kirita M, Himoto Y, Kurata Y, Kido A, Fujimoto K, Abe H, Matsumoto Y, Harada K, Morita S, Yamaguchi K, Nickel D, Mandai M, Nakamoto Y

PubMed · Jul 1 2025
When antispasmodics are unavailable, periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER; called BLADE by Siemens Healthineers) or half-Fourier single-shot turbo spin echo (HASTE) is used clinically in gynecologic MRI. However, their image quality is limited compared to turbo spin echo (TSE) with antispasmodics. Even with antispasmodics, TSE can be affected by artifacts, necessitating a rapid backup sequence. This study aimed to investigate the utility of HASTE with deep learning reconstruction and variable flip angle evolution (iHASTE) compared to conventional sequences with and without antispasmodics. This retrospective study included MRI scans without antispasmodics from 79 patients who underwent iHASTE, HASTE, and BLADE, and MRI scans with antispasmodics from 79 case-control matched patients who underwent TSE. Three radiologists qualitatively evaluated image quality, robustness to artifacts, tissue contrast, and uterine lesion margins. Tissue contrast was also quantitatively evaluated. Quantitative evaluations revealed that iHASTE exhibited significantly superior tissue contrast to HASTE and BLADE. Qualitative evaluations indicated that iHASTE outperformed HASTE in overall quality. Two of three radiologists judged iHASTE to be significantly superior to BLADE, while two of three judged TSE to be significantly superior to iHASTE. iHASTE demonstrated greater robustness to artifacts than both BLADE and TSE. Lesion margins received lower scores with iHASTE than with BLADE and TSE. iHASTE is a viable clinical option for patients undergoing gynecologic MRI without antispasmodics. iHASTE may also be considered a useful add-on sequence in patients undergoing MRI with antispasmodics.
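
The quantitative tissue-contrast metric is not defined in the abstract; a common ROI-based choice is the normalised difference between the mean signal intensities of two tissues. A hedged sketch, assuming that definition and hypothetical ROI values:

```python
import numpy as np

def tissue_contrast(roi_a: np.ndarray, roi_b: np.ndarray) -> float:
    """Normalised contrast between two regions of interest, assuming
    contrast = (S_a - S_b) / (S_a + S_b) on the mean signal intensities."""
    s_a, s_b = roi_a.mean(), roi_b.mean()
    return (s_a - s_b) / (s_a + s_b)

# Hypothetical endometrium and myometrium ROI samples on matched slices.
rng = np.random.default_rng(0)
endometrium = rng.normal(420.0, 25.0, size=200)
myometrium = rng.normal(250.0, 20.0, size=200)
print(tissue_contrast(endometrium, myometrium))
```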

Added value of artificial intelligence for the detection of pelvic and hip fractures.

Jaillat A, Cyteval C, Baron Sarrabere MP, Ghomrani H, Maman Y, Thouvenin Y, Pastor M

PubMed · Jul 1 2025
To assess the added value of artificial intelligence (AI) for radiologists and emergency physicians in the radiographic detection of pelvic fractures. In this retrospective study, one junior radiologist reviewed 940 X-rays of patients admitted to the emergency department after a fall with suspected pelvic fracture between March 2020 and June 2021. The radiologist analyzed the X-rays alone and then with the assistance of an AI system (BoneView). In a random sample of 100 exams, the same procedure was repeated by five additional readers (three radiologists and two emergency physicians with 3-30 years of experience). The reference diagnosis was based on the patient's full set of medical imaging exams and medical records in the months following emergency admission. A total of 633 confirmed pelvic fractures (64.8% hip and 35.2% pelvic ring) in 940 patients and 68 pelvic fractures (60% hip and 40% pelvic ring) in the 100-patient sample were included. In the whole dataset, the junior radiologist achieved a significant sensitivity improvement with AI assistance (Se-PELVIC: 77.25% to 83.73%, p < 0.001; Se-HIP: 93.24% to 96.49%, p < 0.001; Se-PELVIC RING: 54.60% to 64.50%, p < 0.001). However, there was a significant decrease in specificity with AI assistance (Spe-PELVIC: 95.24% to 93.25%, p = 0.005; Spe-HIP: 98.30% to 96.90%, p = 0.005). In the 100-patient sample, the two emergency physicians obtained improvements in fracture detection sensitivity across the pelvic area of +14.70% (p = 0.0011) and +10.29% (p < 0.007), respectively, without a significant decrease in specificity. For hip fractures, E1's sensitivity increased from 59.46% to 70.27% (p = 0.04), and E2's sensitivity increased from 78.38% to 86.49% (p = 0.08). For pelvic ring fractures, E1's sensitivity increased from 12.90% to 32.26% (p = 0.012), and E2's sensitivity increased from 19.35% to 32.26% (p = 0.043). AI improved the diagnostic performance of emergency physicians and radiologists with limited experience in pelvic fracture screening.
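
The abstract does not name the statistical test behind the paired with/without-AI comparisons; McNemar's test on the discordant reads is the standard choice for this design, so the sketch below assumes it (reader arrays and labels are hypothetical):

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

def sensitivity_change_with_ai(reads_alone: np.ndarray, reads_with_ai: np.ndarray,
                               truth: np.ndarray):
    """Sensitivity without and with AI on fracture-positive cases, plus McNemar's
    exact test on the discordant pairs (assumed analysis; not stated in the paper)."""
    pos = truth == 1
    alone, with_ai = reads_alone[pos], reads_with_ai[pos]
    table = np.array([
        [np.sum((alone == 1) & (with_ai == 1)), np.sum((alone == 1) & (with_ai == 0))],
        [np.sum((alone == 0) & (with_ai == 1)), np.sum((alone == 0) & (with_ai == 0))],
    ])
    test = mcnemar(table, exact=True)
    return alone.mean(), with_ai.mean(), test.pvalue
```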

CZT-based photon-counting-detector CT with deep-learning reconstruction: image quality and diagnostic confidence for lung tumor assessment.

Sasaki T, Kuno H, Nomura K, Muramatsu Y, Aokage K, Samejima J, Taki T, Goto E, Wakabayashi M, Furuya H, Taguchi H, Kobayashi T

PubMed · Jul 1 2025
This is a preliminary analysis of one of the secondary endpoints in a prospective study cohort. The aim of this study is to assess the image quality and diagnostic confidence for lung cancer of CT images generated with cadmium-zinc-telluride (CZT)-based photon-counting-detector CT (PCD-CT), comparing super-high-resolution (SHR) images with conventional normal-resolution (NR) CT images. Twenty-five patients (median age 75 years, interquartile range 66-78 years, 18 men and 7 women) with 29 lung nodules overall (including two patients with 4 and 2 nodules, respectively) were enrolled to undergo PCD-CT. Three types of images were reconstructed: a 512 × 512 matrix with adaptive iterative dose reduction 3D (AIDR 3D) as the NR-AIDR3D image, a 1024 × 1024 matrix with AIDR 3D as the SHR-AIDR3D image, and a 1024 × 1024 matrix with deep learning reconstruction (DLR) as the SHR-DLR image. For qualitative analysis, two radiologists evaluated the matched reconstructed series twice (NR-AIDR3D vs. SHR-AIDR3D and SHR-AIDR3D vs. SHR-DLR) and scored the presence of imaging findings, such as spiculation, lobulation, and the appearance of ground-glass opacity or air bronchiologram, as well as image quality and diagnostic confidence, using a 5-point Likert scale. For quantitative analysis, contrast-to-noise ratios (CNRs) of the three image types were compared. In the qualitative analysis, compared to NR-AIDR3D, SHR-AIDR3D yielded higher image quality and diagnostic confidence, except for image noise (all P < 0.01). In comparison with SHR-AIDR3D, SHR-DLR yielded higher image quality and diagnostic confidence (all P < 0.01). In the quantitative analysis, CNRs in the NR-AIDR3D and SHR-DLR groups were higher than those in the SHR-AIDR3D group (P = 0.003 and P < 0.001, respectively). In PCD-CT, SHR-DLR images provided the highest image quality and diagnostic confidence for lung tumor evaluation, followed by SHR-AIDR3D and NR-AIDR3D images. DLR demonstrated superior noise reduction compared to the other reconstruction methods.
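
CNR is not defined in the abstract; a standard ROI-based definition is the difference between lesion and background mean attenuation divided by the background noise. A sketch under that assumption:

```python
import numpy as np

def contrast_to_noise_ratio(lesion_roi: np.ndarray, background_roi: np.ndarray) -> float:
    """CNR = (mean HU of lesion - mean HU of background) / SD of background.
    A common ROI-based definition; the paper's exact formula is not given."""
    return (lesion_roi.mean() - background_roi.mean()) / background_roi.std(ddof=1)
```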

Deep learning algorithm enables automated Cobb angle measurements with high accuracy.

Hayashi D, Regnard NE, Ventre J, Marty V, Clovis L, Lim L, Nitche N, Zhang Z, Tournier A, Ducarouge A, Kompel AJ, Tannoury C, Guermazi A

PubMed · Jul 1 2025
To determine the accuracy of automatic Cobb angle measurements by deep learning (DL) on full spine radiographs. Full spine radiographs of patients aged > 2 years were screened using the radiology reports to identify radiographs suitable for Cobb angle measurement. Two senior musculoskeletal radiologists and one senior orthopedic surgeon independently annotated Cobb angles exceeding 7°, indicating the angle location as proximal thoracic (apices between T3 and T5), main thoracic (apices between T6 and T11), or thoracolumbar (apices between T12 and L4). If at least two readers agreed on the number and location of the angles, and the difference between comparable angles was < 8°, the ground truth was defined as the mean of their measurements. Otherwise, the radiographs were reviewed by the three annotators in consensus. The DL software (BoneMetrics, Gleamer) was evaluated against the manual annotation in terms of mean absolute error (MAE). A total of 345 patients were included in the study (age 33 ± 24 years, 221 women): 179 pediatric patients (< 22 years old) and 166 adult patients (22 to 85 years old). Fifty-three cases were reviewed in consensus. The MAE of the DL algorithm for the main curvature was 2.6° (95% CI [2.0; 3.3]). For the subgroup of pediatric patients, the MAE was 1.9° (95% CI [1.6; 2.2]) versus 3.3° (95% CI [2.2; 4.8]) for adults. The DL algorithm predicted the Cobb angle of scoliotic patients with high accuracy.
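
The abstract reports MAE with 95% confidence intervals but not how the intervals were derived; a percentile bootstrap over cases is one common approach, shown here as an assumed sketch:

```python
import numpy as np

def mae_with_bootstrap_ci(predicted: np.ndarray, reference: np.ndarray,
                          n_boot: int = 2000, seed: int = 0):
    """Mean absolute error between algorithm and ground-truth Cobb angles,
    with a percentile bootstrap 95% CI (assumed method; not specified in the paper)."""
    rng = np.random.default_rng(seed)
    errors = np.abs(predicted - reference)
    boot_means = np.array([rng.choice(errors, size=errors.size, replace=True).mean()
                           for _ in range(n_boot)])
    return errors.mean(), np.percentile(boot_means, [2.5, 97.5])
```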

Radiation Dose Reduction and Image Quality Improvement of UHR CT of the Neck by Novel Deep-learning Image Reconstruction.

Messerle DA, Grauhan NF, Leukert L, Dapper AK, Paul RH, Kronfeld A, Al-Nawas B, Krüger M, Brockmann MA, Othman AE, Altmann S

PubMed · Jun 30 2025
We evaluated a dedicated dose-reduced UHR-CT protocol for head and neck imaging, combined with a novel deep learning reconstruction algorithm, to assess its impact on image quality and radiation exposure. We retrospectively analyzed ninety-eight consecutive patients examined using a new body-weight-adapted protocol. Images were reconstructed using adaptive iterative dose reduction (AIDR) and the Advanced intelligent Clear-IQ Engine with an already established (DL-1) and a newly implemented (DL-2) reconstruction algorithm. An additional thirty patients were scanned without body-weight-adapted dose reduction (DL-1-SD). Three readers rated subjective image quality and the assessability of several anatomic regions. For objective image quality, signal-to-noise ratio and contrast-to-noise ratio were calculated for the temporalis and masseter muscles and the floor of the mouth. Radiation dose was evaluated by comparing computed tomography dose index (CTDIvol) values. The deep learning-based reconstruction algorithms significantly improved subjective image quality (diagnostic acceptability: DL-1 vs AIDR, OR 25.16 [6.30; 38.85], p < 0.001; DL-2 vs AIDR, OR 720.15 [410.14; >999.99], p < 0.001). Although higher doses (DL-1-SD) resulted in significantly enhanced image quality, DL-2 demonstrated significant superiority over all other techniques across all defined parameters (p < 0.001). Similar results were demonstrated for objective image quality, e.g. image noise (DL-1 vs AIDR, OR 19.0 [11.56; 31.24], p < 0.001; DL-2 vs AIDR, OR >999.9 [825.81; >999.99], p < 0.001). Using weight-adapted kV reduction, very low radiation doses could be achieved (CTDIvol: 7.4 ± 4.2 mGy). AI-based reconstruction algorithms in ultra-high resolution head and neck imaging provide excellent image quality while achieving very low radiation exposure.
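
The odds ratios for the ordinal Likert ratings are reported without the underlying model; a proportional-odds (ordinal logistic) regression is a typical choice for such data, sketched below with statsmodels under that assumption and with hypothetical scores:

```python
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Hypothetical paired reader data: 5-point diagnostic-acceptability scores for the
# same exams reconstructed with AIDR (is_dl = 0) and a deep learning algorithm (is_dl = 1).
df = pd.DataFrame({
    "score": [2, 3, 3, 4, 4, 5, 3, 4, 4, 5, 5, 5],
    "is_dl": [0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1],
})
df["score"] = pd.Categorical(df["score"], ordered=True)

# exp(coefficient) is the odds ratio of receiving a higher rating with the DL
# reconstruction (assumed analysis, not the authors' code; ignores reader clustering).
model = OrderedModel(df["score"], df[["is_dl"]], distr="logit")
result = model.fit(method="bfgs", disp=False)
print(np.exp(result.params["is_dl"]))
```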

Hybrid strategy of coronary atherosclerosis characterization with T1-weighted MRI and CT angiography to non-invasively predict periprocedural myocardial injury.

Matsumoto H, Higuchi S, Li D, Tanisawa H, Isodono K, Irie D, Ohya H, Kitamura R, Kaneko K, Nakazawa M, Suzuki K, Komori Y, Hondera T, Cadet S, Lee HL, Christodoulou AG, Slomka PJ, Dey D, Xie Y, Shinke T

PubMed · Jun 30 2025
Coronary computed tomography angiography (CCTA) and magnetic resonance imaging (MRI) can predict periprocedural myocardial injury (PMI) after percutaneous coronary intervention (PCI). We aimed to investigate whether integrating MRI with CCTA, using the latest imaging and quantitative techniques, improves PMI prediction, and to explore a potential hybrid CCTA-MRI strategy. In this prospective, multi-centre study, coronary atherosclerosis T1-weighted characterization MRI was performed in patients without prior revascularization who were scheduled for elective PCI of an atherosclerotic lesion detected on CCTA. PMI was defined as post-PCI troponin-T > 5× the upper reference limit. Using deep learning-enabled software, volumes of total plaque, calcified plaque, non-calcified plaque (NCP), and low-attenuation plaque (LAP; < 30 Hounsfield units) were quantified on CCTA. On non-contrast T1-weighted MRI, high-intensity plaque (HIP) volume was quantified from voxels with signal intensity exceeding that of the myocardium, weighted by their respective intensities. Of the 132 lesions from 120 patients, 43 resulted in PMI. In the CCTA-only strategy, LAP volume (P = 0.012) and NCP volume (P = 0.016) were independently associated with PMI. When MRI was integrated with CCTA, LAP volume (P = 0.029) and HIP volume (P = 0.024) emerged as independent predictors. MRI integration with CCTA achieved a higher C-statistic than CCTA alone (0.880 vs. 0.738; P = 0.004). A hybrid CCTA-MRI strategy, incorporating MRI for lesions with intermediate PMI risk based on CCTA, maintained superior diagnostic accuracy over the CCTA-only strategy (0.803 vs. 0.705; P = 0.028). Integrating MRI with CCTA improves PMI prediction compared with CCTA alone.
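
The intensity-weighted HIP volume is described only in words; one plausible reading is to sum, over voxels brighter than the myocardial reference, the voxel volume scaled by each voxel's signal ratio to myocardium. A sketch under that assumption (not the authors' published formula):

```python
import numpy as np

def hip_volume_mm3(plaque_si: np.ndarray, myocardium_si: float,
                   voxel_volume_mm3: float) -> float:
    """Intensity-weighted high-intensity plaque volume: voxels whose T1 signal exceeds
    the myocardial reference contribute their volume scaled by the signal ratio.
    This is an assumed interpretation of the abstract's wording."""
    ratios = plaque_si / myocardium_si
    return float(np.sum(ratios[ratios > 1.0]) * voxel_volume_mm3)
```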

Cardiac Measurement Calculation on Point-of-Care Ultrasonography with Artificial Intelligence

Mercaldo, S. F., Bizzo, B. C., Sadore, T., Halle, M. A., MacDonald, A. L., Newbury-Chaet, I., L'Italien, E., Schultz, A. S., Tam, V., Hegde, S. M., Mangion, J. R., Mehrotra, P., Zhao, Q., Wu, J., Hillis, J.

medRxiv preprint · Jun 28 2025
Introduction: Point-of-care ultrasonography (POCUS) enables clinicians to obtain critical diagnostic information at the bedside, especially in resource-limited settings. This information may include 2D cardiac quantitative data, although measuring the data manually can be time-consuming and dependent on user experience. Artificial intelligence (AI) can potentially automate this quantification. This study assessed the interpretation of key cardiac measurements on POCUS images by an AI-enabled device (AISAP Cardio V1.0). Methods: This retrospective diagnostic accuracy study included 200 POCUS cases from four hospitals (two in Israel and two in the United States). Each case was independently interpreted by three cardiologists and the device for seven measurements (left ventricular (LV) ejection fraction, inferior vena cava (IVC) maximal diameter, left atrial (LA) area, right atrial (RA) area, LV end-diastolic diameter, right ventricular (RV) fractional area change, and aortic root diameter). The endpoints were the root mean square error (RMSE) of the device compared to the average cardiologist measurement (LV ejection fraction and IVC maximal diameter were primary endpoints; the other measurements were secondary endpoints). Predefined passing criteria were based on the upper bounds of the RMSE 95% confidence intervals (CIs). The inter-cardiologist RMSE was also calculated for reference. Results: The device achieved the passing criteria for six of the seven measurements. While not achieving the passing criterion for RV fractional area change, it still achieved a better RMSE than the inter-cardiologist RMSE. The RMSE was 6.20% (95% CI: 5.57 to 6.83; inter-cardiologist RMSE 8.23%) for LV ejection fraction, 0.25 cm (95% CI: 0.20 to 0.29; 0.36 cm) for IVC maximal diameter, 2.39 cm² (95% CI: 1.96 to 2.82; 4.39 cm²) for LA area, 2.11 cm² (95% CI: 1.75 to 2.47; 3.49 cm²) for RA area, 5.06 mm (95% CI: 4.58 to 5.55; 4.67 mm) for LV end-diastolic diameter, 10.17% (95% CI: 9.01 to 11.33; 14.12%) for RV fractional area change, and 0.19 cm (95% CI: 0.16 to 0.21; 0.24 cm) for aortic root diameter. Discussion: The device accurately calculated these cardiac measurements, especially when benchmarked against inter-cardiologist variability. Its use could assist clinicians who utilize POCUS and support their clinical decision-making.
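
The benchmark is the RMSE of the device against the per-case mean of the three cardiologists, with an inter-cardiologist RMSE as reference; a sketch of both computations, assuming the straightforward definitions (the exact pairing scheme for the inter-reader figure is not stated):

```python
import numpy as np
from itertools import combinations

def device_rmse(device: np.ndarray, cardiologists: np.ndarray) -> float:
    """RMSE of the device against the per-case average of the cardiologist readings.
    `cardiologists` has shape (n_readers, n_cases)."""
    reference = cardiologists.mean(axis=0)
    return float(np.sqrt(np.mean((device - reference) ** 2)))

def inter_reader_rmse(cardiologists: np.ndarray) -> float:
    """Mean pairwise RMSE between cardiologists (assumed definition of the
    inter-cardiologist reference reported in the abstract)."""
    pair_rmse = [np.sqrt(np.mean((a - b) ** 2))
                 for a, b in combinations(cardiologists, 2)]
    return float(np.mean(pair_rmse))
```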

Non-contrast computed tomography radiomics model to predict benign and malignant thyroid nodules with lobe segmentation: A dual-center study.

Wang H, Wang X, Du YS, Wang Y, Bai ZJ, Wu D, Tang WL, Zeng HL, Tao J, He J

PubMed · Jun 28 2025
Accurate preoperative differentiation of benign and malignant thyroid nodules is critical for optimal patient management. However, conventional imaging modalities present inherent diagnostic limitations. To develop a non-contrast computed tomography-based machine learning model integrating radiomics and clinical features for preoperative thyroid nodule classification. This multicenter retrospective study enrolled 272 patients with thyroid nodules (376 thyroid lobes) from center A (May 2021-April 2024), using histopathological findings as the reference standard. The dataset was stratified into a training cohort (264 lobes) and an internal validation cohort (112 lobes). Additional prospective temporal (97 lobes, May-August 2024, center A) and external multicenter (81 lobes, center B) test cohorts were incorporated to enhance generalizability. Thyroid lobes were segmented along the isthmus midline, with segmentation reliability confirmed by an intraclass correlation coefficient ≥ 0.80. Radiomics feature selection was performed using Pearson correlation analysis followed by least absolute shrinkage and selection operator (LASSO) regression with 10-fold cross-validation. Seven machine learning algorithms were systematically evaluated, with model performance quantified through the area under the receiver operating characteristic curve (AUC), Brier score, decision curve analysis, and the DeLong test for comparison with radiologists' interpretations. Model interpretability was elucidated using SHapley Additive exPlanations (SHAP). The extreme gradient boosting model demonstrated robust diagnostic performance across all datasets, achieving AUCs of 0.899 [95% confidence interval (CI): 0.845-0.932] in the training cohort, 0.803 (95% CI: 0.715-0.890) in internal validation, 0.855 (95% CI: 0.775-0.935) in temporal testing, and 0.802 (95% CI: 0.664-0.939) in external testing. These results were significantly superior to the radiologists' assessments (AUCs: 0.596, 0.529, 0.558, and 0.538, respectively; P < 0.001 by DeLong test). SHAP analysis identified radiomic score, age, tumor size stratification, calcification status, and cystic components as key predictive features. The model exhibited excellent calibration (Brier scores: 0.125-0.144) and provided significant clinical net benefit at decision thresholds exceeding 20%, as evidenced by decision curve analysis. The non-contrast computed tomography-based radiomics-clinical fusion model enables robust preoperative thyroid nodule classification, with SHAP-driven interpretability enhancing its clinical applicability for personalized decision-making.
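
The described pipeline (Pearson correlation filtering, LASSO selection with 10-fold cross-validation, then an extreme gradient boosting classifier) maps naturally onto scikit-learn and xgboost. A condensed sketch assuming that standard tooling; thresholds and hyperparameters are illustrative, not the authors' settings:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LassoCV
from xgboost import XGBClassifier

def select_and_train(X: pd.DataFrame, y: np.ndarray, corr_threshold: float = 0.9):
    """Pearson-correlation filter, LASSO (10-fold CV) feature selection, then XGBoost.
    An assumed reconstruction of the abstract's pipeline, not the authors' code."""
    # 1) Drop one feature from each highly inter-correlated pair.
    corr = X.corr().abs()
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    X = X.drop(columns=[c for c in upper.columns if (upper[c] > corr_threshold).any()])
    # 2) Keep features with non-zero LASSO coefficients (cross-validated alpha).
    lasso = LassoCV(cv=10, random_state=0).fit(X, y)
    selected = X.columns[lasso.coef_ != 0]
    # 3) Fit a gradient-boosted classifier on the selected radiomic/clinical features.
    model = XGBClassifier(n_estimators=300, max_depth=3, learning_rate=0.05,
                          random_state=0)
    model.fit(X[selected], y)
    return model, list(selected)
```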