Dual-type deep learning-based image reconstruction for advanced denoising and super-resolution processing in head and neck T2-weighted imaging.

Fujima N, Shimizu Y, Ikebe Y, Kameda H, Harada T, Tsushima N, Kano S, Homma A, Kwon J, Yoneyama M, Kudo K

PubMed · Jul 1 2025
To assess the utility of dual-type deep learning (DL)-based image reconstruction, combining DL-based image denoising and super-resolution processing, by comparison with images reconstructed using the conventional method in head and neck fat-suppressed (Fs) T2-weighted imaging (T2WI). We retrospectively analyzed 43 patients who underwent head and neck Fs-T2WI for the assessment of head and neck lesions. All patients underwent two sets of Fs-T2WI scans with conventional- and DL-based reconstruction. The Fs-T2WI with DL-based reconstruction was acquired with a 30% reduction in spatial resolution along both the x- and y-axes, shortening the scan time. Qualitative and quantitative assessments were performed for both the conventional and DL-based reconstructions. For the qualitative assessment, we visually evaluated overall image quality, visibility of anatomical structures, degree of artifact(s), lesion conspicuity, and lesion edge sharpness on a five-point grading scale. For the quantitative assessment, we measured the signal-to-noise ratio (SNR) of the lesion and the contrast-to-noise ratio (CNR) between the lesion and the adjacent or nearest muscle. In the qualitative analysis, significant differences were observed between Fs-T2WI with conventional and DL-based reconstruction for all evaluation items except the degree of artifact(s) (p < 0.001). In the quantitative analysis, the SNR differed significantly between Fs-T2WI with conventional (21.4 ± 14.7) and DL-based reconstruction (26.2 ± 13.5) (p < 0.001). In the CNR assessment, the CNR between the lesion and the adjacent or nearest muscle in the DL-based Fs-T2WI (16.8 ± 11.6) was significantly higher than that in the conventional Fs-T2WI (14.2 ± 12.9) (p < 0.001). Dual-type DL-based image reconstruction, through effective denoising and super-resolution processing, provided high image quality in head and neck Fs-T2WI with a shortened scan time compared to the conventional imaging method.
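For readers less familiar with these metrics, the sketch below shows how a lesion SNR and a lesion-to-muscle CNR of the kind reported here are typically computed from ROI statistics. The ROI values, the use of a separate noise ROI, and the exact formulas are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def lesion_snr(lesion_roi: np.ndarray, noise_roi: np.ndarray) -> float:
    """SNR as mean lesion signal over the standard deviation of a background/noise ROI."""
    return float(lesion_roi.mean() / noise_roi.std())

def lesion_muscle_cnr(lesion_roi: np.ndarray, muscle_roi: np.ndarray, noise_roi: np.ndarray) -> float:
    """CNR as the lesion-minus-muscle signal difference, normalized by the noise level."""
    return float((lesion_roi.mean() - muscle_roi.mean()) / noise_roi.std())

# Hypothetical ROI pixel samples from a reconstructed Fs-T2WI (placeholder values, not study data)
rng = np.random.default_rng(0)
lesion = rng.normal(300, 20, 200)
muscle = rng.normal(150, 20, 200)
noise  = rng.normal(0, 10, 200)

print(f"SNR = {lesion_snr(lesion, noise):.1f}, CNR = {lesion_muscle_cnr(lesion, muscle, noise):.1f}")
```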

Artificial Intelligence Iterative Reconstruction for Dose Reduction in Pediatric Chest CT: A Clinical Assessment via Below 3 Years Patients With Congenital Heart Disease.

Zhang F, Peng L, Zhang G, Xie R, Sun M, Su T, Ge Y

PubMed · Jul 1 2025
To assess the performance of a newly introduced deep learning-based reconstruction algorithm, artificial intelligence iterative reconstruction (AIIR), in reducing the dose of pediatric chest CT, using image data from patients below 3 years of age with congenital heart disease (CHD). The lung images available from routine-dose cardiac CT angiography (CTA) in patients below 3 years of age with CHD were used as a reference for evaluating the paired low-dose chest CT. A total of 191 subjects were prospectively enrolled; the dose for chest CT was reduced to ~0.1 mSv while the cardiac CTA protocol was kept unchanged. The low-dose chest CT images, obtained with AIIR and hybrid iterative reconstruction (HIR), were compared in image quality (overall image quality and lung structure depiction) and diagnostic performance (severity assessment of pneumonia and airway stenosis). Compared with the reference, lung image quality was not significantly different on low-dose AIIR images (all P > 0.05) but was clearly inferior with HIR (all P < 0.05). Compared with HIR, low-dose AIIR images also achieved a pneumonia severity index closer to the reference (AIIR 4.32 ± 3.82 vs. Ref 4.37 ± 3.84, P > 0.05; HIR 5.12 ± 4.06 vs. Ref 4.37 ± 3.84, P < 0.05) and closer airway stenosis grading (consistently graded: AIIR 88.5% vs. HIR 56.5%). AIIR has the potential for large dose reductions in chest CT of patients below 3 years of age while preserving image quality and achieving diagnostic results nearly equivalent to routine-dose scans.
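As an illustration of the paired comparisons described above, the following sketch runs a Wilcoxon signed-rank test of severity indices against the routine-dose reference and computes the proportion of consistently graded airway stenoses. All values are hypothetical placeholders and scipy is assumed; the study's exact statistical procedure may differ.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical paired pneumonia severity indices (not the study data)
ref  = np.array([4.0, 3.5, 6.0, 2.0, 5.5, 4.5, 3.0, 7.0])
aiir = np.array([4.1, 3.4, 5.8, 2.2, 5.6, 4.4, 3.1, 6.9])
hir  = np.array([5.0, 4.2, 6.9, 2.8, 6.3, 5.4, 3.9, 7.8])

# Paired non-parametric comparison against the routine-dose reference
print("AIIR vs reference:", wilcoxon(aiir, ref))
print("HIR  vs reference:", wilcoxon(hir, ref))

# Consistency of airway stenosis grading (fraction of cases graded identically)
ref_grade  = np.array([1, 2, 0, 3, 1, 2, 0, 1])
aiir_grade = np.array([1, 2, 0, 3, 1, 1, 0, 1])
print("Consistently graded:", (ref_grade == aiir_grade).mean())
```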

Using deep feature distances for evaluating the perceptual quality of MR image reconstructions.

Adamson PM, Desai AD, Dominic J, Varma M, Bluethgen C, Wood JP, Syed AB, Boutin RD, Stevens KJ, Vasanawala S, Pauly JM, Gunel B, Chaudhari AS

PubMed · Jul 1 2025
Commonly used MR image quality (IQ) metrics have poor concordance with radiologist-perceived diagnostic IQ. Here, we develop and explore deep feature distances (DFDs), distances computed in a lower-dimensional feature space encoded by a convolutional neural network (CNN), as improved perceptual IQ metrics for MR image reconstruction. We further explore the impact of distribution shifts between the images used to train the DFD CNN encoder and those used for IQ metric evaluation. We compare commonly used IQ metrics (PSNR and SSIM) to two "out-of-domain" DFDs with encoders trained on natural images, an "in-domain" DFD trained on MR images alone, and two domain-adjacent DFDs trained on large medical imaging datasets. We additionally compare these with several state-of-the-art but less commonly reported IQ metrics: visual information fidelity (VIF), the noise quality metric (NQM), and the high-frequency error norm (HFEN). IQ metric performance is assessed via correlations with five expert radiologist reader scores of perceived diagnostic IQ of various accelerated MR image reconstructions. We characterize the behavior of these IQ metrics under common distortions expected during image acquisition, including their sensitivity to acquisition noise. All DFDs and HFEN correlate more strongly with radiologist-perceived diagnostic IQ than SSIM, PSNR, and the other state-of-the-art metrics, with correlations comparable to radiologist inter-reader variability. Surprisingly, out-of-domain DFDs perform comparably to in-domain and domain-adjacent DFDs. A suite of IQ metrics, including DFDs and HFEN, should be used alongside commonly reported IQ metrics for a more holistic evaluation of MR image reconstruction perceptual quality. We also observe that general vision encoders are capable of assessing visual IQ even for MR images.
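The sketch below illustrates the general idea of a deep feature distance and of correlating it with reader scores, using a torchvision VGG16 encoder as a stand-in for an "out-of-domain" natural-image encoder. The encoder choice, distance definition, and reader scores are assumptions for illustration, not the authors' exact setup.

```python
import torch
import torchvision.models as models
from scipy.stats import spearmanr

# Out-of-domain encoder trained on natural images (downloads ImageNet weights);
# one of several possible encoder choices, used here purely as an example
encoder = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()

@torch.no_grad()
def deep_feature_distance(recon: torch.Tensor, reference: torch.Tensor) -> float:
    """RMSE between CNN feature maps of a reconstruction and its reference.
    Inputs are (1, 3, H, W) tensors; grayscale MR slices would be repeated to 3 channels."""
    f_recon, f_ref = encoder(recon), encoder(reference)
    return (f_recon - f_ref).pow(2).mean().sqrt().item()

# Hypothetical example: 4 accelerated reconstructions scored by a reader (1-5 Likert)
reference = torch.rand(1, 3, 224, 224)
recons = [reference + 0.02 * k * torch.randn_like(reference) for k in range(1, 5)]
dfd_values = [deep_feature_distance(r, reference) for r in recons]
reader_scores = [5, 4, 3, 2]  # placeholder radiologist IQ scores, not study data

# Perceptual IQ metrics are assessed by correlation with reader scores (lower DFD = better IQ)
rho, p = spearmanr(dfd_values, reader_scores)
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```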

Deep learning-based time-of-flight (ToF) enhancement of non-ToF PET scans for different radiotracers.

Mehranian A, Wollenweber SD, Bradley KM, Fielding PA, Huellner M, Iagaru A, Dedja M, Colwell T, Kotasidis F, Johnsen R, Jansen FP, McGowan DR

PubMed · Jul 1 2025
To evaluate a deep learning-based time-of-flight (DLToF) model trained to enhance the image quality of non-ToF PET images for different tracers, reconstructed using the BSREM algorithm, towards that of ToF images. A 3D residual U-Net model was trained using 8 different tracers (FDG: 75%; non-FDG: 25%) from 11 sites in the US, Europe, and Asia. A total of 309 training and 33 validation datasets scanned on GE Discovery MI (DMI) ToF scanners were used to develop DLToF models of three strengths: low (L), medium (M), and high (H). The training and validation pairs consisted of target ToF and input non-ToF BSREM reconstructions using site-preferred regularisation parameters (beta values). The contrast and noise properties of each model were defined by adjusting the beta value of the target ToF images. A total of 60 DMI datasets, covering 4 tracers (¹⁸F-FDG, ¹⁸F-PSMA, ⁶⁸Ga-PSMA, ⁶⁸Ga-DOTATATE) with 15 exams each, were collected for testing and quantitative analysis of the models based on standardized uptake value (SUV) in regions of interest (ROI) placed in lesions, lungs, and liver. Each dataset includes 5 image series: ToF and non-ToF BSREM and three DLToF images. The image series (300 in total) were blind-scored on a 5-point Likert scale by 4 readers based on lesion detectability, diagnostic confidence, and image noise/quality. In lesion SUVmax quantification with respect to ToF BSREM, DLToF-H achieved the best results among the three models, reducing the non-ToF BSREM errors from -39% to -6% for ¹⁸F-FDG (38 lesions); from -42% to -7% for ¹⁸F-PSMA (35 lesions); from -34% to -4% for ⁶⁸Ga-PSMA (23 lesions); and from -34% to -12% for ⁶⁸Ga-DOTATATE (32 lesions). Quantification results in the liver and lung also showed ToF-like performance of the DLToF models. Clinical reader results showed that DLToF-H improved lesion detectability on average for all four radiotracers, whereas DLToF-L achieved the highest scores for image quality (noise level). DLToF-M, however, offered the best trade-off between lesion detection and noise level and hence achieved the highest average score for diagnostic confidence across radiotracers. This study demonstrated that the DLToF models are suitable for both FDG and non-FDG tracers and could be utilized on digital BGO PET/CT scanners to provide image quality and lesion detectability close to that of ToF.
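A small sketch of the lesion SUVmax percent-error metric used to summarize these results, computed relative to the ToF BSREM reference; all SUV values below are placeholders, not study data.

```python
import numpy as np

def percent_error(suv_max_test: np.ndarray, suv_max_tof: np.ndarray) -> float:
    """Mean percent difference in lesion SUVmax relative to the ToF BSREM reference."""
    return float(np.mean((suv_max_test - suv_max_tof) / suv_max_tof) * 100)

# Hypothetical lesion SUVmax values (not study data)
tof      = np.array([8.2, 5.1, 12.4, 6.7])
non_tof  = np.array([5.0, 3.1, 7.6, 4.1])   # underestimates the ToF reference
dl_tof_h = np.array([7.8, 4.9, 11.7, 6.3])  # much closer to the ToF reference

print(f"non-ToF BSREM error: {percent_error(non_tof, tof):+.0f}%")
print(f"DLToF-H error:       {percent_error(dl_tof_h, tof):+.0f}%")
```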

Intraindividual Comparison of Image Quality Between Low-Dose and Ultra-Low-Dose Abdominal CT With Deep Learning Reconstruction and Standard-Dose Abdominal CT Using Dual-Split Scan.

Lee TY, Yoon JH, Park JY, Park SH, Kim H, Lee CM, Choi Y, Lee JM

PubMed · Jul 1 2025
The aim of this study was to intraindividually compare the conspicuity of focal liver lesions (FLLs) between low- and ultra-low-dose computed tomography (CT) with deep learning reconstruction (DLR) and standard-dose CT with model-based iterative reconstruction (MBIR) from a single CT using dual-split scan in patients with suspected liver metastasis, via a noninferiority design. This prospective study enrolled participants who met the eligibility criteria at 2 tertiary hospitals in South Korea from June 2022 to January 2023. The criteria included (a) being aged between 20 and 85 years and (b) having suspected or known liver metastases. Dual-source CT scans were conducted, with the standard radiation dose divided in a 2:1 ratio between tubes A and B (67% and 33%, respectively). A tube voltage of 100 or 120 kVp was selected based on the participant's body mass index (<30 vs ≥30 kg/m²). For image reconstruction, MBIR was used for standard-dose (100%) images, whereas DLR was employed for both low-dose (67%) and ultra-low-dose (33%) images. Three radiologists independently evaluated FLL conspicuity, the probability of metastasis, and subjective image quality using a 5-point Likert scale, in addition to quantitative signal-to-noise and contrast-to-noise ratios. The noninferiority margins were set at -0.5 for conspicuity and -0.1 for detection. One hundred thirty-three participants (58 men; mean body mass index, 23.0 ± 3.4 kg/m²) were included in the analysis. The low- and ultra-low-dose scans had lower radiation doses than the standard-dose scan (median CT dose index volume: 3.75 and 1.87 vs 5.62 mGy, respectively, in the arterial phase; 3.89 and 1.95 vs 5.84 mGy in the portal venous phase; P < 0.001 for all). Median FLL conspicuity was lower in the low- and ultra-low-dose scans than in the standard-dose scan (3.0 [interquartile range, IQR: 2.0, 4.0] and 3.0 [IQR: 1.0, 4.0] vs 3.0 [IQR: 2.0, 4.0] in the arterial phase; 4.0 [IQR: 1.0, 5.0] and 3.0 [IQR: 1.0, 4.0] vs 4.0 [IQR: 2.0, 5.0] in the portal venous phase), yet within the noninferiority margin (P < 0.001 for all). FLL detection was also lower but remained within the margin (lesion detection rate: 0.772 [95% confidence interval, CI: 0.727, 0.812] and 0.754 [95% CI: 0.708, 0.795], respectively) compared with the standard dose (0.810 [95% CI: 0.770, 0.844]). Sensitivity for liver metastasis differed between the standard-dose (80.6% [95% CI: 76.0, 84.5]) and the low- and ultra-low-dose scans (75.7% [95% CI: 70.2, 80.5] and 73.7% [95% CI: 68.3, 78.5], respectively; P < 0.001 for both), whereas specificity was similar (P > 0.05). Low- and ultra-low-dose CT with DLR showed noninferior FLL conspicuity and detection compared with standard-dose CT with MBIR. Caution is needed due to a potential decrease in sensitivity for metastasis (clinicaltrials.gov/NCT05324046).
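The following sketch illustrates a noninferiority check of the kind described, comparing the mean paired difference in conspicuity scores against the prespecified -0.5 margin using a bootstrap confidence interval. The data and the bootstrap approach are illustrative assumptions; the trial's exact statistical procedure may differ.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical paired 5-point conspicuity scores (not the trial data)
standard = rng.integers(2, 6, size=100).astype(float)
low_dose = np.clip(standard + rng.choice([-1, 0, 0, 0], size=100), 1, 5)

margin = -0.5                  # prespecified noninferiority margin for conspicuity
diffs = low_dose - standard    # low-dose minus standard-dose, per lesion

# Bootstrap 95% CI for the mean paired difference
boot = [rng.choice(diffs, size=diffs.size, replace=True).mean() for _ in range(5000)]
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"mean difference = {diffs.mean():.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
print("noninferior" if lo > margin else "noninferiority not shown")
```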

Automated vs manual cardiac MRI planning: a single-center prospective evaluation of reliability and scan times.

Glessgen C, Crowe LA, Wetzl J, Schmidt M, Yoon SS, Vallée JP, Deux JF

PubMed · Jul 1 2025
Evaluating the impact of an AI-based automated cardiac MRI (CMR) planning software on procedure errors and scan times compared to manual planning alone. Consecutive patients undergoing non-stress CMR were prospectively enrolled at a single center (August 2023 to February 2024) and randomized to manual or automated scan execution using prototype software. Patients with pacemakers, targeted indications, or inability to consent were excluded. All patients underwent the same CMR protocol with contrast, in breath-hold (BH) or free-breathing (FB) mode. Supervising radiologists recorded procedure errors (plane prescription, forgotten views, incorrect propagation of cardiac planes, and field-of-view mismanagement). Scan times and the idle phase (non-acquisition portion) were computed from scanner logs. Most data were non-normally distributed and were compared using non-parametric tests. Eighty-two patients (mean age, 51.6 years ± 17.5; 56 men) were included. Forty-four patients underwent automated and 38 manual CMR examinations. The mean rate of procedure errors was significantly lower (p = 0.01) in the automated group (0.45) than in the manual group (1.13). The rate of error-free examinations was higher (p = 0.03) in the automated group (31/44; 70.5%) than in the manual group (17/38; 44.7%). Automated studies were shorter than manual studies in FB (30.3 vs 36.5 min, p < 0.001) but had similar durations in BH (42.0 vs 43.5 min, p = 0.42). The idle phase was shorter in automated studies for both FB and BH strategies (both p < 0.001). AI-based automated software performed CMR at a clinical level, with fewer planning errors and improved efficiency compared to manual planning. Question: What is the impact of an AI-based automated CMR planning software on procedure errors and scan times compared to manual planning alone? Findings: Software-driven examinations were more reliable (71% error-free) than human-planned ones (45% error-free) and showed improved efficiency with reduced idle time. Clinical relevance: CMR examinations require extensive technologist training and continuous attention, and involve many planning steps. Fully automated software reliably acquired non-stress CMR, potentially reducing the risk of mistakes and increasing data homogeneity.
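As an illustration of the non-parametric group comparisons reported here, the sketch below compares hypothetical per-exam error counts and scan times between automated and manual groups with Mann-Whitney U tests; the simulated values merely echo the summary statistics above and are not the study data.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(2)

# Hypothetical per-exam procedure-error counts and free-breathing scan times (minutes),
# simulated only to echo the group sizes and summary statistics reported above
errors_auto   = rng.poisson(0.45, 44)
errors_manual = rng.poisson(1.13, 38)
time_auto     = rng.normal(30.3, 4.0, 44)
time_manual   = rng.normal(36.5, 4.0, 38)

# Non-normally distributed data, hence non-parametric group comparisons
print("errors:   ", mannwhitneyu(errors_auto, errors_manual))
print("scan time:", mannwhitneyu(time_auto, time_manual))

# Rate of error-free examinations in each group
print("error-free (auto):  ", (errors_auto == 0).mean())
print("error-free (manual):", (errors_manual == 0).mean())
```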

Learning-based motion artifact correction in the Z-spectral domain for chemical exchange saturation transfer MRI.

Singh M, Mahmud SZ, Yedavalli V, Zhou J, Kamson DO, van Zijl P, Heo HY

PubMed · Jul 1 2025
To develop and evaluate a physics-driven, saturation contrast-aware, deep-learning-based framework for motion artifact correction in CEST MRI. A neural network was designed to correct motion artifacts directly in the Z-spectrum frequency (Ω) domain rather than in the image spatial domain. Motion artifacts were simulated by modeling 3D rigid-body motion and readout-related motion during k-space sampling. A saturation-contrast-specific loss function was added to preserve amide proton transfer (APT) contrast and to enforce image alignment between motion-corrected and ground-truth images. The proposed neural network was evaluated on simulation data and demonstrated in healthy volunteers and brain tumor patients. The experimental results showed the effectiveness of motion artifact correction in the Z-spectrum frequency domain (MOCOΩ) compared with correction in the image spatial domain. In addition, a temporal convolution applied to the dynamic saturation image series was able to leverage motion artifacts to improve reconstruction results as a denoising process. MOCOΩ outperformed existing motion correction techniques in terms of image quality and computational efficiency. At 3 T, human experiments showed that, after motion artifact correction, the root mean squared error (RMSE) of APT images decreased from 4.7% to 2.1% at 1 μT and from 6.2% to 3.5% at 1.5 μT in the case of "moderate" motion, and from 8.7% to 2.8% at 1 μT and from 12.7% to 4.5% at 1.5 μT in the case of "severe" motion. MOCOΩ could effectively correct motion artifacts in CEST MRI without compromising saturation transfer contrast.
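A minimal sketch of the APT-image RMSE (in percent) used above to quantify motion-correction performance; the maps, mask, and noise levels below are synthetic placeholders, not patient data.

```python
import numpy as np

def apt_rmse_percent(apt_test: np.ndarray, apt_ref: np.ndarray, mask: np.ndarray) -> float:
    """Root mean squared error between APT maps (values in % of water signal), within a brain mask."""
    diff = (apt_test - apt_ref)[mask]
    return float(np.sqrt(np.mean(diff ** 2)))

# Hypothetical APT maps (values in %, synthetic)
rng = np.random.default_rng(3)
ref = rng.normal(2.0, 0.5, (64, 64))
motion_corrupted = ref + rng.normal(0, 4.5, (64, 64))
motion_corrected = ref + rng.normal(0, 2.0, (64, 64))
mask = np.ones((64, 64), dtype=bool)

print(f"before correction: {apt_rmse_percent(motion_corrupted, ref, mask):.1f}%")
print(f"after correction:  {apt_rmse_percent(motion_corrected, ref, mask):.1f}%")
```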

Generalizable, sequence-invariant deep learning image reconstruction for subspace-constrained quantitative MRI.

Hu Z, Chen Z, Cao T, Lee HL, Xie Y, Li D, Christodoulou AG

PubMed · Jul 1 2025
To develop a deep subspace learning network that can function across different pulse sequences. A contrast-invariant, component-by-component (CBC) network structure was developed and compared against the previously reported spatiotemporal multicomponent (MC) structure for reconstructing MR Multitasking images. A total of 130, 167, and 16 subjects were imaged using T1, T1-T2, and T1-T2-T2*-fat fraction (FF) mapping sequences, respectively. We compared CBC and MC networks in matched-sequence experiments (same sequence for training and testing), then examined their cross-sequence performance and generalizability in unmatched-sequence experiments (different sequences for training and testing). A "universal" CBC network was also evaluated using mixed-sequence training (combining data from all three sequences). Evaluation metrics included image normalized root mean squared error and Bland-Altman analyses of end-diastolic maps, both versus iteratively reconstructed references. The proposed CBC showed significantly better normalized root mean squared error than MC in both matched-sequence and unmatched-sequence experiments (p < 0.001), fewer structural details in quantitative error maps, and tighter limits of agreement. CBC was more generalizable than MC (smaller performance loss; p = 0.006 for T1 and p < 0.001 for T1-T2 from matched-sequence to unmatched-sequence testing) and additionally allowed training of a single universal network to reconstruct images from any of the three pulse sequences. The mixed-sequence CBC network performed similarly to the matched-sequence CBC in T1 (p = 0.178) and T1-T2 (p = 0.121), where training data were plentiful, and performed better in T1-T2-T2*-FF (p < 0.001), where training data were scarce. Contrast-invariant learning of spatial features rather than spatiotemporal features improves performance and generalizability, addresses data scarcity, and offers a pathway to universal supervised deep subspace learning.
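The two evaluation metrics named here can be sketched as follows: image NRMSE against an iteratively reconstructed reference, and Bland-Altman bias with 95% limits of agreement for quantitative map values. The data are hypothetical T1 values, and the exact normalization used by the authors may differ.

```python
import numpy as np

def nrmse(recon: np.ndarray, reference: np.ndarray) -> float:
    """Normalized root mean squared error versus the iteratively reconstructed reference."""
    return float(np.linalg.norm(recon - reference) / np.linalg.norm(reference))

def bland_altman(a: np.ndarray, b: np.ndarray):
    """Bias and 95% limits of agreement between two sets of quantitative map values."""
    diff = a - b
    bias, sd = diff.mean(), diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical end-diastolic T1 values (ms) from network and reference reconstructions
rng = np.random.default_rng(4)
ref_t1 = rng.normal(1200, 100, 50)
net_t1 = ref_t1 + rng.normal(0, 30, 50)

print("NRMSE:", nrmse(net_t1, ref_t1))
print("Bland-Altman bias, limits of agreement:", bland_altman(net_t1, ref_t1))
```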

Characterization of hepatocellular carcinoma with CT with deep learning reconstruction compared with iterative reconstruction and 3-Tesla MRI.

Malthiery C, Hossu G, Ayav A, Laurent V

PubMed · Jul 1 2025
This study compared the characteristics of lesions suspicious for hepatocellular carcinoma (HCC) and their LI-RADS classifications on adaptive statistical iterative reconstruction (ASIR) and deep learning reconstruction (DLR) CT images with those on MR images, along with radiologist confidence. This prospective single-center trial included patients who underwent four-phase liver CT and multiphasic contrast-enhanced MRI within 7 days, from February to August 2023. Lesion characteristics, LI-RADS classifications, and confidence scores assigned by two radiologists for the ASIR, DLR, and MRI techniques were compared. Patients with at least one lesion were included in the HCC group; all others were included in the non-HCC group. Because MRI is the technique with the best sensitivity, the concordance of lesion characteristics and LI-RADS classifications was calculated with weighted kappa between ASIR and MRI and between DLR and MRI. Confidence scores are expressed as means and standard deviations. Eighty-nine patients were enrolled, 52 in the HCC group (67 years ± 9 [mean ± SD], 46 men) and 37 in the non-HCC group (68 years ± 9, 33 men). The concordance coefficient for the LI-RADS classification between ASIR and MRI was 0.64 [0.52; 0.76], showing good agreement; that between DLR and MRI was 0.83 [0.73; 0.92], showing excellent agreement. Diagnostic confidence with ASIR was 3.31 ± 0.95 (mean ± SD) and 3.0 ± 1.11 for the two readers, with DLR it was 3.9 ± 0.88 and 4.11 ± 0.75, and with MRI it was 4.46 ± 0.80 and 4.57 ± 0.80. DLR provided excellent LI-RADS classification concordance with MRI, whereas ASIR provided good concordance. The radiologists' confidence was greater with DLR than with ASIR but remained highest with MRI. Question: Does the use of deep learning reconstruction (DLR) improve LI-RADS classification of lesions suspicious for hepatocellular carcinoma compared to adaptive statistical iterative reconstruction (ASIR)? Findings: DLR demonstrated superior concordance of LI-RADS classification with MRI compared to ASIR. It also provided greater diagnostic confidence than ASIR. Clinical relevance: The use of DLR enhances radiologists' ability to visualize and characterize lesions suspected of being HCC, as well as their LI-RADS classification. Moreover, it also boosts radiologists' confidence in interpreting these images.
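A short sketch of the weighted-kappa concordance analysis between CT-based and MRI-based LI-RADS categories, assuming scikit-learn and hypothetical ordinal labels; the linear weighting scheme is an assumption, not stated in the abstract.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical LI-RADS categories (ordinal: 1-5) assigned per lesion on MRI and on CT
mri  = [5, 4, 5, 3, 4, 5, 2, 4, 5, 3]
asir = [4, 4, 5, 2, 3, 5, 2, 3, 4, 3]
dlr  = [5, 4, 5, 3, 4, 5, 2, 4, 4, 3]

# Weighted kappa accounts for the ordinal nature of the LI-RADS scale
print("ASIR vs MRI:", cohen_kappa_score(asir, mri, weights="linear"))
print("DLR  vs MRI:", cohen_kappa_score(dlr, mri, weights="linear"))
```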

Feasibility/clinical utility of half-Fourier single-shot turbo spin echo imaging combined with deep learning reconstruction in gynecologic magnetic resonance imaging.

Kirita M, Himoto Y, Kurata Y, Kido A, Fujimoto K, Abe H, Matsumoto Y, Harada K, Morita S, Yamaguchi K, Nickel D, Mandai M, Nakamoto Y

PubMed · Jul 1 2025
When antispasmodics are unavailable, the periodically rotated overlapping parallel lines with enhanced reconstruction (PROPELLER; called BLADE by Siemens Healthineers) or half-Fourier single-shot turbo spin echo (HASTE) sequence is clinically used in gynecologic MRI. However, their image quality is limited compared to turbo spin echo (TSE) with antispasmodics. Even with antispasmodics, TSE can be affected by artifacts, necessitating a rapid backup sequence. This study aimed to investigate the utility of HASTE with deep learning reconstruction and variable flip angle evolution (iHASTE) compared to conventional sequences with and without antispasmodics. This retrospective study included MRI scans without antispasmodics from 79 patients who underwent iHASTE, HASTE, and BLADE, and MRI scans with antispasmodics from 79 case-control-matched patients who underwent TSE. Three radiologists qualitatively evaluated image quality, robustness to artifacts, tissue contrast, and uterine lesion margins. Tissue contrast was also quantitatively evaluated. Quantitative evaluations revealed that iHASTE exhibited significantly superior tissue contrast to HASTE and BLADE. Qualitative evaluations indicated that iHASTE outperformed HASTE in overall quality. Two of three radiologists judged iHASTE to be significantly superior to BLADE, while two of three judged TSE to be significantly superior to iHASTE. iHASTE demonstrated greater robustness to artifacts than both BLADE and TSE. Lesion margins in iHASTE had lower scores than in BLADE and TSE. iHASTE is a viable clinical option for patients undergoing gynecologic MRI without antispasmodics. iHASTE may also be considered a useful add-on sequence in patients undergoing MRI with antispasmodics.
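As a rough illustration of how tissue contrast might be quantified between two ROIs, the sketch below uses a simple normalized signal-difference ratio. The formula and ROI choices (endometrium vs. myometrium) are assumptions for illustration, not the paper's definition.

```python
import numpy as np

def tissue_contrast(roi_a: np.ndarray, roi_b: np.ndarray) -> float:
    """Normalized contrast between two tissue ROIs: (Sa - Sb) / (Sa + Sb)."""
    sa, sb = roi_a.mean(), roi_b.mean()
    return float((sa - sb) / (sa + sb))

# Hypothetical ROI signal samples (e.g., endometrium vs. myometrium), not study data
rng = np.random.default_rng(5)
endometrium = rng.normal(420, 30, 100)
myometrium  = rng.normal(250, 30, 100)

print(f"tissue contrast = {tissue_contrast(endometrium, myometrium):.2f}")
```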