Radiation and contrast dose reduction in coronary CT angiography for slender patients with 70 kV tube voltage and deep learning image reconstruction.

Ren Z, Shen L, Zhang X, He T, Yu N, Zhang M

PubMed · Jul 1, 2025
To evaluate the radiation and contrast dose reduction potential of combining 70 kV tube voltage with deep learning image reconstruction (DLIR) in coronary computed tomography angiography (CCTA) for slender patients with a body mass index (BMI) ≤ 25 kg/m². Sixty patients referred for CCTA were randomly divided into two groups: group A with 120 kV and a contrast agent dose of 0.8 mL/kg, and group B with 70 kV and a contrast agent dose of 0.5 mL/kg. Group A was reconstructed with adaptive statistical iterative reconstruction-V (ASIR-V) at a 50% strength level (50% ASIR-V), while group B was reconstructed with 50% ASIR-V, DLIR at low level (DLIR-L), DLIR at medium level (DLIR-M), and DLIR at high level (DLIR-H). The CT attenuation and SD values of the coronary arteries and pericardial fat were measured, and the signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were calculated. Image quality was subjectively evaluated by two radiologists using a five-point scoring system. The effective radiation dose (ED) and contrast dose were calculated and compared. Group B significantly reduced the radiation dose by 75.6% and the contrast dose by 32.9% compared with group A. Group B exhibited higher coronary CT attenuation values than group A, and DLIR-L, DLIR-M, and DLIR-H in group B provided higher SNR, CNR, and subjective scores, with DLIR-H yielding the lowest noise and the highest subjective scores. Combining 70 kV with DLIR significantly reduces radiation and contrast dose while improving image quality in CCTA for slender patients, with DLIR-H providing the greatest image quality improvement; 70 kV with DLIR-H may therefore be used in CCTA for slender patients to substantially reduce radiation and contrast dose while improving image quality.
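
As a hedged illustration of the SNR and CNR metrics reported above, the sketch below applies common ROI-based definitions (mean attenuation over noise SD; vessel-fat contrast over the fat SD) to simulated pixel values; the authors' exact ROI placement and formulas are not given here and are assumed.

```python
import numpy as np

def snr_cnr(vessel_hu, fat_hu):
    """Illustrative SNR/CNR from ROI samples (HU arrays); definitions assumed,
    not taken verbatim from the paper."""
    ct_vessel, sd_vessel = vessel_hu.mean(), vessel_hu.std(ddof=1)
    ct_fat, sd_fat = fat_hu.mean(), fat_hu.std(ddof=1)
    snr = ct_vessel / sd_vessel                # signal-to-noise ratio
    cnr = (ct_vessel - ct_fat) / sd_fat        # contrast-to-noise ratio
    return snr, cnr

# toy example with simulated ROI pixel values
rng = np.random.default_rng(0)
vessel = rng.normal(450, 25, 200)   # enhanced coronary lumen, HU
fat = rng.normal(-90, 15, 200)      # pericardial fat, HU
print(snr_cnr(vessel, fat))
```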

Cycle-conditional diffusion model for noise correction of diffusion-weighted images using unpaired data.

Zhu P, Liu C, Fu Y, Chen N, Qiu A

PubMed · Jul 1, 2025
Diffusion-weighted imaging (DWI) is a key modality for studying brain microstructure, but its signals are highly susceptible to noise due to the thermal motion of water molecules and interactions with tissue microarchitecture, leading to significant signal attenuation and a low signal-to-noise ratio (SNR). In this paper, we propose a novel approach, a Cycle-Conditional Diffusion Model (Cycle-CDM) trained on unpaired data, aimed at improving DWI quality and reliability through noise correction. Cycle-CDM leverages a cycle-consistent translation architecture to bridge the domain gap between noise-contaminated and noise-free DWIs, enabling the restoration of high-quality images without requiring paired datasets. By utilizing two conditional diffusion models, Cycle-CDM establishes data interrelationships between the two types of DWIs, while incorporating synthesized anatomical priors from the cycle translation process to guide noise removal. In addition, we introduce specific constraints to preserve anatomical fidelity, allowing Cycle-CDM to effectively learn the underlying noise distribution and achieve accurate denoising. We conducted experiments on simulated datasets as well as clinically relevant datasets from children and adolescents. Our results demonstrate that Cycle-CDM outperforms comparative methods, such as U-Net, CycleGAN, Pix2Pix, MUNIT, and MPPCA, in terms of noise correction performance. We also demonstrated that Cycle-CDM generalizes to DWIs affected by head motion and acquired on different MRI scanners. Importantly, the denoised DWI data produced by Cycle-CDM accurately preserve the underlying tissue microstructure, substantially improving their medical applicability.
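
The sketch below illustrates only the unpaired cycle-consistency idea behind Cycle-CDM, using two small plain convolutional networks in place of the conditional diffusion models and anatomical priors; the module names and losses are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

# Minimal sketch of an unpaired cycle-consistency constraint between two
# conditional networks (noisy -> clean, clean -> noisy). Cycle-CDM itself
# uses conditional diffusion models and anatomical priors; this only
# illustrates the cycle idea with hypothetical stand-in modules.

class SmallNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1))
    def forward(self, x):
        return self.net(x)

denoise = SmallNet()   # noisy DWI  -> clean estimate
renoise = SmallNet()   # clean DWI  -> noisy estimate
l1 = nn.L1Loss()

noisy = torch.randn(2, 1, 64, 64)   # unpaired noisy batch (toy data)
clean = torch.randn(2, 1, 64, 64)   # unpaired clean batch (toy data)

# forward and backward cycles enforce consistency without paired samples;
# in practice an optimizer step would follow the backward pass
cycle_noisy = renoise(denoise(noisy))
cycle_clean = denoise(renoise(clean))
loss = l1(cycle_noisy, noisy) + l1(cycle_clean, clean)
loss.backward()
```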

One for multiple: Physics-informed synthetic data boosts generalizable deep learning for fast MRI reconstruction.

Wang Z, Yu X, Wang C, Chen W, Wang J, Chu YH, Sun H, Li R, Li P, Yang F, Han H, Kang T, Lin J, Yang C, Chang S, Shi Z, Hua S, Li Y, Hu J, Zhu L, Zhou J, Lin M, Guo J, Cai C, Chen Z, Guo D, Yang G, Qu X

PubMed · Jul 1, 2025
Magnetic resonance imaging (MRI) is a widely used radiological modality renowned for its radiation-free, comprehensive insights into the human body, facilitating medical diagnoses. However, the drawback of prolonged scan times hinders its accessibility. K-space undersampling offers a solution, yet the resultant artifacts necessitate meticulous removal during image reconstruction. Although deep learning (DL) has proven effective for fast MRI image reconstruction, its broader applicability across various imaging scenarios has been constrained. Challenges include the high cost and privacy restrictions associated with acquiring large-scale, diverse training data, coupled with the inherent difficulty of addressing mismatches between training and target data in existing DL methodologies. Here, we present a novel Physics-Informed Synthetic data learning Framework for fast MRI, called PISF. PISF marks a breakthrough by enabling generalizable DL for multi-scenario MRI reconstruction through a single trained model. Our approach separates the reconstruction of a 2D image into many basic 1D problems, commencing with 1D data synthesis to facilitate generalization. We demonstrate that training DL models on synthetic data, coupled with enhanced learning techniques, yields in vivo MRI reconstructions comparable to or surpassing those of models trained on matched realistic datasets, reducing the reliance on real-world MRI data by up to 96%. With a single trained model, PISF supports high-quality reconstruction under 4 sampling patterns, 5 anatomies, 6 contrasts, 5 vendors, and 7 centers, exhibiting remarkable generalizability. Its adaptability to 2 neuro and 2 cardiovascular patient populations has been validated through evaluations by 10 experienced medical professionals. In summary, PISF presents a feasible and cost-effective way to significantly boost the widespread adoption of DL in various fast MRI applications.
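
To make the physics-informed synthetic data idea concrete, here is a minimal sketch of generating 1D training signals as sums of damped exponentials and undersampling them; the parameter ranges, sampling scheme, and function names are illustrative assumptions, not the PISF implementation.

```python
import numpy as np

# Sketch in the spirit of PISF: fully sampled 1D signals are modeled as sums
# of damped complex exponentials, then undersampled to form training pairs.

def synth_exponential_signal(n=256, max_peaks=5, rng=np.random.default_rng()):
    t = np.arange(n)
    k = rng.integers(1, max_peaks + 1)
    amp = rng.uniform(0.1, 1.0, k)
    freq = rng.uniform(-0.5, 0.5, k)       # normalized frequency
    damp = rng.uniform(0.001, 0.05, k)     # decay rate
    sig = (amp[:, None] * np.exp((1j * 2 * np.pi * freq[:, None]
                                  - damp[:, None]) * t)).sum(axis=0)
    return sig

def undersample(sig, rate=0.25, rng=np.random.default_rng()):
    mask = rng.random(sig.size) < rate     # random 1D sampling mask
    return sig * mask, mask

full = synth_exponential_signal()
under, mask = undersample(full)
print(full.shape, int(mask.sum()), "samples kept")
```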

Assessment of AI-accelerated T2-weighted brain MRI, based on clinical ratings and image quality evaluation.

Nonninger JN, Kienast P, Pogledic I, Mallouhi A, Barkhof F, Trattnig S, Robinson SD, Kasprian G, Haider L

PubMed · Jul 1, 2025
To compare clinical ratings and signal-to-noise ratio (SNR) measures of a commercially available deep learning-based MRI reconstruction method (T2(DR)) against conventional T2-weighted turbo spin-echo brain MRI (T2(CN)). 100 consecutive patients with various neurological conditions underwent both T2(DR) and T2(CN) on a Siemens Vida 3 T scanner with a 64-channel head coil in the same examination. Acquisition times were 3.33 min for T2(CN) and 1.04 min for T2(DR). Four neuroradiologists evaluated overall image quality (OIQ), diagnostic safety (DS), and image artifacts (IA), blinded to the acquisition mode. SNR and SNReff (SNR adjusted for acquisition time) were calculated for air, grey and white matter, and cerebrospinal fluid. The mean patient age was 43.6 years (SD 20.3), with 54 females. The distribution of non-diagnostic ratings did not differ significantly between T2(CN) and T2(DR) (IA: p = 0.108; OIQ: p = 0.700; DS: p = 0.652). However, when considering the full spectrum of ratings, significant differences favouring T2(CN) emerged in OIQ (p = 0.003) and IA (p < 0.001). T2(CN) had higher SNR (157.9, SD 123.4) than T2(DR) (112.8, SD 82.7), p < 0.001, but T2(DR) demonstrated superior SNReff (14.1, SD 10.3) compared to T2(CN) (10.8, SD 8.5), p < 0.001. Our results suggest that while T2(DR) may be clinically applicable in a diagnostic setting, it does not fully match the quality of high-standard conventional T2(CN) acquisitions.
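
A quick hedged check of the efficiency-adjusted SNR figures, assuming the common convention SNReff = SNR / sqrt(acquisition time in seconds); the paper may normalize differently, so the near-agreement below is only indicative.

```python
import math

# Illustrative recomputation of SNReff under an assumed normalization.

def snr_eff(snr, t_acq_min):
    return snr / math.sqrt(t_acq_min * 60.0)

print(round(snr_eff(157.9, 3.33), 1))   # T2(CN): ~11.2 (reported 10.8)
print(round(snr_eff(112.8, 1.04), 1))   # T2(DR): ~14.3 (reported 14.1)
```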

Impact of CT reconstruction algorithms on pericoronary and epicardial adipose tissue attenuation.

Xiao H, Wang X, Yang P, Wang L, Xi J, Xu J

PubMed · Jul 1, 2025
This study aims to investigate the impact of adaptive statistical iterative reconstruction-Veo (ASIR-V) and deep learning image reconstruction (DLIR) algorithms on the quantification of pericoronary adipose tissue (PCAT) and epicardial adipose tissue (EAT), and to explore the feasibility of correcting these effects through fat threshold adjustment. A retrospective analysis was conducted on the imaging data of 134 patients who underwent coronary CT angiography (CCTA) between December 2023 and January 2024. These data were reconstructed into seven datasets using filtered back projection (FBP), ASIR-V at three intensities (ASIR-V 30%, ASIR-V 50%, ASIR-V 70%), and DLIR at three intensities (DLIR-L, DLIR-M, DLIR-H). Repeated-measures ANOVA was used to compare differences in fat, PCAT, and EAT attenuation values among the reconstruction algorithms, and Bland-Altman plots were used to analyze the agreement between ASIR-V or DLIR and FBP in PCAT attenuation values. Compared to FBP, ASIR-V 30%, ASIR-V 50%, ASIR-V 70%, DLIR-L, DLIR-M, and DLIR-H significantly increased fat attenuation values (-103.91 ± 12.99 HU, -102.53 ± 12.68 HU, -101.14 ± 12.78 HU, -101.81 ± 12.41 HU, -100.87 ± 12.25 HU, -99.08 ± 12.00 HU vs. -105.95 ± 13.01 HU, all p < 0.001). When the fat threshold was set at -190 to -30 HU, ASIR-V and DLIR significantly increased PCAT and EAT attenuation values compared to FBP (all p < 0.05), with these values increasing as the reconstruction intensity level increased. After correction with a fat threshold of -200 to -35 HU for ASIR-V 30%, -200 to -40 HU for ASIR-V 50% and DLIR-L, and -200 to -45 HU for ASIR-V 70%, DLIR-M, and DLIR-H, the mean differences in PCAT attenuation values between ASIR-V or DLIR and FBP decreased (-0.03 to 1.68 HU vs. 2.35 to 8.69 HU), and no significant difference in PCAT attenuation values was found between FBP and ASIR-V 30%, ASIR-V 50%, ASIR-V 70%, DLIR-L, or DLIR-M (all p > 0.05). Compared to FBP, ASIR-V and DLIR increase PCAT and EAT attenuation values; adjusting the fat threshold can mitigate this impact on PCAT attenuation values.
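
The sketch below illustrates why the fat threshold matters: mean fat attenuation is computed only over voxels inside a HU window, so a reconstruction that shifts the voxel distribution also shifts the measured mean, and moving the window compensates. The voxel values are simulated, not study data, and the window choices echo the ranges quoted above only as an assumption.

```python
import numpy as np

def mean_fat_attenuation(hu, lo=-190, hi=-30):
    """Mean attenuation over voxels inside the fat HU window."""
    fat = hu[(hu >= lo) & (hu <= hi)]
    return fat.mean()

rng = np.random.default_rng(1)
pcat_fbp = rng.normal(-106, 13, 5000)              # FBP-like voxel distribution
pcat_dlir = rng.normal(-99, 12, 5000)              # DLIR-H-like (shifted) voxels

print(mean_fat_attenuation(pcat_fbp))              # default window, FBP
print(mean_fat_attenuation(pcat_dlir))             # default window, higher mean
print(mean_fat_attenuation(pcat_dlir, -200, -45))  # adjusted window compensates
```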

A multi-task neural network for full waveform ultrasonic bone imaging.

Li P, Liu T, Ma H, Li D, Liu C, Ta D

PubMed · Jul 1, 2025
Ultrasound imaging of bone is challenging because bone tissue has a complex structure with high acoustic impedance and speed of sound (SOS). Recently, full waveform inversion (FWI) has shown promise for imaging musculoskeletal tissues. However, FWI has limited capability and tends to produce artifacts in bone imaging because the large discrepancy in SOS between bony and soft tissues makes the inversion prone to becoming trapped in local minima. In addition, FWI carries a high computational burden and requires relatively many iterations. The objective of this study was to achieve high-resolution ultrasonic imaging of bone using a deep learning-based FWI approach. In this paper, we propose a novel network named CEDD-Unet. The CEDD-Unet adopts a dual-decoder architecture, with the first decoder tasked with reconstructing the SOS model and the second decoder tasked with finding the main boundaries between bony and soft tissues. To effectively capture multi-scale spatial-temporal features from ultrasound radio frequency (RF) signals, we integrated a convolutional LSTM (ConvLSTM) module. Additionally, an Efficient Multi-scale Attention (EMA) module was incorporated into the encoder to enhance feature representation and improve reconstruction accuracy. Using an ultrasonic imaging setup with a ring array transducer, the performance of CEDD-Unet was tested on SOS model datasets from human bones (Dataset1) and mouse bones (Dataset2), and compared with three classic reconstruction architectures (Unet, Unet++, and Att-Unet) and four state-of-the-art architectures (InversionNet, DD-Net, UPFWI, and DEFE-Unet). Experiments showed that CEDD-Unet outperforms all competing methods, achieving the lowest MAE of 23.30 on Dataset1 and 25.29 on Dataset2, the highest SSIM of 0.9702 on Dataset1 and 0.9550 on Dataset2, and the highest PSNR of 30.60 dB on Dataset1 and 32.87 dB on Dataset2. Our method demonstrated superior reconstruction quality, with clearer bone boundaries, reduced artifacts, and improved consistency with the ground truth. Moreover, CEDD-Unet surpasses traditional FWI by producing sharper skeletal SOS reconstructions, reducing computational cost, and eliminating the reliance on an initial model. Ablation studies further confirm the effectiveness of each network component. The results suggest that CEDD-Unet is a promising deep learning-based FWI method for high-resolution bone imaging, with the potential to reconstruct accurate and sharp-edged skeletal SOS models.
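
As a rough illustration of the dual-decoder layout described above, the sketch below builds a shared encoder with two task heads (SOS regression and boundary prediction); it omits the ConvLSTM and EMA modules and uses hypothetical layer sizes, so it should be read as a schematic rather than the CEDD-Unet itself.

```python
import torch
import torch.nn as nn

def block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU())

class DualDecoderNet(nn.Module):
    """Shared encoder, two decoders: SOS map regression + boundary prediction."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(block(1, 16), nn.MaxPool2d(2), block(16, 32))
        self.up = nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False)
        self.dec_sos = nn.Sequential(block(32, 16), nn.Conv2d(16, 1, 1))       # SOS map
        self.dec_boundary = nn.Sequential(block(32, 16), nn.Conv2d(16, 1, 1))  # boundaries

    def forward(self, x):
        z = self.up(self.encoder(x))
        return self.dec_sos(z), torch.sigmoid(self.dec_boundary(z))

sos, boundary = DualDecoderNet()(torch.randn(1, 1, 64, 64))
print(sos.shape, boundary.shape)
```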

A deep learning framework for reconstructing Breast Amide Proton Transfer weighted imaging sequences from sparse frequency offsets to dense frequency offsets.

Yang Q, Su S, Zhang T, Wang M, Dou W, Li K, Ren Y, Zheng Y, Wang M, Xu Y, Sun Y, Liu Z, Tan T

PubMed · Jul 1, 2025
Amide proton transfer (APT) imaging is a novel functional MRI technique that enables quantification of protein metabolism, but its wide clinical application is largely limited by its long acquisition time. One way to reduce scanning time is to acquire fewer frequency offset images. However, sparse frequency offset images are inadequate for fitting the z-spectrum, the curve essential to quantifying the APT effect, which might compromise quantification. In this study, we develop a deep learning-based model that reconstructs dense frequency offsets from sparse ones, potentially reducing scanning time. We propose to leverage time-series convolution to extract both short- and long-range spatial and frequency features of the APT imaging sequence. Our proposed model outperforms other seq2seq models, achieving superior reconstruction with a peak signal-to-noise ratio of 45.8 (95% confidence interval (CI): [44.9, 46.7]) and a structural similarity index of 0.989 (95% CI: [0.987, 0.993]) for the tumor region. We integrated a weighted layer into our model to evaluate the impact of individual frequency offsets on the reconstruction process; the weights learned for the frequency offsets at ±6.5 ppm, 0 ppm, and 3.5 ppm showed the highest significance. Experimental results demonstrate that our proposed model effectively reconstructs dense frequency offsets (n = 29, from 7 to -7 ppm at 0.5 ppm intervals) from data with 21 frequency offsets, reducing scanning time by 25%. This work presents a method for shortening the APT imaging acquisition time, offering potential guidance for parameter settings in APT imaging and serving as a valuable reference for clinicians.
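
A minimal sketch of the sparse-to-dense mapping idea, assuming a per-voxel 1D convolutional model that takes 21 acquired offsets and predicts all 29; the architecture and layer sizes are illustrative stand-ins, not the authors' network.

```python
import torch
import torch.nn as nn

class SparseToDense(nn.Module):
    """Map 21 sparse z-spectrum samples to a dense 29-offset spectrum."""
    def __init__(self, n_in=21, n_out=29):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=3, padding=1), nn.ReLU())
        self.head = nn.Linear(32 * n_in, n_out)

    def forward(self, x):              # x: (batch, 1, 21) normalized signal
        z = self.features(x).flatten(1)
        return self.head(z)            # (batch, 29) dense z-spectrum

model = SparseToDense()
dense = model(torch.rand(8, 1, 21))
print(dense.shape)                     # torch.Size([8, 29])
```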

Assessment of biventricular cardiac function using free-breathing artificial intelligence cine with motion correction: Comparison with standard multiple breath-holding cine.

Ran L, Yan X, Zhao Y, Yang Z, Chen Z, Jia F, Song X, Huang L, Xia L

PubMed · Jul 1, 2025
To assess the image quality and biventricular function obtained with a free-breathing artificial intelligence cine method with motion correction (FB AI MOCO). A total of 72 participants (mean age 38.3 ± 15.4 years, 40 males) prospectively enrolled in this single-center, cross-sectional study underwent cine scans using standard breath-holding (BH) cine and FB AI MOCO cine at 3.0 Tesla. The image quality of the cine images was evaluated on a 5-point ordinal Likert scale based on blood-pool to myocardium contrast, endocardial edge definition, and artifacts, and an overall quality score was calculated as the equally weighted average of the three criteria; apparent signal-to-noise ratio (aSNR) and estimated contrast-to-noise ratio (eCNR) were also assessed. Biventricular functional parameters, including left ventricular (LV) and right ventricular (RV) end-diastolic volume (EDV), end-systolic volume (ESV), stroke volume (SV), ejection fraction (EF), and LV end-diastolic mass (LVEDM), were assessed as well. Comparisons between the two sequences were performed using paired t-tests and Wilcoxon signed-rank tests, and correlations were assessed with Pearson correlation. Agreement of quantitative parameters was assessed using the intraclass correlation coefficient (ICC) and Bland-Altman analysis; p < 0.05 was considered statistically significant. The total acquisition time of the entire stack for FB AI MOCO cine (14.7 s ± 1.9 s) was notably shorter than that for standard BH cine (82.6 s ± 11.9 s, P < 0.001). The aSNR did not differ significantly between FB AI MOCO cine and standard BH cine (76.7 ± 20.7 vs. 79.8 ± 20.7, P = 0.193). The eCNR of FB AI MOCO cine was higher than that of standard BH cine (191.6 ± 54.0 vs. 155.8 ± 68.4, P < 0.001), as were the scores for blood-pool to myocardium contrast (4.6 ± 0.5 vs. 4.4 ± 0.6, P = 0.003). Qualitative scores for endocardial edge definition (4.2 ± 0.5 vs. 4.3 ± 0.7, P = 0.123), artifact presence (4.3 ± 0.6 vs. 4.1 ± 0.8, P = 0.085), and overall image quality (4.4 ± 0.4 vs. 4.3 ± 0.6, P = 0.448) showed no significant differences between the two methods. Representative RV and LV functional parameters, including RVEDV (102.2 (86.4, 120.4) ml vs. 104.0 (88.5, 120.3) ml, P = 0.294), RVEF (31.0 ± 11.1% vs. 31.2 ± 11.0%, P = 0.570), and LVEDV (106.2 (86.7, 131.3) ml vs. 105.8 (84.4, 130.3) ml, P = 0.450), also did not differ significantly between the two methods. Strong correlations (r > 0.900) and excellent agreement (ICC > 0.900) were found for all biventricular functional parameters between the two sequences. In subgroups with reduced LVEF (<50%, n = 24) or elevated heart rate (≥80 bpm, n = 17), no significant differences were observed in any biventricular functional metric between the two sequences (P > 0.05 for all). Compared with multiple BH cine, FB AI MOCO cine achieved comparable image quality and biventricular functional parameters with shorter scan times, suggesting promising potential for clinical application.
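
For readers unfamiliar with the agreement statistics used above, the hedged sketch below computes a Bland-Altman bias with 95% limits of agreement and a Pearson correlation on simulated paired measurements; it is illustrative only, not the study's analysis code.

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two paired measurement sets."""
    diff = a - b
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, bias - loa, bias + loa

rng = np.random.default_rng(2)
lvedv_bh = rng.normal(106, 25, 72)            # simulated standard BH cine values
lvedv_fb = lvedv_bh + rng.normal(0, 3, 72)    # simulated FB AI MOCO cine values

bias, lo, hi = bland_altman(lvedv_bh, lvedv_fb)
r = np.corrcoef(lvedv_bh, lvedv_fb)[0, 1]     # Pearson correlation
print(f"bias={bias:.1f} ml, LoA=[{lo:.1f}, {hi:.1f}] ml, r={r:.3f}")
```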

Accelerating CEST MRI With Deep Learning-Based Frequency Selection and Parameter Estimation.

Shen C, Cheema K, Xie Y, Ruan D, Li D

PubMed · Jul 1, 2025
Chemical exchange saturation transfer (CEST) MRI is a powerful molecular imaging technique for detecting metabolites through proton exchange. While CEST MRI provides high sensitivity, its clinical application is hindered by prolonged scan time due to the need for imaging across numerous frequency offsets for parameter estimation. Since scan time is directly proportional to the number of frequency offsets, identifying and selecting the most informative frequency offsets can significantly reduce acquisition time. We propose a novel deep learning-based framework that integrates frequency selection and parameter estimation to accelerate CEST MRI. Our method leverages channel pruning via batch normalization to identify the most informative frequency offsets while simultaneously training the network for accurate parametric map prediction. Using data from six healthy volunteers, channel pruning selected 13 informative frequency offsets out of 53 without compromising map quality. Images from the selected frequency offsets were reconstructed using the MR Multitasking method, which employs a low-rank tensor model to enable undersampling of k-space lines for each frequency offset, further reducing scan time. Predicted parametric maps of amide proton transfer (APT), nuclear Overhauser effect (NOE), and magnetization transfer (MT) based on these selected frequencies were comparable in quality to maps generated using all frequency offsets, achieving superior performance compared to the Fisher information-based selection methods from our previous work. This integrated approach has the potential to reduce the whole-brain CEST MRI scan time from the original 5:30 min to under 1:30 min without compromising map quality. By leveraging deep learning for frequency selection and parametric map prediction, the proposed framework demonstrates its potential for efficient and practical clinical implementation. Future studies will focus on extending this method to patient populations and addressing challenges such as B0 inhomogeneity and abnormal signal variation in diseased tissues.
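
A minimal sketch of the batch-normalization channel pruning idea named above: an L1 penalty on the BN scale factors of an offset-selection layer drives uninformative channels toward zero, and the largest surviving scales pick the retained offsets. The layer layout, penalty weight, and targets are illustrative assumptions.

```python
import torch
import torch.nn as nn

n_offsets = 53
select_bn = nn.BatchNorm1d(n_offsets)          # one learnable scale per frequency offset
backbone = nn.Sequential(nn.Linear(n_offsets, 64), nn.ReLU(), nn.Linear(64, 3))
opt = torch.optim.Adam(list(select_bn.parameters()) + list(backbone.parameters()), 1e-3)

x = torch.rand(32, n_offsets)                  # toy z-spectra
y = torch.rand(32, 3)                          # toy APT/NOE/MT targets
for _ in range(100):
    opt.zero_grad()
    pred = backbone(select_bn(x))
    # task loss + sparsity penalty on the BN scale factors (channel pruning)
    loss = nn.functional.mse_loss(pred, y) + 1e-2 * select_bn.weight.abs().sum()
    loss.backward()
    opt.step()

# keep the 13 offsets whose BN scales survived with the largest magnitude
keep = torch.topk(select_bn.weight.abs(), k=13).indices.sort().values
print("selected offsets:", keep.tolist())
```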

Improve robustness to mismatched sampling rate: An alternating deep low-rank approach for exponential function reconstruction and its biomedical magnetic resonance applications.

Huang Y, Wang Z, Zhang X, Cao J, Tu Z, Lin M, Li L, Jiang X, Guo D, Qu X

PubMed · Jul 1, 2025
Undersampling accelerates signal acquisition at the expense of introducing artifacts. Removing these artifacts is a fundamental problem in signal processing, a task also called signal reconstruction. By modeling signals as superimposed exponential functions, deep learning has achieved fast and high-fidelity signal reconstruction by training a mapping from undersampled exponentials to fully sampled ones. However, mismatches between training and target data, such as undersampling rates (25% vs. 50%), anatomical regions (knee vs. brain), and contrast configurations (PDw vs. T2w), heavily compromise the reconstruction. To overcome this limitation, we propose Alternating Deep Low-Rank (ADLR), which combines deep learning solvers and classic optimization solvers. Experimental validation on the reconstruction of synthetic and real-world biomedical magnetic resonance signals demonstrates that ADLR effectively alleviates the mismatch issue and achieves lower reconstruction errors than state-of-the-art methods.
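
As a toy illustration of alternating a classic low-rank step with a data-consistency step for exponential signal reconstruction, the sketch below uses Hankel singular-value thresholding in place of the trained deep solvers that ADLR actually employs; it conveys only the alternating structure, with illustrative parameters.

```python
import numpy as np

def hankel(x, l):
    """Build an l-row Hankel matrix from a 1D signal."""
    n = len(x)
    return np.array([x[i:i + n - l + 1] for i in range(l)])

def low_rank_project(x, rank=3, l=32):
    """Project the signal onto a low-rank Hankel structure (classic step)."""
    H = hankel(x, l)
    U, s, Vh = np.linalg.svd(H, full_matrices=False)
    s[rank:] = 0                                   # hard-threshold singular values
    Hr = (U * s) @ Vh
    out = np.zeros(len(x), dtype=complex)          # average anti-diagonals back
    cnt = np.zeros(len(x))
    for i in range(Hr.shape[0]):
        for j in range(Hr.shape[1]):
            out[i + j] += Hr[i, j]
            cnt[i + j] += 1
    return out / cnt

rng = np.random.default_rng(3)
t = np.arange(128)
clean = np.exp((2j * np.pi * 0.1 - 0.02) * t) + 0.5 * np.exp((2j * np.pi * -0.2 - 0.01) * t)
mask = rng.random(128) < 0.4                       # 40% random sampling
meas = clean * mask

x = meas.astype(complex).copy()
for _ in range(30):                                # alternating iterations
    x = low_rank_project(x)                        # low-rank / "solver" step
    x[mask] = meas[mask]                           # data-consistency step
print(np.linalg.norm(x - clean) / np.linalg.norm(clean))  # relative error
```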