
Accelerated proton resonance frequency-based magnetic resonance thermometry by optimized deep learning method.

Xu S, Zong S, Mei CS, Shen G, Zhao Y, Wang H

PubMed · May 31, 2025
Proton resonance frequency (PRF)-based magnetic resonance (MR) thermometry plays a critical role in thermal ablation therapies delivered by focused ultrasound (FUS). For clinical applications, accurate and rapid temperature feedback is essential to ensure both the safety and effectiveness of these treatments. This work aims to improve the temporal resolution of dynamic MR temperature map reconstruction using an enhanced deep-learning method, thereby supporting the real-time monitoring required for effective FUS treatments. Five classical neural network architectures (cascade net, complex-valued U-Net, shifted-window transformer for MRI, real-valued U-Net, and U-Net with residual blocks), together with training-optimization methods, were applied to reconstruct temperature maps from 2-fold and 4-fold undersampled k-space data. The training enhancements included pre-training/training-phase data augmentation, knowledge distillation, and a novel amplitude-phase decoupling loss function. Phantom and ex vivo tissue heating experiments were conducted using a FUS transducer. Complex MR images with accurate temperature changes served as ground truth, and the datasets were manually undersampled to simulate acceleration. Separate testing datasets were used to evaluate real-time performance and temperature accuracy. Furthermore, the proposed deep learning-based rapid reconstruction approach was validated on a clinical dataset obtained from patients with uterine fibroids, demonstrating its clinical applicability. Acceleration factors of 1.9 and 3.7 were achieved for 2× and 4× k-space undersampling, respectively. The deep learning-based reconstruction using ResUNet with the four optimizations showed superior performance. For 2-fold acceleration, the RMSEs of the temperature map patches were 0.89°C and 1.15°C for the phantom and ex vivo testing datasets, respectively. The DICE coefficient for the 43°C isotherm-enclosed regions was 0.81, and Bland-Altman analysis indicated a bias of -0.25°C with limits of agreement of ±2.16°C. In the 4-fold undersampling case, these evaluation metrics showed approximately a 10% reduction in accuracy. Additionally, the DICE coefficients measuring the overlap between the reconstructed temperature maps (using the optimized ResUNet) and the ground truth in regions exceeding the 43°C threshold were 0.77 and 0.74 for the 2× and 4× undersampling scenarios, respectively. This study demonstrates that deep learning-based reconstruction significantly enhances the accuracy and efficiency of MR thermometry, particularly in the context of FUS-based clinical treatments for uterine fibroids. The approach could also be extended to other applications, such as essential tremor and prostate cancer treatment, where MRI-guided FUS plays a critical role.
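
For reference, the temperature maps in such reconstructions ultimately rest on the standard PRF phase-difference relation, ΔT = Δφ / (2π · γ · α · B0 · TE). The sketch below applies it with typical values (γ = 42.58 MHz/T, α ≈ -0.01 ppm/°C) to dummy complex images; it illustrates conventional PRF thermometry only, not the paper's deep-learning reconstruction pipeline.

```python
import numpy as np

# Minimal sketch of conventional PRF-shift thermometry (illustrative constants,
# not the paper's deep-learning pipeline). Assumes complex images img_ref and
# img_heat of identical shape acquired before and after heating.
GAMMA_HZ_PER_T = 42.58e6      # gyromagnetic ratio of 1H (Hz/T)
ALPHA_PPM_PER_C = -0.01       # PRF change coefficient (ppm/degC)

def prf_temperature_change(img_ref, img_heat, b0=3.0, te=0.01):
    """Temperature change map (degC) from the phase of the complex ratio."""
    dphi = np.angle(img_heat * np.conj(img_ref))          # phase difference (rad)
    scale = 2 * np.pi * GAMMA_HZ_PER_T * ALPHA_PPM_PER_C * 1e-6 * b0 * te
    return dphi / scale

# Example with random complex data standing in for reconstructed images.
rng = np.random.default_rng(0)
ref = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))
heat = ref * np.exp(1j * -0.08 * 5.0)                     # ~5 degC at 3 T, TE = 10 ms
print(prf_temperature_change(ref, heat).mean())           # ~5.0
```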

Dual-energy CT-based virtual monoenergetic imaging via unsupervised learning.

Liu CK, Chang HY, Huang HM

PubMed · May 31, 2025
Since its development, virtual monoenergetic imaging (VMI) derived from dual-energy computed tomography (DECT) has proven valuable in many clinical applications. However, DECT-based VMI shows increased noise at low keV levels. In this study, we proposed an unsupervised learning method to generate VMI from DECT, which requires no training with labeled (i.e., high-quality VMI) data. Specifically, DECT images were fed into a deep learning (DL)-based model expected to output VMI. Based on the theory that VMI obtained from image-space data is a linear combination of DECT images, we used the model output (i.e., the predicted VMI) to recalculate the DECT images. By minimizing the difference between the measured and recalculated DECT images, the DL-based model constrains itself to generate VMI from DECT images. We investigated whether the proposed DL-based method can improve the quality of VMIs. Experimental results obtained from patient data showed that the DL-based VMIs had better image quality than the conventional DECT-based VMIs. Moreover, the CT number differences between the DECT-based and DL-based VMIs were within ±10 HU for bone and ±5 HU for brain, fat, and muscle. Except for bone, no statistically significant difference in CT number measurements was found between the DECT-based and DL-based VMIs (p > 0.01). Our preliminary results show that DL has the potential to generate high-quality VMIs directly from DECT without supervision.
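
The self-consistency idea, recalculating the DECT inputs from the predicted VMIs and penalizing the mismatch, can be sketched roughly as follows. The two-energy output, the 2×2 mixing matrix W, and the tiny CNN are illustrative assumptions, not the authors' architecture or calibrated weights.

```python
import torch
import torch.nn as nn

# Rough sketch of the image-domain self-consistency idea described above:
# VMI is modeled as a linear combination of the low/high-kVp DECT images, so a
# network predicting VMIs at two energies can be trained by mapping its outputs
# back to DECT images and penalizing the mismatch with the measured DECT
# inputs. W and the small CNN are illustrative assumptions.
W = torch.tensor([[0.7, 0.3],    # hypothetical weights, e.g. a 50 keV VMI
                  [0.2, 0.8]])   # hypothetical weights, e.g. a 70 keV VMI

net = nn.Sequential(             # stand-in for the DL-based model
    nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 2, 3, padding=1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

dect = torch.rand(1, 2, 64, 64)  # measured low/high-kVp images (dummy data)

for _ in range(10):              # unsupervised training loop (no VMI labels)
    vmi = net(dect)                                   # predicted VMIs, 2 channels
    # Recalculate DECT from the predicted VMIs by inverting the mixing model.
    recalc = torch.einsum('ij,bjhw->bihw', torch.linalg.inv(W), vmi)
    loss = ((recalc - dect) ** 2).mean()              # measured vs. recalculated
    opt.zero_grad()
    loss.backward()
    opt.step()
```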

Deep learning reconstruction improves computer-aided pulmonary nodule detection and measurement accuracy for ultra-low-dose chest CT.

Wang J, Zhu Z, Pan Z, Tan W, Han W, Zhou Z, Hu G, Ma Z, Xu Y, Ying Z, Sui X, Jin Z, Song L, Song W

PubMed · May 30, 2025
To compare the image quality, pulmonary nodule detectability, and measurement accuracy between deep learning reconstruction (DLR) and hybrid iterative reconstruction (HIR) of chest ultra-low-dose CT (ULDCT). Participants who underwent chest standard-dose CT (SDCT) followed by ULDCT from October 2020 to January 2022 were prospectively included. ULDCT images reconstructed with HIR and DLR were compared with SDCT images to evaluate image quality, nodule detection rate, and measurement accuracy using a commercially available deep learning-based nodule evaluation system. The Wilcoxon signed-rank test was used to evaluate the percentage errors of nodule size and nodule volume between HIR and DLR images. Eighty-four participants (54 ± 13 years; 26 men) were finally enrolled. The effective radiation doses of ULDCT and SDCT were 0.16 ± 0.02 mSv and 1.77 ± 0.67 mSv, respectively (P < 0.001). The mean ± standard deviation of lung tissue noise was 61.4 ± 3.0 HU for SDCT, and 61.5 ± 2.8 HU and 55.1 ± 3.4 HU for ULDCT reconstructed with the HIR-Strong setting (HIR-Str) and DLR-Strong setting (DLR-Str), respectively (P < 0.001). A total of 535 nodules were detected. The nodule detection rates of ULDCT HIR-Str and ULDCT DLR-Str were 74.0% and 83.4%, respectively (P < 0.001). The absolute percentage error in nodule volume relative to SDCT was 19.5% for ULDCT HIR-Str versus 17.9% for ULDCT DLR-Str (P < 0.001). Compared with HIR, DLR reduced image noise, increased the nodule detection rate, and improved the measurement accuracy of nodule volume on chest ULDCT.
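
For orientation, the statistical comparison named above, a paired Wilcoxon signed-rank test on per-nodule percentage errors, looks like the sketch below; the error arrays are synthetic placeholders, not the study's measurements.

```python
import numpy as np
from scipy.stats import wilcoxon

# Paired, non-parametric comparison of per-nodule volume percentage errors
# between the two ULDCT reconstructions. Synthetic data for illustration only.
rng = np.random.default_rng(1)
err_hir = rng.normal(19.5, 5.0, size=535)   # HIR-Str absolute % errors (dummy)
err_dlr = rng.normal(17.9, 5.0, size=535)   # DLR-Str absolute % errors (dummy)

stat, p = wilcoxon(err_hir, err_dlr)        # paired Wilcoxon signed-rank test
print(f"Wilcoxon statistic={stat:.1f}, p={p:.3g}")
```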

Edge Computing for Physics-Driven AI in Computational MRI: A Feasibility Study

Yaşar Utku Alçalar, Yu Cao, Mehmet Akçakaya

arXiv preprint · May 30, 2025
Physics-driven artificial intelligence (PD-AI) reconstruction methods have emerged as the state-of-the-art for accelerating MRI scans, enabling higher spatial and temporal resolutions. However, the high resolution of these scans generates massive data volumes, leading to challenges in transmission, storage, and real-time processing. This is particularly pronounced in functional MRI, where hundreds of volumetric acquisitions further exacerbate these demands. Edge computing with FPGAs presents a promising solution for enabling PD-AI reconstruction near the MRI sensors, reducing data transfer and storage bottlenecks. However, this requires optimizing PD-AI models for hardware efficiency through quantization and bypassing traditional FFT-based approaches, whose computational demands can be a limitation. In this work, we propose a novel PD-AI computational MRI approach optimized for FPGA-based edge computing devices, leveraging 8-bit complex data quantization and eliminating redundant FFT/IFFT operations. Our results show that this strategy improves computational efficiency while maintaining reconstruction quality comparable to conventional PD-AI methods, and outperforms standard clinical methods. Our approach presents an opportunity for high-resolution MRI reconstruction on resource-constrained devices, highlighting its potential for real-world deployment.
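
The 8-bit complex quantization step can be illustrated with a simple symmetric scheme in which real and imaginary parts share one scale; the per-tensor scaling below is an assumption for illustration, not necessarily the quantizer used in the paper.

```python
import numpy as np

# Minimal sketch of 8-bit quantization of complex MRI data: real and imaginary
# parts are quantized separately with a shared symmetric per-tensor scale.
def quantize_complex_int8(x):
    scale = np.max(np.abs(np.concatenate([x.real.ravel(), x.imag.ravel()]))) / 127.0
    q_re = np.clip(np.round(x.real / scale), -127, 127).astype(np.int8)
    q_im = np.clip(np.round(x.imag / scale), -127, 127).astype(np.int8)
    return q_re, q_im, scale

def dequantize_complex_int8(q_re, q_im, scale):
    return (q_re.astype(np.float32) + 1j * q_im.astype(np.float32)) * scale

x = (np.random.randn(128, 128) + 1j * np.random.randn(128, 128)).astype(np.complex64)
x_hat = dequantize_complex_int8(*quantize_complex_int8(x))
print("relative quantization error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```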

The Impact of Model-based Deep-learning Reconstruction Compared with that of Compressed Sensing-Sensitivity Encoding on the Image Quality and Precision of Cine Cardiac MR in Evaluating Left-ventricular Volume and Strain: A Study on Healthy Volunteers.

Tsuneta S, Aono S, Kimura R, Kwon J, Fujima N, Ishizaka K, Nishioka N, Yoneyama M, Kato F, Minowa K, Kudo K

PubMed · May 30, 2025
To evaluate the effect of model-based deep-learning reconstruction (DLR) compared with that of compressed sensing-sensitivity encoding (CS) on cine cardiac magnetic resonance (CMR). Cine CMR images of 10 healthy volunteers were obtained with reduction factors of 2, 4, 6, and 8 and reconstructed using CS and DLR. Visual image quality scores assessed sharpness, image noise, and artifacts. Left-ventricular (LV) end-diastolic volume (EDV), end-systolic volume (ESV), stroke volume (SV), and ejection fraction (EF) were manually measured. LV global circumferential strain (GCS) was measured automatically with software. The precision of the EDV, ESV, SV, EF, and GCS measurements was compared between CS and DLR using Bland-Altman analysis, with fully sampled data as the gold standard. Compared with CS, DLR significantly improved image quality at reduction factors of 6 and 8. The precision of the EDV and ESV measurements at a reduction factor of 8, and of the GCS measurements at reduction factors of 6 and 8, improved with DLR compared with CS, whereas the precision of the SV and EF measurements did not differ between DLR and CS. The effect of DLR on the image quality of cine CMR and on the precision of quantitative volume and strain evaluation was equal or superior to that of CS. DLR may replace CS for cine CMR.
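
The Bland-Altman analysis used here reduces to a bias and 95% limits of agreement computed from paired differences, as in the sketch below; the EDV values are synthetic stand-ins, not the study's measurements.

```python
import numpy as np

# Bland-Altman bias and 95% limits of agreement for paired measurements,
# e.g. accelerated reconstruction vs. the fully sampled reference.
def bland_altman(a, b):
    diff = np.asarray(a) - np.asarray(b)
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, (bias - loa, bias + loa)

edv_full = np.array([140., 152., 133., 160., 147., 138., 155., 149., 142., 158.])
edv_dlr = edv_full + np.random.default_rng(2).normal(0, 2.0, size=edv_full.size)
bias, (lo, hi) = bland_altman(edv_dlr, edv_full)
print(f"bias = {bias:.2f} mL, limits of agreement = ({lo:.2f}, {hi:.2f}) mL")
```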

Advantages of deep learning reconstruction algorithm in ultra-high-resolution CT for the diagnosis of pancreatic cystic neoplasm.

Sofue K, Ueno Y, Yabe S, Ueshima E, Yamaguchi T, Masuda A, Sakai A, Toyama H, Fukumoto T, Hori M, Murakami T

PubMed · May 30, 2025
This study aimed to evaluate the image quality and clinical utility of a deep learning reconstruction (DLR) algorithm in ultra-high-resolution computed tomography (UHR-CT) for the diagnosis of pancreatic cystic neoplasms (PCNs). This retrospective study included 45 patients with PCNs between March 2020 and February 2022. Contrast-enhanced UHR-CT images were obtained and reconstructed using DLR and hybrid iterative reconstruction (IR). Image noise and contrast-to-noise ratio (CNR) were measured. Two radiologists assessed the diagnostic performance of the imaging findings associated with PCNs using a 5-point Likert scale. Diagnostic performance metrics, including sensitivity, specificity, and area under the receiver operating characteristic curve (AUROC), were calculated. Quantitative and qualitative features were compared between CT with DLR and CT with hybrid IR. Interobserver agreement for the qualitative assessments was also analyzed. DLR significantly reduced image noise and increased CNR compared with hybrid IR for all objects (p < 0.001). Radiologists rated DLR images as superior in overall quality, lesion delineation, and vessel conspicuity (p < 0.001). DLR yielded higher AUROC values for the diagnostic imaging findings (ductal communication: 0.887‒0.938 vs. 0.816‒0.827; enhanced mural nodule: 0.843‒0.916 vs. 0.785‒0.801), although it did not directly improve sensitivity, specificity, or accuracy. Interobserver agreement for the qualitative assessments was higher with DLR (κ = 0.69‒0.82 vs. 0.57‒0.73). DLR improved image quality and diagnostic performance by effectively reducing image noise and improving lesion conspicuity in the diagnosis of PCNs on UHR-CT. DLR also provided greater diagnostic confidence in the assessment of imaging findings associated with PCNs.
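
The quantitative comparison rests on ROI-based noise and contrast-to-noise measurements; a minimal CNR calculation with hypothetical ROI statistics is shown below for orientation only.

```python
import numpy as np

# Contrast-to-noise ratio as typically computed from ROI statistics; the HU
# values and noise levels below are hypothetical, not the study's measurements.
def cnr(mean_lesion, mean_background, sd_background):
    return abs(mean_lesion - mean_background) / sd_background

# e.g. cystic lesion vs. enhancing pancreatic parenchyma
print("DLR CNR:      ", cnr(20.0, 110.0, 8.0))   # lower image noise -> higher CNR
print("hybrid IR CNR:", cnr(20.0, 110.0, 14.0))
```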

GLIMPSE: Generalized Locality for Scalable and Robust CT.

Khorashadizadeh A, Debarnot V, Liu T, Dokmanic I

PubMed · May 30, 2025
Deep learning has become the state-of-the-art approach to medical tomographic imaging. A common approach is to feed the result of a simple inversion, for example the backprojection, to a multiscale convolutional neural network (CNN) which computes the final reconstruction. Despite good results on in-distribution test data, this often results in overfitting to certain large-scale structures and poor generalization on out-of-distribution (OOD) samples. Moreover, the memory and computational complexity of multiscale CNNs scale unfavorably with image resolution, making them impractical for application at realistic clinical resolutions. In this paper, we introduce GLIMPSE, a local coordinate-based neural network for computed tomography which reconstructs a pixel value by processing only the measurements associated with the neighborhood of that pixel. GLIMPSE significantly outperforms successful CNNs on OOD samples while achieving comparable or better performance on in-distribution test data, and it maintains a memory footprint almost independent of image resolution; 5 GB of memory suffices to train on 1024 × 1024 images, which is orders of magnitude less than what CNNs require. GLIMPSE is fully differentiable and can be used plug-and-play in arbitrary deep learning architectures, enabling feats such as correcting miscalibrated projection orientations.
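
The locality principle can be sketched as follows: for a given pixel, only the sinogram samples near that pixel's projected detector coordinate at each angle are gathered into a fixed-size input for a small network. The parallel-beam geometry, window size, and indexing details below are assumptions for illustration, not GLIMPSE's exact implementation.

```python
import numpy as np

# For one pixel, gather the sinogram samples whose detector coordinate lies
# near that pixel's projection at each angle. A small network (not shown)
# would then map this fixed-size neighborhood to the pixel value.
def local_sinogram_patch(sino, angles, x, y, det_spacing=1.0, half_width=3):
    n_det = sino.shape[1]
    patch = np.zeros((len(angles), 2 * half_width + 1), dtype=sino.dtype)
    for i, theta in enumerate(angles):
        t = x * np.cos(theta) + y * np.sin(theta)            # projected coordinate
        center = int(round(t / det_spacing)) + n_det // 2    # nearest detector index
        lo, hi = center - half_width, center + half_width + 1
        lo_c, hi_c = max(lo, 0), min(hi, n_det)              # clip at detector edges
        patch[i, lo_c - lo:(2 * half_width + 1) - (hi - hi_c)] = sino[i, lo_c:hi_c]
    return patch.ravel()                                     # fixed-size MLP input

angles = np.linspace(0, np.pi, 180, endpoint=False)
sino = np.random.rand(180, 256).astype(np.float32)           # dummy parallel-beam sinogram
features = local_sinogram_patch(sino, angles, x=10.0, y=-25.0)
print(features.shape)                                        # (180 * 7,) = (1260,)
```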

Sparsity-Driven Parallel Imaging Consistency for Improved Self-Supervised MRI Reconstruction

Yaşar Utku Alçalar, Mehmet Akçakaya

arXiv preprint · May 30, 2025
Physics-driven deep learning (PD-DL) models have proven to be a powerful approach for improved reconstruction of rapid MRI scans. In order to train these models in scenarios where fully-sampled reference data is unavailable, self-supervised learning has gained prominence. However, its application at high acceleration rates frequently introduces artifacts, compromising image fidelity. To mitigate this shortcoming, we propose a novel way to train PD-DL networks via carefully-designed perturbations. In particular, we enhance the k-space masking idea of conventional self-supervised learning with a novel consistency term that assesses the model's ability to accurately predict the added perturbations in a sparse domain, leading to more reliable and artifact-free reconstructions. The results obtained from the fastMRI knee and brain datasets show that the proposed training strategy effectively reduces aliasing artifacts and mitigates noise amplification at high acceleration rates, outperforming state-of-the-art self-supervised methods both visually and quantitatively.
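
The conventional k-space masking baseline that this work builds on splits the acquired samples into a training set and a held-out loss set; a minimal sketch with a placeholder network and random data is given below. It does not include the paper's proposed perturbation-consistency term.

```python
import torch

# Sketch of k-space-masking self-supervision: the acquired k-space locations
# are split into two disjoint sets, the network reconstructs from one set, and
# the loss is computed on the held-out set. `recon_net` is a placeholder for a
# PD-DL unrolled network; the split ratio is an illustrative assumption.
def self_supervised_loss(recon_net, kspace, acq_mask, holdout_frac=0.4):
    rand = torch.rand_like(acq_mask)
    loss_mask = acq_mask * (rand < holdout_frac).float()     # held-out acquired samples
    train_mask = acq_mask - loss_mask                        # samples fed to the network

    recon_k = recon_net(kspace * train_mask, train_mask)     # reconstructed k-space
    diff = (recon_k - kspace) * loss_mask                    # error only on held-out samples
    return diff.abs().pow(2).sum() / loss_mask.sum().clamp(min=1)

# Toy usage with an identity "network" and random data.
kspace = torch.randn(1, 1, 64, 64, dtype=torch.complex64)
acq_mask = (torch.rand(1, 1, 64, 64) < 0.25).float()
print(self_supervised_loss(lambda k, m: k, kspace, acq_mask))
```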

Self-supervised feature learning for cardiac Cine MR image reconstruction

Siying Xu, Marcel Früh, Kerstin Hammernik, Andreas Lingg, Jens Kübler, Patrick Krumm, Daniel Rueckert, Sergios Gatidis, Thomas Küstner

arXiv preprint · May 29, 2025
We propose a self-supervised feature learning assisted reconstruction (SSFL-Recon) framework for MRI reconstruction to address the limitations of existing supervised learning methods. Although recent deep learning-based methods have shown promising performance in MRI reconstruction, most require fully sampled images for supervised learning, which is challenging in practice given the long acquisition times under respiratory or organ motion. Moreover, nearly all fully sampled datasets are obtained by conventional reconstruction of mildly accelerated acquisitions, potentially biasing the achievable performance. Hence, the numerous undersampled datasets acquired with different accelerations in clinical practice remain underutilized. To address these issues, we first train a self-supervised feature extractor on undersampled images to learn sampling-insensitive features. The pre-learned features are subsequently embedded in the self-supervised reconstruction network to assist in removing artifacts. Experiments were conducted retrospectively on an in-house 2D cardiac Cine dataset comprising 91 cardiovascular patients and 38 healthy subjects. The results demonstrate that the proposed SSFL-Recon framework outperforms existing self-supervised MRI reconstruction methods and even exhibits comparable or better performance than supervised learning up to $16\times$ retrospective undersampling. The feature learning strategy effectively extracts global representations, which have proven beneficial in removing artifacts and increasing generalization ability during reconstruction.
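
One plausible reading of "sampling-insensitive" feature learning is a consistency objective between features extracted from two differently undersampled versions of the same scan, sketched below; the encoder, the line-skipping masks, and the cosine loss are assumptions, not the SSFL-Recon implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch: pull together the features of two different
# undersamplings of the same images, so the encoder learns representations
# that are insensitive to the sampling pattern.
encoder = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)

def undersample(img, accel=4):
    """Keep every `accel`-th k-space line (random offset) of a magnitude image batch."""
    k = torch.fft.fft2(img.to(torch.complex64))
    mask = torch.zeros_like(k.real)
    offset = int(torch.randint(0, accel, (1,)))
    mask[..., offset::accel, :] = 1.0
    return torch.fft.ifft2(k * mask).abs()

img = torch.rand(8, 1, 64, 64)                          # dummy fully sampled images
f1 = encoder(undersample(img))
f2 = encoder(undersample(img))
loss = 1 - F.cosine_similarity(f1, f2, dim=1).mean()    # feature-consistency loss
loss.backward()
```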

CT-denoimer: efficient contextual transformer network for low-dose CT denoising.

Zhang Y, Xu F, Zhang R, Guo Y, Wang H, Wei B, Ma F, Meng J, Liu J, Lu H, Chen Y

PubMed · May 29, 2025
Low-dose computed tomography (LDCT) effectively reduces radiation exposure to patients but introduces severe noise artifacts that affect diagnostic accuracy. Recently, Transformer-based network architectures have been widely applied to LDCT image denoising, generally achieving superior results compared with traditional convolutional methods. However, these methods are often hindered by high computational costs and struggle to capture complex local contextual features, which negatively impacts denoising performance. In this work, we propose CT-Denoimer, an efficient CT denoising Transformer network that captures both global correlations and intricate, spatially varying local contextual details in CT images, enabling the generation of high-quality images. The core of the framework is a Transformer module consisting of two key components: the Multi-Dconv head Transposed Attention (MDTA) and the Mixed Contextual Feed-forward Network (MCFN). The MDTA block captures global correlations in the image with linear computational complexity, while the MCFN block manages multi-scale local contextual information, both static and dynamic, through a series of Enhanced Contextual Transformer (eCoT) modules. In addition, we incorporate Operation-Wise Attention Layers (OWALs) to enable collaborative refinement in the proposed CT-Denoimer, enhancing its ability to handle complex and varying noise patterns in LDCT images. Extensive experimental validation on both the AAPM-Mayo public dataset and a real-world clinical dataset demonstrated the state-of-the-art performance of the proposed CT-Denoimer. It achieved a peak signal-to-noise ratio (PSNR) of 33.681 dB, a structural similarity index measure (SSIM) of 0.921, an information fidelity criterion (IFC) of 2.857, and a visual information fidelity (VIF) of 0.349. Subjective assessment by radiologists gave an average score of 4.39, confirming its clinical applicability and clear advantages over existing methods. This study presents an innovative CT denoising Transformer network that sets a new benchmark in LDCT image denoising, excelling in both noise reduction and fine-structure preservation.
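
The reported PSNR and SSIM figures are standard full-reference metrics; the sketch below shows how they are typically computed for a denoised slice against its normal-dose reference using scikit-image, with random placeholder images rather than the paper's data.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

# Full-reference evaluation of a denoised LDCT slice against its normal-dose
# reference. data_range must match the intensity scale of the images.
ref = np.random.rand(512, 512).astype(np.float32)                 # reference slice (dummy)
den = ref + 0.01 * np.random.randn(512, 512).astype(np.float32)   # denoised slice (dummy)

psnr = peak_signal_noise_ratio(ref, den, data_range=1.0)
ssim = structural_similarity(ref, den, data_range=1.0)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.3f}")
```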