Page 41 of 54537 results

UR-cycleGAN: Denoising full-body low-dose PET images using cycle-consistent Generative Adversarial Networks.

Liu Y, Sun Z, Liu H

pubmed · Jun 2 2025
This study aims to develop a CycleGAN-based denoising model to enhance the quality of low-dose PET (LDPET) images, making them as close as possible to standard-dose PET (SDPET) images. Using a Philips Vereos PET/CT system, whole-body fluorine-18 fluorodeoxyglucose (<sup>18</sup>F-FDG) PET images were acquired from 37 patients to facilitate the development of the UR-CycleGAN model. In this model, low-dose data were simulated by reconstructing PET images from a 30-s acquisition, while standard-dose data were reconstructed from a 2.5-min acquisition. The network was trained in a supervised manner on 13,210 pairs of PET images, and image quality was objectively evaluated using peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM). Compared to the simulated low-dose data, the denoised PET images generated by our model showed significant improvement, with a clear trend toward SDPET image quality. The proposed method reduces acquisition time by 80% compared to standard-dose imaging while achieving image quality close to that of SDPET images. It also enhances visual detail fidelity, demonstrating the feasibility and practical utility of the model for substantially reducing imaging time while maintaining high image quality.
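The PSNR and SSIM figures used for evaluation are easy to compute directly. Below is a minimal NumPy sketch with a single-window SSIM (standard implementations slide a Gaussian window) and synthetic stand-in images rather than PET data:

```python
import numpy as np

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref - img) ** 2)
    return 10 * np.log10(data_range**2 / mse)

def ssim_global(ref, img, data_range=1.0):
    """Simplified single-window SSIM (no sliding Gaussian window)."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = ref.mean(), img.mean()
    vx, vy = ref.var(), img.var()
    cov = ((ref - mx) * (img - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx**2 + my**2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
sd = rng.random((64, 64))                        # stand-in standard-dose slice
ld = sd + 0.05 * rng.standard_normal(sd.shape)   # noisy "low-dose" counterpart
```

Higher PSNR/SSIM for the denoised output relative to the low-dose input is what the improvement claim above quantifies.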

Accelerating 3D radial MPnRAGE using a self-supervised deep factor model.

Chen Y, Kecskemeti SR, Holmes JH, Corum CA, Yaghoobi N, Magnotta VA, Jacob M

pubmed · Jun 2 2025
To develop a self-supervised and memory-efficient deep learning image reconstruction method for 4D non-Cartesian MRI with high resolution and a large parametric dimension. The deep factor model (DFM) represents a parametric series of 3D multicontrast images using a neural network conditioned by the inversion time using efficient zero-filled reconstructions as input estimates. The model parameters are learned in a single-shot learning (SSL) fashion from the k-space data of each acquisition. A compatible transfer learning (TL) approach using previously acquired data is also developed to reduce reconstruction time. The DFM is compared to subspace methods with different regularization strategies in a series of phantom and in vivo experiments using the MPnRAGE acquisition for multicontrast T<sub>1</sub> imaging and quantitative T<sub>1</sub> estimation. DFM-SSL improved the image quality and reduced bias and variance in quantitative T<sub>1</sub> estimates in both phantom and in vivo studies, outperforming all other tested methods. DFM-TL reduced the inference time while maintaining a performance comparable to DFM-SSL and outperforming subspace methods with multiple regularization techniques. The proposed DFM offers a superior representation of the multicontrast images compared to subspace models, especially in the highly accelerated MPnRAGE setting.
The self-supervised training is ideal for methods with both high resolution and a large parametric dimension, where training neural networks can become computationally demanding without a dedicated high-end GPU array.
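For context, the subspace baselines that the DFM is compared against represent the contrast dimension with a small linear basis. A NumPy sketch on a synthetic inversion-recovery series (the DFM replaces these fixed linear factors with a network conditioned on inversion time; the signal model and sizes here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic series: 500 voxels observed at 32 inversion times,
# generated from a mono-exponential recovery with voxelwise T1.
ti = np.linspace(0.05, 3.0, 32)              # inversion times, seconds
t1 = rng.uniform(0.5, 2.0, size=(500, 1))    # voxelwise T1, seconds
series = 1 - 2 * np.exp(-ti[None, :] / t1)   # (voxels, contrasts)

# Subspace model: series ~= U @ V with a small temporal basis V.
U_full, s, Vt = np.linalg.svd(series, full_matrices=False)
rank = 4
approx = (U_full[:, :rank] * s[:rank]) @ Vt[:rank]

err = np.linalg.norm(series - approx) / np.linalg.norm(series)
```

The fast singular-value decay of such recovery curves is what makes a low-rank temporal basis an effective baseline.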

Tomographic Foundation Model -- FORCE: Flow-Oriented Reconstruction Conditioning Engine

Wenjun Xia, Chuang Niu, Ge Wang

arxiv preprint · Jun 2 2025
Computed tomography (CT) is a major medical imaging modality. Clinical CT scenarios, such as low-dose screening, sparse-view scanning, and metal implants, often lead to severe noise and artifacts in reconstructed images, requiring improved reconstruction techniques. The introduction of deep learning has significantly advanced CT image reconstruction. However, obtaining paired training data remains challenging due to patient motion and other constraints. Although deep learning methods can still perform well with approximately paired data, they inherently carry the risk of hallucination due to data inconsistencies and model instability. In this paper, we integrate data fidelity with a state-of-the-art generative AI model, the Poisson flow generative model (PFGM) and its generalized version PFGM++, and propose a novel CT framework: Flow-Oriented Reconstruction Conditioning Engine (FORCE). In our experiments, the proposed method shows superior performance in various CT imaging tasks, outperforming existing unsupervised reconstruction approaches.
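The underlying recipe, keeping generative sampling consistent with the measured data, can be illustrated by alternating a data-fidelity gradient step with a prior step. In this toy NumPy sketch a smoothing filter stands in for the PFGM++ sampler; it shows the generic plug-in structure, not the FORCE method itself:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 128
x_true = np.convolve(rng.standard_normal(n), np.ones(9) / 9, mode="same")  # smooth ground truth
mask = rng.random(n) < 0.4                      # sparse sampling pattern
y = x_true[mask]                                # incomplete, noiseless measurements

def data_fidelity_step(x, step=1.0):
    """Gradient step on ||A x - y||^2 with A a row-selection operator."""
    grad = np.zeros_like(x)
    grad[mask] = x[mask] - y
    return x - step * grad

def prior_step(x):
    """Stand-in prior: local smoothing (a generative sampler would go here)."""
    return np.convolve(x, np.ones(5) / 5, mode="same")

x = np.zeros(n)
for _ in range(50):
    x = data_fidelity_step(prior_step(x))       # alternate prior and data consistency

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

Ending each iteration with the data step keeps the iterate exactly consistent with the measured samples, which is the property that limits hallucination.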

Referenceless 4D Flow Cardiovascular Magnetic Resonance with deep learning.

Trenti C, Ylipää E, Ebbers T, Carlhäll CJ, Engvall J, Dyverfeldt P

pubmed · Jun 2 2025
Despite its potential to improve the assessment of cardiovascular diseases, 4D Flow CMR is hampered by long scan times. 4D Flow CMR is conventionally acquired with three motion encodings and one reference encoding, as the 3-dimensional velocity data are obtained by subtracting the phase of the reference from the phase of the motion encodings. In this study, we aim to use deep learning to predict the reference encoding from the three motion encodings for cardiovascular 4D Flow. A U-Net was trained with adversarial learning (U-Net<sub>ADV</sub>) and with a velocity frequency-weighted loss function (U-Net<sub>VEL</sub>) to predict the reference encoding from the three motion encodings obtained with a non-symmetric velocity-encoding scheme. Whole-heart 4D Flow datasets from 126 patients with different types of cardiomyopathies were retrospectively included. The models were trained on 113 patients with 5-fold cross-validation, and tested on 13 patients. Flow volumes in the aorta and pulmonary artery, mean and maximum velocity, and total and maximum turbulent kinetic energy at peak systole in the cardiac chambers and main vessels were assessed. 3-dimensional velocity data reconstructed with the reference encoding predicted by deep learning agreed well with the velocities obtained with the reference encoding acquired at the scanner for both models. U-Net<sub>ADV</sub> performed more consistently throughout the cardiac cycle and across the test subjects, while U-Net<sub>VEL</sub> performed better for systolic velocities. Overall, the largest errors for flow volumes and for maximum and mean velocities were -6.031% for maximum velocities in the right ventricle with U-Net<sub>ADV</sub>, and -6.92% for mean velocities in the right ventricle with U-Net<sub>VEL</sub>.
For total turbulent kinetic energy, the highest errors were in the left ventricle (-77.17%) for U-Net<sub>ADV</sub> and in the right ventricle (24.96%) for U-Net<sub>VEL</sub>, while the highest errors for maximum turbulent kinetic energy were in the pulmonary artery for both models, with values of -15.5% for U-Net<sub>ADV</sub> and 15.38% for U-Net<sub>VEL</sub>. Deep learning-enabled referenceless 4D Flow CMR permits quantification of velocities and flow volumes comparable to conventional 4D Flow. Omitting the reference encoding reduces the amount of acquired data by 25%, allowing shorter scan times or improved resolution, which is valuable for clinical routine use.
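The reference encoding's role is purely subtractive: per encoding direction, velocity is recovered as v = venc * Δφ / π. A NumPy sketch of that step for one direction (the models above predict the reference phase instead of acquiring it; the non-symmetric four-point scheme is omitted):

```python
import numpy as np

venc = 1.5  # m/s; a phase difference of +/-pi maps to +/-venc

rng = np.random.default_rng(3)
v_true = rng.uniform(-1.0, 1.0, size=1000)        # true velocities, m/s
phi_ref = rng.uniform(-0.5, 0.5, size=1000)       # background phase (reference encoding)
phi_motion = phi_ref + np.pi * v_true / venc      # motion encoding adds velocity-induced phase

def velocity(phi_m, phi_r, venc):
    """Phase-difference velocity reconstruction, wrapped to (-pi, pi]."""
    dphi = np.angle(np.exp(1j * (phi_m - phi_r)))
    return venc * dphi / np.pi

v = velocity(phi_motion, phi_ref, venc)
```

Because only the difference matters, any sufficiently accurate prediction of the reference phase yields the same velocities, which is what makes dropping the fourth acquisition feasible.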

Robust multi-coil MRI reconstruction via self-supervised denoising.

Aali A, Arvinte M, Kumar S, Arefeen YI, Tamir JI

pubmed · Jun 2 2025
To examine the effect of incorporating self-supervised denoising as a pre-processing step for training deep learning (DL) based reconstruction methods on data corrupted by Gaussian noise. K-space data employed for training are typically multi-coil and inherently noisy. Although DL-based reconstruction methods trained on fully sampled data can enable high reconstruction quality, obtaining large, noise-free datasets is impractical. We leverage Generalized Stein's Unbiased Risk Estimate (GSURE) for denoising and evaluate two DL-based reconstruction methods, Diffusion Probabilistic Models (DPMs) and Model-Based Deep Learning (MoDL), assessing the impact of denoising on their performance in accelerated multi-coil magnetic resonance imaging (MRI) reconstruction. The experiments were carried out on T2-weighted brain and fat-suppressed proton-density knee scans. We observed that self-supervised denoising enhances the quality and efficiency of MRI reconstructions across various scenarios. Specifically, employing denoised images rather than noisy counterparts when training DL networks results in lower normalized root mean squared error (NRMSE) and higher structural similarity index measure (SSIM) and peak signal-to-noise ratio (PSNR) across different SNR levels, including 32, 22, and 12 dB for T2-weighted brain data, and 24, 14, and 4 dB for fat-suppressed knee data. We showed that denoising is an essential pre-processing technique capable of improving the efficacy of DL-based MRI reconstruction methods under diverse conditions. By refining the quality of input data, denoising enables training more effective DL networks, potentially bypassing the need for noise-free reference MRI scans.
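The denoising step rests on Stein-type unbiased risk estimation: with Gaussian noise of known variance, the mean squared error of a denoiser can be estimated without clean references. A NumPy sketch of classical Monte-Carlo SURE with a toy shrinkage denoiser (GSURE generalizes this to rank-deficient forward operators, which is not shown):

```python
import numpy as np

rng = np.random.default_rng(4)
n, sigma = 4096, 0.1
x = np.sin(np.linspace(0, 8 * np.pi, n))          # clean signal (unknown in practice)

def denoise(y, a=0.8):
    """Toy denoiser: linear shrinkage toward zero."""
    return a * y

def sure(y, f, sigma, eps=1e-3):
    """Monte Carlo SURE: unbiased estimate of ||f(y) - x||^2 in expectation,
    computed from the noisy data alone."""
    b = rng.standard_normal(y.shape)
    div = b @ (f(y + eps * b) - f(y)) / eps        # divergence estimate
    return np.sum((f(y) - y) ** 2) - y.size * sigma**2 + 2 * sigma**2 * div

y = x + sigma * rng.standard_normal(n)
est = sure(y, denoise, sigma)                      # uses only y and sigma
true_err = np.sum((denoise(y) - x) ** 2)           # needs x, for checking only
```

Minimizing such an estimate over the denoiser's parameters is what allows training without noise-free data.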

Implicit neural representation for medical image reconstruction.

Zhu Y, Liu Y, Zhang Y, Liang D

pubmed · Jun 2 2025
Medical image reconstruction aims to generate high-quality images from sparsely sampled raw sensor data, which poses an ill-posed inverse problem. Traditional iterative reconstruction methods rely on prior information to empirically construct regularization terms, a process that is not trivial. While deep learning (DL)-based supervised reconstruction has made significant progress in improving image quality, it requires large-scale training data, which is difficult to obtain in medical imaging. Recently, implicit neural representation (INR) has emerged as a promising approach, offering a flexible and continuous representation of images by modeling the underlying signal as a function of spatial coordinates. This allows INR to capture fine details and complex structures more effectively than conventional discrete methods. This paper provides a comprehensive review of INR-based medical image reconstruction techniques, highlighting its growing impact on the field. The benefits of INR in both image and measurement domains are presented, and its advantages, limitations, and future research directions are discussed.
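The "continuous function of spatial coordinates" idea can be shown in miniature: encode coordinates with sinusoidal features, fit from scattered samples, then query at positions never seen during fitting. In this NumPy sketch a linear least-squares fit stands in for the MLP, and the signal and frequencies are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

def encode(coords, n_freq=32):
    """Sinusoidal positional encoding: coordinates -> sin/cos features."""
    k = np.arange(1, n_freq + 1)
    proj = 2 * np.pi * coords[:, None] * k[None, :]
    return np.concatenate([np.sin(proj), np.cos(proj)], axis=1)

# A continuous 1D "signal" sampled at scattered coordinates.
signal = lambda t: np.sin(2 * np.pi * 3 * t) + 0.5 * np.cos(2 * np.pi * 7 * t)
t_train = rng.random(200)

# Linear least squares in feature space stands in for training the MLP.
w, *_ = np.linalg.lstsq(encode(t_train), signal(t_train), rcond=None)

# Query coordinates never seen during fitting: the representation is continuous.
t_query = np.linspace(0.05, 0.95, 50)
err = np.max(np.abs(encode(t_query) @ w - signal(t_query)))
```

Being able to evaluate the representation at arbitrary coordinates, not just on the acquisition grid, is the property INR-based reconstruction exploits.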

Direct parametric reconstruction in dynamic PET using deep image prior and a novel parameter magnification strategy.

Hong X, Wang F, Sun H, Arabi H, Lu L

pubmed · Jun 2 2025
Multiple parametric imaging in positron emission tomography (PET) is challenging due to the noisy dynamic data and the complex mapping to kinetic parameters. Although methods like direct parametric reconstruction have been proposed to improve the image quality, limitations persist, particularly for nonlinear and small-value micro-parameters (e.g., k<sub>2</sub>, k<sub>3</sub>). This study presents a novel unsupervised deep learning approach to reconstruct and improve the quality of these micro-parameters. We proposed a direct parametric image reconstruction model, DIP-PM, integrating deep image prior (DIP) with a parameter magnification (PM) strategy. The model employs a U-Net generator to predict multiple parametric images using a CT image prior, with each output channel subsequently magnified by a factor to adjust the intensity. The model was optimized with a log-likelihood loss computed between the measured projection data and forward projected data. Two tracer datasets were simulated for evaluation: <sup>82</sup>Rb data using the 1-tissue compartment (1 TC) model and <sup>18</sup>F-FDG data using the 2-tissue compartment (2 TC) model, with 10-fold magnification applied to the 1 TC k<sub>2</sub> and the 2 TC k<sub>3</sub>, respectively. DIP-PM was compared to the indirect method, a direct algorithm (OTEM), and the DIP method without parameter magnification (DIP-only). Performance was assessed on phantom data using peak signal-to-noise ratio (PSNR), normalized root mean square error (NRMSE) and structural similarity index (SSIM), as well as on a real <sup>18</sup>F-FDG scan from a male subject. For the 1 TC model, OTEM performed well in K<sub>1</sub> reconstruction, but both indirect and OTEM methods showed high noise and poor performance in k<sub>2</sub>. The DIP-only method suppressed noise in k<sub>2</sub>, but failed to reconstruct fine structures in the myocardium.
DIP-PM outperformed other methods with well-preserved detailed structures, particularly in k<sub>2</sub>, achieving the best metrics (PSNR: 19.00, NRMSE: 0.3002, SSIM: 0.9289). For the 2 TC model, traditional methods exhibited high noise and blurred structures in estimating all nonlinear parameters (K<sub>1</sub>, k<sub>2</sub>, k<sub>3</sub>), while DIP-based methods significantly improved image quality. DIP-PM outperformed all methods in k<sub>3</sub> (PSNR: 21.89, NRMSE: 0.4054, SSIM: 0.8797), and consequently produced the most accurate 2 TC K<sub>i</sub> images (PSNR: 22.74, NRMSE: 0.4897, SSIM: 0.8391). On real FDG data, DIP-PM also showed evident advantages in estimating K<sub>1</sub>, k<sub>2</sub> and k<sub>3</sub> while preserving myocardial structures. The results underscore the efficacy of the DIP-based direct parametric imaging in generating and improving quality of PET parametric images. This study suggests that the proposed DIP-PM method with the parameter magnification strategy can enhance the fidelity of nonlinear micro-parameter images.
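The magnification strategy itself is mechanical: the generator's raw output channels stay at comparable numeric scales, and a fixed factor restores physical units before the forward model. A NumPy sketch of a 1-tissue compartment forward step with a 10x factor on k2 (the plasma input, raw channel values, and factor placement are illustrative assumptions, not DIP-PM's actual configuration):

```python
import numpy as np

# 1-tissue compartment model: C_T(t) = K1 * exp(-k2 t) convolved with C_p(t)
t = np.linspace(0, 10, 200)                        # minutes
dt = t[1] - t[0]
cp = t * np.exp(-t)                                # toy plasma input function

def tac_1tc(K1, k2):
    """Tissue time-activity curve from the 1 TC impulse response."""
    irf = K1 * np.exp(-k2 * t)
    return np.convolve(cp, irf)[: t.size] * dt     # discrete convolution

# Parameter magnification: the generator predicts channels of similar scale,
# and a fixed factor (10x for k2 here) restores physical units before the
# forward projection, so small-value parameters get comparable gradients.
MAG_K2 = 10.0
net_out = np.array([0.8, 0.03])                    # hypothetical raw output channels
K1, k2 = net_out[0], net_out[1] * MAG_K2
tac = tac_1tc(K1, k2)
```

Without the factor, the loss gradient with respect to the small-value channel would be an order of magnitude weaker, which is the imbalance the PM strategy removes.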

Whole Brain 3D T1 Mapping in Multiple Sclerosis Using Standard Clinical Images Compared to MP2RAGE and MR Fingerprinting.

Snyder J, Blevins G, Smyth P, Wilman AH

pubmed · Jun 1 2025
Quantitative T1 and T2 mapping is a useful tool to assess properties of healthy and diseased tissues. However, clinical diagnostic imaging remains dominated by relaxation-weighted imaging without direct collection of relaxation maps. Dedicated research sequences such as MR fingerprinting can save time and improve resolution over classical gold standard quantitative MRI (qMRI) methods, although they are not widely adopted in clinical studies. We investigate the use of clinical sequences in conjunction with prior knowledge provided by machine learning to elucidate T1 maps of brain in routine imaging studies without the need for specialized sequences. A classification learner was trained on T1w (magnetization prepared rapid gradient echo [MPRAGE]) and T2w (fluid-attenuated inversion recovery [FLAIR]) data (2.6 million voxels) from multiple sclerosis (MS) patients at 3T, compared to gold standard inversion recovery fast spin echo T1 maps in five healthy subjects, and tested on eight MS patients. In the MS patient test, the machine learner-produced T1 maps were compared to MP2RAGE and MR fingerprinting T1 maps in seven tissue regions of the brain: cortical grey matter, white matter, cerebrospinal fluid, caudate, putamen and globus pallidus. Additionally, T1s in lesion-segmented tissue were compared using the three different methods. The machine learner (ML) method had excellent agreement with MP2RAGE, with all average tissue deviations less than 3.2%, with T1 lesion variation of 0.1%-5.3% across the eight patients. The machine learning method provides a valuable and accurate estimation of T1 values in the human brain while using data from standard clinical sequences and allowing retrospective reconstruction from past studies without the need for new quantitative techniques.
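The recipe, learning a voxelwise map from (T1w, FLAIR) intensity pairs to T1, can be sketched with synthetic signal models and a nearest-neighbor lookup standing in for the paper's classification learner; the contrast equations below are toy assumptions, not the MPRAGE/FLAIR signal equations:

```python
import numpy as np

rng = np.random.default_rng(6)

# Synthetic voxels: T1 drives both weighted-image intensities through
# toy signal models; the learner inverts this mapping from examples.
def signals(t1):
    t1w = np.exp(-1.0 / t1)          # toy T1-weighted contrast
    flair = 1 - np.exp(-3.0 / t1)    # toy FLAIR-like contrast
    return np.stack([t1w, flair], axis=1)

t1_train = rng.uniform(0.3, 4.0, 5000)
X_train = signals(t1_train) + 0.005 * rng.standard_normal((5000, 2))

def predict_t1(X):
    """1-nearest-neighbor lookup: a minimal stand-in for the trained learner."""
    d = ((X[:, None, :] - X_train[None, :, :]) ** 2).sum(-1)
    return t1_train[np.argmin(d, axis=1)]

t1_test = rng.uniform(0.5, 3.5, 100)
pred = predict_t1(signals(t1_test))
mape = np.mean(np.abs(pred - t1_test) / t1_test)
```

The same inversion is only well-posed where the two contrasts jointly pin down T1, which is why multiple clinical sequences are combined.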

Explicit Abnormality Extraction for Unsupervised Motion Artifact Reduction in Magnetic Resonance Imaging.

Zhou Y, Li H, Liu J, Kong Z, Huang T, Ahn E, Lv Z, Kim J, Feng DD

pubmed · Jun 1 2025
Motion artifacts compromise the quality of magnetic resonance imaging (MRI) and pose challenges to achieving diagnostic outcomes and image-guided therapies. In recent years, supervised deep learning approaches have emerged as successful solutions for motion artifact reduction (MAR). One disadvantage of these methods is their dependency on acquiring paired sets of motion artifact-corrupted (MA-corrupted) and motion artifact-free (MA-free) MR images for training purposes. Obtaining such image pairs is difficult and therefore limits the application of supervised training. In this paper, we propose a novel UNsupervised Abnormality Extraction Network (UNAEN) to alleviate this problem. Our network is capable of working with unpaired MA-corrupted and MA-free images. It converts the MA-corrupted images to MA-reduced images by extracting abnormalities from the MA-corrupted images using a proposed artifact extractor, which intercepts the residual artifact maps from the MA-corrupted MR images explicitly, and a reconstructor to restore the original input from the MA-reduced images. The performance of UNAEN was assessed by experimenting with various publicly available MRI datasets and comparing them with state-of-the-art methods. The quantitative evaluation demonstrates the superiority of UNAEN over alternative MAR methods and visually exhibits fewer residual artifacts. Our results substantiate the potential of UNAEN as a promising solution applicable in real-world clinical environments, with the capability to enhance diagnostic accuracy and facilitate image-guided therapies.
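UNAEN's decomposition, corrupted image = artifact-reduced image + explicit residual artifact map, can be mimicked with a hand-crafted extractor on a toy 1D signal. Here a notch filter removes a periodic ghost and the removed component plays the role of the artifact map; this shows only the structural idea, not the learned extractor:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 256
clean = np.convolve(rng.standard_normal(n), np.ones(15) / 15, mode="same")  # smooth "anatomy"
ghost = 0.5 * np.cos(2 * np.pi * 40 * np.arange(n) / n)                     # periodic motion ghost
corrupted = clean + ghost

def extract_artifact(img, k=40):
    """Hand-crafted 'artifact extractor': notch out the ghost frequency and
    return the removed residual as the explicit artifact map."""
    F = np.fft.fft(img)
    F_notch = F.copy()
    F_notch[[k, n - k]] = 0
    reduced = np.fft.ifft(F_notch).real
    return img - reduced                      # residual artifact map

artifact_map = extract_artifact(corrupted)
reduced = corrupted - artifact_map
```

Making the artifact map explicit, rather than regressing the clean image directly, is the design choice that lets UNAEN train on unpaired data, since the reconstructor can check that reduced + artifact map still explains the corrupted input.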

IM-Diff: Implicit Multi-Contrast Diffusion Model for Arbitrary Scale MRI Super-Resolution.

Liu L, Zou J, Xu C, Wang K, Lyu J, Xu X, Hu Z, Qin J

pubmed · Jun 1 2025
Diffusion models have garnered significant attention for MRI Super-Resolution (SR) and have achieved promising results. However, existing diffusion-based SR models face two formidable challenges: 1) insufficient exploitation of complementary information from multi-contrast images, which hinders the faithful reconstruction of texture details and anatomical structures; and 2) reliance on fixed magnification factors, such as 2× or 4×, which is impractical for clinical scenarios that require arbitrary scale magnification. To circumvent these issues, this paper introduces IM-Diff, an implicit multi-contrast diffusion model for arbitrary-scale MRI SR, leveraging the merits of both multi-contrast information and the continuous nature of implicit neural representation (INR). Firstly, we propose an innovative hierarchical multi-contrast fusion (HMF) module with reference-aware cross Mamba (RCM) to effectively incorporate target-relevant information from the reference image into the target image, while ensuring a substantial receptive field with computational efficiency. Secondly, we introduce multiple wavelet INR magnification (WINRM) modules into the denoising process by integrating the wavelet implicit neural non-linearity, enabling effective learning of continuous representations of MR images. The involved wavelet activation enhances space-frequency concentration, further bolstering representation accuracy and robustness in INR. Extensive experiments on three public datasets demonstrate the superiority of our method over existing state-of-the-art SR models across various magnification factors.
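At base, a wavelet implicit neural non-linearity is an oscillatory activation with a spatial envelope. A NumPy sketch of a real Gabor-wavelet activation of the kind used in wavelet INRs (parameter values are illustrative; in IM-Diff this non-linearity sits inside the WINRM modules of the denoiser):

```python
import numpy as np

def gabor_activation(x, omega=10.0, s=1.0):
    """Real Gabor wavelet nonlinearity: oscillatory like a sinusoid, but with
    a Gaussian envelope that localizes the response in space."""
    return np.cos(omega * x) * np.exp(-((x / s) ** 2))

x = np.linspace(-5, 5, 1001)
g = gabor_activation(x)
# Unlike a plain sin/cos activation, the response decays away from the
# origin, concentrating energy in both space and frequency.
```

That space-frequency concentration is the property the abstract credits for improved representation accuracy and robustness in the INR.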