IM-Diff: Implicit Multi-Contrast Diffusion Model for Arbitrary Scale MRI Super-Resolution.

Liu L, Zou J, Xu C, Wang K, Lyu J, Xu X, Hu Z, Qin J

Jun 1 2025
Diffusion models have garnered significant attention for MRI Super-Resolution (SR) and have achieved promising results. However, existing diffusion-based SR models face two formidable challenges: 1) insufficient exploitation of complementary information from multi-contrast images, which hinders the faithful reconstruction of texture details and anatomical structures; and 2) reliance on fixed magnification factors, such as 2× or 4×, which is impractical for clinical scenarios that require arbitrary scale magnification. To circumvent these issues, this paper introduces IM-Diff, an implicit multi-contrast diffusion model for arbitrary-scale MRI SR, leveraging the merits of both multi-contrast information and the continuous nature of implicit neural representation (INR). Firstly, we propose an innovative hierarchical multi-contrast fusion (HMF) module with reference-aware cross Mamba (RCM) to effectively incorporate target-relevant information from the reference image into the target image, while ensuring a substantial receptive field with computational efficiency. Secondly, we introduce multiple wavelet INR magnification (WINRM) modules into the denoising process by integrating the wavelet implicit neural non-linearity, enabling effective learning of continuous representations of MR images. The involved wavelet activation enhances space-frequency concentration, further bolstering representation accuracy and robustness in INR. Extensive experiments on three public datasets demonstrate the superiority of our method over existing state-of-the-art SR models across various magnification factors.
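
The wavelet non-linearity inside the WINRM modules is described only at a high level above. As a rough illustration of what a wavelet activation for an INR layer can look like, here is a minimal real-valued Gabor-style activation in PyTorch; the class name and the w0/s0 values are illustrative assumptions, not the IM-Diff implementation:

```python
import torch
import torch.nn as nn

class GaborWaveletActivation(nn.Module):
    """Real Gabor wavelet activation: cos(w0*x) * exp(-(s0*x)^2).

    Hypothetical sketch of a wavelet non-linearity for an INR layer;
    w0/s0 are illustrative, not taken from the IM-Diff paper.
    """
    def __init__(self, w0: float = 10.0, s0: float = 5.0):
        super().__init__()
        self.w0, self.s0 = w0, s0

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.cos(self.w0 * x) * torch.exp(-(self.s0 * x) ** 2)

# A tiny INR block mapping continuous (x, y) coordinates to intensity,
# which is what enables querying the image at arbitrary scales.
inr = nn.Sequential(
    nn.Linear(2, 64), GaborWaveletActivation(),
    nn.Linear(64, 64), GaborWaveletActivation(),
    nn.Linear(64, 1),
)
coords = torch.rand(1024, 2) * 2 - 1   # query points in [-1, 1]^2
intensity = inr(coords)
```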

Lag-Net: Lag correction for cone-beam CT via a convolutional neural network.

Ren C, Kan S, Huang W, Xi Y, Ji X, Chen Y

Jun 1 2025
Due to the presence of charge traps in amorphous silicon flat-panel detectors, lag signals are generated in consecutively captured projections. These signals lead to ghosting in projection images and severe lag artifacts in cone-beam computed tomography (CBCT) reconstructions. Traditional Linear Time-Invariant (LTI) correction needs to measure lag correction factors (LCFs) and may leave residual lag artifacts; this incomplete correction is partly attributable to its neglect of exposure dependency. To measure lag signals more accurately and suppress lag artifacts, we develop a novel hardware correction method. This method requires two scans of the same object, with the operating timing of the CT instrumentation adjusted during the second scan to measure the lag signal from the first. While this hardware correction significantly mitigates lag artifacts, it is complex to implement and imposes high demands on the CT instrumentation. To streamline the process, we introduce a deep learning method called Lag-Net to remove lag signals, using the nearly lag-free results of the hardware correction as training targets for the network. Qualitative and quantitative analyses of experimental results on both simulated and real datasets demonstrate that the deep learning correction significantly outperforms traditional LTI correction in terms of lag artifact suppression and image quality enhancement. Furthermore, the deep learning method achieves reconstruction results comparable to those of the hardware correction while avoiding its operational complexity. The proposed hardware correction method, despite its operational complexity, demonstrates superior artifact suppression compared to the LTI algorithm, particularly under low-exposure conditions. The introduced Lag-Net, trained on the results of the hardware correction method, leverages the end-to-end nature of deep learning to circumvent the intricate operational drawbacks of hardware correction, and its correction efficacy likewise surpasses that of the LTI algorithm in low-exposure scenarios.
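
For context, the LTI baseline the authors compare against is conventionally implemented as a recursive deconvolution with a multi-exponential lag model. A minimal sketch, assuming the lag correction factors (b, tau) are already measured and the impulse response is normalized to unit prompt-frame gain; the values below are placeholders, not calibrated factors:

```python
import numpy as np

def lti_lag_correction(frames, b, tau, dt=1.0):
    """Recursive LTI lag correction for a sequence of detector frames.

    frames : (N, H, W) array of consecutively measured projections
    b, tau : amplitudes and time constants of a multi-exponential lag
             model -- assumed known from a prior LCF measurement.
    Assumes measured[n] = true[n] + lag carried over from earlier frames.
    """
    frames = np.asarray(frames, dtype=float)
    a = np.exp(-dt / np.asarray(tau, dtype=float))    # per-term decay per frame
    state = [np.zeros_like(frames[0]) for _ in b]     # each exponential's memory
    out = np.empty_like(frames)
    for n, y in enumerate(frames):
        trailing = sum(a_k * s for a_k, s in zip(a, state))
        x = y - trailing                              # subtract carried-over lag
        state = [a_k * s + b_k * x for a_k, s, b_k in zip(a, state, b)]
        out[n] = x
    return out

corrected = lti_lag_correction(np.random.rand(10, 4, 4),
                               b=[0.02, 0.005], tau=[3.0, 30.0])
```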

Generative adversarial networks in medical image reconstruction: A systematic literature review.

Hussain J, Båth M, Ivarsson J

Jun 1 2025
Recent advancements in generative adversarial networks (GANs) have demonstrated substantial potential in medical image processing. Despite this progress, reconstructing images from incomplete data remains a challenge, impacting image quality. This systematic literature review explores the use of GANs in enhancing and reconstructing medical imaging data. A document survey of computing literature was conducted using the ACM Digital Library to identify relevant articles from journals and conference proceedings using keyword combinations, such as "generative adversarial networks or generative adversarial network," "medical image or medical imaging," and "image reconstruction." Across the reviewed articles, there were 122 datasets used in 175 instances, 89 top metrics employed 335 times, 10 different tasks with a total count of 173, 31 distinct organs featured in 119 instances, and 18 modalities utilized in 121 instances, collectively depicting significant utilization of GANs in medical imaging. The adaptability and efficacy of GANs were showcased across diverse medical tasks, organs, and modalities, utilizing top public as well as private/synthetic datasets for disease diagnosis, including the identification of conditions such as cancer in different anatomical regions. The study emphasized GANs' increasing integration and adaptability across diverse radiology modalities, showcasing their transformative impact on diagnostic techniques, including cross-modality tasks. The intricate interplay between network size, batch size, and loss function refinement significantly impacts GAN performance, although challenges in training persist. Overall, the study positions GANs as dynamic tools shaping medical imaging, contributing significantly to image quality, training methodologies, and broader medical advancements.
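
Because the review singles out loss function refinement as a key driver of GAN performance, a minimal sketch of the adversarial-plus-fidelity objective commonly used in GAN-based reconstruction may help; the L1 weight of 100 is a typical but illustrative choice, not one prescribed by the review:

```python
import torch
import torch.nn.functional as F

def generator_loss(disc_fake_logits, fake_img, real_img, lam=100.0):
    """Common reconstruction-GAN generator objective: adversarial term
    plus a weighted L1 fidelity term (lam is illustrative)."""
    adv = F.binary_cross_entropy_with_logits(
        disc_fake_logits, torch.ones_like(disc_fake_logits))
    fidelity = F.l1_loss(fake_img, real_img)
    return adv + lam * fidelity

def discriminator_loss(real_logits, fake_logits):
    """Standard discriminator loss: real -> 1, fake -> 0."""
    real = F.binary_cross_entropy_with_logits(
        real_logits, torch.ones_like(real_logits))
    fake = F.binary_cross_entropy_with_logits(
        fake_logits, torch.zeros_like(fake_logits))
    return 0.5 * (real + fake)

# Toy usage with random tensors standing in for images and logits.
fake, real = torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64)
logits = torch.randn(2, 1)
g = generator_loss(logits, fake, real)
d = discriminator_loss(torch.randn(2, 1), logits)
```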

Effect of Deep Learning Image Reconstruction on Image Quality and Pericoronary Fat Attenuation Index.

Mei J, Chen C, Liu R, Ma H

Jun 1 2025
To compare the image quality and fat attenuation index (FAI) of coronary CT angiography (CCTA) at different tube voltages between deep learning image reconstruction (DLIR) and adaptive statistical iterative reconstruction V (ASIR-V). Three hundred one patients who underwent CCTA with automatic tube current modulation were prospectively enrolled and divided into two groups: a 120 kV group and a low tube voltage group. Images were reconstructed using ASIR-V at level 50% (ASIR-V50%) and high-strength DLIR (DLIR-H). In the low tube voltage group, the voltage was selected according to the Chinese BMI classification: 70 kV (BMI < 24 kg/m<sup>2</sup>), 80 kV (24 kg/m<sup>2</sup> ≤ BMI < 28 kg/m<sup>2</sup>), and 100 kV (BMI ≥ 28 kg/m<sup>2</sup>). At each tube voltage, subjective and objective image quality, edge rise distance (ERD), and FAI were compared between the two algorithms; across tube voltages, DLIR-H images were compared for subjective and objective image quality and ERD. Compared with the 120 kV group, DLIR-H image noise in the 70 kV, 80 kV, and 100 kV groups increased by 36%, 25%, and 12%, respectively (all P < 0.001), while contrast-to-noise ratio (CNR), subjective score, and ERD were similar (all P > 0.05). In the 70 kV, 80 kV, 100 kV, and 120 kV groups, compared with ASIR-V50%, DLIR-H image noise decreased by 50%, 53%, 47%, and 38-50%, respectively; CNR, subjective score, and FAI value increased significantly (all P < 0.001); and ERD decreased. Compared with 120 kV, the combination of DLIR-H and low tube voltage maintained image quality. At the same tube voltage, compared with ASIR-V, DLIR-H improved image quality and FAI value.
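
For readers unfamiliar with FAI: it is conventionally computed as the mean attenuation of pericoronary adipose-tissue voxels falling inside the adipose HU window (commonly taken as -190 to -30 HU). A minimal sketch under those assumptions, with ROI extraction left to upstream code:

```python
import numpy as np

def fat_attenuation_index(hu_roi, lo=-190.0, hi=-30.0):
    """Mean attenuation of adipose voxels in a pericoronary ROI.

    hu_roi : array of HU values sampled around the coronary segment
             (the ROI definition, e.g. radial distance equal to the
             vessel diameter, is study-specific and assumed handled
             upstream). The [-190, -30] HU window is the common
             adipose-tissue convention, used here as an assumption.
    """
    fat = hu_roi[(hu_roi >= lo) & (hu_roi <= hi)]
    if fat.size == 0:
        raise ValueError("no adipose-range voxels in ROI")
    return float(fat.mean())

roi = np.random.normal(-80, 40, size=5000)   # synthetic HU samples
print(f"FAI = {fat_attenuation_index(roi):.1f} HU")
```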

An Adaptive SCG-ECG Multimodal Gating Framework for Cardiac CTA.

Ganesh S, Abozeed M, Aziz U, Tridandapani S, Bhatti PT

Jun 1 2025
Cardiovascular disease (CVD) is the leading cause of death worldwide. Coronary artery disease (CAD), a prevalent form of CVD, is typically assessed using catheter coronary angiography (CCA), an invasive, costly procedure with associated risks. While cardiac computed tomography angiography (CTA) presents a less invasive alternative, it suffers from limited temporal resolution, often resulting in motion artifacts that degrade diagnostic quality. Traditional ECG-based gating methods for CTA inadequately capture cardiac mechanical motion. To address this, we propose a novel multimodal approach that enhances CTA imaging by predicting cardiac quiescent periods using seismocardiogram (SCG) and ECG data, integrated through a weighted fusion (WF) approach and artificial neural networks (ANNs). We developed a regression-based ANN framework (r-ANN WF) designed to improve prediction accuracy and reduce computational complexity, and compared it with a classification-based framework (c-ANN WF), ECG gating, and ultrasound (US) data. Our results demonstrate that the r-ANN WF approach improved overall diastolic and systolic cardiac quiescence prediction accuracy by 52.6% compared to ECG-based predictions, using US as the ground truth, with an average prediction time of 4.83 ms. Comparative evaluations based on reconstructed CTA images show that both r-ANN WF and c-ANN WF offer diagnostic quality comparable to US-based gating, underscoring their clinical potential. Additionally, the lower computational complexity of r-ANN WF makes it suitable for real-time applications. This approach could enhance CTA's diagnostic quality, offering a more accurate and efficient method for CVD diagnosis and management.
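
The abstract does not spell out the r-ANN WF architecture; as a hedged sketch of the general idea, per-modality regression networks whose outputs are blended by a learnable fusion weight might look like this (layer sizes and feature dimensions are illustrative assumptions):

```python
import torch
import torch.nn as nn

class QuiescencePredictor(nn.Module):
    """Small regression MLP per modality; sizes are illustrative."""
    def __init__(self, in_features: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_features, 32), nn.ReLU(),
            nn.Linear(32, 1))          # predicted quiescence offset (ms)

    def forward(self, x):
        return self.net(x)

class WeightedFusion(nn.Module):
    """Blend SCG- and ECG-based predictions with a learnable weight."""
    def __init__(self, scg_dim=16, ecg_dim=8):
        super().__init__()
        self.scg_head = QuiescencePredictor(scg_dim)
        self.ecg_head = QuiescencePredictor(ecg_dim)
        self.logit_w = nn.Parameter(torch.zeros(1))  # sigmoid -> fusion weight

    def forward(self, scg_feats, ecg_feats):
        w = torch.sigmoid(self.logit_w)
        return w * self.scg_head(scg_feats) + (1 - w) * self.ecg_head(ecg_feats)

model = WeightedFusion()
pred = model(torch.randn(4, 16), torch.randn(4, 8))  # batch of 4 beats
```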

Adaptive Weighting Based Metal Artifact Reduction in CT Images.

Wang H, Wu Y, Wang Y, Wei D, Wu X, Ma J, Zheng Y

Jun 1 2025
For the metal artifact reduction (MAR) task in computed tomography (CT) imaging, most existing deep-learning-based approaches select a single Hounsfield unit (HU) window followed by a normalization operation to preprocess CT images. However, in practical clinical scenarios, different body tissues and organs are inspected under varying window settings for good contrast, and methods trained on a single fixed window remove metal artifacts insufficiently when transferred to other windows. To alleviate this problem, a few works have proposed reconstructing CT images under multiple window configurations. Although they achieve good reconstruction performance across windows, they directly supervise the learning for each window with equal weighting over the training set. To improve learning flexibility and model generalizability, in this paper we propose an adaptive weighting algorithm, called AdaW, for multiple-window metal artifact reduction, which can be applied to different deep MAR network backbones. Specifically, we first formulate the multiple-window learning task as a bi-level optimization problem. We then derive an adaptive weighting optimization algorithm in which the learning process for MAR under each window is automatically weighted via a learning-to-learn paradigm based on the training and validation sets; this design is substantiated through theoretical analysis. Experimental comparisons on five datasets covering different body sites, using different network backbones, comprehensively validate the effectiveness of AdaW in improving generalization performance, as well as its broad applicability. We will release the code at https://github.com/hongwang01/AdaW.
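
The bi-level formulation is described only abstractly. As a hedged stand-in, the sketch below uses a common first-order approximation of bi-level weighting: a window's weight is raised when its training gradient aligns with the validation gradient, then a weighted training step is taken. This is one plausible realization of the learning-to-learn idea, not the released AdaW algorithm:

```python
import torch
import torch.nn.functional as F

def flat_grad(loss, params):
    """Flatten the gradient of `loss` w.r.t. `params` into one vector."""
    grads = torch.autograd.grad(loss, params)
    return torch.cat([g.reshape(-1) for g in grads])

def adaw_like_step(model, opt, train_batches, val_batch, log_w, meta_lr=0.1):
    """One sketch iteration of adaptively weighted multi-window training.

    train_batches : list of (input, target) pairs, one per HU window
    log_w         : 1-D tensor of per-window log-weights (meta variables)
    """
    params = [p for p in model.parameters() if p.requires_grad]
    xv, yv = val_batch
    g_val = flat_grad(F.mse_loss(model(xv), yv), params)

    # Meta update: reward windows whose training gradient points in the
    # same direction as the validation gradient (first-order surrogate
    # for the bi-level hypergradient).
    sims = [F.cosine_similarity(
                flat_grad(F.mse_loss(model(x), y), params), g_val, dim=0)
            for x, y in train_batches]
    with torch.no_grad():
        log_w += meta_lr * torch.stack(sims)

    # Inner update: weighted training step with the refreshed weights.
    weights = torch.softmax(log_w, dim=0)
    loss = sum(w * F.mse_loss(model(x), y)
               for w, (x, y) in zip(weights, train_batches))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage: a linear model, three hypothetical HU windows.
model = torch.nn.Linear(8, 8)
opt = torch.optim.SGD(model.parameters(), lr=1e-3)
log_w = torch.zeros(3)
batches = [(torch.randn(4, 8), torch.randn(4, 8)) for _ in range(3)]
adaw_like_step(model, opt, batches, (torch.randn(4, 8), torch.randn(4, 8)), log_w)
```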

Fine-Tuning Deep Learning Model for Quantitative Knee Joint Mapping With MR Fingerprinting and Its Comparison to Dictionary Matching Method.

Zhang X, de Moura HL, Monga A, Zibetti MVW, Regatte RR

Jun 1 2025
Magnetic resonance fingerprinting (MRF), an emerging versatile and noninvasive imaging technique, provides simultaneous quantification of multiple quantitative MRI parameters, which have been used to detect changes in cartilage composition and structure in osteoarthritis. Deep learning (DL)-based methods for quantification mapping in MRF overcome the memory constraints and offer faster processing than the conventional dictionary matching (DM) method. However, limited attention has been given to the fine-tuning of neural networks (NNs) in DL and to fair comparison with DM. In this study, we investigate the impact of training parameter choices on NN performance and compare the fine-tuned NN with DM for multiparametric mapping in MRF. Our approach includes optimizing NN hyperparameters, analyzing the singular value decomposition (SVD) components of MRF data, and optimizing the DM method. We conducted experiments on synthetic data, the NIST/ISMRM MRI system phantom with ground truth, and in vivo knee data from 14 healthy volunteers. The results demonstrate the critical importance of selecting appropriate training parameters, as these significantly affect NN performance. The findings also show that NNs improve the accuracy and robustness of T<sub>1</sub>, T<sub>2</sub>, and T<sub>1ρ</sub> mappings compared to DM in synthetic datasets. For in vivo knee data, the NN achieved comparable results for T<sub>1</sub>, with slightly lower T<sub>2</sub> and slightly higher T<sub>1ρ</sub> measurements compared to DM. In conclusion, the fine-tuned NN can be used to increase accuracy and robustness for multiparametric quantitative mapping from MRF of the knee joint.
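
As background on the DM baseline: conventional MRF matching correlates each measured (often SVD-compressed) fingerprint against a precomputed dictionary and reads off the best-matching atom's parameters. A minimal numpy sketch, with dictionary simulation omitted:

```python
import numpy as np

def dictionary_match(fingerprints, dictionary, params):
    """Conventional MRF dictionary matching via normalized inner products.

    fingerprints : (V, T) measured signal evolutions (one per voxel),
                   possibly SVD-compressed to T coefficients
    dictionary   : (D, T) simulated signal evolutions
    params       : (D, P) tissue parameters (e.g., T1, T2, T1rho) per atom
    Returns (V, P) parameter maps from the best-correlated atoms.
    """
    d = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)
    f = fingerprints / np.linalg.norm(fingerprints, axis=1, keepdims=True)
    best = np.argmax(np.abs(f @ d.conj().T), axis=1)  # max correlation per voxel
    return params[best]

# Toy example: 3 noisy voxels matched against a 1000-atom dictionary.
rng = np.random.default_rng(0)
dic = rng.standard_normal((1000, 50))
maps = dictionary_match(dic[[3, 41, 900]] + 0.01 * rng.standard_normal((3, 50)),
                        dic, params=rng.uniform(size=(1000, 3)))
```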

WAND: Wavelet Analysis-Based Neural Decomposition of MRS Signals for Artifact Removal.

Merkofer JP, van de Sande DMJ, Amirrajab S, Min Nam K, van Sloun RJG, Bhogal AA

Jun 1 2025
Accurate quantification of metabolites in magnetic resonance spectroscopy (MRS) is challenged by low signal-to-noise ratio (SNR), overlapping metabolites, and various artifacts. Particularly, unknown and unparameterized baseline effects obscure the quantification of low-concentration metabolites, limiting MRS reliability. This paper introduces wavelet analysis-based neural decomposition (WAND), a novel data-driven method designed to decompose MRS signals into their constituent components: metabolite-specific signals, baseline, and artifacts. WAND takes advantage of the enhanced separability of these components within the wavelet domain. The method employs a neural network, specifically a U-Net architecture, trained to predict masks for wavelet coefficients obtained through the continuous wavelet transform. These masks effectively isolate desired signal components in the wavelet domain, which are then inverse-transformed to obtain separated signals. Notably, an artifact mask is created by inverting the sum of all known signal masks, enabling WAND to capture and remove even unpredictable artifacts. The effectiveness of WAND in achieving accurate decomposition is demonstrated through numerical evaluations using simulated spectra. Furthermore, WAND's artifact removal capabilities significantly enhance the quantification accuracy of linear combination model fitting. The method's robustness is further validated using data from the 2016 MRS Fitting Challenge and in vivo experiments.
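
To make the masking idea concrete, the sketch below applies per-component masks to wavelet coefficients and inverts the transform. It substitutes a discrete wavelet transform (pywt.wavedec/waverec) for the paper's continuous transform so the example stays runnable, and random masks stand in for the U-Net's predictions:

```python
import numpy as np
import pywt

def decompose_with_masks(signal, masks, wavelet="db4", level=4):
    """Split a 1-D signal into components via masked wavelet coefficients.

    masks : list of per-component masks, each a list of arrays matching
            the wavedec coefficient shapes (random here; WAND predicts
            them with a U-Net on continuous-wavelet coefficients).
    An extra residual/artifact component is formed from 1 - sum(masks),
    mirroring WAND's inverted artifact mask.
    """
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    components = [pywt.waverec([c * m for c, m in zip(coeffs, mask)], wavelet)
                  for mask in masks]
    residual = [1.0 - sum(m[i] for m in masks) for i in range(len(coeffs))]
    components.append(
        pywt.waverec([c * m for c, m in zip(coeffs, residual)], wavelet))
    return components

rng = np.random.default_rng(1)
sig = np.sin(np.linspace(0, 20, 512)) + 0.1 * rng.standard_normal(512)
shapes = [c.shape for c in pywt.wavedec(sig, "db4", level=4)]
masks = [[rng.uniform(size=s) for s in shapes] for _ in range(2)]
metabolite, baseline, artifact = decompose_with_masks(sig, masks)
```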

Semi-supervised spatial-frequency transformer for metal artifact reduction in maxillofacial CT and evaluation with intraoral scan.

Li Y, Ma C, Li Z, Wang Z, Han J, Shan H, Liu J

Jun 1 2025
To develop a semi-supervised domain adaptation technique for metal artifact reduction (MAR) with a spatial-frequency transformer (SFTrans) model (Semi-SFTrans), and to quantitatively compare its performance with supervised models (Sup-SFTrans and ResUNet) and the traditional linear interpolation (LI) MAR method in oral and maxillofacial CT. Supervised models, including Sup-SFTrans and a state-of-the-art model termed ResUNet, were trained with paired simulated CT images, while the semi-supervised model, Semi-SFTrans, was trained with both paired simulated and unpaired clinical CT images. For evaluation on the simulated data, we calculated the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) of the images corrected by the four methods: LI, ResUNet, Sup-SFTrans, and Semi-SFTrans. For evaluation on the clinical data, we collected twenty-two clinical cases with real metal artifacts and the corresponding intraoral scan data. Three radiologists visually assessed the severity of artifacts using Likert scales on the original, Sup-SFTrans-corrected, and Semi-SFTrans-corrected images. Quantitative MAR evaluation was conducted by measuring mean Hounsfield Unit (HU) values, standard deviations, and Signal-to-Noise Ratios (SNRs) across regions of interest (ROIs) such as the tongue, bilateral buccal regions, lips, and bilateral masseter muscles, using paired t-tests and Wilcoxon signed-rank tests. Further, teeth integrity in the corrected images was assessed by comparing teeth segmentation results from the corrected images against ground-truth segmentations derived from registered intraoral scan data, using Dice Score and Hausdorff Distance. Sup-SFTrans outperformed LI, ResUNet, and Semi-SFTrans on the simulated dataset. Visual assessments from the radiologists showed average scores of 2.02 ± 0.91 for original CT, 4.46 ± 0.51 for Semi-SFTrans CT, and 3.64 ± 0.90 for Sup-SFTrans CT, with intraclass correlation coefficients (ICCs) > 0.8 for all groups and p < 0.001 between groups. On soft tissue, both Semi-SFTrans and Sup-SFTrans significantly reduced metal artifacts in the tongue (p < 0.001), lips, bilateral buccal regions, and masseter muscle areas (p < 0.05). Semi-SFTrans achieved superior metal artifact reduction over Sup-SFTrans in all ROIs (p < 0.001). SNR results indicated significant differences between Semi-SFTrans and Sup-SFTrans in the tongue (p = 0.0391), bilateral buccal regions (p = 0.0067), lips (p = 0.0208), and bilateral masseter muscle areas (p = 0.0031). Notably, Semi-SFTrans preserved teeth integrity better than Sup-SFTrans (Dice Score: p < 0.001; Hausdorff Distance: p = 0.0022). The semi-supervised MAR model, Semi-SFTrans, demonstrated superior metal artifact reduction performance over its supervised counterparts on real dental CT images.
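
For the teeth-integrity metrics, here is a minimal sketch of Dice Score and a symmetrized Hausdorff Distance on binary masks, using scipy's directed_hausdorff; real evaluations would additionally scale by voxel spacing and operate on segmentation surfaces:

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_score(pred, gt):
    """Dice overlap between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def hausdorff_distance(pred, gt):
    """Symmetric Hausdorff distance between mask point sets (in voxel
    units; max of the two directed distances)."""
    p, g = np.argwhere(pred), np.argwhere(gt)
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])

pred = np.zeros((64, 64), dtype=bool); pred[20:40, 20:40] = True
gt   = np.zeros((64, 64), dtype=bool); gt[22:42, 21:41] = True
print(dice_score(pred, gt), hausdorff_distance(pred, gt))
```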

Fast aberration correction in 3D transcranial photoacoustic computed tomography via a learning-based image reconstruction method.

Huang HK, Kuo J, Zhang Y, Aborahama Y, Cui M, Sastry K, Park S, Villa U, Wang LV, Anastasio MA

Jun 1 2025
Transcranial photoacoustic computed tomography (PACT) holds significant potential as a neuroimaging modality. However, compensating for skull-induced aberrations in reconstructed images remains a challenge. Although optimization-based image reconstruction methods (OBRMs) can account for the relevant wave physics, they are computationally demanding and generally require accurate estimates of the skull's viscoelastic parameters. To circumvent these issues, a learning-based image reconstruction method was investigated for three-dimensional (3D) transcranial PACT. The method was systematically assessed in virtual imaging studies that involved stochastic 3D numerical head phantoms and applied to experimental data acquired by use of a physical head phantom that involved a human skull. The results demonstrated that the learning-based method yielded accurate images and exhibited robustness to errors in the assumed skull properties, while substantially reducing computational times compared to an OBRM. To the best of our knowledge, this is the first demonstration of a learned image reconstruction method for 3D transcranial PACT.