Lag-Net: Lag correction for cone-beam CT via a convolutional neural network.

Ren C, Kan S, Huang W, Xi Y, Ji X, Chen Y

PubMed · Jun 1, 2025
Due to the presence of charge traps in amorphous silicon flat-panel detectors, lag signals are generated in consecutively captured projections. These signals lead to ghosting in projection images and severe lag artifacts in cone-beam computed tomography (CBCT) reconstructions. Traditional linear time-invariant (LTI) correction needs to measure lag correction factors (LCF) and may leave residual lag artifacts; this incomplete correction is partly attributable to the lack of consideration of exposure dependency. To measure lag signals more accurately and suppress lag artifacts, we develop a novel hardware correction method. This method requires two scans of the same object, with the operating timing of the CT instrumentation adjusted during the second scan to measure the lag signal from the first. While this hardware correction significantly mitigates lag artifacts, it is complex to implement and imposes high demands on the CT instrumentation. To streamline the process, we introduce a deep learning method called Lag-Net to remove lag signals, using the nearly lag-free results of hardware correction as training targets for the network. Qualitative and quantitative analyses of experimental results on both simulated and real datasets demonstrate that deep learning correction significantly outperforms traditional LTI correction in lag artifact suppression and image quality enhancement. The deep learning method also achieves reconstruction results comparable to those of hardware correction while avoiding its operational complexity. The proposed hardware correction method, despite its operational complexity, demonstrates superior artifact suppression compared with the LTI algorithm, particularly under low-exposure conditions. Lag-Net, which uses the results of the hardware correction method as training targets, leverages the end-to-end nature of deep learning to circumvent the intricate operational drawbacks of hardware correction, and its correction efficacy surpasses that of the LTI algorithm in low-exposure scenarios.
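
For context on the LTI baseline: detector lag is commonly modeled as a multi-exponential impulse response whose lag fractions and decay constants are the measured LCFs, and correction then amounts to a recursive deconvolution across frames. A minimal sketch of that idea (the two-term kernel, parameter values, and names are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def lti_lag_correction(frames, fractions=(0.03, 0.01), decays=(0.6, 0.95)):
    """Recursive multi-exponential lag deconvolution (illustrative sketch).

    frames:    (num_frames, H, W) measured projections
    fractions: lag fraction b_n of each exponential term (measured LCFs)
    decays:    per-frame decay a_n = exp(-dt / tau_n) of each term
    Model: y_k = x_k + sum_n s_{n,k}, with s_{n,k} = a_n * (s_{n,k-1} + b_n * x_{k-1}).
    """
    b = np.asarray(fractions)[:, None, None]
    a = np.asarray(decays)[:, None, None]
    states = np.zeros((len(fractions),) + frames.shape[1:])
    corrected = np.empty_like(frames, dtype=np.float64)
    for k, y in enumerate(frames):
        x = y - states.sum(axis=0)       # subtract trailing lag from earlier frames
        corrected[k] = x
        states = a * (states + b * x)    # decay old states, add this frame's lag
    return corrected
```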

Combining Deep Data-Driven and Physics-Inspired Learning for Shear Wave Speed Estimation in Ultrasound Elastography.

Tehrani AKZ, Schoen S, Candel I, Gu Y, Guo P, Thomenius K, Pierce TT, Wang M, Tadross R, Washburn M, Rivaz H, Samir AE

PubMed · Jun 1, 2025
Shear wave elastography (SWE) provides quantitative markers for tissue characterization by measuring the shear wave speed (SWS), which reflects tissue stiffness. SWE uses an acoustic radiation force pulse sequence to generate shear waves that propagate laterally through tissue with transient displacements. These waves travel perpendicular to the applied force, and their displacements are tracked using high-frame-rate ultrasound. Estimating the SWS map involves two main steps: speckle tracking and SWS estimation. Speckle tracking calculates particle velocity by measuring RF/IQ data displacement between adjacent firings, while SWS estimation methods typically compare the particle velocity profiles of samples that are laterally a few millimeters apart. Deep learning (DL) methods have gained attention for SWS estimation, often relying on supervised training with simulated data. However, these methods may struggle with real-world data, which can differ significantly from the simulated training data, potentially leading to artifacts in the estimated SWS map. To address this challenge, we propose a physics-inspired learning approach that utilizes real data without known SWS values. Our method employs an adaptive unsupervised loss function, allowing the network to train on real, noisy data to minimize artifacts and improve robustness. We validate our approach using experimental phantom data and in vivo liver data from two human subjects, demonstrating enhanced accuracy and reliability in SWS estimation compared with conventional and supervised methods. This hybrid approach leverages the strengths of both data-driven and physics-inspired learning, offering a promising solution for more accurate and robust SWS mapping in clinical applications.
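
As a reference point for the conventional estimator described above: SWS can be obtained by cross-correlating the particle-velocity profiles of two laterally separated tracking locations and converting the peak delay into a speed. A minimal time-of-flight sketch (function and variable names are illustrative):

```python
import numpy as np

def sws_time_of_flight(v_near, v_far, dx_mm, prf_hz):
    """Estimate shear wave speed from two particle-velocity time profiles.

    v_near, v_far: velocity vs. slow time at two lateral positions (same length)
    dx_mm:         lateral separation of the positions in millimeters
    prf_hz:        tracking pulse repetition frequency (slow-time sample rate)
    """
    n = len(v_near)
    xc = np.correlate(v_far - v_far.mean(), v_near - v_near.mean(), mode="full")
    delay_samples = np.argmax(xc) - (n - 1)  # v_far lags v_near by this many samples
    dt = delay_samples / prf_hz
    return (dx_mm / 1000.0) / dt             # shear wave speed in m/s
```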

Fine-Tuning Deep Learning Model for Quantitative Knee Joint Mapping With MR Fingerprinting and Its Comparison to Dictionary Matching Method: Fine-Tuning Deep Learning Model for Quantitative MRF.

Zhang X, de Moura HL, Monga A, Zibetti MVW, Regatte RR

PubMed · Jun 1, 2025
Magnetic resonance fingerprinting (MRF), an emerging versatile and noninvasive imaging technique, provides simultaneous quantification of multiple quantitative MRI parameters, which have been used to detect changes in cartilage composition and structure in osteoarthritis. Deep learning (DL)-based methods for quantification mapping in MRF overcome memory constraints and offer faster processing than the conventional dictionary matching (DM) method. However, limited attention has been given to the fine-tuning of neural networks (NNs) in DL and to fair comparison with DM. In this study, we investigate the impact of training parameter choices on NN performance and compare the fine-tuned NN with DM for multiparametric mapping in MRF. Our approach includes optimizing NN hyperparameters, analyzing the singular value decomposition (SVD) components of MRF data, and optimizing the DM method. We conducted experiments on synthetic data, the NIST/ISMRM MRI system phantom with ground truth, and in vivo knee data from 14 healthy volunteers. The results demonstrate the critical importance of selecting appropriate training parameters, as these significantly affect NN performance. The findings also show that NNs improve the accuracy and robustness of T1, T2, and T1ρ mappings compared to DM in synthetic datasets. For in vivo knee data, the NN achieved comparable results for T1, with slightly lower T2 and slightly higher T1ρ measurements compared to DM. In conclusion, the fine-tuned NN can be used to increase accuracy and robustness for multiparametric quantitative mapping from MRF of the knee joint.
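
For readers unfamiliar with the DM baseline: each voxel's fingerprint is matched to the dictionary atom with the highest normalized inner product, and that atom's generating parameters are read off. A sketch in an SVD-compressed subspace, echoing the SVD analysis mentioned above (shapes and names are assumptions):

```python
import numpy as np

def dictionary_match(fingerprints, dictionary, params, rank=10):
    """MRF dictionary matching in an SVD-compressed subspace (sketch).

    fingerprints: (n_voxels, T) measured signal evolutions
    dictionary:   (n_atoms, T) simulated signal evolutions
    params:       (n_atoms, n_maps), e.g. columns for T1, T2, T1rho
    """
    # Project dictionary and data onto the top `rank` right singular vectors.
    _, _, vt = np.linalg.svd(dictionary, full_matrices=False)
    basis = vt[:rank].conj().T                    # (T, rank)
    d = dictionary @ basis
    s = fingerprints @ basis
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    s /= np.linalg.norm(s, axis=1, keepdims=True)
    best = np.abs(s @ d.conj().T).argmax(axis=1)  # max normalized inner product
    return params[best]                           # (n_voxels, n_maps)
```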

A CT-free deep-learning-based attenuation and scatter correction for copper-64 PET in different time-point scans.

Adeli Z, Hosseini SA, Salimi Y, Vahidfar N, Sheikhzadeh P

PubMed · Jun 1, 2025
This study aimed to develop and evaluate a deep learning model for attenuation and scatter correction in whole-body 64Cu-based PET imaging. A SwinUNETR model was implemented using the MONAI framework. Whole-body PET-nonAC and PET-CTAC image pairs were used for training, with PET-nonAC as the input and PET-CTAC as the output. Because of the limited number of 64Cu-based PET/CT images, a model pre-trained on 51 Ga-PSMA PET images was fine-tuned on 15 64Cu-based PET images via transfer learning. The model was trained without freezing layers, adapting the learned features to the 64Cu-based dataset. For testing, six additional 64Cu-based PET images were used, representing 1-h, 12-h, and 48-h time points, with two images per group. The model performed best at the 12-h time point, with an MSE of 0.002 ± 0.0004 SUV², a PSNR of 43.14 ± 0.08 dB, and an SSIM of 0.981 ± 0.002. At 48 h, accuracy slightly decreased (MSE = 0.036 ± 0.034 SUV²), but image quality remained high (PSNR = 44.49 ± 1.09 dB, SSIM = 0.981 ± 0.006). At 1 h, the model also showed strong results (MSE = 0.024 ± 0.002 SUV², PSNR = 45.89 ± 5.23 dB, SSIM = 0.984 ± 0.005), demonstrating consistency across time points. Despite the limited size of the training dataset, fine-tuning from a previously pre-trained model yielded acceptable performance. The results demonstrate that the proposed deep learning model can effectively generate PET-DLAC images that closely resemble PET-CTAC images, with only minor errors.
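
A minimal sketch of the transfer-learning setup as described, assuming MONAI's SwinUNETR; the checkpoint path, loader, loss, and hyperparameters below are placeholders, since the abstract does not specify them:

```python
import torch
from monai.networks.nets import SwinUNETR

# Stand-in for a DataLoader yielding paired (PET-nonAC, PET-CTAC) patches.
cu64_pairs = [(torch.randn(1, 1, 96, 96, 96), torch.randn(1, 1, 96, 96, 96))]

model = SwinUNETR(img_size=(96, 96, 96), in_channels=1, out_channels=1)
state = torch.load("swinunetr_ga_psma.pt")  # hypothetical Ga-PSMA checkpoint
model.load_state_dict(state)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)  # assumed settings
loss_fn = torch.nn.L1Loss()                                 # assumed voxelwise loss

model.train()  # all layers remain trainable: no freezing, per the abstract
for pet_nonac, pet_ctac in cu64_pairs:
    optimizer.zero_grad()
    loss = loss_fn(model(pet_nonac), pet_ctac)
    loss.backward()
    optimizer.step()
```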

Semi-supervised spatial-frequency transformer for metal artifact reduction in maxillofacial CT and evaluation with intraoral scan.

Li Y, Ma C, Li Z, Wang Z, Han J, Shan H, Liu J

PubMed · Jun 1, 2025
To develop a semi-supervised domain adaptation technique for metal artifact reduction (MAR) with a spatial-frequency transformer (SFTrans) model (Semi-SFTrans), and to quantitatively compare its performance with supervised models (Sup-SFTrans and ResUNet) and the traditional linear interpolation (LI) MAR method in oral and maxillofacial CT. Supervised models, including Sup-SFTrans and a state-of-the-art model termed ResUNet, were trained with paired simulated CT images, while the semi-supervised model, Semi-SFTrans, was trained with both paired simulated and unpaired clinical CT images. For evaluation on the simulated data, we calculated the Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index Measure (SSIM) on the images corrected by the four methods: LI, ResUNet, Sup-SFTrans, and Semi-SFTrans. For evaluation on the clinical data, we collected twenty-two clinical cases with real metal artifacts, along with the corresponding intraoral scan data. Three radiologists visually assessed the severity of artifacts using Likert scales on the original, Sup-SFTrans-corrected, and Semi-SFTrans-corrected images. Quantitative MAR evaluation was conducted by measuring mean Hounsfield Unit (HU) values, standard deviations, and Signal-to-Noise Ratios (SNRs) across Regions of Interest (ROIs) such as the tongue, bilateral buccal regions, lips, and bilateral masseter muscles, using paired t-tests and Wilcoxon signed-rank tests. Furthermore, teeth integrity in the corrected images was assessed by comparing teeth segmentation results from the corrected images against the ground-truth segmentation derived from registered intraoral scan data, using the Dice Score and Hausdorff Distance. Sup-SFTrans outperformed LI, ResUNet, and Semi-SFTrans on the simulated dataset. Visual assessments from the radiologists showed average scores of 2.02 ± 0.91 for original CT, 4.46 ± 0.51 for Semi-SFTrans CT, and 3.64 ± 0.90 for Sup-SFTrans CT, with intraclass correlation coefficients (ICCs) > 0.8 for all groups and p < 0.001 between groups. On soft tissue, both Semi-SFTrans and Sup-SFTrans significantly reduced metal artifacts in the tongue (p < 0.001), lips, bilateral buccal regions, and masseter muscle areas (p < 0.05). Semi-SFTrans achieved superior metal artifact reduction to Sup-SFTrans in all ROIs (p < 0.001). SNR results indicated significant differences between Semi-SFTrans and Sup-SFTrans in the tongue (p = 0.0391), bilateral buccal regions (p = 0.0067), lips (p = 0.0208), and bilateral masseter muscle areas (p = 0.0031). Notably, Semi-SFTrans demonstrated better teeth-integrity preservation than Sup-SFTrans (Dice Score: p < 0.001; Hausdorff Distance: p = 0.0022). The semi-supervised MAR model, Semi-SFTrans, demonstrated superior metal artifact reduction over its supervised counterparts in real dental CT images.
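
For reference, the LI baseline compared above replaces metal-corrupted bins in each sinogram view by linear interpolation from uncorrupted neighbors before reconstruction. A minimal sketch (in practice the metal trace comes from forward-projecting a thresholded metal mask):

```python
import numpy as np

def li_mar(sinogram, metal_trace):
    """Linear-interpolation MAR on a (views, detectors) sinogram (sketch).

    metal_trace: boolean array of the same shape, True where rays hit metal.
    """
    out = sinogram.astype(np.float64, copy=True)
    cols = np.arange(sinogram.shape[1])
    for i in range(sinogram.shape[0]):
        bad = metal_trace[i]
        if bad.any() and not bad.all():
            # Fill corrupted bins from the nearest clean bins in this view.
            out[i, bad] = np.interp(cols[bad], cols[~bad], sinogram[i, ~bad])
    return out  # reconstruct from `out`, then reinsert the metal if desired
```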

WAND: Wavelet Analysis-Based Neural Decomposition of MRS Signals for Artifact Removal.

Merkofer JP, van de Sande DMJ, Amirrajab S, Min Nam K, van Sloun RJG, Bhogal AA

PubMed · Jun 1, 2025
Accurate quantification of metabolites in magnetic resonance spectroscopy (MRS) is challenged by low signal-to-noise ratio (SNR), overlapping metabolites, and various artifacts. Particularly, unknown and unparameterized baseline effects obscure the quantification of low-concentration metabolites, limiting MRS reliability. This paper introduces wavelet analysis-based neural decomposition (WAND), a novel data-driven method designed to decompose MRS signals into their constituent components: metabolite-specific signals, baseline, and artifacts. WAND takes advantage of the enhanced separability of these components within the wavelet domain. The method employs a neural network, specifically a U-Net architecture, trained to predict masks for wavelet coefficients obtained through the continuous wavelet transform. These masks effectively isolate desired signal components in the wavelet domain, which are then inverse-transformed to obtain separated signals. Notably, an artifact mask is created by inverting the sum of all known signal masks, enabling WAND to capture and remove even unpredictable artifacts. The effectiveness of WAND in achieving accurate decomposition is demonstrated through numerical evaluations using simulated spectra. Furthermore, WAND's artifact removal capabilities significantly enhance the quantification accuracy of linear combination model fitting. The method's robustness is further validated using data from the 2016 MRS Fitting Challenge and in vivo experiments.
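For intuition, the masking step can be sketched as follows: predicted masks select wavelet coefficients per component, the artifact mask is the inverted sum of the known-signal masks, and each masked coefficient map is transformed back to the time domain. This sketch uses PyWavelets' CWT and a crude sum-over-scales inverse (the paper's exact reconstruction may differ), with all names illustrative:

```python
import numpy as np
import pywt

def wand_masking_step(signal, masks, scales=np.arange(1, 65), wavelet="morl"):
    """Apply per-component masks to CWT coefficients and invert (sketch).

    signal: (T,) real-valued MRS signal (one channel of the complex FID)
    masks:  dict name -> (n_scales, T) mask in [0, 1], e.g. one per metabolite
            plus a baseline mask, as predicted by the U-Net
    """
    coeffs, _ = pywt.cwt(signal, scales, wavelet)  # (n_scales, T)
    # Anything claimed by no known mask is attributed to artifacts.
    masks = dict(masks)
    masks["artifact"] = np.clip(1.0 - sum(masks.values()), 0.0, 1.0)
    # Crude inverse: average masked coefficients over scales (up to a constant).
    return {name: (m * coeffs).mean(axis=0) for name, m in masks.items()}
```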

Deep learning-based MRI reconstruction with Artificial Fourier Transform Network (AFTNet).

Yang Y, Zhang Y, Li Z, Tian JS, Dagommer M, Guo J

PubMed · Jun 1, 2025
Deep complex-valued neural networks (CVNNs) provide a powerful way to leverage complex number operations and representations and have succeeded in several phase-based applications. However, previous networks have not fully explored the impact of complex-valued networks in the frequency domain. Here, we introduce a unified complex-valued deep learning framework, the Artificial Fourier Transform Network (AFTNet), which combines domain-manifold learning and CVNNs. AFTNet can be readily used to solve image inverse problems involving domain transformation, especially accelerated magnetic resonance imaging (MRI) reconstruction and other applications. While conventional methods typically utilize magnitude images or treat the real and imaginary components of k-space data as separate channels, our approach directly processes raw k-space data in the frequency domain using complex-valued operations. This allows a mapping between the frequency (k-space) and image domains to be determined through cross-domain learning. We show that AFTNet achieves superior accelerated MRI reconstruction compared to existing approaches. Furthermore, our approach can be applied to various tasks, such as denoised magnetic resonance spectroscopy (MRS) reconstruction, and to datasets with various contrasts. The AFTNet presented here is a valuable preprocessing component for different preclinical studies and provides an innovative alternative for solving inverse problems in imaging and spectroscopy. The code is available at: https://github.com/yanting-yang/AFT-Net.
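
The two central ideas, complex-valued operations and a learned mapping from k-space to the image domain, can be sketched in a few lines of PyTorch. This toy module is an assumption-laden illustration, not AFTNet's actual architecture:

```python
import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    """Complex convolution via the usual (real, imag) decomposition:
    (A + iB)(x + iy) = (Ax - By) + i(Ay + Bx)."""
    def __init__(self, ch_in, ch_out, k=3):
        super().__init__()
        self.re = nn.Conv2d(ch_in, ch_out, k, padding=k // 2)
        self.im = nn.Conv2d(ch_in, ch_out, k, padding=k // 2)

    def forward(self, z):
        x, y = z.real, z.imag
        return torch.complex(self.re(x) - self.im(y), self.re(y) + self.im(x))

class KSpaceToImage(nn.Module):
    """Toy pipeline: refine raw complex k-space with a complex convolution,
    then map to the image domain with an inverse FFT."""
    def __init__(self):
        super().__init__()
        self.kspace_net = ComplexConv2d(1, 1)

    def forward(self, kspace):  # (B, 1, H, W), complex dtype, center-shifted
        k = self.kspace_net(kspace)
        return torch.fft.ifft2(torch.fft.ifftshift(k, dim=(-2, -1)))
```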

Deep Learning in Knee MRI: A Prospective Study to Enhance Efficiency, Diagnostic Confidence and Sustainability.

Reschke P, Gotta J, Gruenewald LD, Bachir AA, Strecker R, Nickel D, Booz C, Martin SS, Scholtz JE, D'Angelo T, Dahm D, Solim LA, Konrad P, Mahmoudi S, Bernatz S, Al-Saleh S, Hong QAL, Sommer CM, Eichler K, Vogl TJ, Haberkorn SM, Koch V

PubMed · Jun 1, 2025
The objective of this study was to evaluate a combination of deep learning (DL)-reconstructed parallel acquisition technique (PAT) and simultaneous multislice (SMS) acceleration imaging in comparison to conventional knee imaging. Adults undergoing knee magnetic resonance imaging (MRI) with DL-enhanced acquisitions were prospectively analyzed from December 2023 to April 2024. The participants received T1-weighted pulse sequences without fat saturation and fat-suppressed PD-weighted TSE pulse sequences, using conventional two-fold PAT (P2) and either DL-enhanced four-fold PAT (P4) or a combination of DL-enhanced four-fold PAT with two-fold SMS acceleration (P4S2). Three independent readers assessed image quality, signal-to-noise ratio (SNR), contrast-to-noise ratio (CNR), and radiomics features. A total of 34 participants (mean age, 45 ± 17 years; 14 women) who underwent P4S2, P4, and P2 imaging were included. Both P4S2 and P4 demonstrated higher CNR and SNR values than P2 (P < .001). P4 was diagnostically inferior to P2 only in the visualization of cartilage damage (P < .005), while P4S2 consistently outperformed P2 in anatomical delineation across all evaluated structures and raters (P < .05). Radiomics analysis revealed significant differences in contrast and gray-level characteristics among P2, P4, and P4S2 (P < .05). P4 reduced acquisition time by 31% and P4S2 by 41% compared with P2 (P < .05). P4S2 DL acceleration offers significant advances over P4 and P2 in knee MRI, combining superior image quality and improved anatomical delineation with a substantial time reduction. Its improvements in anatomical delineation, energy consumption, and workforce optimization make P4S2 a significant step forward.
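
The radiomics comparison mentioned above rests on texture statistics; gray-level co-occurrence contrast is one such feature and can be computed as in this sketch (an assumed feature choice for illustration, not necessarily the authors' pipeline):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_contrast(image_u8):
    """Mean GLCM contrast over two offsets for an 8-bit grayscale image."""
    glcm = graycomatrix(image_u8, distances=[1], angles=[0, np.pi / 2],
                        levels=256, symmetric=True, normed=True)
    return float(graycoprops(glcm, "contrast").mean())
```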

Accelerated High-resolution T1- and T2-weighted Breast MRI with Deep Learning Super-resolution Reconstruction.

Mesropyan N, Katemann C, Leutner C, Sommer A, Isaak A, Weber OM, Peeters JM, Dell T, Bischoff L, Kuetting D, Pieper CC, Lakghomi A, Luetkens JA

PubMed · Jun 1, 2025
To assess the performance of an industry-developed deep learning (DL) algorithm for reconstructing low-resolution Cartesian T1-weighted dynamic contrast-enhanced (T1w) and T2-weighted turbo-spin-echo (T2w) sequences and compare them to standard sequences. Female patients with indications for breast MRI were included in this prospective study. The study protocol at 1.5 Tesla MRI included T1w and T2w sequences. Both sequences were acquired in standard resolution (T1S and T2S) and in low resolution with subsequent DL reconstruction (T1DL and T2DL). For DL reconstruction, two convolutional networks were used: (1) Adaptive-CS-Net for denoising with compressed sensing, and (2) Precise-Image-Net for resolution upscaling of previously downscaled images. Overall image quality was assessed using a 5-point Likert scale (from 1 = non-diagnostic to 5 = excellent). Apparent signal-to-noise (aSNR) and contrast-to-noise (aCNR) ratios were calculated. Breast Imaging Reporting and Data System (BI-RADS) agreement between the different sequence types was assessed. A total of 47 patients were included (mean age, 58 ± 11 years). Acquisition times for T1DL and T2DL were reduced by 51% (44 vs. 90 s per dynamic phase) and 46% (102 vs. 192 s), respectively. T1DL and T2DL showed higher overall image quality (e.g., 4 [IQR, 4-4] for T1S vs. 5 [IQR, 5-5] for T1DL, P < 0.001). Both T1DL and T2DL revealed higher aSNR and aCNR than T1S and T2S (e.g., aSNR: 32.35 ± 10.23 for T2S vs. 27.88 ± 6.86 for T2DL, P = 0.014). Cohen's κ agreement by BI-RADS assessment was excellent (0.962, P < 0.001). DL for denoising and resolution upscaling reduces acquisition time and improves image quality for T1w and T2w breast MRI.
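
Apparent SNR and CNR are typically computed from ROI statistics; the abstract does not spell out its ROI convention, so this sketch assumes the common mean-signal over background-SD definition:

```python
import numpy as np

def apparent_snr_cnr(lesion_roi, tissue_roi, background_roi):
    """aSNR = mean(tissue) / SD(background);
    aCNR = |mean(lesion) - mean(tissue)| / SD(background)."""
    noise_sd = background_roi.std()
    asnr = tissue_roi.mean() / noise_sd
    acnr = abs(lesion_roi.mean() - tissue_roi.mean()) / noise_sd
    return asnr, acnr
```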

GDP-Net: Global Dependency-Enhanced Dual-Domain Parallel Network for Ring Artifact Removal.

Zhang Y, Liu G, Liu Y, Xie S, Gu J, Huang Z, Ji X, Lyu T, Xi Y, Zhu S, Yang J, Chen Y

PubMed · Jun 1, 2025
In computed tomography (CT) imaging, ring artifacts caused by inconsistent detector response can significantly degrade reconstructed images, negatively impacting subsequent applications. The new generation of CT systems based on photon-counting detectors is affected by ring artifacts even more severely. The flexibility and variety of detector responses make it difficult to build a well-defined model to characterize the ring artifacts. In this context, this study proposes a global dependency-enhanced dual-domain parallel neural network for ring artifact removal (RAR). First, based on the fact that the features of ring artifacts differ between Cartesian and polar coordinates, a parallel architecture is adopted so that the network can extract and exploit latent features from the two domains to improve ring artifact removal. Moreover, ring artifacts are globally correlated in both Cartesian and polar coordinate systems, but convolutional neural networks have inherent shortcomings in modeling long-range dependency. To tackle this problem, this study introduces the novel Mamba mechanism to achieve a global receptive field without incurring high computational complexity, enabling effective capture of long-range dependency and thereby enhancing model performance in image restoration and artifact reduction. Experiments on simulated data validate the effectiveness of the dual-domain parallel neural network and the Mamba mechanism, and results on two unseen real datasets demonstrate the promising performance of the proposed RAR algorithm in eliminating ring artifacts and recovering image details.
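
The dual-domain idea rests on a simple geometric fact: rings that are concentric in Cartesian coordinates become straight stripes along the angular axis after polar resampling, which makes them easier to model. A minimal polar-transform sketch (names and sampling choices are illustrative):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def to_polar(img, n_r=None, n_theta=720):
    """Resample a CT slice to (radius, angle) coordinates, where concentric
    ring artifacts appear as straight stripes along the angle axis."""
    h, w = img.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    n_r = n_r or int(min(cy, cx))
    r = np.linspace(0.0, min(cy, cx), n_r)
    t = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(r, t, indexing="ij")
    coords = np.stack([cy + rr * np.sin(tt), cx + rr * np.cos(tt)])
    return map_coordinates(img, coords, order=1)  # (n_r, n_theta)
```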