VHU-Net: Variational Hadamard U-Net for Body MRI Bias Field Correction

Xin Zhu

arXiv preprint · Jun 23 2025
Bias field artifacts in magnetic resonance imaging (MRI) scans introduce spatially smooth intensity inhomogeneities that degrade image quality and hinder downstream analysis. To address this challenge, we propose a novel variational Hadamard U-Net (VHU-Net) for effective body MRI bias field correction. The encoder comprises multiple convolutional Hadamard transform blocks (ConvHTBlocks), each integrating convolutional layers with a Hadamard transform (HT) layer. Specifically, the HT layer performs channel-wise frequency decomposition to isolate low-frequency components, while a subsequent scaling layer and semi-soft thresholding mechanism suppress redundant high-frequency noise. To compensate for the HT layer's inability to model inter-channel dependencies, the decoder incorporates an inverse HT-reconstructed transformer block, enabling global, frequency-aware attention for the recovery of spatially consistent bias fields. The stacked decoder ConvHTBlocks further enhance the capacity to reconstruct the underlying ground-truth bias field. Building on the principles of variational inference, we formulate a new evidence lower bound (ELBO) as the training objective, promoting sparsity in the latent space while ensuring accurate bias field estimation. Comprehensive experiments on abdominal and prostate MRI datasets demonstrate the superiority of VHU-Net over existing state-of-the-art methods in terms of intensity uniformity, signal fidelity, and tissue contrast. Moreover, the corrected images yield substantial downstream improvements in segmentation accuracy. Our framework offers computational efficiency, interpretability, and robust performance across multi-center datasets, making it suitable for clinical deployment.
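As a rough illustration of the frequency-decomposition idea behind the ConvHTBlock, the sketch below applies an orthonormal Hadamard transform along the channel dimension and shrinks small coefficients. The plain soft threshold and fixed threshold value are assumptions for illustration; the paper's learned scaling layer and semi-soft thresholding mechanism are not reproduced here.

```python
# Minimal sketch: channel-wise Hadamard transform with soft thresholding.
import numpy as np
from scipy.linalg import hadamard

def ht_threshold_block(x, tau=0.1):
    """x: feature map of shape (C, H, W), with C a power of two."""
    C = x.shape[0]
    Hmat = hadamard(C) / np.sqrt(C)            # orthonormal Hadamard matrix
    coeffs = np.tensordot(Hmat, x, axes=1)     # channel-wise forward transform
    # soft thresholding: shrink small (mostly noisy high-frequency) coefficients
    coeffs = np.sign(coeffs) * np.maximum(np.abs(coeffs) - tau, 0.0)
    return np.tensordot(Hmat.T, coeffs, axes=1)  # inverse transform

feat = np.random.randn(8, 32, 32).astype(np.float32)
print(ht_threshold_block(feat, tau=0.05).shape)  # (8, 32, 32)
```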

Evaluation of deep learning reconstruction in accelerated knee MRI: comparison of visual and diagnostic performance metrics.

Wen S, Xu Y, Yang G, Huang F, Zeng Z

PubMed · Jun 23 2025
To investigate the clinical value of deep learning reconstruction (DLR) in accelerated magnetic resonance imaging (MRI) of the knee and compare its visual quality and diagnostic performance metrics with conventional fast spin-echo T2-weighted imaging with fat suppression (FSE-T2WI-FS). This prospective study included 116 patients with knee injuries. All patients underwent both conventional FSE-T2WI-FS and DLR-accelerated FSE-T2WI-FS scans on a 1.5-T MRI scanner. Two radiologists independently evaluated overall image quality, artifacts, and image sharpness using a 5-point Likert scale. The signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) of lesion regions were measured. Subjective scores were compared using the Wilcoxon signed-rank test, SNR/CNR differences were analyzed via paired t-tests, and inter-reader agreement was assessed using Cohen's kappa. The accelerated sequences with DLR achieved a 36% reduction in total scan time compared to conventional sequences (p < 0.05), shortening acquisition from 9 min 50 s to 6 min 15 s. Moreover, DLR demonstrated superior artifact suppression and enhanced quantitative image quality, with significantly higher SNR and CNR (p < 0.001). Despite these improvements, diagnostic equivalence was maintained: no significant differences were observed in overall image quality, sharpness (p > 0.05), or lesion detection rates. Inter-reader agreement was good (κ > 0.75), further validating the clinical reliability of the DLR technique. Using DLR-accelerated FSE-T2WI-FS reduces scan time, suppresses artifacts, and improves quantitative image quality while maintaining diagnostic accuracy comparable to conventional sequences. This technology holds promise for optimizing clinical workflows in MRI of the knee.
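For readers unfamiliar with the quantitative metrics above, the sketch below shows one common way SNR and CNR are computed from region-of-interest (ROI) pixel values. The ROI placement and the synthetic intensity values are hypothetical and not taken from the study.

```python
# Minimal sketch of SNR/CNR measurement from ROIs (hypothetical values).
import numpy as np

def snr(signal_roi, noise_roi):
    return np.mean(signal_roi) / np.std(noise_roi)

def cnr(lesion_roi, background_roi, noise_roi):
    return abs(np.mean(lesion_roi) - np.mean(background_roi)) / np.std(noise_roi)

rng = np.random.default_rng(0)
lesion = rng.normal(300, 20, 200)      # hypothetical lesion ROI intensities
background = rng.normal(180, 20, 200)  # hypothetical adjacent-tissue ROI
noise = rng.normal(0, 10, 200)         # hypothetical background ROI
print(f"SNR = {snr(lesion, noise):.1f}, CNR = {cnr(lesion, background, noise):.1f}")
```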

Self-Supervised Optimization of RF Data Coherence for Improving Breast Reflection UCT Reconstruction.

He L, Liu Z, Cai Y, Zhang Q, Zhou L, Yuan J, Xu Y, Ding M, Yuchi M, Qiu W

PubMed · Jun 23 2025
Reflection Ultrasound Computed Tomography (UCT) is gaining prominence as an essential instrument for breast cancer screening. However, reflection UCT quality is often compromised by the variability in sound speed across breast tissue. Traditionally, reflection UCT utilizes the Delay and Sum (DAS) algorithm, in which the time of flight, computed under an oversimplified assumption of uniform sound speed, significantly affects the coherence of the reflected radio frequency (RF) data. This study introduces three meticulously engineered modules that leverage the spatial correlation of receiving arrays to improve the coherence of RF data and enable more effective summation. These modules include the self-supervised blind RF data segment block (BSegB) and the state-space model-based strong reflection prediction block (SSM-SRP), followed by a polarity-based adaptive replacing refinement (PARR) strategy to suppress sidelobe noise caused by aperture narrowing. To assess the effectiveness of our method, we utilized standard image quality metrics, including Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), and Root Mean Squared Error (RMSE). Additionally, coherence factor (CF) and variance (Var) were employed to verify the method's ability to enhance signal coherence at the RF data level. The findings reveal that our approach greatly improves performance, achieving an average PSNR of 19.64 dB, an average SSIM of 0.71, and an average RMSE of 0.10, notably under conditions of sparse transmission. The conducted experimental analyses affirm the superior performance of our framework compared to alternative enhancement strategies, including adaptive beamforming methods and deep learning-based beamforming approaches.
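The sketch below illustrates conventional delay-and-sum beamforming with a coherence-factor weight under the uniform-sound-speed assumption the abstract criticizes. The linear-array geometry, sampling rate, and single-transmit setup are illustrative assumptions, not the configuration used in the paper.

```python
# Minimal sketch of DAS beamforming with coherence-factor (CF) weighting.
import numpy as np

def das_with_cf(rf, elem_xy, pixel_xy, src_xy, c=1540.0, fs=20e6):
    """rf: (n_elements, n_samples) received RF data for one transmit event."""
    n_elem, n_samp = rf.shape
    tof = (np.linalg.norm(pixel_xy - src_xy) +
           np.linalg.norm(elem_xy - pixel_xy, axis=1)) / c    # transmit + receive path
    idx = np.clip(np.round(tof * fs).astype(int), 0, n_samp - 1)
    samples = rf[np.arange(n_elem), idx]                      # delayed samples
    das = samples.sum()
    cf = das**2 / (n_elem * np.sum(samples**2) + 1e-12)       # coherence factor in [0, 1]
    return das * cf                                           # CF-weighted pixel value

rf = np.random.randn(64, 4096)
elem_xy = np.stack([np.linspace(-0.02, 0.02, 64), np.zeros(64)], axis=1)
print(das_with_cf(rf, elem_xy, pixel_xy=np.array([0.0, 0.03]), src_xy=np.array([0.0, 0.0])))
```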

Trans$^2$-CBCT: A Dual-Transformer Framework for Sparse-View CBCT Reconstruction

Minmin Yang, Huantao Ren, Senem Velipasalar

arXiv preprint · Jun 20 2025
Cone-beam computed tomography (CBCT) using only a few X-ray projection views enables faster scans with lower radiation dose, but the resulting severe under-sampling causes strong artifacts and poor spatial coverage. We address these challenges in a unified framework. First, we replace conventional UNet/ResNet encoders with TransUNet, a hybrid CNN-Transformer model. Convolutional layers capture local details, while self-attention layers enhance global context. We adapt TransUNet to CBCT by combining multi-scale features, querying view-specific features per 3D point, and adding a lightweight attenuation-prediction head. This yields Trans-CBCT, which surpasses prior baselines by 1.17 dB PSNR and 0.0163 SSIM on the LUNA16 dataset with six views. Second, we introduce a neighbor-aware Point Transformer to enforce volumetric coherence. This module uses 3D positional encoding and attention over k-nearest neighbors to improve spatial consistency. The resulting model, Trans$^2$-CBCT, provides an additional gain of 0.63 dB PSNR and 0.0117 SSIM. Experiments on LUNA16 and ToothFairy show consistent gains from six to ten views, validating the effectiveness of combining CNN-Transformer features with point-based geometry reasoning for sparse-view CBCT reconstruction.
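A minimal sketch of the "query view-specific features per 3D point" step might look like the following, assuming pinhole-style projection matrices and bilinear sampling; the multi-scale feature fusion and the attenuation-prediction head of the paper are omitted.

```python
# Minimal sketch: project 3D query points into each view and sample 2D features.
import torch
import torch.nn.functional as F

def query_point_features(feat2d, points3d, proj):
    """feat2d: (V, C, H, W) per-view feature maps; points3d: (P, 3); proj: (V, 3, 4)."""
    V, C, H, W = feat2d.shape
    homog = torch.cat([points3d, torch.ones(points3d.shape[0], 1)], dim=1)  # (P, 4)
    pix = torch.einsum('vij,pj->vpi', proj, homog)                          # (V, P, 3)
    uv = pix[..., :2] / pix[..., 2:].clamp(min=1e-6)                        # perspective divide
    # normalize pixel coordinates to [-1, 1] for grid_sample
    grid = torch.stack([2 * uv[..., 0] / (W - 1) - 1,
                        2 * uv[..., 1] / (H - 1) - 1], dim=-1)              # (V, P, 2)
    sampled = F.grid_sample(feat2d, grid.unsqueeze(1), align_corners=True)  # (V, C, 1, P)
    return sampled.squeeze(2).permute(0, 2, 1)                              # (V, P, C)

feats = torch.randn(6, 32, 64, 64)   # six projection views (hypothetical)
pts = torch.rand(128, 3)             # query points in volume coordinates
P = torch.randn(6, 3, 4)             # hypothetical projection matrices
print(query_point_features(feats, pts, P).shape)  # torch.Size([6, 128, 32])
```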

Ultrafast J-resolved magnetic resonance spectroscopic imaging for high-resolution metabolic brain imaging.

Zhao Y, Li Y, Jin W, Guo R, Ma C, Tang W, Li Y, El Fakhri G, Liang ZP

PubMed · Jun 20 2025
Magnetic resonance spectroscopic imaging has potential for non-invasive metabolic imaging of the human brain. Here we report a method that overcomes several long-standing technical barriers associated with clinical magnetic resonance spectroscopic imaging, including long data acquisition times, limited spatial coverage and poor spatial resolution. Our method achieves ultrafast data acquisition using an efficient approach to encode spatial, spectral and J-coupling information of multiple molecules. Physics-informed machine learning is synergistically integrated in data processing to enable reconstruction of high-quality molecular maps. We validated the proposed method through phantom experiments. We obtained high-resolution molecular maps from healthy participants, revealing metabolic heterogeneities in different brain regions. We also obtained high-resolution whole-brain molecular maps in regular clinical settings, revealing metabolic alterations in tumours and multiple sclerosis. This method has the potential to transform clinical metabolic imaging and provide a long-desired capability for non-invasive label-free metabolic imaging of brain function and diseases for both research and clinical applications.

Multi-domain information fusion diffusion model (MDIF-DM) for limited-angle computed tomography.

Ma G, Xia D, Zhao S

PubMed · Jun 19 2025
Background: Limited-angle computed tomography (CT) suffers from severe artifacts in the reconstructed image due to incomplete projection data. Deep learning methods have recently been developed as a relatively effective way to address the robustness and low-contrast challenges of limited-angle CT reconstruction. Objective: To improve the contrast of limited-angle CT reconstructions and enhance the robustness of the reconstruction method. Method: We propose a limited-angle CT reconstruction method that combines Fourier-domain reweighting and wavelet-domain enhancement, fusing information from different domains to obtain high-resolution reconstructed images. Results: We verified the feasibility and effectiveness of the proposed solution through experiments; the reconstruction results are improved compared with state-of-the-art methods. Conclusions: The proposed method enhances features of the original image-domain data using information from different domains, which benefits the reasonable diffusion and restoration of fine detail and texture features.
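A minimal sketch of fusing Fourier-domain reweighting with wavelet-domain detail enhancement, as described at a high level above, could look like this; the radial weighting, the wavelet family ('db2'), the detail gain, and the equal-weight fusion are assumptions, and the paper's diffusion-model component is not reproduced.

```python
# Minimal sketch: Fourier-domain reweighting + wavelet-domain enhancement fusion.
import numpy as np
import pywt

def fourier_reweight(img, strength=0.5):
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[-h // 2:h - h // 2, -w // 2:w - w // 2]
    r = np.sqrt(yy**2 + xx**2) / np.sqrt((h / 2)**2 + (w / 2)**2)
    weight = 1.0 + strength * r                # boost higher radial frequencies
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * weight)))

def wavelet_enhance(img, gain=1.5):
    cA, (cH, cV, cD) = pywt.dwt2(img, 'db2')   # single-level 2D DWT
    return pywt.idwt2((cA, (gain * cH, gain * cV, gain * cD)), 'db2')

img = np.random.rand(256, 256)
enhanced = wavelet_enhance(img)[:img.shape[0], :img.shape[1]]  # crop any off-by-one
fused = 0.5 * fourier_reweight(img) + 0.5 * enhanced
print(fused.shape)  # (256, 256)
```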

Qualitative and quantitative analysis of functional cardiac MRI using a novel compressed SENSE sequence with artificial intelligence image reconstruction.

Konstantin K, Christian LM, Lenhard P, Thomas S, Robert T, Luisa LI, David M, Matej G, Kristina S, Philip NC

PubMed · Jun 19 2025
To evaluate the feasibility of combining Compressed SENSE (CS) with a newly developed deep learning-based algorithm (CS-AI) using a Convolutional Neural Network to accelerate balanced steady-state free precession (bSSFP) sequences for cardiac magnetic resonance imaging (MRI). 30 healthy volunteers were examined prospectively with a 3 T MRI scanner. We acquired CINE bSSFP sequences for short axis (SA, multi-breath-hold) and four-chamber (4CH) views of the heart. For each sequence, four different CS accelerations and CS-AI reconstructions with three different denoising parameters, CS-AI medium, CS-AI strong, and CS-AI complete, were used. Cardiac left ventricular (LV) function (i.e., ejection fraction, end-diastolic volume, end-systolic volume, and LV mass) was analyzed using the SA sequences for every CS factor and each AI level. Two readers, blinded to the acceleration and denoising levels, evaluated all sequences regarding image quality and artifacts using a 5-point Likert scale. Friedman and Dunn's multiple comparison tests were used for qualitative evaluation, and ANOVA and Tukey-Kramer tests for quantitative metrics. Scan time could be decreased by up to 57% for the SA sequences and up to 56% for the 4CH sequences compared to the clinically established SA-CS3 and 4CH-CS2.5 sequences (SA-CS3: 112 s vs. SA-CS6: 48 s; 4CH-CS2.5: 9 s vs. 4CH-CS5: 4 s, p < 0.001). LV functional analysis was not compromised by using accelerated MRI sequences combined with CS-AI reconstructions (all p > 0.05). The image quality loss and artifact increase accompanying increasing acceleration levels could be entirely compensated by CS-AI post-processing, with the best results for image quality using the combination of the highest CS factor with strong AI (SA-CINE: Coef.: 1.31, 95% CI: 1.05-1.58; 4CH-CINE: Coef.: 1.18, 95% CI: 1.05-1.58; both p < 0.001), and with complete AI regarding the artifact score (SA-CINE: Coef.: 1.33, 95% CI: 1.06-1.60; 4CH-CINE: Coef.: 1.31, 95% CI: 0.86-1.77; both p < 0.001). Combining CS sequences with AI-based image reconstruction for denoising significantly decreases scan time in cardiac imaging while upholding LV functional analysis accuracy and delivering stable outcomes for image quality and artifact reduction. This integration presents a promising advancement in cardiac MRI, offering improved efficiency without compromising diagnostic quality.
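As a quick arithmetic check, the quoted scan-time reductions follow directly from the reported acquisition times:

```python
# Relative scan-time reduction, using the acquisition times quoted in the abstract.
def reduction(conventional_s, accelerated_s):
    return 100 * (conventional_s - accelerated_s) / conventional_s

print(f"SA:  {reduction(112, 48):.0f}% shorter")   # ~57%
print(f"4CH: {reduction(9, 4):.0f}% shorter")      # ~56%
```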

Optimization of Photon-Counting CT Myelography for the Detection of CSF-Venous Fistulas Using Convolutional Neural Network Denoising: A Comparative Analysis of Reconstruction Techniques.

Madhavan AA, Zhou Z, Farnsworth PJ, Thorne J, Amrhein TJ, Kranz PG, Brinjikji W, Cutsforth-Gregory JK, Kodet ML, Weber NM, Thompson G, Diehn FE, Yu L

PubMed · Jun 19 2025
Photon-counting detector CT myelography (PCD-CTM) is a recently described technique used for detecting spinal CSF leaks, including CSF-venous fistulas. Various image reconstruction techniques, including smoother-versus-sharper kernels and virtual monoenergetic images, are available with photon-counting CT. Moreover, denoising algorithms have shown promise in improving sharp-kernel images. No prior studies have compared the image quality of these different reconstructions on photon-counting CT myelography. Here, we sought to compare several image reconstructions using various parameters important for the detection of CSF-venous fistulas. We performed a retrospective review of all consecutive decubitus PCD-CTM between February 1, 2022, and August 1, 2024, at one institution. We included patients whose studies had the following reconstructions: Br48-40 keV virtual monoenergetic reconstruction, Br56 low-energy threshold (T3D), Qr89-T3D denoised with quantum iterative reconstruction, and Qr89-T3D denoised with a convolutional neural network algorithm. We excluded patients who had extradural CSF on preprocedural imaging or a technically unsatisfactory myelogram. All 4 reconstructions were independently reviewed by 2 neuroradiologists. Each reviewer rated spatial resolution, noise, the presence of artifacts, image quality, and diagnostic confidence (whether positive or negative) on a 1-5 scale. These metrics were compared using the Friedman test. Additionally, noise and contrast were quantitatively assessed by a third reviewer and compared. The Qr89 reconstructions demonstrated higher spatial resolution than their Br56 or Br48-40 keV counterparts. Qr89 with convolutional neural network denoising had less noise, better image quality, and improved diagnostic confidence compared with Qr89 with quantum iterative reconstruction denoising. The Br48-40 keV reconstruction had the highest contrast-to-noise ratio quantitatively. In our study, the sharpest quantitative kernel (Qr89-T3D) with convolutional neural network denoising demonstrated the best performance regarding spatial resolution, noise level, image quality, and diagnostic confidence for detecting or excluding the presence of a CSF-venous fistula.
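A minimal sketch of the Friedman test used to compare ratings across the four reconstructions is shown below; the score arrays are hypothetical and only illustrate how matched per-case ratings for each reconstruction enter the test.

```python
# Minimal sketch of a Friedman test across four matched reconstruction ratings.
from scipy.stats import friedmanchisquare

# hypothetical 1-5 image-quality scores for the same 8 myelograms
br48_40kev = [3, 3, 4, 3, 3, 4, 3, 3]
br56_t3d   = [3, 4, 4, 3, 4, 4, 3, 4]
qr89_qir   = [4, 4, 4, 4, 4, 5, 4, 4]
qr89_cnn   = [5, 4, 5, 5, 4, 5, 5, 5]

stat, p = friedmanchisquare(br48_40kev, br56_t3d, qr89_qir, qr89_cnn)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")
```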

Dual-scan self-learning denoising for application in ultralow-field MRI.

Zhang Y, He W, Wu J, Xu Z

PubMed · Jun 18 2025
This study develops a self-learning method to denoise MR images for use in ultralow-field (ULF) applications. We propose the use of a self-learning neural network for denoising 3D MRI obtained from two acquisitions (dual scan), which are utilized as training pairs. Building on the self-learning method Noise2Noise, we propose an effective data augmentation method and an integrated learning strategy for enhancing model performance. Experimental results demonstrate that (1) the proposed model produces exceptional denoising results and outperforms the traditional Noise2Noise method both subjectively and objectively; (2) magnitude images are effectively denoised compared with several state-of-the-art methods on synthetic and real ULF data; and (3) the proposed method yields better results on phase images and quantitative imaging applications than other denoisers, owing to the self-learning framework. Theoretical and experimental results show that the proposed self-learning model achieves improved performance on magnitude-image denoising with synthetic and real-world data at ULF. Additionally, we test our method on calculated phase and quantification images, demonstrating its superior performance over several competing methods.
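The dual-scan idea can be sketched as a Noise2Noise-style training loop in which the two noisy acquisitions serve as each other's targets, so no clean reference is needed. The toy 3D CNN and symmetric loss below are assumptions for illustration, not the paper's architecture, data augmentation, or integrated learning strategy.

```python
# Minimal sketch of dual-scan Noise2Noise training (no clean ground truth needed).
import torch
import torch.nn as nn

denoiser = nn.Sequential(
    nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 1, 3, padding=1),
)
opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# hypothetical dual-scan pair: same anatomy, independent noise realizations
clean = torch.rand(1, 1, 32, 32, 32)
scan_a = clean + 0.1 * torch.randn_like(clean)
scan_b = clean + 0.1 * torch.randn_like(clean)

for step in range(10):
    opt.zero_grad()
    # symmetric Noise2Noise loss: each scan predicts the other
    loss = loss_fn(denoiser(scan_a), scan_b) + loss_fn(denoiser(scan_b), scan_a)
    loss.backward()
    opt.step()
print(float(loss))
```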

Implicit neural representations for accurate estimation of the standard model of white matter

Tom Hendriks, Gerrit Arends, Edwin Versteeg, Anna Vilanova, Maxime Chamberland, Chantal M. W. Tax

arXiv preprint · Jun 18 2025
Diffusion magnetic resonance imaging (dMRI) enables non-invasive investigation of tissue microstructure. The Standard Model (SM) of white matter aims to disentangle dMRI signal contributions from intra- and extra-axonal water compartments. However, due to the model's high-dimensional nature, extensive acquisition protocols with multiple b-values and diffusion tensor shapes are typically required to mitigate parameter degeneracies. Even then, accurate estimation remains challenging due to noise. This work introduces a novel estimation framework based on implicit neural representations (INRs), which incorporate spatial regularization through the sinusoidal encoding of the input coordinates. The INR method is evaluated on both synthetic and in vivo datasets and compared to parameter estimates using cubic polynomials, supervised neural networks, and nonlinear least squares. Results demonstrate superior accuracy of the INR method in estimating SM parameters, particularly in low signal-to-noise conditions. Additionally, spatial upsampling of the INR can represent the underlying dataset in a continuous, anatomically plausible way, which is unattainable with linear or cubic interpolation. The INR is fully unsupervised, eliminating the need for labeled training data. It achieves fast inference ($\sim$6 minutes), is robust to both Gaussian and Rician noise, supports joint estimation of SM kernel parameters and the fiber orientation distribution function with spherical harmonics orders up to at least 8 and non-negativity constraints, and accommodates spatially varying acquisition protocols caused by magnetic gradient non-uniformities. The combination of these properties, along with the possibility of easily adapting the framework to other dMRI models, positions INRs as a potentially important tool for analyzing and interpreting diffusion MRI data.
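A minimal sketch of an INR with sinusoidal encoding of voxel coordinates, the core mechanism described above, is given below; the number of frequencies, network width, and the five-parameter SM output are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch: implicit neural representation with sinusoidal coordinate encoding.
import torch
import torch.nn as nn

def sinusoidal_encoding(coords, n_freqs=6):
    """coords: (N, 3) normalized to [-1, 1] -> (N, 3 + 6 * n_freqs)."""
    feats = [coords]
    for k in range(n_freqs):
        feats += [torch.sin(2**k * torch.pi * coords), torch.cos(2**k * torch.pi * coords)]
    return torch.cat(feats, dim=-1)

n_sm_params = 5  # e.g. signal fractions, diffusivities, dispersion (assumed count)
inr = nn.Sequential(
    nn.Linear(3 + 6 * 6, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, n_sm_params),
)

coords = torch.rand(1024, 3) * 2 - 1      # voxel coordinates in [-1, 1]
sm_estimates = inr(sinusoidal_encoding(coords))
print(sm_estimates.shape)                 # torch.Size([1024, 5])
```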