Page 7 of 54537 results

Optimized reconstruction of undersampled Dixon sequences using new memory-efficient unrolled deep neural networks: HalfVarNet and HalfDIRCN.

Martin S, Trabelsi A, Guye M, Dubois M, Abdeddaim R, Bendahan D, André R

PubMed · Sep 8, 2025
Fat fraction (FF) quantification in individual muscles using quantitative MRI is of major importance for monitoring disease progression and assessing disease severity in neuromuscular diseases. Undersampling of MRI acquisitions is commonly used to reduce scanning time. The present paper introduces novel unrolled neural networks for the reconstruction of undersampled MRI acquisitions, designed to maintain accurate FF quantification while reducing reconstruction time and memory usage. The proposed approach combines a simplified architecture (Half U-Net) with unrolled networks that achieved high performance in the well-known fastMRI challenge (the variational network [VarNet] and the densely interconnected residual cascading network [DIRCN]). The algorithms were trained and evaluated on 3D Dixon MRI acquisitions of the thigh from controls and patients with neuromuscular diseases, applying retrospective undersampling with acceleration factors of 4 and 8. Reconstructed images were used to compute FF maps. Results show that the novel unrolled neural networks maintained reconstruction, biomarker assessment, and segmentation quality while reducing memory usage by 24% and 16% and reconstruction time by 21% and 17%, respectively. Using an acceleration factor of 8, the proposed algorithms, HalfVarNet and HalfDIRCN, achieved structural similarity index (SSIM) scores of 93.76 ± 0.38 and 94.95 ± 0.32, mean squared error (MSE) values of 12.76 ± 1.08 × 10<sup>-2</sup> and 10.25 ± 0.87 × 10<sup>-2</sup>, and relative FF quadratic errors of 0.23 ± 0.02% and 0.17 ± 0.02%, respectively. The proposed method enables time- and memory-efficient reconstruction of undersampled 3D MRI data, supporting its potential for clinical application.
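The FF maps computed from the reconstructed images follow the standard two-point Dixon definition, FF = F / (W + F), from the water and fat images. A minimal sketch of that computation (function name and sample values are illustrative, not taken from the paper):

```python
import numpy as np

def fat_fraction(water, fat, eps=1e-8):
    """Voxel-wise fat fraction FF = F / (W + F) from Dixon water/fat magnitude images."""
    water = np.asarray(water, dtype=float)
    fat = np.asarray(fat, dtype=float)
    return fat / (water + fat + eps)

# Toy 2x2 "muscle" patch: FF ranges from mild fat infiltration to near-replacement.
water = np.array([[80.0, 60.0], [90.0, 10.0]])
fat = np.array([[20.0, 40.0], [10.0, 90.0]])
ff = fat_fraction(water, fat)  # values in [0, 1]
```

The `eps` guard simply avoids division by zero in background voxels where both signals vanish.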

Evaluation of Machine Learning Reconstruction Techniques for Accelerated Brain MRI Scans

Jonathan I. Mandel, Shivaprakash Hiremath, Hedyeh Keshtgar, Timothy Scholl, Sadegh Raeisi

arXiv preprint · Sep 8, 2025
This retrospective-prospective study evaluated whether a deep learning-based MRI reconstruction algorithm can preserve diagnostic quality in brain MRI scans accelerated up to fourfold, using both public and prospective clinical data. The study included 18 healthy volunteers (scans acquired at 3T, January 2024-March 2025), as well as selected fastMRI public datasets with diverse pathologies. Phase-encoding-undersampled 2D/3D T1, T2, and FLAIR sequences were reconstructed with DeepFoqus-Accelerate and compared with standard-of-care (SOC) reconstructions. Three board-certified neuroradiologists and two MRI technologists independently reviewed 36 paired SOC/AI reconstructions from both datasets using a 5-point Likert scale, while quantitative similarity was assessed for 408 scans and 1224 datasets using the Structural Similarity Index (SSIM), Peak Signal-to-Noise Ratio (PSNR), and Haar wavelet-based Perceptual Similarity Index (HaarPSI). No AI-reconstructed scan scored below 3 (minimally acceptable), and 95% scored ≥4. Mean SSIM was 0.95 ± 0.03 (>0.90 in 90% of cases), PSNR was >41.0 dB, and HaarPSI was >0.94. Inter-rater agreement was slight to moderate. Rare artifacts did not affect diagnostic interpretation. These findings demonstrate that DeepFoqus-Accelerate enables robust fourfold brain MRI acceleration with 75% reduced scan time, while preserving diagnostic image quality and supporting improved workflow efficiency.
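Of the similarity metrics reported here, PSNR has the simplest closed form: 10·log10(data_range² / MSE). A minimal sketch (not the study's implementation):

```python
import numpy as np

def psnr(reference, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(data_range^2 / MSE)."""
    reference = np.asarray(reference, dtype=float)
    test = np.asarray(test, dtype=float)
    mse = np.mean((reference - test) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

# A uniform 0.1 error on a unit-range image gives MSE = 0.01, i.e. 20 dB;
# the >41 dB reported above corresponds to far smaller reconstruction error.
ref = np.zeros((8, 8))
rec = np.full((8, 8), 0.1)
value = psnr(ref, rec)
```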

Physics-Guided Diffusion Transformer with Spherical Harmonic Posterior Sampling for High-Fidelity Angular Super-Resolution in Diffusion MRI

Mu Nan, Taohui Xiao, Ruoyou Wu, Shoujun Yu, Ye Li, Hairong Zheng, Shanshan Wang

arXiv preprint · Sep 7, 2025
Diffusion MRI (dMRI) angular super-resolution (ASR) aims to reconstruct high-angular-resolution (HAR) signals from limited low-angular-resolution (LAR) data without prolonging scan time. However, existing methods are limited in recovering fine-grained angular details or preserving high fidelity due to inadequate modeling of q-space geometry and insufficient incorporation of physical constraints. In this paper, we introduce a Physics-Guided Diffusion Transformer (PGDiT) designed to explore physical priors throughout both training and inference stages. During training, a Q-space Geometry-Aware Module (QGAM) with b-vector modulation and random angular masking facilitates direction-aware representation learning, enabling the network to generate directionally consistent reconstructions with fine angular details from sparse and noisy data. In inference, a two-stage Spherical Harmonics-Guided Posterior Sampling (SHPS) enforces alignment with the acquired data, followed by heat-diffusion-based SH regularization to ensure physically plausible reconstructions. This coarse-to-fine refinement strategy mitigates oversmoothing and artifacts commonly observed in purely data-driven or generative models. Extensive experiments on general ASR tasks and two downstream applications, Diffusion Tensor Imaging (DTI) and Neurite Orientation Dispersion and Density Imaging (NODDI), demonstrate that PGDiT outperforms existing deep learning models in detail recovery and data fidelity. Our approach presents a novel generative ASR framework that offers high-fidelity HAR dMRI reconstructions, with potential applications in neuroscience and clinical research.
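The heat-diffusion-based SH regularization mentioned above has a simple spectral form: on the sphere, the heat kernel attenuates order-l spherical-harmonic coefficients by exp(-l(l+1)t), the Laplace-Beltrami eigenvalue decay. A sketch of that attenuation step (names and values are illustrative, not the paper's code):

```python
import numpy as np

def sh_heat_filter(coeffs, orders, t):
    """Attenuate spherical-harmonic coefficients with the heat kernel exp(-l(l+1)t).

    Higher SH orders (fine angular detail and noise) are damped faster, while
    the order-0 mean signal passes through unchanged -- a spherical low-pass
    filter, which is what makes it act as a regularizer.
    """
    coeffs = np.asarray(coeffs, dtype=float)
    orders = np.asarray(orders, dtype=float)
    return coeffs * np.exp(-orders * (orders + 1.0) * t)

coeffs = np.array([1.0, 0.5, 0.25])   # one coefficient per SH order, for brevity
orders = np.array([0, 2, 4])          # even orders, as in antipodally symmetric dMRI
smoothed = sh_heat_filter(coeffs, orders, t=0.05)
```

The diffusion time `t` controls the trade-off between angular detail and smoothness.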

Multi-Strategy Guided Diffusion via Sparse Masking Temporal Reweighting Distribution Correction

Zekun Zhou, Yanru Gong, Liu Shi, Qiegen Liu

arXiv preprint · Sep 7, 2025
Diffusion models have demonstrated remarkable generative capabilities in image processing tasks. We propose a Sparse condition Temporal Reweighted Integrated Distribution Estimation guided diffusion model (STRIDE) for sparse-view CT reconstruction. Specifically, we design a joint training mechanism guided by sparse conditional probabilities to facilitate the model's effective learning of missing projection-view completion and global information modeling. Based on systematic theoretical analysis, we propose a temporally varying sparse-condition reweighting guidance strategy that dynamically adjusts weights during the progressive denoising process from pure noise to the real image, enabling the model to progressively perceive sparse-view information. Linear regression is employed to correct distributional shifts between known and generated data, mitigating inconsistencies arising during the guidance process. Furthermore, we construct a dual-network parallel architecture to perform global correction and optimization across multiple sub-frequency components, thereby effectively improving the model's capability in both detail restoration and structural preservation, ultimately achieving high-quality image reconstruction. Experimental results on both public and real datasets demonstrate that, compared to the best-performing baseline methods, the proposed method achieves an improvement of up to 2.58 dB in PSNR, an increase of 2.37% in SSIM, and a reduction of 0.236 in MSE. The reconstructed images exhibit excellent generalization and robustness in terms of structural consistency, detail restoration, and artifact suppression.
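The linear-regression distribution correction described above can be sketched in one dimension: fit a*x + b by least squares so generated samples match the known data's scale and offset, then apply the fit. This is only an illustrative sketch of the idea, not the paper's implementation:

```python
import numpy as np

def linear_distribution_correction(generated, known):
    """Fit a * x + b by ordinary least squares mapping generated samples onto
    the known data, then apply the correction -- a 1D sketch of correcting
    distributional shift between known and generated data."""
    a, b = np.polyfit(generated, known, deg=1)
    return a * np.asarray(generated, dtype=float) + b

known = np.array([1.0, 2.0, 3.0, 4.0])
generated = 2.0 * known + 1.0          # a scaled-and-shifted copy of the known data
corrected = linear_distribution_correction(generated, known)
```

After correction the generated samples land back on the known data's scale exactly, since the shift here is affine by construction.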

Systematic Review and Meta-analysis of AI-driven MRI Motion Artifact Detection and Correction

Mojtaba Safari, Zach Eidex, Richard L. J. Qiu, Matthew Goette, Tonghe Wang, Xiaofeng Yang

arXiv preprint · Sep 5, 2025
Background: To systematically review and perform a meta-analysis of artificial intelligence (AI)-driven methods for detecting and correcting magnetic resonance imaging (MRI) motion artifacts, assessing current developments, effectiveness, challenges, and future research directions. Methods: A comprehensive systematic review and meta-analysis were conducted, focusing on deep learning (DL) approaches, particularly generative models, for the detection and correction of MRI motion artifacts. Quantitative data were extracted regarding utilized datasets, DL architectures, and performance metrics. Results: DL, particularly generative models, show promise for reducing motion artifacts and improving image quality; however, limited generalizability, reliance on paired training data, and risk of visual distortions remain key challenges that motivate standardized datasets and reporting. Conclusions: AI-driven methods, particularly DL generative models, show significant potential for improving MRI image quality by effectively addressing motion artifacts. However, critical challenges must be addressed, including the need for comprehensive public datasets, standardized reporting protocols for artifact levels, and more advanced, adaptable DL techniques to reduce reliance on extensive paired datasets. Addressing these aspects could substantially enhance MRI diagnostic accuracy, reduce healthcare costs, and improve patient care outcomes.

INR meets Multi-Contrast MRI Reconstruction

Natascha Niessen, Carolin M. Pirkl, Ana Beatriz Solana, Hannah Eichhorn, Veronika Spieker, Wenqi Huang, Tim Sprenger, Marion I. Menzel, Julia A. Schnabel

arXiv preprint · Sep 5, 2025
Multi-contrast MRI sequences allow for the acquisition of images with varying tissue contrast within a single scan. The resulting multi-contrast images can be used to extract quantitative information on tissue microstructure. To make such multi-contrast sequences feasible for clinical routine, the usually very long scan times need to be shortened, e.g., through undersampling in k-space. However, this comes with challenges for the reconstruction. In general, advanced reconstruction techniques such as compressed sensing or deep learning-based approaches can enable the acquisition of high-quality images despite the acceleration. In this work, we leverage the redundant anatomical information of multi-contrast sequences to achieve even higher acceleration rates. We use undersampling patterns that capture the contrast information located at the k-space center, while performing complementary undersampling across contrasts for the high frequencies. To reconstruct this highly sparse k-space data, we propose an implicit neural representation (INR) network that is well suited to exploiting the complementary information acquired across contrasts, as it jointly reconstructs all contrast images. We demonstrate the benefits of the proposed INR method by applying it to multi-contrast MRI using the MPnRAGE sequence, where it outperforms the state-of-the-art parallel imaging compressed sensing (PICS) reconstruction method, even at higher acceleration factors.
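The sampling scheme described above (shared fully sampled k-space center, disjoint high-frequency lines per contrast) can be sketched with 1D Cartesian masks. All parameter values and names here are illustrative, not the paper's:

```python
import numpy as np

def complementary_masks(n_lines, n_contrasts, center_frac=0.1, accel=4, seed=0):
    """1D Cartesian sampling masks: every contrast fully samples the k-space
    center (where contrast information lives), while high-frequency lines are
    split disjointly across contrasts so their union covers more of k-space."""
    rng = np.random.default_rng(seed)
    center = n_lines // 2
    half = max(1, int(n_lines * center_frac / 2))
    masks = np.zeros((n_contrasts, n_lines), dtype=bool)
    masks[:, center - half:center + half] = True  # shared fully sampled center
    outer = np.setdiff1d(np.arange(n_lines),
                         np.arange(center - half, center + half))
    picks = rng.permutation(outer)
    per_contrast = len(outer) // accel
    for i, line in enumerate(picks[:per_contrast * n_contrasts]):
        masks[i % n_contrasts, line] = True  # round-robin: disjoint outer lines
    return masks

masks = complementary_masks(n_lines=64, n_contrasts=3)
```

Because the outer lines are disjoint, a joint reconstruction over all contrasts effectively sees a denser combined sampling than any single contrast alone.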

Accelerated Patient-specific Non-Cartesian MRI Reconstruction using Implicit Neural Representations.

Xu D, Liu H, Miao X, O'Connor D, Scholey JE, Yang W, Feng M, Ohliger M, Lin H, Ruan D, Yang Y, Sheng K

PubMed · Sep 5, 2025
Accelerating MR acquisition is essential for image-guided therapeutic applications. Compressed sensing (CS) has been developed to minimize image artifacts in accelerated scans, but the required iterative reconstruction is computationally complex and difficult to generalize. Convolutional neural network (CNN)/Transformer-based deep learning (DL) methods emerged as a faster alternative but face challenges in modeling continuous k-space, a problem amplified with the non-Cartesian sampling commonly used in accelerated acquisition. In comparison, implicit neural representations can model continuous signals in the frequency domain and are thus compatible with arbitrary k-space sampling patterns. The current study develops a novel generative-adversarially trained implicit neural representation (k-GINR) for de novo undersampled non-Cartesian k-space reconstruction. k-GINR consists of two stages: 1) supervised training on an existing patient cohort; 2) self-supervised patient-specific optimization. The StarVIBE T1-weighted liver dataset, consisting of 118 prospectively acquired scans and corresponding coil data, was employed for testing. k-GINR was compared with two INR-based methods, NeRP and k-NeRP, an unrolled DL method, Deep Cascade CNN, and CS. k-GINR consistently outperformed the baselines, with a larger performance advantage observed at very high accelerations (PSNR: 6.8%-15.2% higher at 3 times, 15.1%-48.8% higher at 10 times, and 29.3%-60.5% higher at 20 times). The reconstruction times for k-GINR, NeRP, k-NeRP, CS, and Deep Cascade CNN were approximately 3 minutes, 4-10 minutes, 3 minutes, 4 minutes, and 3 seconds, respectively. k-GINR, an innovative two-stage INR network incorporating adversarial training, was designed for direct non-Cartesian k-space reconstruction for new incoming patients. It demonstrated superior image quality compared to CS and Deep Cascade CNN across a wide range of acceleration ratios.
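The reason INRs suit non-Cartesian data is visible in the trajectory itself: radial samples (as in StarVIBE's stack-of-stars readout) sit at arbitrary continuous k-space coordinates, which a coordinate-based network can be queried at directly, with no regridding. A sketch of a golden-angle radial trajectory (a common non-Cartesian scheme; values illustrative):

```python
import numpy as np

GOLDEN_ANGLE = np.deg2rad(111.246)  # golden-angle increment common in radial MRI

def radial_trajectory(n_spokes, n_samples):
    """(kx, ky) coordinates for golden-angle radial spokes through the k-space
    center. These continuous off-grid coordinates are exactly what a
    coordinate-based INR can be evaluated at directly."""
    angles = np.arange(n_spokes) * GOLDEN_ANGLE
    radii = np.linspace(-0.5, 0.5, n_samples)
    kx = radii[None, :] * np.cos(angles)[:, None]
    ky = radii[None, :] * np.sin(angles)[:, None]
    return np.stack([kx, ky], axis=-1)

traj = radial_trajectory(n_spokes=8, n_samples=16)
```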

Optimization of carotid CT angiography image quality with deep learning image reconstruction with high setting (DLIR-H) algorithm under ultra-low radiation and contrast agent conditions.

Wang C, Long J, Liu X, Xu W, Zhang H, Liu Z, Yu M, Wang C, Wu Y, Sun A, Xu K, Meng Y

PubMed · Sep 5, 2025
Carotid artery disease is a major cause of stroke and is frequently evaluated using carotid CT angiography (CTA). However, the associated radiation exposure and contrast agent use raise concerns, particularly for high-risk patients. Recent advances in deep learning image reconstruction (DLIR) offer new potential to enhance image quality under low-dose conditions. This study aimed to evaluate the effectiveness of the DLIR-H algorithm in improving the image quality of 40 keV virtual monoenergetic images (VMI) in dual-energy CTA (DE-CTA) while minimizing radiation dose and contrast agent usage. A total of 120 patients undergoing DE-CTA were prospectively divided into four groups: a control group using ASIR-V and three experimental groups using the DLIR-L, DLIR-M, and DLIR-H algorithms. All scans employed a "triple-low" protocol: low radiation dose, low contrast volume, and low injection rate. Objective image quality was assessed via CT values, image noise, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR). Subjective image quality was evaluated using a 5-point Likert scale. The DLIR-H group showed the greatest improvements in image quality, with significantly reduced noise and increased SNR and CNR, particularly at complex vascular sites such as the carotid bifurcation and internal carotid artery. Radiation dose and contrast volume were reduced by 15.6% and 17.5%, respectively. DLIR-H also received the highest subjective image quality scores. DLIR-H significantly enhances DE-CTA image quality under ultra-low-dose conditions, preserving diagnostic detail while reducing patient risk. It supports safer and more effective carotid imaging, especially for high-risk groups such as renal-impaired patients and those needing repeated scans, enabling wider clinical use of ultra-low-dose protocols.
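SNR and CNR as used in CTA image-quality studies are ROI-based ratios; exact ROI conventions vary between studies, so the following is a generic sketch rather than this study's protocol:

```python
import numpy as np

def snr(roi, noise_sd):
    """Signal-to-noise ratio: mean ROI attenuation over background noise SD."""
    return np.mean(roi) / noise_sd

def cnr(roi_vessel, roi_background, noise_sd):
    """Contrast-to-noise ratio: absolute difference of ROI means over noise SD."""
    return abs(np.mean(roi_vessel) - np.mean(roi_background)) / noise_sd

vessel = np.full(50, 400.0)   # toy contrast-enhanced lumen ROI, HU
muscle = np.full(50, 60.0)    # toy background ROI, HU
noise = 10.0                  # SD of HU in a homogeneous region
```

Lower image noise (the denominator) from DLIR-H directly raises both ratios even when the attenuation values are unchanged.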

A Physics-ASIC Architecture-Driven Deep Learning Photon-Counting Detector Model Under Limited Data.

Yu X, Wu Q, Qin W, Zhong T, Su M, Ma J, Zhang Y, Ji X, Wang W, Quan G, Du Y, Chen Y, Lai X

PubMed · Sep 4, 2025
Photon-counting computed tomography (PCCT) based on photon-counting detectors (PCDs) represents a cutting-edge CT technology, offering higher spatial resolution, reduced radiation dose, and advanced material decomposition capabilities. Accurately modeling complex and nonlinear PCDs under limited calibration data remains one of the challenges hindering the widespread accessibility of PCCT. This paper introduces a physics-ASIC architecture-driven deep learning detector model for PCDs. This model adeptly captures the comprehensive response of the PCD, encompassing both the sensor and ASIC responses. We present experimental results demonstrating the model's exceptional accuracy and robustness with limited calibration data. Key advancements include reduced calibration errors, reasonable physics-ASIC parameter estimation, and high-quality, high-accuracy material decomposition images.

MUSiK: An Open Source Simulation Library for 3D Multi-view Ultrasound.

Chan TJ, Nair-Kanneganti A, Anthony B, Pouch A

PubMed · Sep 4, 2025
Diagnostic ultrasound has long filled a crucial niche in medical imaging thanks to its portability, affordability, and favorable safety profile. Now, multi-view hardware and deep-learning-based image reconstruction algorithms promise to extend this niche to increasingly sophisticated applications, such as volume rendering and long-term organ monitoring. However, progress on these fronts is impeded by the complexities of ultrasound electronics and by the scarcity of high-fidelity radiofrequency data. Evidently, there is a critical need for tools that enable rapid ultrasound prototyping and generation of synthetic data. We meet this need with MUSiK, the first open-source ultrasound simulation library expressly designed for multi-view acoustic simulations of realistic anatomy. This library covers the full gamut of image acquisition: building anatomical digital phantoms, defining and positioning diverse transducer types, running simulations, and reconstructing images. In this paper, we demonstrate several use cases for MUSiK. We simulate in vitro multi-view experiments and compare the resolution and contrast of the resulting images. We then perform multiple conventional and experimental in vivo imaging tasks, such as 2D scans of the kidney, 2D and 3D echocardiography, 2.5D tomography of large regions, and 3D tomography for lesion detection in soft tissue. Finally, we introduce MUSiK's Bayesian reconstruction framework for multi-view ultrasound and validate an original SNR-enhancing reconstruction algorithm. We anticipate that these unique features will seed new hypotheses and accelerate the overall pace of ultrasound technological development. The MUSiK library is publicly available at github.com/norway99/MUSiK.