Page 21 of 25243 results

From Low Field to High Value: Robust Cortical Mapping from Low-Field MRI

Karthik Gopinath, Annabel Sorby-Adams, Jonathan W. Ramirez, Dina Zemlyanker, Jennifer Guo, David Hunt, Christine L. Mac Donald, C. Dirk Keene, Timothy Coalson, Matthew F. Glasser, David Van Essen, Matthew S. Rosen, Oula Puonti, W. Taylor Kimberly, Juan Eugenio Iglesias

arXiv preprint · May 18, 2025
Three-dimensional reconstruction of cortical surfaces from MRI for morphometric analysis is fundamental for understanding brain structure. While high-field MRI (HF-MRI) is standard in research and clinical settings, its limited availability hinders widespread use. Low-field MRI (LF-MRI), particularly portable systems, offers a cost-effective and accessible alternative. However, existing cortical surface analysis tools are optimized for high-resolution HF-MRI and struggle with the lower signal-to-noise ratio and resolution of LF-MRI. In this work, we present a machine learning method for 3D reconstruction and analysis of portable LF-MRI across a range of contrasts and resolutions. Our method works "out of the box" without retraining. It uses a 3D U-Net trained on synthetic LF-MRI to predict signed distance functions of cortical surfaces, followed by geometric processing to ensure topological accuracy. We evaluate our method using paired HF/LF-MRI scans of the same subjects, showing that LF-MRI surface reconstruction accuracy depends on acquisition parameters, including contrast type (T1 vs T2), orientation (axial vs isotropic), and resolution. A 3 mm isotropic T2-weighted scan acquired in under 4 minutes yields strong agreement with HF-derived surfaces: surface area correlates at r=0.96, cortical parcellations reach Dice=0.98, and gray matter volume achieves r=0.93. Cortical thickness remains more challenging, with correlations up to r=0.70, reflecting the difficulty of sub-mm precision with 3 mm voxels. We further validate our method on challenging postmortem LF-MRI, demonstrating its robustness. Our method represents a step toward enabling cortical surface analysis on portable LF-MRI. Code is available at https://surfer.nmr.mgh.harvard.edu/fswiki/ReconAny
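The core representation here is a signed distance function (SDF): negative inside the surface, positive outside, zero on the surface itself. A minimal numpy sketch of the idea, using an analytic sphere SDF in place of the paper's U-Net prediction, and locating the surface by looking for sign changes between neighboring voxels (the actual pipeline uses proper geometric processing to guarantee topology):

```python
import numpy as np

def sphere_sdf(shape, center, radius):
    # Signed distance to a sphere: negative inside, positive outside.
    zz, yy, xx = np.meshgrid(*[np.arange(s) for s in shape], indexing="ij")
    d = np.sqrt((zz - center[0])**2 + (yy - center[1])**2 + (xx - center[2])**2)
    return d - radius

def zero_crossing_mask(sdf):
    # Voxels where the SDF changes sign along any axis approximate the surface.
    m = np.zeros(sdf.shape, dtype=bool)
    for ax in range(3):
        diff = np.diff(np.sign(sdf), axis=ax) != 0
        sl = [slice(None)] * 3
        sl[ax] = slice(0, -1)
        m[tuple(sl)] |= diff
    return m

sdf = sphere_sdf((32, 32, 32), (16, 16, 16), 8.0)
surf = zero_crossing_mask(sdf)  # boolean shell around the sphere
```

In the paper's setting the SDF volume would come from the trained U-Net rather than a formula, and the zero level set would be meshed (e.g. by marching cubes) rather than left as a voxel mask.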

Accelerated deep learning-based function assessment in cardiovascular magnetic resonance.

De Santis D, Fanelli F, Pugliese L, Bona GG, Polidori T, Santangeli C, Polici M, Del Gaudio A, Tremamunno G, Zerunian M, Laghi A, Caruso D

PubMed · May 17, 2025
To evaluate diagnostic accuracy and image quality of deep learning (DL) cine sequences for LV and RV parameters compared to conventional balanced steady-state free precession (bSSFP) cine sequences in cardiovascular magnetic resonance (CMR). From January to April 2024, patients with clinically indicated CMR were prospectively included. LV and RV were segmented from short-axis bSSFP and DL cine sequences. LV and RV end-diastolic volume (EDV), end-systolic volume (ESV), stroke volume (SV), ejection fraction, and LV end-diastolic mass were calculated. The acquisition time of both sequences was registered. Results were compared with the paired-samples t test or the Wilcoxon signed-rank test. Agreement between DL cine and bSSFP was assessed using Bland-Altman plots. Image quality was graded by two readers based on blood-to-myocardium contrast, endocardial edge definition, and motion artifacts, using a 5-point Likert scale (1 = insufficient quality; 5 = excellent quality). Sixty-two patients were included (mean age: 47 ± 17 years, 41 men). No significant differences between DL cine and bSSFP were found for any LV or RV parameter (P ≥ .176). DL cine was significantly faster (1.35 ± .55 min vs 2.83 ± .79 min; P < .001). The agreement between DL cine and bSSFP was strong, with bias ranging from 45 to 1.75% for LV and from -0.38 to 2.43% for RV. Among LV parameters, the highest agreement was obtained for ESV and SV, which fell within the acceptable limits of agreement (LOA) in 84% of cases. EDV showed the highest agreement among RV parameters, falling within the acceptable LOA in 90% of cases. Overall image quality was comparable (median: 5, IQR: 4-5; P = .330), while endocardial edge definition of DL cine (median: 4, IQR: 4-5) was lower than that of bSSFP (median: 5, IQR: 4-5; P = .002). DL cine allows fast and accurate quantification of LV and RV parameters, with image quality comparable to conventional bSSFP.
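The Bland-Altman analysis used here summarizes agreement between two measurement methods as a bias (mean difference) plus 95% limits of agreement (bias ± 1.96 × SD of the differences). A minimal sketch of that computation with made-up volumes, not the study's data:

```python
import numpy as np

def bland_altman(a, b):
    # Bias (mean difference) and 95% limits of agreement between two methods.
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired EDV measurements (mL) from two cine sequences:
bias, loa = bland_altman([100, 102, 98, 101], [99, 103, 97, 100])
```

A measurement pair "falls within the acceptable LOA" when its difference lies between the two limits returned here.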

Measurement Score-Based Diffusion Model

Chicago Y. Park, Shirin Shoushtari, Hongyu An, Ulugbek S. Kamilov

arXiv preprint · May 17, 2025
Diffusion models are widely used in applications ranging from image generation to inverse problems. However, training diffusion models typically requires clean ground-truth images, which are unavailable in many applications. We introduce the Measurement Score-based diffusion Model (MSM), a novel framework that learns partial measurement scores using only noisy and subsampled measurements. MSM models the distribution of full measurements as an expectation over partial scores induced by randomized subsampling. To make the MSM representation computationally efficient, we also develop a stochastic sampling algorithm that generates full images by using a randomly selected subset of partial scores at each step. We additionally propose a new posterior sampling method for solving inverse problems that reconstructs images using these partial scores. We provide a theoretical analysis that bounds the Kullback-Leibler divergence between the distributions induced by full and stochastic sampling, establishing the accuracy of the proposed algorithm. We demonstrate the effectiveness of MSM on natural images and multi-coil MRI, showing that it can generate high-quality images and solve inverse problems -- all without access to clean training data. Code is available at https://github.com/wustl-cig/MSM.
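The key computational trick in MSM is that each stochastic sampling step averages only a randomly chosen subset of partial scores rather than all of them. A toy numpy sketch of that update rule, with a hand-written Gaussian score standing in for the learned measurement score networks (everything below is illustrative, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)

def partial_score(x, mask):
    # Toy partial score: score of a standard Gaussian restricted to the
    # coordinates covered by one subsampling mask (zero elsewhere).
    return mask * (-x)

def msm_step(x, masks, step=0.2, k=2):
    # One stochastic update: average a randomly selected subset of k partial
    # scores instead of evaluating the full expectation over all masks.
    idx = rng.choice(len(masks), size=k, replace=False)
    return x + step * np.mean([partial_score(x, masks[i]) for i in idx], axis=0)

masks = [np.array(m, float) for m in ([1, 1, 0, 0], [0, 0, 1, 1],
                                      [1, 0, 1, 0], [0, 1, 0, 1])]
x = np.full(4, 4.0)
for _ in range(300):
    x = msm_step(x, masks)  # iterates toward the toy distribution's mode
```

Because every coordinate is covered by some masks, the averaged partial scores drive the full image toward the target distribution even though no single step sees all measurements.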

UGoDIT: Unsupervised Group Deep Image Prior Via Transferable Weights

Shijun Liang, Ismail R. Alkhouri, Siddhant Gautam, Qing Qu, Saiprasad Ravishankar

arXiv preprint · May 16, 2025
Recent advances in data-centric deep generative models have led to significant progress in solving inverse imaging problems. However, these models (e.g., diffusion models (DMs)) typically require large amounts of fully sampled (clean) training data, which is often impractical in medical and scientific settings such as dynamic imaging. On the other hand, training-data-free approaches like the Deep Image Prior (DIP) do not require clean ground-truth images but suffer from noise overfitting and can be computationally expensive as the network parameters need to be optimized for each measurement set independently. Moreover, DIP-based methods often overlook the potential of learning a prior using a small number of sub-sampled measurements (or degraded images) available during training. In this paper, we propose UGoDIT, an Unsupervised Group DIP via Transferable weights, designed for the low-data regime where only a very small number, M, of sub-sampled measurement vectors are available during training. Our method learns a set of transferable weights by optimizing a shared encoder and M disentangled decoders. At test time, we reconstruct the unseen degraded image using a DIP network, where part of the parameters are fixed to the learned weights, while the remaining are optimized to enforce measurement consistency. We evaluate UGoDIT on both medical (multi-coil MRI) and natural (super resolution and non-linear deblurring) image recovery tasks under various settings. Compared to recent standalone DIP methods, UGoDIT provides accelerated convergence and notable improvement in reconstruction quality. Furthermore, our method achieves performance competitive with SOTA DM-based and supervised approaches, despite not requiring large amounts of clean training data.
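UGoDIT's test-time recipe is to freeze the transferable (shared-encoder) weights and optimize only the remaining parameters for measurement consistency on the new degraded image. A deliberately tiny linear stand-in for that partition, with a masking operator as the measurement model (all names and shapes here are illustrative assumptions, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(1)

# Frozen part: a shared "encoder" learned during training, applied to a
# fixed random input as in DIP. Trainable part: the "decoder" W_dec.
z = rng.standard_normal(8)             # fixed random network input
W_enc = rng.standard_normal((8, 8))    # transferable weights (kept fixed)
x_true = rng.standard_normal(8)
mask = np.array([1, 1, 1, 0, 1, 0, 1, 1], float)  # sub-sampling operator
y = mask * x_true                      # observed measurements

h = W_enc @ z                          # frozen-encoder features
W_dec = np.zeros((8, 8))
lr = 1.0 / (h @ h)                     # exact step size for this quadratic
for _ in range(50):
    resid = mask * (W_dec @ h) - y
    W_dec -= lr * np.outer(resid, h)   # gradient of 0.5*||M x_hat - y||^2
x_hat = W_dec @ h
```

The real method uses a convolutional DIP network and M disentangled decoders during training; the sketch only shows the fixed-versus-optimized weight split enforced at test time.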

FlowMRI-Net: A Generalizable Self-Supervised 4D Flow MRI Reconstruction Network.

Jacobs L, Piccirelli M, Vishnevskiy V, Kozerke S

PubMed · May 16, 2025
Image reconstruction from highly undersampled 4D flow MRI data can be very time-consuming and may result in significant underestimation of velocities depending on regularization, thereby limiting the applicability of the method. The objective of the present work was to develop a generalizable self-supervised deep learning-based framework for fast and accurate reconstruction of highly undersampled 4D flow MRI and to demonstrate the utility of the framework for aortic and cerebrovascular applications. The proposed deep-learning-based framework, called FlowMRI-Net, employs physics-driven unrolled optimization using a complex-valued convolutional recurrent neural network and is trained in a self-supervised manner. The generalizability of the framework is evaluated using aortic and cerebrovascular 4D flow MRI acquisitions acquired on systems from two different vendors for various undersampling factors (R=8,16,24) and compared to compressed sensing (CS-LLR) reconstructions. Evaluation includes an ablation study and a qualitative and quantitative analysis of image and velocity magnitudes. FlowMRI-Net outperforms CS-LLR for aortic 4D flow MRI reconstruction, resulting in significantly lower vectorial normalized root mean square error and mean directional errors for velocities in the thoracic aorta. Furthermore, the feasibility of FlowMRI-Net's generalizability is demonstrated for cerebrovascular 4D flow MRI reconstruction. Reconstruction times ranged from 3 to 7 minutes on commodity CPU/GPU hardware. FlowMRI-Net enables fast and accurate reconstruction of highly undersampled aortic and cerebrovascular 4D flow MRI, with possible applications to other vascular territories.
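"Physics-driven unrolled optimization" means unfolding a fixed number of iterations of a classical reconstruction algorithm into network layers: each layer takes a gradient step on the data-consistency term, then applies a learned regularization step. A minimal numpy sketch with an identity regularizer standing in for FlowMRI-Net's recurrent network (the operator and data below are toy assumptions):

```python
import numpy as np

def unrolled_recon(y, A, regularize, n_iters=10, step=1.0):
    # Unrolled optimization: alternate a gradient step on the physics term
    # 0.5*||Ax - y||^2 with a regularization step (a learned network in the
    # actual framework; a plain callable here).
    x = A.T @ y  # zero-filled initialization
    for _ in range(n_iters):
        x = x - step * (A.T @ (A @ x - y))  # data-consistency gradient step
        x = regularize(x)                   # stand-in for the learned prior
    return x

A = np.eye(6)[[0, 2, 4]]            # toy undersampling operator
x_true = np.arange(6, dtype=float)
y = A @ x_true                      # simulated undersampled measurements
x_hat = unrolled_recon(y, A, regularize=lambda x: x)
```

Training the regularizer jointly across all unrolled iterations (here trivially the identity) is what distinguishes this family of methods from running a classical iterative solver.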

Diff-Unfolding: A Model-Based Score Learning Framework for Inverse Problems

Yuanhao Wang, Shirin Shoushtari, Ulugbek S. Kamilov

arXiv preprint · May 16, 2025
Diffusion models are extensively used for modeling image priors for inverse problems. We introduce \emph{Diff-Unfolding}, a principled framework for learning posterior score functions of \emph{conditional diffusion models} by explicitly incorporating the physical measurement operator into a modular network architecture. Diff-Unfolding formulates posterior score learning as the training of an unrolled optimization scheme, where the measurement model is decoupled from the learned image prior. This design allows our method to generalize across inverse problems at inference time by simply replacing the forward operator without retraining. We theoretically justify our unrolling approach by showing that the posterior score can be derived from a composite model-based optimization formulation. Extensive experiments on image restoration and accelerated MRI show that Diff-Unfolding achieves state-of-the-art performance, improving PSNR by up to 2 dB and reducing LPIPS by $22.7\%$, while being both compact (47M parameters) and efficient (0.72 seconds per $256 \times 256$ image). An optimized C++/LibTorch implementation further reduces inference time to 0.63 seconds, underscoring the practicality of our approach.
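The decoupling that lets Diff-Unfolding swap forward operators at inference follows from Bayes' rule: the posterior score splits into a learned prior score plus a measurement term that depends only on the operator A. A toy numpy sketch with a standard-Gaussian prior (whose score is simply -x) in place of the learned model:

```python
import numpy as np

def posterior_score(x, y, A, prior_score, sigma=1.0):
    # Posterior score = prior score + likelihood score. Only the second term
    # touches the forward operator A, so A can be replaced without retraining.
    return prior_score(x) - A.T @ (A @ x - y) / sigma**2

# Toy setup: standard-Gaussian prior, identity forward operator, noise sigma=1.
y = np.array([2.0, -4.0, 6.0])
x_map = y / 2.0  # posterior mode for this setup, where the score vanishes
s = posterior_score(x_map, y, np.eye(3), prior_score=lambda x: -x)
```

In the paper the prior term is a trained conditional diffusion score and the whole computation is unrolled into a modular network; the sketch only shows the prior/measurement factorization the architecture exploits.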

Application of deep learning with fractal images to sparse-view CT.

Kawaguchi R, Minagawa T, Hori K, Hashimoto T

PubMed · May 15, 2025
Deep learning has been widely used in research on sparse-view computed tomography (CT) image reconstruction. While sufficient training data can lead to high accuracy, collecting medical images is often challenging due to legal or ethical concerns, making it necessary to develop methods that perform well with limited data. To address this issue, we explored the use of nonmedical images for pre-training. Specifically, we investigated whether fractal images could improve the quality of sparse-view CT images, even with a reduced number of medical images. Fractal images generated by an iterated function system (IFS) were used as the nonmedical images, and medical images were obtained from the CHAOS dataset. Sinograms were then generated using 36 projections in the sparse-view setting, and the images were reconstructed by filtered back-projection (FBP). FBPConvNet and WNet (first module: learning fractal images, second module: testing medical images, and third module: learning output) were used as networks. The effectiveness of pre-training was then investigated for each network. The quality of the reconstructed images was evaluated using two indices: structural similarity (SSIM) and peak signal-to-noise ratio (PSNR). Networks pre-trained with fractal images showed reduced artifacts compared to the network trained exclusively with medical images, resulting in improved SSIM. WNet outperformed FBPConvNet in terms of PSNR. Pre-training WNet with fractal images produced the best image quality, and the number of medical images required for main training was reduced from 5000 to 1000 (an 80% reduction). Using fractal images for network training can reduce the number of medical images required for artifact reduction in sparse-view CT. Therefore, fractal images can improve accuracy even with a limited amount of training data in deep learning.
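An iterated function system generates a fractal by repeatedly applying randomly chosen contractive affine maps to a point (the "chaos game"). The paper does not specify its IFS parameters, so as a minimal illustration here is the classic three-map Sierpinski-gasket IFS, rasterized into a binary image of the kind that could serve as nonmedical pre-training data:

```python
import numpy as np

rng = np.random.default_rng(0)

def sierpinski_ifs(n_points=20000):
    # Chaos game: each step picks one of three affine maps, each contracting
    # the current point halfway toward a triangle corner.
    corners = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
    pts = np.empty((n_points, 2))
    p = np.array([0.5, 0.5])
    for i in range(n_points):
        p = (p + corners[rng.integers(3)]) / 2.0
        pts[i] = p
    return pts

def rasterize(pts, size=64):
    # Bin the point cloud into a binary image usable as training input.
    img, _, _ = np.histogram2d(pts[:, 0], pts[:, 1],
                               bins=size, range=[[0, 1], [0, 1]])
    return (img > 0).astype(np.float32)

pts = sierpinski_ifs()
img = rasterize(pts)
```

Varying the affine maps and their selection probabilities yields the diverse fractal textures such pre-training schemes rely on.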

A CVAE-based generative model for generalized B<sub>1</sub> inhomogeneity corrected chemical exchange saturation transfer MRI at 5 T.

Zhang R, Zhang Q, Wu Y

PubMed · May 15, 2025
Chemical exchange saturation transfer (CEST) magnetic resonance imaging (MRI) has emerged as a powerful tool to image endogenous or exogenous macromolecules. CEST contrast depends strongly on the radiofrequency irradiation B<sub>1</sub> level, and spatial inhomogeneity of the B<sub>1</sub> field biases CEST measurement. The conventional interpolation-based B<sub>1</sub> correction method requires CEST dataset acquisition under multiple B<sub>1</sub> levels, substantially prolonging scan time. The recently proposed supervised deep learning approach reconstructs the B<sub>1</sub> inhomogeneity corrected CEST effect only at the same B<sub>1</sub> as the training data, hindering its generalization to other B<sub>1</sub> levels. In this study, we proposed a Conditional Variational Autoencoder (CVAE)-based generative model to generate B<sub>1</sub> inhomogeneity corrected Z spectra from a single CEST acquisition. The model was trained on pixel-wise source-target paired Z spectra under multiple B<sub>1</sub> levels, with the target B<sub>1</sub> as a conditional variable. Numerical simulation and healthy human brain imaging at 5 T were performed to evaluate the performance of the proposed model in B<sub>1</sub> inhomogeneity corrected CEST MRI. Results showed that the generated B<sub>1</sub>-corrected Z spectra agreed well with the reference averaged from regions with subtle B<sub>1</sub> inhomogeneity. Moreover, the performance of the proposed model in correcting B<sub>1</sub> inhomogeneity in the APT CEST effect, as measured by both MTR<sub>asym</sub> and [Formula: see text] at 3.5 ppm, was superior to conventional Z/contrast-B<sub>1</sub>-interpolation and other deep learning methods, especially when the target B<sub>1</sub> was not included in the sampling or training dataset. In summary, the proposed model allows generalized B<sub>1</sub> inhomogeneity correction, benefiting quantitative CEST MRI in clinical routine.
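The mechanism that makes the correction generalize is the CVAE's conditional variable: the target B<sub>1</sub> is fed to the decoder alongside the latent code, so one trained decoder can generate corrected Z spectra for any requested B<sub>1</sub>. A structural sketch with a linear map standing in for the decoder network (dimensions and B<sub>1</sub> values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def cvae_decode(z, b1_target, W, b):
    # CVAE decoding: the conditioning variable (target B1 level) is
    # concatenated to the latent code before decoding, so the same weights
    # serve any requested B1.
    return W @ np.concatenate([z, [b1_target]]) + b

latent_dim, spectrum_len = 4, 8          # hypothetical sizes
W = rng.standard_normal((spectrum_len, latent_dim + 1))
b = np.zeros(spectrum_len)
z = rng.standard_normal(latent_dim)      # latent code from the encoder
spec_lo = cvae_decode(z, 1.0, W, b)      # Z spectrum at a 1.0 uT target
spec_hi = cvae_decode(z, 2.0, W, b)      # same latent, different target B1
```

For this linear stand-in the two outputs differ exactly by the decoder column tied to the conditional input, which makes the conditioning pathway easy to inspect.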

Ordered-subsets Multi-diffusion Model for Sparse-view CT Reconstruction

Pengfei Yu, Bin Huang, Minghui Zhang, Weiwen Wu, Shaoyu Wang, Qiegen Liu

arXiv preprint · May 15, 2025
Score-based diffusion models have shown significant promise in the field of sparse-view CT reconstruction. However, the projection dataset is large and riddled with redundancy. Consequently, applying the diffusion model to unprocessed data results in lower learning effectiveness and higher learning difficulty, frequently leading to reconstructed images that lack fine details. To address these issues, we propose the ordered-subsets multi-diffusion model (OSMM) for sparse-view CT reconstruction. The OSMM innovatively divides the CT projection data into equal subsets and employs a multi-subsets diffusion model (MSDM) to learn from each subset independently. This targeted learning approach reduces complexity and enhances the reconstruction of fine details. Furthermore, the integration of a one-whole diffusion model (OWDM) with complete sinogram data acts as a global information constraint, which can reduce the possibility of generating erroneous or inconsistent sinogram information. Moreover, the OSMM's unsupervised learning framework provides strong robustness and generalizability, adapting seamlessly to varying sparsity levels of CT sinograms. This ensures consistent and reliable performance across different clinical scenarios. Experimental results demonstrate that OSMM outperforms traditional diffusion models in terms of image quality and noise resilience, offering a powerful and versatile solution for advanced CT imaging in sparse-view scenarios.
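The abstract does not say how the projection data are partitioned, but ordered-subsets methods conventionally interleave projection views so each subset retains even angular coverage. A minimal sketch of that partitioning step under this assumption:

```python
import numpy as np

def ordered_subsets(sinogram, n_subsets):
    # Split projection views (rows) into equal, interleaved subsets so each
    # sub-model sees evenly spread angular coverage of the scan.
    assert sinogram.shape[0] % n_subsets == 0, "views must divide evenly"
    return [sinogram[i::n_subsets] for i in range(n_subsets)]

sino = np.arange(24, dtype=float).reshape(6, 4)  # 6 views x 4 detector bins
subsets = ordered_subsets(sino, 3)               # 3 subsets of 2 views each
```

In OSMM each such subset would feed one independent diffusion model, with the full sinogram retained for the global (OWDM) constraint.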

Dual-Domain deep prior guided sparse-view CT reconstruction with multi-scale fusion attention.

Wu J, Lin J, Jiang X, Zheng W, Zhong L, Pang Y, Meng H, Li Z

PubMed · May 15, 2025
Sparse-view CT reconstruction is a challenging ill-posed inverse problem, where insufficient projection data leads to degraded image quality with increased noise and artifacts. Recent deep learning approaches have shown promising results in CT reconstruction. However, existing methods often neglect projection data constraints and rely heavily on convolutional neural networks, resulting in limited feature extraction capabilities and inadequate adaptability. To address these limitations, we propose a Dual-domain deep Prior-guided Multi-scale fusion Attention (DPMA) model for sparse-view CT reconstruction, aiming to enhance reconstruction accuracy while ensuring data consistency and stability. First, we establish a residual regularization strategy that applies constraints on the difference between the prior image and target image, effectively integrating deep learning-based priors with model-based optimization. Second, we develop a multi-scale fusion attention mechanism that employs parallel pathways to simultaneously model global context, regional dependencies, and local details in a unified framework. Third, we incorporate a physics-informed consistency module based on range-null space decomposition to ensure adherence to projection data constraints. Experimental results demonstrate that DPMA achieves improved reconstruction quality compared to existing approaches, particularly in noise suppression, artifact reduction, and fine detail preservation.
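The range-null space decomposition behind the consistency module splits any reconstruction into a component the measurements determine (the range space of the operator's pseudoinverse) and a component they cannot see (the null space). Data consistency is enforced by pinning the first to the measurements while keeping the second from the network. A minimal numpy sketch with a random toy operator in place of the CT projector:

```python
import numpy as np

rng = np.random.default_rng(0)

def rns_consistency(x_net, y, A):
    # Range-null space decomposition: the range-space part is fixed by the
    # measurements via the pseudoinverse A^+; the null-space part, which the
    # data cannot constrain, is taken from the network output x_net.
    A_pinv = np.linalg.pinv(A)
    return A_pinv @ y + (np.eye(A.shape[1]) - A_pinv @ A) @ x_net

A = rng.standard_normal((3, 6))   # toy undersampled forward operator
x_true = rng.standard_normal(6)
y = A @ x_true                    # simulated sparse-view measurements
x = rns_consistency(rng.standard_normal(6), y, A)  # any x_net works
```

By construction A @ x reproduces y exactly regardless of the network output, which is what makes the module "physics-informed": the learned prior only fills in what the projections leave undetermined.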