Page 4 of 54537 results

Diffusion-based arbitrary-scale magnetic resonance image super-resolution via progressive k-space reconstruction and denoising.

Wang J, Shi Z, Gu X, Yang Y, Sun J

pubmed logopapers Sep 20 2025
Acquiring high-resolution magnetic resonance (MR) images is challenging due to constraints such as hardware limitations and acquisition times. Super-resolution (SR) techniques offer a potential solution to enhance MR image quality without changing the magnetic resonance imaging (MRI) hardware. However, typical SR methods are designed for fixed upsampling scales and often produce over-smoothed images that lack fine textures and edge details. To address these issues, we propose a unified diffusion-based framework for arbitrary-scale in-plane MR image SR, dubbed Progressive Reconstruction and Denoising Diffusion Model (PRDDiff). Specifically, the forward diffusion process of PRDDiff gradually masks out high-frequency components and adds Gaussian noise to simulate the downsampling process in MRI. To reverse this process, we propose an Adaptive Resolution Restoration Network (ARRNet), which introduces a current step corresponding to the resolution of the input MR image and an ending step corresponding to the target resolution. This design guides the ARRNet in recovering the clean MR image at the target resolution from the input MR image. The SR process starts from an MR image at the initial resolution and gradually enhances it to higher resolutions by progressively reconstructing high-frequency components and removing the noise based on the MR image recovered by the ARRNet. Furthermore, we design a multi-stage SR strategy that incrementally enhances resolution through multiple sequential stages to further improve recovery accuracy. Each stage utilizes a set number of sampling steps from PRDDiff, guided by a specific ending step, to recover details pertinent to the predefined intermediate resolution. We conduct extensive experiments on the fastMRI knee dataset, the fastMRI brain dataset, our real-collected LR-HR brain dataset, and a clinical pediatric cerebral palsy (CP) dataset, including T1-weighted and T2-weighted images for the brain and proton density-weighted images for the knee.
The results demonstrate that PRDDiff outperforms previous MR image super-resolution methods in terms of reconstruction accuracy, generalization, and downstream lesion segmentation and CP classification performance. The code is publicly available at https://github.com/Jiazhen-Wang/PRDDiff-main.
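The forward process this abstract describes admits a compact sketch: mask out high-frequency k-space components, then add Gaussian noise. Below is a minimal NumPy illustration under assumed details (a centered square low-pass window and a fixed noise level); it is not PRDDiff's exact parameterization:

```python
import numpy as np

def forward_step(image, keep_frac, noise_std, rng):
    """Toy analogue of the forward process described above: zero out
    k-space components outside a centered square low-pass window, then add
    Gaussian noise. Window shape, schedule, and noise level are
    illustrative assumptions, not the paper's formulation."""
    k = np.fft.fftshift(np.fft.fft2(image))
    h, w = k.shape
    half_h, half_w = int(h * keep_frac) // 2, int(w * keep_frac) // 2
    mask = np.zeros((h, w), dtype=bool)
    cy, cx = h // 2, w // 2
    mask[cy - half_h:cy + half_h, cx - half_w:cx + half_w] = True
    k_low = np.where(mask, k, 0.0)          # discard high frequencies
    degraded = np.fft.ifft2(np.fft.ifftshift(k_low)).real
    return degraded + noise_std * rng.standard_normal(image.shape)

rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))
step = forward_step(img, keep_frac=0.25, noise_std=0.05, rng=rng)
```

The reverse (SR) process would invert this chain, with the network predicting the clean image at each step.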

Visual language model-assisted spectral CT reconstruction by diffusion and low-rank priors from limited-angle measurements.

Wang Y, Liang N, Ren J, Zhang X, Shen Y, Cai A, Zheng Z, Li L, Yan B

pubmed logopapers Sep 19 2025
Spectral computed tomography (CT) is a critical tool in clinical practice, offering capabilities in multi-energy spectrum imaging and material identification. The limited-angle (LA) scanning strategy has attracted attention for its advantages in fast data acquisition and reduced radiation exposure, aligning with the as-low-as-reasonably-achievable (ALARA) principle. However, most deep learning-based methods require separate models for each LA setting, which limits their flexibility in adapting to new conditions. In this study, we developed a novel Visual-Language model-assisted Spectral CT Reconstruction (VLSR) method to address LA artifacts and enable multi-setting adaptation within a single model. The VLSR method integrates the image-text perception ability of visual-language models with the image generation potential of diffusion models. Prompt engineering is introduced to better represent LA artifact characteristics, further improving artifact correction accuracy. Additionally, a collaborative sampling framework combining data consistency, low-rank regularization, and image-domain diffusion models is developed to produce high-quality and consistent spectral CT reconstructions. VLSR outperforms the comparison methods: at scanning angular ranges of 90° and 60° on simulated data, it improves peak signal-to-noise ratio by at least 0.41 dB and 1.13 dB, respectively. The VLSR method can reconstruct high-quality spectral CT images under diverse LA configurations, allowing faster and more flexible scans with reduced dose.
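The low-rank regularization agent in a collaborative sampler of this kind is commonly realized as singular value soft-thresholding across energy channels. A minimal NumPy sketch, assuming a Casorati-matrix formulation (the paper's exact regularizer may differ):

```python
import numpy as np

def low_rank_prox(channels, tau):
    """Singular value soft-thresholding across spectral channels: stack
    the energy-bin images as rows of a Casorati matrix and shrink its
    singular values. A generic sketch of the low-rank prior, not VLSR's
    exact operator."""
    n_ch, h, w = channels.shape
    M = channels.reshape(n_ch, h * w)        # one row per energy channel
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)      # soft-threshold singular values
    return (U @ np.diag(s_shrunk) @ Vt).reshape(n_ch, h, w)

rng = np.random.default_rng(0)
channels = rng.standard_normal((4, 8, 8))    # 4 hypothetical energy bins
smoothed = low_rank_prox(channels, tau=0.5)
```

In a full sampler this step would alternate with data-consistency and diffusion-prior updates.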

Deep learning-based acceleration and denoising of 0.55T MRI for enhanced conspicuity of vestibular Schwannoma post contrast administration.

Hinsen M, Nagel A, Heiss R, May M, Wiesmueller M, Mathy C, Zeilinger M, Hornung J, Mueller S, Uder M, Kopp M

pubmed logopapers Sep 19 2025
Deep learning (DL)-based MRI denoising techniques promise improved image quality and shorter examination times. This advancement is particularly beneficial for 0.55T MRI, where the inherently lower signal-to-noise ratio (SNR) can compromise image quality. Sufficient SNR is crucial for the reliable detection of vestibular schwannoma (VS). The objective of this study is to evaluate the VS conspicuity and acquisition time (TA) of contrast-enhanced 0.55T MRI examinations using a DL-denoising algorithm. From January 2024 to October 2024, we retrospectively included 30 patients with VS (9 women). We acquired a clinical reference protocol of the cerebellopontine angle containing a T1w fat-saturated (fs) axial (number of signal averages [NSA] 4) and a T1w Spectral Attenuated Inversion Recovery (SPAIR) coronal (NSA 2) sequence after contrast agent (CA) application without advanced DL-based denoising (w/o DL). We reconstructed the axial T1w fs CA sequence and the coronal T1w SPAIR CA sequence first in full DL-denoising mode without changing the NSA (DL&4NSA), and secondly with 1 NSA for both sequences (DL&1NSA). Each sequence was rated on a 5-point Likert scale (1: insufficient; 3: moderate, clinically sufficient; 5: perfect) for overall image quality, VS conspicuity, and artifacts. We also analyzed the reliability of the size measurements. Two radiologists specializing in head and neck imaging performed the readings and measurements. The Wilcoxon signed-rank test was used for non-parametric statistical comparison. The DL&4NSA axial/coronal study sequence achieved the highest overall image quality (IQ; median 4.9). The IQ for DL&1NSA (median 4.0) was higher than for the reference sequence w/o DL (median 3.5; each p < 0.01). Similarly, VS conspicuity was best for DL&4NSA (median 4.9), decreased for DL&1NSA (median 4.1), and was lower but still sufficient w/o DL (median 3.7; each p < 0.01).
The TA for the axial and coronal post-contrast sequences was 8:59 minutes for both DL&4NSA and w/o DL and decreased to 3:24 minutes with DL&1NSA. This study underlines that advanced DL-based denoising techniques can reduce the examination time by more than half while simultaneously improving image quality.
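The Wilcoxon signed-rank test used for the paired Likert comparisons above can be written out explicitly. The sketch below uses a normal approximation and invented paired scores (`ref_scores` and `dl_scores` are hypothetical, not the study's data); real analyses should rely on a statistics package:

```python
import math

def wilcoxon_signed_rank(x, y):
    """Minimal Wilcoxon signed-rank test: zero differences are dropped,
    tied absolute differences get average ranks, and the p-value uses the
    normal approximation. A teaching sketch only."""
    d = [a - b for a, b in zip(x, y) if a != b]
    n = len(d)
    order = sorted(range(n), key=lambda i: abs(d[i]))
    ranks = [0.0] * n
    i = 0
    while i < n:                               # average ranks over ties
        j = i
        while j + 1 < n and abs(d[order[j + 1]]) == abs(d[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    w_plus = sum(r for r, v in zip(ranks, d) if v > 0)
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    z = (w_plus - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return w_plus, p

# Hypothetical paired Likert ratings: reference vs. DL-denoised readings
ref_scores = [3, 4, 3, 4, 3, 4, 3, 4, 3, 4]
dl_scores = [4, 5, 4, 5, 4, 5, 4, 5, 4, 4]
w, p = wilcoxon_signed_rank(dl_scores, ref_scores)
```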

Optimized deep learning-accelerated single-breath-hold abdominal HASTE with and without fat saturation improves and accelerates abdominal imaging at 3 Tesla.

Tan Q, Kubicka F, Nickel D, Weiland E, Hamm B, Geisel D, Wagner M, Walter-Rittel TC

pubmed logopapers Sep 18 2025
Deep learning-accelerated single-shot turbo-spin-echo techniques (DL-HASTE) enable single-breath-hold T2-weighted abdominal imaging. However, studies evaluating the image quality of DL-HASTE with and without fat saturation (FS) remain limited. This study aimed to prospectively evaluate the technical feasibility and image quality of abdominal DL-HASTE with and without FS at 3 Tesla. DL-HASTE of the upper abdomen was acquired with variable sequence parameters regarding FS, flip angle (FA), and field of view (FOV) in 10 healthy volunteers and 50 patients. DL-HASTE sequences were compared to clinical sequences (HASTE, HASTE-FS, and T2-TSE-FS BLADE). Two radiologists independently assessed the sequences regarding overall image quality, delineation of abdominal organs, artifacts, and fat saturation using a Likert scale (range: 1-5). Breath-hold time of DL-HASTE and DL-HASTE-FS was 21 ± 2 s with a fixed FA and 20 ± 2 s with a variable FA (p < 0.001), with no overall image quality difference (p > 0.05). DL-HASTE required a 10% larger FOV than DL-HASTE-FS to avoid aliasing artifacts from subcutaneous fat. Both DL-HASTE and DL-HASTE-FS had significantly higher overall image quality scores than standard HASTE acquisitions (DL-HASTE vs. HASTE: 4.8 ± 0.40 vs. 4.1 ± 0.50; DL-HASTE-FS vs. HASTE-FS: 4.6 ± 0.50 vs. 3.6 ± 0.60; p < 0.001). Compared to T2-TSE-FS BLADE, DL-HASTE-FS provided higher overall image quality (4.6 ± 0.50 vs. 4.3 ± 0.63, p = 0.011). DL-HASTE achieved significantly higher image quality (p = 0.006) and higher organ sharpness scores (p < 0.001) compared to DL-HASTE-FS. Deep learning-accelerated HASTE with and without fat saturation was feasible at 3 Tesla and showed improved image quality compared to conventional sequences.

Rapid and robust quantitative cartilage assessment for the clinical setting: deep learning-enhanced accelerated T2 mapping.

Carretero-Gómez L, Wiesinger F, Fung M, Nunes B, Pedoia V, Majumdar S, Desai AD, Gatti A, Chaudhari A, Sánchez-Lacalle E, Malpica N, Padrón M

pubmed logopapers Sep 18 2025
Clinical adoption of T2 mapping is limited by poor reproducibility, lengthy examination times, and cumbersome image analysis. This study aimed to develop an accelerated deep learning (DL)-enhanced cartilage T2 mapping sequence (DL CartiGram), demonstrate its repeatability and reproducibility, and evaluate its accuracy compared to conventional T2 mapping using a semi-automatic pipeline. DL CartiGram was implemented using a modified 2D multi-echo spin-echo sequence at 3 T, incorporating parallel imaging and DL-based image reconstruction. Phantom tests were performed at two sites to obtain test-retest T2 maps, using single-echo spin-echo (SE) measurements as reference values. At one site, DL CartiGram and conventional T2 mapping were performed on 43 patients. T2 values were extracted from 52 patellar and femoral compartments using DL knee segmentation and the DOSMA framework. Repeatability and reproducibility were assessed using coefficients of variation (CV), Bland-Altman analysis, and concordance correlation coefficients (CCC). T2 differences were evaluated with Wilcoxon signed-rank tests, paired t-tests, and accuracy CV. Phantom tests showed intra-site repeatability with CVs ≤ 2.52% and T2 precision ≤ 1 ms. Inter-site reproducibility showed a CV of 2.74% and a CCC of 99% (CI 92-100%). Bland-Altman analysis showed a bias of 1.56 ms between sites (p = 0.03), likely due to temperature effects. In vivo, DL CartiGram reduced scan time by 40%, yielding accurate cartilage T2 measurements (CV = 0.97%) with no significant differences compared to conventional T2 mapping (p = 0.1). DL CartiGram significantly accelerates T2 mapping while maintaining excellent repeatability and reproducibility. Combined with the semi-automatic post-processing pipeline, it emerges as a promising tool for quantitative T2 cartilage biomarker assessment in clinical settings.
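The T2 values behind such a map come from fitting a mono-exponential decay to the multi-echo signal. Below is a minimal log-linear fit shown on noiseless synthetic echoes; it illustrates the model only (the study's pipeline uses the DOSMA framework, not this routine):

```python
import math

def fit_t2(tes, signals):
    """Log-linear least-squares fit of the mono-exponential decay
    S(TE) = S0 * exp(-TE / T2) underlying multi-echo spin-echo T2
    mapping. ln S is linear in TE with slope -1/T2 and intercept ln S0."""
    ys = [math.log(s) for s in signals]
    n = len(tes)
    mx = sum(tes) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(tes, ys))
             / sum((x - mx) ** 2 for x in tes))
    t2 = -1.0 / slope
    s0 = math.exp(my - slope * mx)
    return s0, t2

# Noiseless check: four echoes from tissue with S0 = 100, T2 = 40 ms
tes = [10.0, 20.0, 30.0, 40.0]
signals = [100.0 * math.exp(-te / 40.0) for te in tes]
s0, t2 = fit_t2(tes, signals)
```

With noisy data, weighted or nonlinear fitting is preferred, since the log transform amplifies noise at long echo times.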

DICE: Diffusion Consensus Equilibrium for Sparse-view CT Reconstruction

Leon Suarez-Rodriguez, Roman Jacome, Romario Gualdron-Hurtado, Ana Mantilla-Dulcey, Henry Arguello

arxiv logopreprint Sep 18 2025
Sparse-view computed tomography (CT) reconstruction is fundamentally challenging due to undersampling, leading to an ill-posed inverse problem. Traditional iterative methods incorporate handcrafted or learned priors to regularize the solution but struggle to capture the complex structures present in medical images. In contrast, diffusion models (DMs) have recently emerged as powerful generative priors that can accurately model complex image distributions. In this work, we introduce Diffusion Consensus Equilibrium (DICE), a framework that integrates a two-agent consensus equilibrium into the sampling process of a DM. DICE alternates between: (i) a data-consistency agent, implemented through a proximal operator enforcing measurement consistency, and (ii) a prior agent, realized by a DM performing a clean image estimation at each sampling step. By balancing these two complementary agents iteratively, DICE effectively combines strong generative prior capabilities with measurement consistency. Experimental results show that DICE significantly outperforms state-of-the-art baselines in reconstructing high-quality CT images under uniform and non-uniform sparse-view settings of 15, 30, and 60 views (out of a total of 180), demonstrating both its effectiveness and robustness.
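The two-agent alternation DICE describes, a proximal data-consistency step followed by a prior step, can be sketched with a generic denoiser standing in for the diffusion model. Everything below (operator size, `rho`, the identity denoiser) is an illustrative assumption, not the authors' sampler:

```python
import numpy as np

def dice_like_step(x, y, A, denoise, rho):
    """One alternation of a two-agent consensus scheme: a data-consistency
    agent given by the proximal operator of 0.5*||Ax - y||^2 (closed form
    (A^T A + rho I)^{-1}(A^T y + rho x)), then a prior agent, here a
    generic denoiser in place of the diffusion model."""
    n = x.size
    lhs = A.T @ A + rho * np.eye(n)
    x_dc = np.linalg.solve(lhs, A.T @ y + rho * x)   # data-consistency agent
    return denoise(x_dc)                             # prior agent

rng = np.random.default_rng(2)
A = rng.standard_normal((8, 16))      # toy underdetermined "sparse-view" operator
x_true = rng.standard_normal(16)
y = A @ x_true
x = np.zeros(16)
for _ in range(20):
    # identity denoiser keeps the sketch deterministic; a DM would go here
    x = dice_like_step(x, y, A, denoise=lambda v: v, rho=1.0)
```

Each prox step cannot increase the measurement residual, so the iterates are driven toward consistency with `y` while the prior agent shapes the solution.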

Mamba-Enhanced Diffusion Model for Perception-Aware Blind Super-Resolution of Magnetic Resonance Imaging.

Zhao X, Yang X, Song Z

pubmed logopapers Sep 18 2025
High-resolution magnetic resonance imaging (HR MRI) provides accurate and rich information that helps doctors detect subtle lesions, delineate tumor boundaries, evaluate small anatomical structures, and assess early-stage pathological changes that might be obscured in lower-resolution images. However, acquiring HR MRI images often requires prolonged scanning time, causing patients physical and mental discomfort. Even slight patient movement may produce motion artifacts and blur the acquired MRI image, compromising the accuracy of clinical diagnosis. To tackle these problems, we propose a novel method, the Mamba-enhanced Diffusion Model (MDM), for perception-aware blind super-resolution of magnetic resonance imaging, which includes two key components: a kernel noise estimator and an SR reconstructor. Specifically, we propose a Perception-aware Blur Kernel Noise estimator (PBKN estimator), which takes advantage of the diffusion model to estimate the blur kernel from low-resolution images. Meanwhile, we construct a novel progressive feature reconstructor, which takes the estimated blur kernel and the content information of LR images as prior knowledge to reconstruct more accurate SR MRI images using a diffusion model. Moreover, we design a novel Semantic Information Fusion Mamba (SIF-Mamba) module for the SR reconstruction task. SIF-Mamba is specifically designed in the progressive feature reconstructor to capture the global context of MRI images and improve feature reconstruction. Extensive experiments demonstrate that our proposed MDM achieves better SR reconstruction results than several state-of-the-art methods. Our codes are available at https://github.com/YXDBright/MDM.
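Blind SR methods like this one assume the classical degradation model: a blur kernel followed by downsampling, with the kernel unknown at test time. A small NumPy sketch of that forward model (illustrative of the problem setup only; MDM's estimator and reconstructor are far richer):

```python
import numpy as np

def degrade(hr, kernel, scale):
    """Classical blind-SR forward model: y = (x convolved with k),
    downsampled by `scale` (noise term omitted). Circular convolution is
    done in the Fourier domain; the kernel is re-centered so the image is
    not shifted."""
    kh, kw = kernel.shape
    pad = np.zeros_like(hr)
    pad[:kh, :kw] = kernel
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    blurred = np.fft.ifft2(np.fft.fft2(hr) * np.fft.fft2(pad)).real
    return blurred[::scale, ::scale]

rng = np.random.default_rng(3)
hr = rng.standard_normal((8, 8))
# identity (1x1) kernel reduces the model to pure downsampling
lr = degrade(hr, np.array([[1.0]]), scale=2)
```

A blind-SR estimator's job is to recover `kernel` from `lr` alone, which is what makes the problem ill-posed.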

MRI on a Budget: Leveraging Low and Ultra-Low Intensity Technology in Africa.

Ussi KK, Mtenga RB

pubmed logopapers Sep 18 2025
Magnetic resonance imaging (MRI) is a cornerstone of brain and spine diagnostics, yet access across Africa is limited by high installation costs, power requirements, and the need for specialized shielding and facilities. Low- and ultra-low-field (ULF) MRI systems operating below 0.3 T are emerging as a practical alternative to expand neuroimaging capacity in resource-constrained settings. However, ULF MRI faces challenges that hinder its use in clinical settings. Technological advances that tackle these challenges, such as permanent Halbach-array magnets, portable scanner designs like those successfully deployed in Uganda and Malawi, and deep learning methods including convolutional neural network electromagnetic interference cancellation and residual U-Net image reconstruction, have improved image quality and reduced noise, making ULF MRI increasingly viable. We review the state of low-field MRI technology, its application in point-of-care and rural contexts, and the specific limitations that remain, including reduced signal-to-noise ratio, larger voxel size requirements, and susceptibility to motion artifacts. Although not a replacement for high-field scanners in detecting subtle or small lesions, low-field MRI offers a promising pathway to broaden diagnostic imaging availability, support clinical decision-making, and advance equitable neuroimaging research in under-resourced regions. ABBREVIATIONS: CNN = convolutional neural network; EMI = electromagnetic interference; FID = free induction decay; LMIC = low- and middle-income countries; MRI = magnetic resonance imaging; NCDs = non-communicable diseases; RF = radiofrequency pulse; SNR = signal-to-noise ratio; TBI = traumatic brain injury.

Assessing the Feasibility of Deep Learning-Based Attenuation Correction Using Photon Emission Data in 18F-FDG Images for Dedicated Head and Neck PET Scanners.

Shahrbabaki Mofrad M, Ghafari A, Amiri Tehrani Zade A, Aghahosseini F, Ay M, Farzenefar S, Sheikhzadeh P

pubmed logopapers Sep 18 2025
This study aimed to evaluate the use of deep learning techniques to produce measured attenuation-corrected (MAC) images from non-attenuation-corrected (NAC) 18F-FDG PET images, focusing on head and neck imaging. Materials and Methods: A residual network (ResNet) was trained on 2D head and neck PET images from 114 patients (12,068 slices) without pathology or artifacts. For validation during training and for testing, 21 and 24 patient images without pathology or artifacts were used, respectively, and 12 images with pathologies were used for independent testing. Prediction accuracy was assessed using metrics such as RMSE, SSIM, PSNR, and MSE. The impact of unseen pathologies on the network was evaluated by measuring contrast and SNR in tumoral/hot regions of both reference and predicted images. Statistical significance of differences in contrast and SNR between reference and predicted images was assessed using a paired-sample t-test. Results: Two nuclear medicine physicians evaluated the predicted head and neck MAC images, finding them visually similar to the reference images. In the normal test group, PSNR, SSIM, RMSE, and MSE were 44.02 ± 1.77, 0.99 ± 0.002, 0.007 ± 0.0019, and 0.000053 ± 0.000030, respectively. For the pathological test group, the values were 43.14 ± 2.10, 0.99 ± 0.005, 0.0078 ± 0.0015, and 0.000063 ± 0.000026, respectively. No significant differences were found in SNR and contrast between reference and test images without pathology (p > 0.05), but significant differences were found in pathological images (p < 0.05). Conclusion: The deep learning network demonstrated the ability to directly generate head and neck MAC images closely resembling the reference images. With additional training data, the model has the potential to be utilized in dedicated head and neck PET scanners without requiring computed tomography (CT) for attenuation correction.
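The fidelity metrics reported above (MSE, RMSE, PSNR) have short closed forms; SSIM is more involved and is usually taken from an image-processing library. A minimal sketch with a worked check:

```python
import math
import numpy as np

def mse(ref, pred):
    """Mean squared error between reference and predicted images."""
    return float(np.mean((ref - pred) ** 2))

def rmse(ref, pred):
    """Root mean squared error."""
    return math.sqrt(mse(ref, pred))

def psnr(ref, pred, data_range=1.0):
    """Peak signal-to-noise ratio in dB over a known intensity range."""
    return 10.0 * math.log10(data_range ** 2 / mse(ref, pred))

# Worked check: a uniform 0.1 error on a unit-range image gives 20 dB
ref = np.zeros((4, 4))
pred = np.full((4, 4), 0.1)
```

Note that PSNR depends on the assumed `data_range`; for PET images normalized to [0, 1] the default applies, otherwise pass the true dynamic range.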

Dose reduction in 4D CT imaging: Breathing signal-guided deep learning-driven data acquisition.

Wimmert L, Gauer T, Dickmann J, Hofmann C, Sentker T, Werner R

pubmed logopapers Sep 18 2025
4D CT imaging is essential for radiotherapy planning in thoracic tumors. However, current protocols tend to acquire more projection data than is strictly necessary for reconstructing the 4D CT, potentially leading to unnecessary radiation exposure and misalignment with the ALARA (As Low As Reasonably Achievable) principle. We propose a deep learning (DL)-driven approach that uses the patient's breathing signal to guide data acquisition, aiming to acquire only the necessary projection data. This retrospective study analyzed 1,415 breathing signals from 294 patients, with a 75/25 training/validation split at the patient level. Based on the signals, a DL model was trained to predict optimal beam-on events for projection data acquisition. Model testing was performed on 104 independent clinical 4D CT scans. The performance of the model was assessed by measuring temporal alignment between predicted and optimal beam-on events. To assess the impact on the reconstructed images, each 4D dataset was reconstructed twice: (1) using all clinically acquired projections (reference) and (2) using only the model-selected projections (dose-reduced). Reference and dose-reduced images were compared using Dice coefficients for organ segmentations, deformable image registration (DIR)-based displacement fields, artifact frequency, and tumor segmentation agreement, the latter evaluated in terms of Hausdorff distance and tumor motion ranges. The proposed approach reduced beam-on time and imaging dose by a median of 29% (IQR: 24-35%), corresponding to an 11.6 mGy dose reduction for a standard 4D CT CTDIvol of 40 mGy. Temporal alignment between predicted and optimal beam-on events showed only marginal differences. Similarly, reconstructed dose-reduced images showed only minimal differences from the reference images, demonstrated by high lung and liver segmentation Dice values, small-magnitude DIR displacement fields, and unchanged artifact frequency.
Minor deviations in tumor segmentation and motion ranges relative to the reference suggest only minimal impact of the proposed approach on treatment planning. The proposed DL-driven data acquisition approach can reduce radiation exposure during 4D CT imaging while preserving diagnostic quality, offering a clinically viable, ALARA-adherent solution for 4D CT imaging.
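The Dice coefficient used above to compare organ segmentations on reference versus dose-reduced images is a one-line overlap measure on binary masks. A small sketch with toy masks:

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice overlap between two binary masks: 2|A & B| / (|A| + |B|).
    Returns 1.0 for two empty masks by convention."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0
    return 2.0 * np.logical_and(a, b).sum() / denom

# Two 4x4 toy "organ" masks whose 2x2 squares overlap in 2 pixels
a = np.zeros((4, 4), dtype=int); a[0:2, 0:2] = 1
b = np.zeros((4, 4), dtype=int); b[1:3, 0:2] = 1
```

Dice of 1.0 means perfect agreement; values near 1 on lung and liver masks are what supports the "minimal differences" claim above.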