Nuclear Diffusion Models for Low-Rank Background Suppression in Videos

Tristan S. W. Stevens, Oisín Nolan, Jean-Luc Robert, Ruud J. G. van Sloun

arXiv preprint · Sep 25 2025
Video sequences often contain structured noise and background artifacts that obscure dynamic content, posing challenges for accurate analysis and restoration. Robust principal component methods address this by decomposing data into low-rank and sparse components. However, the sparsity assumption often fails to capture the rich variability present in real video data. To overcome this limitation, a hybrid framework that integrates low-rank temporal modeling with diffusion posterior sampling is proposed. The proposed method, Nuclear Diffusion, is evaluated on a real-world medical imaging problem, namely cardiac ultrasound dehazing, and demonstrates improved dehazing performance compared to traditional RPCA in both contrast enhancement (gCNR) and signal preservation (KS statistic). These results highlight the potential of combining model-based temporal modeling with deep generative priors for high-fidelity video restoration.
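For context, a minimal numpy sketch of the RPCA-style low-rank plus sparse decomposition the abstract contrasts against, written as a simplified alternating proximal scheme (singular-value thresholding for the low-rank part, soft-thresholding for the sparse part); this is an illustrative baseline, not the authors' Nuclear Diffusion method:

import numpy as np

def svt(M, tau):
    # Singular-value thresholding: proximal operator of the nuclear norm.
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def shrink(M, tau):
    # Soft-thresholding: proximal operator of the l1 norm.
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def rpca(Y, lam=None, mu=1.0, n_iter=100):
    # Split Y into low-rank L (static background) + sparse S (dynamic content).
    if lam is None:
        lam = 1.0 / np.sqrt(max(Y.shape))  # common default weight
    L = np.zeros_like(Y)
    S = np.zeros_like(Y)
    for _ in range(n_iter):
        L = svt(Y - S, 1.0 / mu)
        S = shrink(Y - L, lam / mu)
    return L, S

# Usage: stack vectorized frames as columns (a Casorati matrix) and decompose.
frames = np.random.rand(64 * 64, 30)  # placeholder clip: 30 frames of 64x64
L, S = rpca(frames)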

T2I-Diff: fMRI Signal Generation via Time-Frequency Image Transform and Classifier-Free Denoising Diffusion Models

Hwa Hui Tew, Junn Yong Loo, Yee-Fan Tan, Xinyu Tang, Hernando Ombao, Fuad Noman, Raphael C. -W. Phan, Chee-Ming Ting

arXiv preprint · Sep 25 2025
Functional Magnetic Resonance Imaging (fMRI) is an advanced neuroimaging method that enables in-depth analysis of brain activity by measuring dynamic changes in blood oxygenation level-dependent (BOLD) signals. However, the resource-intensive nature of fMRI data acquisition limits the availability of the high-fidelity samples required for data-driven brain analysis models. While modern generative models can synthesize fMRI data, they often underperform because they overlook the complex non-stationarity and nonlinear dynamics of BOLD signals. To address these challenges, we introduce T2I-Diff, an fMRI generation framework that leverages a time-frequency representation of BOLD signals and classifier-free denoising diffusion. Specifically, our framework first converts BOLD signals into windowed spectrograms via a time-dependent Fourier transform, capturing both the underlying temporal dynamics and spectral evolution. Subsequently, a classifier-free diffusion model is trained to generate class-conditioned frequency spectrograms, which are then converted back to BOLD signals via inverse Fourier transforms. Finally, we validate the efficacy of our approach by demonstrating improved accuracy and generalization in downstream fMRI-based brain network classification.
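A minimal scipy sketch of the spectrogram round-trip this framework builds on, with the short-time Fourier transform standing in for the paper's time-dependent Fourier transform; the sampling rate, window length, and phase handling below are assumptions the abstract does not specify:

import numpy as np
from scipy.signal import stft, istft

fs = 0.5  # assumed fMRI sampling rate in Hz (TR = 2 s)
bold = np.random.randn(300)  # placeholder BOLD time series for one region

# Forward: windowed spectrogram capturing temporal and spectral evolution.
f, t, Z = stft(bold, fs=fs, nperseg=64, noverlap=48)
magnitude = np.abs(Z)  # the image-like input a diffusion model would see

# Inverse: map a (generated) spectrogram back to a BOLD signal. Here we
# reuse the original phase, since the abstract leaves phase recovery open.
Z_gen = magnitude * np.exp(1j * np.angle(Z))
_, bold_rec = istft(Z_gen, fs=fs, nperseg=64, noverlap=48)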

Pseudo PET synthesis from CT based on deep neural networks.

Wang H, Zou W, Wang J, Li J, Zhang B

PubMed paper · Sep 24 2025
Objective. Integrated PET/CT imaging plays a vital role in tumor diagnosis by offering both anatomical and functional information. However, the high cost and limited accessibility of PET imaging, together with concerns about cumulative radiation exposure in repeated scans, may restrict its clinical use. This study aims to develop a cross-modal medical image synthesis method for generating PET images from CT scans, with a particular focus on accurately synthesizing lesion regions.
Approach. We propose a two-stage Generative Adversarial Network termed MMF-PAE-GAN (Multi-modal Fusion Pre-trained AutoEncoder GAN) that integrates a pre-GAN and a post-GAN via a Pre-trained AutoEncoder (PAE). The pre-GAN produces an initial pseudo PET image and provides the post-GAN with PET-related multi-scale features. Unlike a traditional Sample Adaptive Encoder (SAE), the PAE enhances sample-specific representation by extracting multi-scale contextual features. To capture both lesion-related and non-lesion-related anatomical information, two CT scans processed under different window settings are fed into the post-GAN. Furthermore, a Multi-modal Weighted Feature Fusion Module (MMWFFM) is introduced to dynamically highlight informative cross-modal features while suppressing redundancies. A Perceptual Loss (PL), computed with the PAE, is also used to impose feature-space constraints and improve the fidelity of lesion synthesis.
Main results. On the AutoPET dataset, our method achieved a PSNR of 29.1781 dB, MAE of 0.0094, SSIM of 0.9217, and NMSE of 0.3651 for pixel-level metrics, along with a sensitivity of 85.31%, specificity of 97.02%, and accuracy of 95.97% for slice-level classification metrics. On the FAHSU dataset, the corresponding values were a PSNR of 29.1506 dB, MAE of 0.0095, SSIM of 0.9193, NMSE of 0.3663, sensitivity of 84.51%, specificity of 96.82%, and accuracy of 95.71%.
Significance. The proposed MMF-PAE-GAN can generate high-quality PET images directly from CT scans without the need for radioactive tracers, potentially improving the accessibility of functional imaging and reducing costs in clinical scenarios where PET acquisition is limited or repeated scans are not feasible.
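A minimal PyTorch sketch of a PAE-based perceptual loss in the spirit the abstract describes, assuming a hypothetical pae_encoder that returns a list of multi-scale feature maps; the authors' exact formulation may differ:

import torch
import torch.nn.functional as F

def perceptual_loss(pae_encoder, fake_pet, real_pet):
    # Compare multi-scale encoder activations of synthesized and reference
    # PET in feature space rather than pixel space.
    feats_fake = pae_encoder(fake_pet)   # assumed: list of feature maps
    feats_real = pae_encoder(real_pet)
    loss = torch.zeros((), device=fake_pet.device)
    for ff, fr in zip(feats_fake, feats_real):
        loss = loss + F.l1_loss(ff, fr.detach())
    return loss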

Localizing Knee Pain via Explainable Bayesian Generative Models and Counterfactual MRI: Data from the Osteoarthritis Initiative.

Chuang TY, Lian PH, Kuo YC, Chang GH

PubMed paper · Sep 24 2025
Osteoarthritis (OA) pain often does not correlate with magnetic resonance imaging (MRI)-detected structural abnormalities, limiting the clinical utility of traditional volume-based lesion assessments. To address this mismatch, we present a novel explainable artificial intelligence (XAI) framework that localizes pain-driving abnormalities in knee MR images via counterfactual image synthesis and Shapley-based feature attribution. Our method combines a Bayesian generative network, trained to synthesize asymptomatic versions of symptomatic knees, with a black-box pain classifier to generate counterfactual MRI scans. These counterfactuals, constrained by multimodal segmentation and uncertainty-aware inference, isolate lesion regions that are likely responsible for symptoms. Applying Shapley additive explanations (SHAP) to the classifier's output enables the contribution of each lesion to pain to be precisely quantified. We trained and validated this framework on 2148 knee pairs from a multicenter study of the Osteoarthritis Initiative (OAI), achieving high anatomical specificity in identifying pain-relevant features such as patellar effusions and bone marrow lesions. An odds ratio (OR) analysis revealed that SHAP-derived lesion scores were significantly more strongly associated with pain than raw lesion volumes (OR 6.75 vs. 3.73 in patellar regions), supporting the interpretability and clinical relevance of the model. Compared with conventional saliency methods and volumetric measures, our approach demonstrates superior lesion-level resolution and highlights the spatial heterogeneity of OA pain mechanisms. These results establish a new direction for interpretable, lesion-specific MRI analyses that could guide personalized treatment strategies for musculoskeletal disorders.
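For readers unfamiliar with the odds-ratio comparison, a small scikit-learn sketch of how such ORs are typically obtained from a logistic model; the features and labels below are random placeholders, not OAI data:

import numpy as np
from sklearn.linear_model import LogisticRegression

X = np.random.rand(200, 1)                    # per-knee lesion score (placeholder)
y = (np.random.rand(200) < 0.4).astype(int)   # pain label (placeholder)

model = LogisticRegression().fit(X, y)
# In a logistic model, exp(coefficient) is the odds ratio per unit increase
# of the score; the paper reports OR 6.75 for SHAP-derived scores vs. 3.73
# for raw lesion volumes in patellar regions.
print(f"OR per unit score: {np.exp(model.coef_[0, 0]):.2f}")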

Generating Brain MRI with StyleGAN2-ADA: The Effect of the Training Set Size on the Quality of Synthetic Images.

Lai M, Mascalchi M, Tessa C, Diciotti S

PubMed paper · Sep 23 2025
The potential of deep learning for medical imaging is often constrained by limited data availability. Generative models can unlock this potential by generating synthetic data that reproduces the statistical properties of real data while being more accessible for sharing. In this study, we investigated the influence of training set size on the performance of a state-of-the-art generative adversarial network, the StyleGAN2-ADA, trained on a cohort of 3,227 subjects from the OpenBHB dataset to generate 2D slices of brain MR images from healthy subjects. The quality of the synthetic images was assessed through qualitative evaluations and state-of-the-art quantitative metrics, which are provided in a publicly accessible repository. Our results demonstrate that StyleGAN2-ADA generates realistic and high-quality images, deceiving even expert radiologists while preserving privacy, as it did not memorize training images. Notably, increasing the training set size led to slight improvements in fidelity metrics. However, training set size had no noticeable impact on diversity metrics, highlighting the persistent limitation of mode collapse. Furthermore, we observed that diversity metrics, such as coverage and β-recall, are highly sensitive to the number of synthetic images used in their computation, leading to inflated values when synthetic data significantly outnumber real ones. These findings underscore the need to carefully interpret diversity metrics and the importance of employing complementary evaluation strategies for robust assessment. Overall, while StyleGAN2-ADA shows promise as a tool for generating privacy-preserving synthetic medical images, overcoming diversity limitations will require exploring alternative generative architectures or incorporating additional regularization techniques.
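A minimal numpy sketch of the coverage metric (in the sense of Naeem et al., 2020) that makes the reported sensitivity concrete: because a real sample counts as covered once any synthetic sample falls inside its neighborhood, enlarging the synthetic set can only raise the score. Feature dimensions and sample counts below are arbitrary:

import numpy as np
from scipy.spatial.distance import cdist

def coverage(real, fake, k=5):
    # Fraction of real samples whose k-NN ball (radius = distance to their
    # k-th nearest real neighbour) contains at least one synthetic sample.
    d_rr = cdist(real, real)
    np.fill_diagonal(d_rr, np.inf)
    radii = np.sort(d_rr, axis=1)[:, k - 1]
    d_rf = cdist(real, fake)
    return float(np.mean(d_rf.min(axis=1) < radii))

real_feats = np.random.randn(500, 64)    # e.g., embeddings of real slices
fake_feats = np.random.randn(5000, 64)   # synthetic set outnumbering real 10:1
print(coverage(real_feats, fake_feats))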

Conditional Diffusion Models for CT Image Synthesis from CBCT: A Systematic Review

Alzahra Altalib, Chunhui Li, Alessandro Perelli

arXiv preprint · Sep 22 2025
Objective: Cone-beam computed tomography (CBCT) provides a low-dose imaging alternative to conventional CT, but suffers from noise, scatter, and artifacts that degrade image quality. Synthetic CT (sCT) aims to translate CBCT into high-quality CT-like images for improved anatomical accuracy and dosimetric precision. Although deep learning approaches have shown promise, they often face limitations in generalizability and detail preservation. Conditional diffusion models (CDMs), with their iterative refinement process, offer a novel solution. This review systematically examines the use of CDMs for CBCT-to-sCT synthesis. Methods: A systematic search was conducted in Web of Science, Scopus, and Google Scholar for studies published between 2013 and 2024. Inclusion criteria targeted works employing conditional diffusion models specifically for sCT generation. Eleven relevant studies were identified and analyzed to address three questions: (1) What conditional diffusion methods are used? (2) How do they compare to conventional deep learning in accuracy? (3) What are their clinical implications? Results: CDMs incorporating anatomical priors and spatial-frequency features demonstrated improved structural preservation and noise robustness. Energy-guided and hybrid latent models enabled enhanced dosimetric accuracy and personalized image synthesis. Across studies, CDMs consistently outperformed traditional deep learning models in noise suppression and artifact reduction, especially in challenging cases such as lung imaging and dual-energy CT. Conclusion: Conditional diffusion models show strong potential for generalized, accurate sCT generation from CBCT. However, clinical adoption remains limited. Future work should focus on scalability, real-time inference, and integration with multi-modal imaging to enhance clinical relevance.
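As background on the iterative refinement the review centers on, a generic conditional DDPM sampling loop in PyTorch; eps_model and its signature are assumptions, and the eleven reviewed methods differ in how the CBCT conditions the denoiser:

import torch

@torch.no_grad()
def sample_sct(eps_model, cbct, T=1000):
    # Generic conditional reverse diffusion: start from noise and refine,
    # feeding the CBCT to the noise predictor at every step.
    betas = torch.linspace(1e-4, 2e-2, T)
    alphas = 1.0 - betas
    abar = torch.cumprod(alphas, dim=0)
    x = torch.randn_like(cbct)
    for t in reversed(range(T)):
        eps = eps_model(x, cbct, t)  # assumed signature: (x_t, condition, t)
        mean = (x - betas[t] / torch.sqrt(1.0 - abar[t]) * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x  # synthetic-CT estimate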

Echo-Path: Pathology-Conditioned Echo Video Generation

Kabir Hamzah Muhammad, Marawan Elbatel, Yi Qin, Xiaomeng Li

arXiv preprint · Sep 21 2025
Cardiovascular diseases (CVDs) remain the leading cause of mortality globally, and echocardiography is critical for the diagnosis of both common and congenital cardiac conditions. However, echocardiographic data for certain pathologies are scarce, hindering the development of robust automated diagnosis models. In this work, we propose Echo-Path, a novel generative framework that produces echocardiogram videos conditioned on specific cardiac pathologies. Echo-Path can synthesize realistic ultrasound video sequences that exhibit targeted abnormalities, focusing here on atrial septal defect (ASD) and pulmonary arterial hypertension (PAH). Our approach introduces a pathology-conditioning mechanism into a state-of-the-art echo video generator, allowing the model to learn and control disease-specific structural and motion patterns in the heart. Quantitative evaluation demonstrates that the synthetic videos achieve low distribution distances, indicating high visual fidelity. Clinically, the generated echoes exhibit plausible pathology markers. Furthermore, classifiers trained on our synthetic data generalize well to real data and, when used to augment real training sets, improve downstream diagnosis of ASD and PAH by 7% and 8%, respectively. Code, weights, and dataset are available at https://github.com/Marshall-mk/EchoPathv1
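The abstract does not detail the conditioning mechanism; one common design, sketched hypothetically below, embeds the pathology label and adds it to the generator's timestep embedding so denoising is steered toward the target pathology:

import torch
import torch.nn as nn

class PathologyConditioning(nn.Module):
    # Hypothetical: 0 = normal, 1 = ASD, 2 = PAH mapped to a learned vector
    # that is summed with the diffusion timestep embedding.
    def __init__(self, n_classes=3, dim=256):
        super().__init__()
        self.embed = nn.Embedding(n_classes, dim)

    def forward(self, t_emb, pathology):
        return t_emb + self.embed(pathology)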

A Novel Metric for Detecting Memorization in Generative Models for Brain MRI Synthesis

Antonio Scardace, Lemuel Puglisi, Francesco Guarnera, Sebastiano Battiato, Daniele Ravì

arXiv preprint · Sep 20 2025
Deep generative models have emerged as a transformative tool in medical imaging, offering substantial potential for synthetic data generation. However, recent empirical studies highlight a critical vulnerability: these models can memorize sensitive training data, posing significant risks of unauthorized patient information disclosure. Detecting memorization in generative models remains particularly challenging, necessitating scalable methods capable of identifying training data leakage across large sets of generated samples. In this work, we propose DeepSSIM, a novel self-supervised metric for quantifying memorization in generative models. DeepSSIM is trained to: i) project images into a learned embedding space and ii) force the cosine similarity between embeddings to match the ground-truth SSIM (Structural Similarity Index) scores computed in the image space. To capture domain-specific anatomical features, training incorporates structure-preserving augmentations, allowing DeepSSIM to estimate similarity reliably without requiring precise spatial alignment. We evaluate DeepSSIM in a case study involving synthetic brain MRI data generated by a Latent Diffusion Model (LDM) trained under memorization-prone conditions, using 2,195 MRI scans from two publicly available datasets (IXI and CoRR). Compared to state-of-the-art memorization metrics, DeepSSIM achieves superior performance, improving F1 scores by an average of +52.03% over the best existing method. Code and data of our approach are publicly available at the following link: https://github.com/brAIn-science/DeepSSIM.
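A minimal PyTorch sketch of the stated training objective, regressing the cosine similarity of two embeddings onto the image-space SSIM of the pair; the MSE regression loss here is an assumption:

import torch
import torch.nn.functional as F

def deepssim_loss(encoder, img_a, img_b, ssim_ab):
    # encoder: maps a batch of images to (B, D) embeddings.
    # ssim_ab: ground-truth SSIM of each pair, computed in image space.
    za = F.normalize(encoder(img_a), dim=1)
    zb = F.normalize(encoder(img_b), dim=1)
    cos = (za * zb).sum(dim=1)          # cosine similarity per pair
    return F.mse_loss(cos, ssim_ab)     # force cos to match SSIM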

Insertion of hepatic lesions into clinical photon-counting-detector CT projection data.

Gong H, Kharat S, Wellinghoff J, El Sadaney AO, Fletcher JG, Chang S, Yu L, Leng S, McCollough CH

PubMed paper · Sep 19 2025
To facilitate task-driven image quality assessment of lesion detectability in clinical photon-counting-detector CT (PCD-CT), it is desirable to have patient image data with known pathology and precise annotation. Standard patient case collection and reference standard establishment are time- and resource-intensive. To mitigate this challenge, we aimed to develop a projection-domain lesion insertion framework that efficiently creates realistic patient cases by digitally inserting real radiopathologic features into patient PCD-CT images.
Approach. This framework used artificial-intelligence-assisted (AI) semi-automatic annotation to generate digital lesion models from real lesion images. The x-ray energy for commercial beam-hardening correction in the PCD-CT system was estimated and used to calculate multi-energy forward projections of these lesion models at different energy thresholds. Lesion projections were subsequently added to patient projections from PCD-CT exams. The modified projections were reconstructed to form realistic lesion-present patient images, using the CT manufacturer's offline reconstruction software. Image quality was qualitatively and quantitatively validated in phantom scans and patient cases with liver lesions, using visual inspection, CT number accuracy, the structural similarity index (SSIM), and radiomic feature analysis. Statistical tests were performed using the Wilcoxon signed-rank test.
Main results. No statistically significant discrepancy (p > 0.05) in CT numbers was observed between original and re-inserted tissue- and contrast-media-mimicking rods and hepatic lesions (mean ± standard deviation: rods 0.4 ± 2.3 HU, lesions -1.8 ± 6.4 HU). The inserted lesions showed morphological features similar to the originals at the re-inserted locations (SSIM 0.95 ± 0.02, mean ± standard deviation), and the corresponding radiomic features formed highly similar feature clusters with no statistically significant differences (p > 0.05).
Significance. The proposed framework can generate patient PCD-CT exams with realistic liver lesions using archived patient data and lesion images. It will facilitate systematic evaluation of PCD-CT systems and of advanced reconstruction and post-processing algorithms with target pathological features.
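A toy sketch of projection-domain insertion, with skimage's parallel-beam radon/iradon standing in for the paper's multi-energy forward projector and the vendor's offline reconstruction:

import numpy as np
from skimage.transform import radon, iradon

theta = np.linspace(0.0, 180.0, 360, endpoint=False)
patient = np.random.rand(256, 256)   # placeholder for a patient slice
lesion = np.zeros_like(patient)
lesion[120:136, 120:136] = 0.05      # toy digital lesion model

# Forward-project the lesion, add it to the patient projections, reconstruct.
sino = radon(patient, theta=theta, circle=False) + radon(lesion, theta=theta, circle=False)
with_lesion = iradon(sino, theta=theta, filter_name='ramp', circle=False)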

SLaM-DiMM: Shared Latent Modeling for Diffusion Based Missing Modality Synthesis in MRI

Bhavesh Sandbhor, Bheeshm Sharma, Balamurugan Palaniappan

arXiv preprint · Sep 19 2025
Brain MRI scans are commonly acquired in four modalities: T1-weighted with and without contrast enhancement (T1ce and T1w), T2-weighted imaging (T2w), and FLAIR. Leveraging complementary information from these modalities enables models to learn richer, more discriminative features of brain anatomy, which can be used in downstream tasks such as anomaly detection. However, in clinical practice, not all MRI modalities are always available, for a variety of reasons. This makes missing-modality generation a critical challenge in medical image analysis. In this paper, we propose SLaM-DiMM, a novel missing-modality generation framework that harnesses the power of diffusion models to synthesize any of the four target MRI modalities from the other available modalities. Our approach not only generates high-fidelity images but also ensures structural coherence across the depth of the volume through a dedicated coherence enhancement mechanism. Qualitative and quantitative evaluations on the BraTS-Lighthouse-2025 Challenge dataset demonstrate the effectiveness of the proposed approach in synthesizing anatomically plausible and structurally consistent results. Code is available at https://github.com/BheeshmSharma/SLaM-DiMM-MICCAI-BraTS-Challenge-2025.
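The abstract leaves the conditioning unspecified; a hypothetical sketch of one common scheme, concatenating whichever modalities are present as input channels (zeros for the missing one) so a single diffusion model can target any modality:

import torch

MODALITIES = ("t1w", "t1ce", "t2w", "flair")

def modality_condition(available, shape):
    # available: dict mapping modality name -> (B, 1, H, W) tensor.
    # Missing modalities are replaced by zero channels of the same shape.
    chans = [available.get(m, torch.zeros(shape)) for m in MODALITIES]
    return torch.cat(chans, dim=1)  # (B, 4, H, W) conditioning volume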