
D2Diff: A Dual Domain Diffusion Model for Accurate Multi-Contrast MRI Synthesis

Sanuwani Dayarathna, Himashi Peiris, Kh Tohidul Islam, Tien-Tsin Wong, Zhaolin Chen

arXiv preprint · Jun 18, 2025
Multi-contrast MRI synthesis is inherently challenging due to the complex and nonlinear relationships among different contrasts. Each MRI contrast highlights unique tissue properties, but their complementary information is difficult to exploit due to variations in intensity distributions and contrast-specific textures. Existing methods for multi-contrast MRI synthesis primarily utilize spatial-domain features, which capture localized anatomical structures but struggle to model global intensity variations and distributed patterns. Conversely, frequency-domain features provide structured inter-contrast correlations but lack spatial precision, limiting their ability to retain finer details. To address this, we propose a dual-domain learning framework that integrates spatial- and frequency-domain information across multiple MRI contrasts for enhanced synthesis. Our method employs two mutually trained denoising networks, one conditioned on spatial-domain and the other on frequency-domain contrast features, through a shared critic network. Additionally, an uncertainty-driven mask loss directs the model's focus toward more critical regions, further improving synthesis accuracy. Extensive experiments show that our method outperforms SOTA baselines, and the downstream segmentation performance highlights the diagnostic value of the synthetic results.
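The abstract does not include implementation details; as a rough illustration of the dual-domain idea, the sketch below (all names hypothetical, not the authors' code) derives a spatial-domain and a frequency-domain conditioning input from a single contrast slice, using the centred log-magnitude FFT spectrum as the frequency-domain view.

```python
import numpy as np

def dual_domain_features(contrast: np.ndarray):
    """Split one MRI contrast into spatial- and frequency-domain conditioning inputs.

    Hypothetical helper: the spatial branch sees the image itself, while the
    frequency branch sees the centred log-magnitude spectrum, which captures
    global intensity structure shared across contrasts.
    """
    spatial = contrast.astype(np.float32)
    spectrum = np.fft.fftshift(np.fft.fft2(spatial))
    freq = np.log1p(np.abs(spectrum)).astype(np.float32)  # log-magnitude, centred
    return spatial, freq

# Example: a 2D slice standing in for a real T1-weighted image
slice_t1 = np.random.rand(256, 256)
spatial_cond, freq_cond = dual_domain_features(slice_t1)
print(spatial_cond.shape, freq_cond.shape)  # (256, 256) (256, 256)
```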

DiffM⁴RI: A Latent Diffusion Model with Modality Inpainting for Synthesizing Missing Modalities in MRI Analysis.

Ye W, Guo Z, Ren Y, Tian Y, Shen Y, Chen Z, He J, Ke J, Shen Y

PubMed · Jun 17, 2025
Foundation Models (FMs) have shown great promise for multimodal medical image analysis such as Magnetic Resonance Imaging (MRI). However, certain MRI sequences may be unavailable due to various constraints, such as limited scanning time, patient discomfort, or scanner limitations. The absence of certain modalities can hinder the performance of FMs in clinical applications, making effective missing-modality imputation crucial for ensuring their applicability. Previous approaches, including generative adversarial networks (GANs), have been employed to synthesize missing modalities in either a one-to-one or many-to-one manner. However, these methods have limitations, as they require training a new model for different missing scenarios and are prone to mode collapse, generating limited diversity in the synthesized images. To address these challenges, we propose DiffM⁴RI, a diffusion model for many-to-many missing-modality imputation in MRI. DiffM⁴RI innovatively formulates missing-modality imputation as a modality-level inpainting task, enabling it to handle arbitrary missing-modality situations without the need for training multiple networks. Experiments on the BraTS datasets demonstrate that DiffM⁴RI can achieve an average SSIM improvement of 0.15 over MustGAN, 0.1 over SynDiff, and 0.02 over VQ-VAE-2. These results highlight the potential of DiffM⁴RI in enhancing the reliability of FMs in clinical applications. The code is available at https://github.com/27yw/DiffM4RI.
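To make the modality-level inpainting formulation concrete, here is a minimal, hypothetical pre-processing sketch: available sequences are stacked channel-wise, missing ones are zero-filled, and a binary modality mask tells an inpainting-style diffusion model which channels to fill. It follows the abstract's description, not the released DiffM⁴RI code.

```python
import torch

def build_inpainting_input(modalities: dict, expected=("t1", "t1ce", "t2", "flair")):
    """Stack available MRI modalities channel-wise and build a modality-level mask.

    Hypothetical helper: missing channels are zero-filled and flagged as 0 in the
    mask, so one model can handle any combination of missing sequences
    (many-to-many imputation) without retraining.
    """
    ref = next(iter(modalities.values()))
    channels, mask = [], []
    for name in expected:
        if name in modalities:
            channels.append(modalities[name])
            mask.append(torch.ones_like(ref))
        else:
            channels.append(torch.zeros_like(ref))  # placeholder to be inpainted
            mask.append(torch.zeros_like(ref))
    return torch.stack(channels), torch.stack(mask)

# Example: FLAIR is missing from this scan
scan = {"t1": torch.rand(128, 128), "t1ce": torch.rand(128, 128), "t2": torch.rand(128, 128)}
x, m = build_inpainting_input(scan)
print(x.shape, m.shape)  # torch.Size([4, 128, 128]) torch.Size([4, 128, 128])
```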

Toward general text-guided multimodal brain MRI synthesis for diagnosis and medical image analysis.

Wang Y, Xiong H, Sun K, Bai S, Dai L, Ding Z, Liu J, Wang Q, Liu Q, Shen D

PubMed · Jun 17, 2025
Multimodal brain magnetic resonance imaging (MRI) offers complementary insights into brain structure and function, thereby improving the diagnostic accuracy of neurological disorders and advancing brain-related research. However, the widespread applicability of MRI is substantially limited by restricted scanner accessibility and prolonged acquisition times. Here, we present TUMSyn, a text-guided universal MRI synthesis model capable of generating brain MRI specified by textual imaging metadata from routinely acquired scans. We ensure the reliability of TUMSyn by constructing a brain MRI database comprising 31,407 3D images across 7 MRI modalities from 13 worldwide centers and pre-training an MRI-specific text encoder to process text prompts effectively. Experiments on diverse datasets and physician assessments indicate that TUMSyn-generated images can be utilized along with acquired MRI scan(s) to facilitate large-scale MRI-based screening and diagnosis of multiple brain diseases, substantially reducing the time and cost of MRI in the healthcare system.

Frequency-Calibrated Membership Inference Attacks on Medical Image Diffusion Models

Xinkai Zhao, Yuta Tokuoka, Junichiro Iwasawa, Keita Oda

arXiv preprint · Jun 17, 2025
The increasing use of diffusion models for image generation, especially in sensitive areas like medical imaging, has raised significant privacy concerns. Membership Inference Attack (MIA) has emerged as a potential approach to determine if a specific image was used to train a diffusion model, thus quantifying privacy risks. Existing MIA methods often rely on diffusion reconstruction errors, where member images are expected to have lower reconstruction errors than non-member images. However, applying these methods directly to medical images faces challenges. Reconstruction error is influenced by inherent image difficulty, and diffusion models struggle with high-frequency detail reconstruction. To address these issues, we propose a Frequency-Calibrated Reconstruction Error (FCRE) method for MIAs on medical image diffusion models. By focusing on reconstruction errors within a specific mid-frequency range and excluding both high-frequency (difficult to reconstruct) and low-frequency (less informative) regions, our frequency-selective approach mitigates the confounding factor of inherent image difficulty. Specifically, we analyze the reverse diffusion process, obtain the mid-frequency reconstruction error, and compute the structural similarity index score between the reconstructed and original images. Membership is determined by comparing this score to a threshold. Experiments on several medical image datasets demonstrate that our FCRE method outperforms existing MIA methods.
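A minimal sketch of the frequency-calibrated scoring step as described in the abstract: band-limit both the original image and its diffusion reconstruction to a mid-frequency annulus, compute SSIM between the two band-limited images, and threshold the score. The band edges and threshold below are illustrative assumptions, not the paper's values.

```python
import numpy as np
from skimage.metrics import structural_similarity

def mid_band(image: np.ndarray, low: float = 0.1, high: float = 0.5) -> np.ndarray:
    """Keep only a mid-frequency annulus of the spectrum (fractions of the max frequency)."""
    h, w = image.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    radius = np.sqrt(fx**2 + fy**2) / np.sqrt(0.5)  # normalise so the corner is ~1
    keep = (radius >= low) & (radius <= high)
    return np.fft.ifft2(np.fft.fft2(image) * keep).real

def fcre_membership_score(original: np.ndarray, reconstruction: np.ndarray) -> float:
    """SSIM between mid-frequency bands of the original and its diffusion reconstruction."""
    a, b = mid_band(original), mid_band(reconstruction)
    rng = max(a.max() - a.min(), b.max() - b.min(), 1e-8)
    return structural_similarity(a, b, data_range=rng)

# Membership is declared when the score exceeds a calibrated threshold
score = fcre_membership_score(np.random.rand(128, 128), np.random.rand(128, 128))
is_member = score > 0.6  # hypothetical threshold
```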

Reaction-Diffusion Model for Brain Spacetime Dynamics.

Li Q, Calhoun VD

PubMed · Jun 16, 2025
The human brain exhibits intricate spatiotemporal dynamics that can be described and understood through the framework of complex dynamic systems theory. In this study, we leverage functional magnetic resonance imaging (fMRI) data to investigate reaction-diffusion processes in the brain. A reaction-diffusion process refers to the interaction between two or more substances that spread through space and react with each other over time, often forming patterns or waves of activity. Building on this empirical foundation, we apply a reaction-diffusion framework inspired by theoretical physics to simulate the emergence of brain spacetime vortices: dynamic, swirling patterns of brain activity that emerge and evolve across both time and space. We investigate how reaction-diffusion processes can govern the formation and propagation of these vortices, integrating computational modeling with fMRI data to characterize their spatiotemporal properties and offering new insights into the fundamental principles of brain organization. This work highlights the potential of reaction-diffusion models as an alternative framework for understanding brain spacetime dynamics.
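The abstract does not give the governing equations; the classic two-species Gray-Scott system below is a stand-in showing how a reaction-diffusion update produces self-organising spatial patterns of the kind the authors simulate. It is a generic example, not the brain-specific model used in the study.

```python
import numpy as np

def laplacian(field: np.ndarray) -> np.ndarray:
    """Five-point finite-difference Laplacian with periodic boundaries."""
    return (np.roll(field, 1, 0) + np.roll(field, -1, 0) +
            np.roll(field, 1, 1) + np.roll(field, -1, 1) - 4 * field)

def gray_scott_step(u, v, Du=0.16, Dv=0.08, feed=0.035, kill=0.065, dt=1.0):
    """One explicit Euler step of the Gray-Scott reaction-diffusion system."""
    uvv = u * v * v
    u_next = u + dt * (Du * laplacian(u) - uvv + feed * (1 - u))
    v_next = v + dt * (Dv * laplacian(v) + uvv - (feed + kill) * v)
    return u_next, v_next

# Seed a small perturbation and iterate; spiral- and vortex-like patterns emerge over time
u = np.ones((128, 128))
v = np.zeros((128, 128))
v[60:68, 60:68] = 0.5
for _ in range(500):
    u, v = gray_scott_step(u, v)
```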

PRO: Projection Domain Synthesis for CT Imaging

Kang Chen, Bin Huang, Xuebin Yang, Junyan Zhang, Qiegen Liu

arXiv preprint · Jun 16, 2025
Synthesizing high quality CT projection data remains a significant challenge due to the limited availability of annotated data and the complex nature of CT imaging. In this work, we present PRO, a projection domain synthesis foundation model for CT imaging. To the best of our knowledge, this is the first study that performs CT synthesis in the projection domain. Unlike previous approaches that operate in the image domain, PRO learns rich structural representations from raw projection data and leverages anatomical text prompts for controllable synthesis. This projection domain strategy enables more faithful modeling of underlying imaging physics and anatomical structures. Moreover, PRO functions as a foundation model, capable of generalizing across diverse downstream tasks by adjusting its generative behavior via prompt inputs. Experimental results demonstrated that incorporating our synthesized data significantly improves performance across multiple downstream tasks, including low-dose and sparse-view reconstruction. These findings underscore the versatility and scalability of PRO in data generation for various CT applications. These results highlight the potential of projection domain synthesis as a powerful tool for data augmentation and robust CT imaging. Our source code is publicly available at: https://github.com/yqx7150/PRO.
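For readers unfamiliar with projection-domain data, the sketch below (a toy phantom, not PRO itself) uses the Radon transform to show what a sinogram, i.e. the data a projection-domain synthesis model would generate, looks like, and how a sparse-view downstream task subsamples it before reconstruction.

```python
import numpy as np
from skimage.transform import radon, iradon

# Toy phantom standing in for anatomy; kept inside the inscribed circle.
image = np.zeros((256, 256), dtype=np.float32)
image[96:160, 112:144] = 1.0

angles = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(image, theta=angles)  # projection-domain (detector x angle) data

# A sparse-view task keeps only every 6th projection before filtered back-projection
sparse = sinogram[:, ::6]
recon = iradon(sparse, theta=angles[::6], filter_name="ramp")
print(sinogram.shape, recon.shape)
```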

GM-LDM: Latent Diffusion Model for Brain Biomarker Identification through Functional Data-Driven Gray Matter Synthesis

Hu Xu, Yang Jingling, Jia Sihan, Bi Yuda, Calhoun Vince

arXiv preprint · Jun 15, 2025
Generative models based on deep learning have shown significant potential in medical imaging, particularly for modality transformation and multimodal fusion in MRI-based brain imaging. This study introduces GM-LDM, a novel framework that leverages the latent diffusion model (LDM) to enhance the efficiency and precision of MRI generation tasks. GM-LDM integrates a 3D autoencoder, pre-trained on the large-scale ABCD MRI dataset, achieving statistical consistency through KL divergence loss. We employ a Vision Transformer (ViT)-based encoder-decoder as the denoising network to optimize generation quality. The framework flexibly incorporates conditional data, such as functional network connectivity (FNC) data, enabling personalized brain imaging, biomarker identification, and functional-to-structural information translation for brain diseases like schizophrenia.
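The "statistical consistency through KL divergence loss" mentioned above is, in standard latent-diffusion autoencoders, a VAE-style KL regulariser on the latent space; the sketch below shows that generic formulation and is not taken from the GM-LDM code.

```python
import torch

def kl_regulariser(mu: torch.Tensor, logvar: torch.Tensor) -> torch.Tensor:
    """KL divergence between the encoder posterior N(mu, sigma^2) and a standard normal prior.

    In a latent-diffusion setup this term keeps the 3D autoencoder's latent space
    statistically well behaved before the denoising network is trained on it.
    """
    return -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())

# Example: latents from a hypothetical (batch, channels, depth, height, width) 3D encoder
mu = torch.randn(2, 4, 8, 16, 16)
logvar = torch.randn(2, 4, 8, 16, 16)
loss = kl_regulariser(mu, logvar)
```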

Inference of single cell profiles from histology stains with the Single-Cell omics from Histology Analysis Framework (SCHAF)

Comiter, C., Chen, X., Vaishnav, E. D., Kobayashi-Kirschvink, K. J., Ciapmricotti, M., Zhang, K., Murray, J., Monticolo, F., Qi, J., Tanaka, R., Brodowska, S. E., Li, B., Yang, Y., Rodig, S. J., Karatza, A., Quintanal Villalonga, A., Turner, M., Pfaff, K. L., Jane-Valbuena, J., Slyper, M., Waldman, J., Vigneau, S., Wu, J., Blosser, T. R., Segerstolpe, A., Abravanel, D., Wagle, N., Demehri, S., Zhuang, X., Rudin, C. M., Klughammer, J., Rozenblatt-Rosen, O., Stultz, C. M., Shu, J., Regev, A.

bioRxiv preprint · Jun 13, 2025
Tissue biology involves an intricate balance between cell-intrinsic processes and interactions between cells organized in specific spatial patterns, which can be respectively captured by single cell profiling methods, such as single cell RNA-seq (scRNA-seq) and spatial transcriptomics, and histology imaging data, such as Hematoxylin-and-Eosin (H&E) stains. While single cell profiles provide rich molecular information, they can be challenging to collect routinely in the clinic and either lack spatial resolution or high gene throughput. Conversely, histological H&E assays have been a cornerstone of tissue pathology for decades, but do not directly report on molecular details, although the observed structure they capture arises from molecules and cells. Here, we leverage vision transformers and adversarial deep learning to develop the Single Cell omics from Histology Analysis Framework (SCHAF), which generates a tissue sample's spatially-resolved whole transcriptome single cell omics dataset from its H&E histology image. We demonstrate SCHAF on a variety of tissues--including lung cancer, metastatic breast cancer, placentae, and whole mouse pups--training with matched samples analyzed by sc/snRNA-seq, H&E staining, and, when available, spatial transcriptomics. SCHAF generated appropriate single cell profiles from histology images in test data, related them spatially, and compared well to ground-truth scRNA-Seq, expert pathologist annotations, or direct spatial transcriptomic measurements, with some limitations. SCHAF opens the way to next-generation H&E analyses and an integrated understanding of cell and tissue biology in health and disease.

Exploring the Effectiveness of Deep Features from Domain-Specific Foundation Models in Retinal Image Synthesis

Zuzanna Skorniewska, Bartlomiej W. Papiez

arXiv preprint · Jun 13, 2025
The adoption of neural network models in medical imaging has been constrained by strict privacy regulations, limited data availability, high acquisition costs, and demographic biases. Deep generative models offer a promising solution by generating synthetic data that bypasses privacy concerns and addresses fairness by producing samples for under-represented groups. However, unlike natural images, medical imaging requires validation not only for fidelity (e.g., Fréchet Inception Score) but also for morphological and clinical accuracy. This is particularly true for colour fundus retinal imaging, which requires precise replication of the retinal vascular network, including vessel topology, continuity, and thickness. In this study, we investigated whether a distance-based loss function built on deep activation layers of a large foundation model trained on a large corpus of domain data (colour fundus imaging) offers advantages over perceptual and edge-detection based loss functions. Our extensive validation pipeline, based on both domain-free and domain-specific tasks, suggests that domain-specific deep features do not improve autoencoder image generation. Conversely, our findings highlight the effectiveness of conventional edge detection filters in improving the sharpness of vascular structures in synthetic samples.
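As a concrete example of the edge-detection based losses the study found effective, here is a generic Sobel edge-map L1 loss for colour fundus image batches. It illustrates the idea only and is not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def sobel_edge_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """L1 distance between Sobel edge maps of generated and real images.

    Applied per channel to (B, C, H, W) tensors; emphasises vessel boundaries,
    which a plain pixel-wise loss tends to blur.
    """
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    ky = torch.tensor([[-1., -2., -1.], [0., 0., 0.], [1., 2., 1.]])
    c = pred.shape[1]
    kx = kx.view(1, 1, 3, 3).repeat(c, 1, 1, 1).to(pred)
    ky = ky.view(1, 1, 3, 3).repeat(c, 1, 1, 1).to(pred)

    def edges(x):
        gx = F.conv2d(x, kx, padding=1, groups=c)  # depthwise horizontal gradient
        gy = F.conv2d(x, ky, padding=1, groups=c)  # depthwise vertical gradient
        return torch.sqrt(gx**2 + gy**2 + 1e-8)

    return F.l1_loss(edges(pred), edges(target))

# Example with a batch of synthetic vs real colour fundus images
loss = sobel_edge_loss(torch.rand(4, 3, 256, 256), torch.rand(4, 3, 256, 256))
```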