Structure-Semantic Guided MRI-to-PET Synthesis with Spatial-Frequency Discriminator.

May 11, 2026

Authors

Song X, Wang K, Li M, Xu S, Liu Q

Abstract

Multi-modal medical imaging plays a vital role in clinical decision-making by providing complementary anatomical and functional information. In particular, the combination of Magnetic Resonance Imaging (MRI) and Positron Emission Tomography (PET) offers synergistic insights for early diagnosis and progression monitoring of Alzheimer's disease (AD). However, PET's clinical utility remains constrained by high cost, radiation exposure, and limited availability. To address these limitations, we propose a novel adversarial framework for synthesizing clinically plausible PET representations from structural T1-weighted MRI. Specifically, we design a Multi-scale Structural Representation Injection (MSRI) module to overcome structural misalignments in diagnostically sensitive regions, achieved by hierarchical anatomical encoding integrated with axis-aware attention. Building upon this foundation, the Adaptive Semantic Residual Fusion (ASRF) module bridges the semantic inconsistencies between locally extracted Res2Net features and globally encoded Transformer representations via dual-attention gating. Furthermore, the Direction-Aware Spatial-Frequency Discriminator (DASFD) ensures anatomical fidelity by incorporating reconstruction-guided priors and multi-domain discrimination across spatial, frequency, and patch-level pathways. Extensive experiments demonstrate that the proposed method consistently produces high-fidelity PET images (SSIM: 90.66%, PSNR: 26.35 dB), surpassing existing methods in both quantitative accuracy and visual realism.
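The abstract does not detail how the frequency and patch-level pathways of the DASFD are computed; the paper itself would need to be consulted. As a rough, hypothetical illustration only (function names and parameters below are assumptions, not from the paper), a discriminator's frequency pathway is often fed a log-magnitude FFT spectrum of the image, while a patch-level pathway scores local regions independently:

```python
import numpy as np

def frequency_features(img: np.ndarray) -> np.ndarray:
    """Log-magnitude spectrum of the 2-D FFT, zero-frequency centered.

    A spatial-frequency discriminator typically passes such a map to a
    small network so that high-frequency texture mismatches between real
    and synthesized images are penalized, not just pixel-space errors.
    """
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    return np.log1p(np.abs(spectrum))

def patch_scores(img: np.ndarray, patch: int = 8) -> np.ndarray:
    """Mean intensity per non-overlapping patch (patch-level pathway).

    A real patch discriminator would apply learned convolutions; the
    mean here is just a placeholder statistic per patch.
    """
    h, w = img.shape
    h, w = h - h % patch, w - w % patch
    blocks = img[:h, :w].reshape(h // patch, patch, w // patch, patch)
    return blocks.mean(axis=(1, 3))

# Toy 32x32 "slice" standing in for an MRI/PET image
img = np.random.default_rng(0).random((32, 32))
freq = frequency_features(img)   # (32, 32) frequency-domain map
patches = patch_scores(img)      # (4, 4) patch-level score map
```

This is only a sketch of the general multi-domain idea; the paper's direction-aware and reconstruction-guided components are not represented here.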

Topics

Journal Article
