
Deep learning synthesis of virtual T2-weighted fat-suppressed MR images: a multi-center study.

March 15, 2026

Authors

Wang C, Xu J, Zhang M, Hao D, Zhang S, Lang N

Affiliations (6)

  • Department of Radiology, State Key Laboratory of Vascular Homeostasis and Remodeling, Peking University Third Hospital, Beijing 100191, China. Electronic address: [email protected].
  • Department of Radiology, State Key Laboratory of Vascular Homeostasis and Remodeling, Peking University Third Hospital, Beijing 100191, China. Electronic address: [email protected].
  • Department of Radiology, the Affiliated Hospital of Qingdao University, Qingdao, Shandong, China. Electronic address: [email protected].
  • Department of Radiology, the Affiliated Hospital of Qingdao University, Qingdao, Shandong, China. Electronic address: [email protected].
  • Department of Biomedical Informatics, State Key Laboratory of Vascular Homeostasis and Remodeling, School of Basic Medical Sciences, Peking University, 38 Xueyuan Road, Beijing 100191, China. Electronic address: [email protected].
  • Department of Radiology, State Key Laboratory of Vascular Homeostasis and Remodeling, Peking University Third Hospital, Beijing 100191, China. Electronic address: [email protected].

Abstract

To develop a Generative Adversarial Network (GAN) for generating virtual T2 fat-suppressed (T2FS) sequences from standard T1- and T2-weighted images, with the clinical objective of reducing MRI scan time without compromising diagnostic value for spinal tumor assessment.

This retrospective study included 1,389 consecutive patients with spinal tumors from two institutions, divided into training (n = 1,026; 49.2 ± 16.4 years; 540 males), internal validation (n = 257; 48.2 ± 17.2 years; 140 males), and external test (n = 106; 52.8 ± 17.0 years; 59 males) sets. The model used T1- and T2-weighted images as input to generate T2FS images. Quantitative image fidelity evaluations included mean squared error (MSE), structural similarity index measure (SSIM), and peak signal-to-noise ratio (PSNR). The Dice similarity coefficient (DSC) assessed lesion segmentation. Signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) measured objective quality. Two experienced radiologists independently rated the images on a 5-point scale, evaluating overall quality, tumor detail preservation, fat suppression performance, and artifacts.

The external test set exhibited an MSE of 0.0060 ± 0.0038, SSIM of 0.667 ± 0.120, and PSNR of 23.368 ± 4.298 dB. The real and synthetic images agreed strongly in lesion segmentation, with mean DSC of 0.820 ± 0.176 (internal) and 0.807 ± 0.188 (external). SNR and CNR were comparable between real and synthetic images in both datasets. Qualitative assessments indicated equivalent overall image quality and artifacts. Synthetic images showed superior fat suppression, while real images offered better tumor internal detail. The proposed GAN-based method generated diagnostically valuable virtual T2FS images.

1. The proposed deep learning model successfully generated virtual T2FS images from standard T1/T2 MRI, demonstrating favorable quantitative agreement (MSE 0.0060, SSIM 0.667).
2. Synthetic and real images demonstrated strong consistency in lesion segmentation (DSC 0.809-0.824) and comparable SNR/CNR values.
3. While synthetic images provided superior fat suppression, real images maintained slightly better tumor internal detail visualization.
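The fidelity metrics the study reports (MSE and PSNR for image agreement, DSC for lesion-segmentation overlap) have standard definitions that can be illustrated with a minimal sketch. The toy arrays below are illustrative only and are not data from the study; intensities are assumed normalized to [0, 1].

```python
import math

def mse(a, b):
    # Mean squared error between two same-sized images (flattened, values in [0, 1]).
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, max_val=1.0):
    # Peak signal-to-noise ratio in dB: 10 * log10(MAX^2 / MSE).
    e = mse(a, b)
    return float("inf") if e == 0 else 10 * math.log10(max_val ** 2 / e)

def dice(mask_a, mask_b):
    # Dice similarity coefficient between two binary masks: 2|A∩B| / (|A| + |B|).
    inter = sum(1 for x, y in zip(mask_a, mask_b) if x and y)
    return 2 * inter / (sum(mask_a) + sum(mask_b))

# Toy 4-pixel "real" vs "synthetic" example (hypothetical values).
real = [0.0, 0.5, 1.0, 0.5]
synth = [0.1, 0.5, 0.9, 0.5]
print(round(mse(real, synth), 4))   # 0.005
print(round(psnr(real, synth), 2))  # 23.01
print(round(dice([1, 1, 0, 0], [1, 0, 0, 0]), 3))  # 0.667
```

Note how a per-pixel MSE near 0.005-0.006 on a [0, 1] scale corresponds to a PSNR in the low-20 dB range, consistent with the external-test figures reported above.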

Topics

Journal Article
