Unpaired T1-weighted MRI synthesis from T2-weighted data using unsupervised learning.
Authors
Affiliations (4)
- Department of Radiology, Shenzhen Hospital (Futian) of Guangzhou University of Chinese Medicine, Shenzhen, Guangdong 518034, China.
- School of Biomedical Engineering, Guangdong Medical University, Dongguan, Guangdong 523808, China.
- School of Biomedical Engineering, Guangdong Medical University, Dongguan, Guangdong 523808, China; Dongguan Key Laboratory of Medical Electronics and Medical Imaging Equipment, Dongguan, Guangdong 523808, China.
- School of Biomedical Engineering, Guangdong Medical University, Dongguan, Guangdong 523808, China; Dongguan Key Laboratory of Medical Electronics and Medical Imaging Equipment, Dongguan, Guangdong 523808, China. Electronic address: [email protected].
Abstract
Magnetic Resonance Imaging (MRI) is indispensable for modern diagnostics because it provides detailed anatomical and functional information without ionizing radiation. However, acquiring multiple imaging sequences, such as T1-weighted (T1w) and T2-weighted (T2w) scans, can prolong scan times, increase patient discomfort, and raise healthcare costs. In this study, we propose an unsupervised framework based on a contrast-sensitive domain translation network with adaptive feature normalization to translate unpaired T2w MRI images into clinically acceptable T1w images. Our method employs adversarial training together with cycle-consistency, identity, and attention-guided loss functions. These components ensure that the generated images not only preserve essential anatomical details but also exhibit high visual fidelity compared to ground-truth T1w images. Quantitative evaluation on a publicly available MRI dataset yielded a mean Peak Signal-to-Noise Ratio (PSNR) of 22.403 dB, a mean Structural Similarity Index (SSIM) of 0.775, a Root Mean Squared Error (RMSE) of 0.078, and a Mean Absolute Error (MAE) of 0.036. Additional analysis of pixel-intensity and grayscale distributions further supported the consistency between the generated and ground-truth images. Qualitative assessment included visual comparison to evaluate perceptual fidelity. These promising results suggest that a contrast-sensitive domain translation network with an adaptive feature normalization framework can effectively generate realistic T1w images from T2w inputs, potentially reducing the need to acquire multiple sequences and thereby streamlining MRI protocols.
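The abstract reports PSNR, SSIM, RMSE, and MAE between generated and ground-truth T1w images. As a minimal illustration of how such metrics are defined (not the paper's actual evaluation pipeline, which likely uses windowed SSIM as in standard toolkits), the sketch below implements them with NumPy; the global, non-windowed SSIM variant is a simplification:

```python
import numpy as np

def psnr(ref, gen, data_range=1.0):
    """Peak Signal-to-Noise Ratio in dB for images scaled to [0, data_range]."""
    mse = np.mean((ref - gen) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def rmse(ref, gen):
    """Root Mean Squared Error."""
    return np.sqrt(np.mean((ref - gen) ** 2))

def mae(ref, gen):
    """Mean Absolute Error."""
    return np.mean(np.abs(ref - gen))

def global_ssim(ref, gen, data_range=1.0):
    """Global SSIM over the whole image (standard SSIM averages local windows)."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = ref.mean(), gen.mean()
    var_x, var_y = ref.var(), gen.var()
    cov = np.mean((ref - mu_x) * (gen - mu_y))
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

For the reported numbers, these metrics would be averaged over all test slices; in practice SSIM is usually computed with a sliding Gaussian window (e.g. `skimage.metrics.structural_similarity`) rather than globally as above.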