The Potency of DCLGAN for Reducing Scan Time in PET/CT Imaging with Different Radiotracers of Clinical Importance
Authors
Affiliations
- From the Department of Nuclear Medicine (D.K.), Hallym University Sacred Heart Hospital, College of Medicine, Hallym University, Seoul, Republic of Korea; Yonsei University College of Medicine (G.Y.K., S.S., Y.Y.), Department of Computer Science (K.C.), Nuclear Medicine (D.K., S.L., H.L., S.K.), Artificial Intelligence (H.H., D.K.), Yonsei University, Seoul, Republic of Korea. [email protected].
Abstract
Shortening PET/CT acquisition without degrading diagnostic or quantitative performance would improve patient comfort and scanner throughput. We evaluated a dual contrastive learning GAN (DCLGAN) for reconstructing high-quality neuro-PET images from reduced-time acquisitions across three clinically used tracers (18F-florbetaben [FBB], 18F-FDG, and 18F-FP-CIT), benchmarking it against representative state-of-the-art deep learning models.

In this single-center retrospective study of symptomatic patients undergoing neuro-PET/CT, list-mode data from full-time acquisitions (20 minutes for FBB; 15 minutes for FDG and FP-CIT) were reconstructed into 3- and 5-minute images. DCLGAN was trained to translate short-time images to full-time appearance; the raw short-time images and a standard 2.5D U-Net served as comparators. Image quality was quantified by normalized root-mean-square error (NRMSE), peak signal-to-noise ratio (PSNR), and structural similarity (SSIM). Two blinded nuclear medicine physicians graded clinical image quality on a five-point scale; inter-reader agreement was assessed with Cohen's kappa. Quantitative agreement of the standardized uptake value ratio (SUVR) for FBB/FDG and the specific binding ratio (SBR) for FP-CIT between predicted and full-time images was assessed with intraclass correlation coefficients (ICCs) and Bland-Altman analysis.

With DCLGAN at 5 minutes, image-quality metrics approached full-time performance across tracers: for FBB, NRMSE 0.018, PSNR 34.721 dB, SSIM 0.963; for FP-CIT, NRMSE 0.008, PSNR 41.889 dB, SSIM 0.982; and for FDG, NRMSE 0.013, PSNR 37.502 dB, SSIM 0.986. Mean reader quality scores for 5-minute DCLGAN images were high: 4.48 (FBB), 4.50 (FDG), and 4.55 (FP-CIT), with strong inter-reader agreement (κ = 0.85, 0.83, and 0.89, respectively). Quantitative agreement with full-time scans was excellent: ICCs for 5-minute images were 0.997 (FBB), 0.995 (FDG), and 0.996 (FP-CIT). Bland-Altman analyses showed minimal bias (mean bias -0.004 for FBB, 0.012 for FDG, and -0.029 for FP-CIT) with narrow limits of agreement. Compared with 3-minute reconstructions, 5-minute DCLGAN images demonstrated significantly higher PSNR and SSIM and lower NRMSE (p < 0.001) while maintaining or improving reader scores and quantitative agreement.

Across FBB, FDG, and FP-CIT, DCLGAN enabled 5-minute neuro-PET acquisitions that appeared to maintain reader-rated quality and showed high agreement in SUVR/SBR values with full-time scans. These findings suggest that clinically practical scan-time reduction may be feasible while maintaining interpretability and quantitative integrity in this cohort.
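The abstract reports NRMSE, PSNR, and SSIM between predicted and full-time reference images. The paper does not give its exact implementation; the sketch below is a minimal NumPy illustration of the standard definitions, assuming NRMSE is normalized by the reference dynamic range and using a simplified single-window SSIM (published results typically use the local sliding-window form, e.g. from scikit-image). Function names and the normalization choice are assumptions, not the authors' code.

```python
import numpy as np

def nrmse(ref, pred):
    """Root-mean-square error normalized by the reference dynamic range."""
    rmse = np.sqrt(np.mean((ref - pred) ** 2))
    return rmse / (ref.max() - ref.min())

def psnr(ref, pred):
    """Peak signal-to-noise ratio in dB, with peak taken as ref.max()."""
    mse = np.mean((ref - pred) ** 2)
    return 10.0 * np.log10((ref.max() ** 2) / mse)

def global_ssim(ref, pred, data_range=None):
    """Simplified global SSIM: one window covering the whole image,
    with the usual stabilizing constants c1 = (0.01 L)^2, c2 = (0.03 L)^2."""
    L = data_range if data_range is not None else ref.max() - ref.min()
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mu_x, mu_y = ref.mean(), pred.mean()
    var_x, var_y = ref.var(), pred.var()
    cov = np.mean((ref - mu_x) * (pred - mu_y))
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

# Illustrative use: a reference slice and a copy with a constant offset.
ref = np.linspace(0.0, 1.0, 100).reshape(10, 10)
pred = ref + 0.01
print(nrmse(ref, pred), psnr(ref, pred), global_ssim(ref, pred))
```

With a constant 0.01 offset on a unit-range image, NRMSE is 0.01 and PSNR is 40 dB, matching the closed-form values; the tracer-level figures in the abstract would be such metrics averaged over test volumes.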